9c37e55ec5a0db1bf750b6a46b4287c773e8e006 | 97,153 bytes | Python | sdk/python/pulumi_azure/containerservice/kubernetes_cluster_node_pool.py | pulumi/pulumi-azure @ c62b6c1828de1facfd0d92425b72e22e229b0afc | licenses: ECL-2.0, Apache-2.0 | 109 stars (2018-06-18 – 2022-02-20) | 663 issue events (2018-06-18 – 2022-03-31) | 41 forks (2018-07-19 – 2022-03-14)

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['KubernetesClusterNodePoolArgs', 'KubernetesClusterNodePool']
@pulumi.input_type
class KubernetesClusterNodePoolArgs:
def __init__(__self__, *,
kubernetes_cluster_id: pulumi.Input[str],
vm_size: pulumi.Input[str],
availability_zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enable_auto_scaling: Optional[pulumi.Input[bool]] = None,
enable_host_encryption: Optional[pulumi.Input[bool]] = None,
enable_node_public_ip: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
fips_enabled: Optional[pulumi.Input[bool]] = None,
kubelet_config: Optional[pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs']] = None,
kubelet_disk_type: Optional[pulumi.Input[str]] = None,
linux_os_config: Optional[pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs']] = None,
max_count: Optional[pulumi.Input[int]] = None,
max_pods: Optional[pulumi.Input[int]] = None,
min_count: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
node_count: Optional[pulumi.Input[int]] = None,
node_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
node_public_ip_prefix_id: Optional[pulumi.Input[str]] = None,
node_taints: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
orchestrator_version: Optional[pulumi.Input[str]] = None,
os_disk_size_gb: Optional[pulumi.Input[int]] = None,
os_disk_type: Optional[pulumi.Input[str]] = None,
os_sku: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
pod_subnet_id: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[str]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
spot_max_price: Optional[pulumi.Input[float]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ultra_ssd_enabled: Optional[pulumi.Input[bool]] = None,
upgrade_settings: Optional[pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs']] = None,
vnet_subnet_id: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a KubernetesClusterNodePool resource.
:param pulumi.Input[str] kubernetes_cluster_id: The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] vm_size: The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] availability_zones: A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
:param pulumi.Input[bool] enable_auto_scaling: Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
:param pulumi.Input[bool] enable_host_encryption: Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
:param pulumi.Input[bool] enable_node_public_ip: Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[str] eviction_policy: The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
:param pulumi.Input[bool] fips_enabled: Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created.
:param pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs'] kubelet_config: A `kubelet_config` block as defined below.
        :param pulumi.Input[str] kubelet_disk_type: The type of disk used by kubelet. Possible values are `OS`.
:param pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs'] linux_os_config: A `linux_os_config` block as defined below.
:param pulumi.Input[int] max_count: The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
:param pulumi.Input[int] max_pods: The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
:param pulumi.Input[int] min_count: The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
:param pulumi.Input[str] mode: Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
:param pulumi.Input[str] name: The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
:param pulumi.Input[int] node_count: The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools and must be a value in the range `min_count` - `max_count`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] node_labels: A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
:param pulumi.Input[str] node_public_ip_prefix_id: Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. `enable_node_public_ip` should be `true`. Changing this forces a new resource to be created.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] node_taints: A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
        :param pulumi.Input[str] orchestrator_version: Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
:param pulumi.Input[int] os_disk_size_gb: The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_disk_type: The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_sku: OsSKU to be used to specify Linux OSType. Not applicable to Windows OSType. Possible values include: `Ubuntu`, `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
:param pulumi.Input[str] pod_subnet_id: The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
:param pulumi.Input[float] spot_max_price: The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
:param pulumi.Input[bool] ultra_ssd_enabled: Used to specify whether the UltraSSD is enabled in the Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
        :param pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs'] upgrade_settings: An `upgrade_settings` block as documented below.
:param pulumi.Input[str] vnet_subnet_id: The ID of the Subnet where this Node Pool should exist.
"""
pulumi.set(__self__, "kubernetes_cluster_id", kubernetes_cluster_id)
pulumi.set(__self__, "vm_size", vm_size)
if availability_zones is not None:
pulumi.set(__self__, "availability_zones", availability_zones)
if enable_auto_scaling is not None:
pulumi.set(__self__, "enable_auto_scaling", enable_auto_scaling)
if enable_host_encryption is not None:
pulumi.set(__self__, "enable_host_encryption", enable_host_encryption)
if enable_node_public_ip is not None:
pulumi.set(__self__, "enable_node_public_ip", enable_node_public_ip)
if eviction_policy is not None:
pulumi.set(__self__, "eviction_policy", eviction_policy)
if fips_enabled is not None:
pulumi.set(__self__, "fips_enabled", fips_enabled)
if kubelet_config is not None:
pulumi.set(__self__, "kubelet_config", kubelet_config)
if kubelet_disk_type is not None:
pulumi.set(__self__, "kubelet_disk_type", kubelet_disk_type)
if linux_os_config is not None:
pulumi.set(__self__, "linux_os_config", linux_os_config)
if max_count is not None:
pulumi.set(__self__, "max_count", max_count)
if max_pods is not None:
pulumi.set(__self__, "max_pods", max_pods)
if min_count is not None:
pulumi.set(__self__, "min_count", min_count)
if mode is not None:
pulumi.set(__self__, "mode", mode)
if name is not None:
pulumi.set(__self__, "name", name)
if node_count is not None:
pulumi.set(__self__, "node_count", node_count)
if node_labels is not None:
pulumi.set(__self__, "node_labels", node_labels)
if node_public_ip_prefix_id is not None:
pulumi.set(__self__, "node_public_ip_prefix_id", node_public_ip_prefix_id)
if node_taints is not None:
pulumi.set(__self__, "node_taints", node_taints)
if orchestrator_version is not None:
pulumi.set(__self__, "orchestrator_version", orchestrator_version)
if os_disk_size_gb is not None:
pulumi.set(__self__, "os_disk_size_gb", os_disk_size_gb)
if os_disk_type is not None:
pulumi.set(__self__, "os_disk_type", os_disk_type)
if os_sku is not None:
pulumi.set(__self__, "os_sku", os_sku)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if pod_subnet_id is not None:
pulumi.set(__self__, "pod_subnet_id", pod_subnet_id)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if proximity_placement_group_id is not None:
pulumi.set(__self__, "proximity_placement_group_id", proximity_placement_group_id)
if spot_max_price is not None:
pulumi.set(__self__, "spot_max_price", spot_max_price)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if ultra_ssd_enabled is not None:
pulumi.set(__self__, "ultra_ssd_enabled", ultra_ssd_enabled)
if upgrade_settings is not None:
pulumi.set(__self__, "upgrade_settings", upgrade_settings)
if vnet_subnet_id is not None:
pulumi.set(__self__, "vnet_subnet_id", vnet_subnet_id)
@property
@pulumi.getter(name="kubernetesClusterId")
def kubernetes_cluster_id(self) -> pulumi.Input[str]:
"""
The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "kubernetes_cluster_id")
@kubernetes_cluster_id.setter
def kubernetes_cluster_id(self, value: pulumi.Input[str]):
pulumi.set(self, "kubernetes_cluster_id", value)
@property
@pulumi.getter(name="vmSize")
def vm_size(self) -> pulumi.Input[str]:
"""
The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "vm_size")
@vm_size.setter
def vm_size(self, value: pulumi.Input[str]):
pulumi.set(self, "vm_size", value)
@property
@pulumi.getter(name="availabilityZones")
def availability_zones(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
        A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "availability_zones")
@availability_zones.setter
def availability_zones(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "availability_zones", value)
@property
@pulumi.getter(name="enableAutoScaling")
def enable_auto_scaling(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
"""
return pulumi.get(self, "enable_auto_scaling")
@enable_auto_scaling.setter
def enable_auto_scaling(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_auto_scaling", value)
@property
@pulumi.getter(name="enableHostEncryption")
def enable_host_encryption(self) -> Optional[pulumi.Input[bool]]:
"""
Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
"""
return pulumi.get(self, "enable_host_encryption")
@enable_host_encryption.setter
def enable_host_encryption(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_host_encryption", value)
@property
@pulumi.getter(name="enableNodePublicIp")
def enable_node_public_ip(self) -> Optional[pulumi.Input[bool]]:
"""
Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "enable_node_public_ip")
@enable_node_public_ip.setter
def enable_node_public_ip(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_node_public_ip", value)
@property
@pulumi.getter(name="evictionPolicy")
def eviction_policy(self) -> Optional[pulumi.Input[str]]:
"""
The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "eviction_policy")
@eviction_policy.setter
def eviction_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eviction_policy", value)
@property
@pulumi.getter(name="fipsEnabled")
def fips_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created.
"""
return pulumi.get(self, "fips_enabled")
@fips_enabled.setter
def fips_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "fips_enabled", value)
@property
@pulumi.getter(name="kubeletConfig")
def kubelet_config(self) -> Optional[pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs']]:
"""
A `kubelet_config` block as defined below.
"""
return pulumi.get(self, "kubelet_config")
@kubelet_config.setter
def kubelet_config(self, value: Optional[pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs']]):
pulumi.set(self, "kubelet_config", value)
@property
@pulumi.getter(name="kubeletDiskType")
def kubelet_disk_type(self) -> Optional[pulumi.Input[str]]:
"""
        The type of disk used by kubelet. Possible values are `OS`.
"""
return pulumi.get(self, "kubelet_disk_type")
@kubelet_disk_type.setter
def kubelet_disk_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kubelet_disk_type", value)
@property
@pulumi.getter(name="linuxOsConfig")
def linux_os_config(self) -> Optional[pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs']]:
"""
A `linux_os_config` block as defined below.
"""
return pulumi.get(self, "linux_os_config")
@linux_os_config.setter
def linux_os_config(self, value: Optional[pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs']]):
pulumi.set(self, "linux_os_config", value)
@property
@pulumi.getter(name="maxCount")
def max_count(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
"""
return pulumi.get(self, "max_count")
@max_count.setter
def max_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_count", value)
@property
@pulumi.getter(name="maxPods")
def max_pods(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "max_pods")
@max_pods.setter
def max_pods(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_pods", value)
@property
@pulumi.getter(name="minCount")
def min_count(self) -> Optional[pulumi.Input[int]]:
"""
The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
"""
return pulumi.get(self, "min_count")
@min_count.setter
def min_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_count", value)
@property
@pulumi.getter
def mode(self) -> Optional[pulumi.Input[str]]:
"""
Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
"""
return pulumi.get(self, "mode")
@mode.setter
def mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mode", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="nodeCount")
def node_count(self) -> Optional[pulumi.Input[int]]:
"""
The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools and must be a value in the range `min_count` - `max_count`.
"""
return pulumi.get(self, "node_count")
@node_count.setter
def node_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "node_count", value)
@property
@pulumi.getter(name="nodeLabels")
def node_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_labels")
@node_labels.setter
def node_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "node_labels", value)
@property
@pulumi.getter(name="nodePublicIpPrefixId")
def node_public_ip_prefix_id(self) -> Optional[pulumi.Input[str]]:
"""
Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. `enable_node_public_ip` should be `true`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_public_ip_prefix_id")
@node_public_ip_prefix_id.setter
def node_public_ip_prefix_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "node_public_ip_prefix_id", value)
@property
@pulumi.getter(name="nodeTaints")
def node_taints(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
        A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_taints")
@node_taints.setter
def node_taints(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "node_taints", value)
@property
@pulumi.getter(name="orchestratorVersion")
def orchestrator_version(self) -> Optional[pulumi.Input[str]]:
"""
        Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
"""
return pulumi.get(self, "orchestrator_version")
@orchestrator_version.setter
def orchestrator_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "orchestrator_version", value)
@property
@pulumi.getter(name="osDiskSizeGb")
def os_disk_size_gb(self) -> Optional[pulumi.Input[int]]:
"""
The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_disk_size_gb")
@os_disk_size_gb.setter
def os_disk_size_gb(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "os_disk_size_gb", value)
@property
@pulumi.getter(name="osDiskType")
def os_disk_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_disk_type")
@os_disk_type.setter
def os_disk_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_disk_type", value)
@property
@pulumi.getter(name="osSku")
def os_sku(self) -> Optional[pulumi.Input[str]]:
"""
OsSKU to be used to specify Linux OSType. Not applicable to Windows OSType. Possible values include: `Ubuntu`, `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_sku")
@os_sku.setter
def os_sku(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_sku", value)
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[pulumi.Input[str]]:
"""
The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
"""
return pulumi.get(self, "os_type")
@os_type.setter
def os_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_type", value)
@property
@pulumi.getter(name="podSubnetId")
def pod_subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "pod_subnet_id")
@pod_subnet_id.setter
def pod_subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "pod_subnet_id", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[str]]:
"""
The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter(name="proximityPlacementGroupId")
def proximity_placement_group_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "proximity_placement_group_id")
@proximity_placement_group_id.setter
def proximity_placement_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "proximity_placement_group_id", value)
@property
@pulumi.getter(name="spotMaxPrice")
def spot_max_price(self) -> Optional[pulumi.Input[float]]:
"""
The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "spot_max_price")
@spot_max_price.setter
def spot_max_price(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "spot_max_price", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="ultraSsdEnabled")
def ultra_ssd_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Used to specify whether the UltraSSD is enabled in the Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
"""
return pulumi.get(self, "ultra_ssd_enabled")
@ultra_ssd_enabled.setter
def ultra_ssd_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "ultra_ssd_enabled", value)
@property
@pulumi.getter(name="upgradeSettings")
def upgrade_settings(self) -> Optional[pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs']]:
"""
        An `upgrade_settings` block as documented below.
"""
return pulumi.get(self, "upgrade_settings")
@upgrade_settings.setter
def upgrade_settings(self, value: Optional[pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs']]):
pulumi.set(self, "upgrade_settings", value)
@property
@pulumi.getter(name="vnetSubnetId")
def vnet_subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Subnet where this Node Pool should exist.
"""
return pulumi.get(self, "vnet_subnet_id")
@vnet_subnet_id.setter
def vnet_subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vnet_subnet_id", value)
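The args class above carries the inputs for the `KubernetesClusterNodePool` resource: only `kubernetes_cluster_id` and `vm_size` are required, everything else is optional. A minimal, hypothetical usage sketch follows; the cluster ID is a placeholder, and in a real program it would come from a `KubernetesCluster` resource's `id` output.

```python
import pulumi_azure as azure

# Hypothetical sketch: attach an auto-scaling user node pool to an existing
# AKS cluster. The resource name "example" and the cluster ID are
# illustrative placeholders, not values from this SDK.
example_pool = azure.containerservice.KubernetesClusterNodePool(
    "example",
    kubernetes_cluster_id="<existing AKS cluster resource ID>",
    vm_size="Standard_DS2_v2",
    enable_auto_scaling=True,
    min_count=1,      # lower bound for the cluster autoscaler
    max_count=3,      # upper bound; must be >= min_count
    node_labels={"workload": "example"},
)
```

Note that `min_count`/`max_count` are only meaningful when `enable_auto_scaling` is `True`; with autoscaling off, `node_count` fixes the pool size instead.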
@pulumi.input_type
class _KubernetesClusterNodePoolState:
def __init__(__self__, *,
availability_zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enable_auto_scaling: Optional[pulumi.Input[bool]] = None,
enable_host_encryption: Optional[pulumi.Input[bool]] = None,
enable_node_public_ip: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
fips_enabled: Optional[pulumi.Input[bool]] = None,
kubelet_config: Optional[pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs']] = None,
kubelet_disk_type: Optional[pulumi.Input[str]] = None,
kubernetes_cluster_id: Optional[pulumi.Input[str]] = None,
linux_os_config: Optional[pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs']] = None,
max_count: Optional[pulumi.Input[int]] = None,
max_pods: Optional[pulumi.Input[int]] = None,
min_count: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
node_count: Optional[pulumi.Input[int]] = None,
node_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
node_public_ip_prefix_id: Optional[pulumi.Input[str]] = None,
node_taints: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
orchestrator_version: Optional[pulumi.Input[str]] = None,
os_disk_size_gb: Optional[pulumi.Input[int]] = None,
os_disk_type: Optional[pulumi.Input[str]] = None,
os_sku: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
pod_subnet_id: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[str]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
spot_max_price: Optional[pulumi.Input[float]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ultra_ssd_enabled: Optional[pulumi.Input[bool]] = None,
upgrade_settings: Optional[pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs']] = None,
vm_size: Optional[pulumi.Input[str]] = None,
vnet_subnet_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering KubernetesClusterNodePool resources.
:param pulumi.Input[Sequence[pulumi.Input[str]]] availability_zones: A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
:param pulumi.Input[bool] enable_auto_scaling: Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
:param pulumi.Input[bool] enable_host_encryption: Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
:param pulumi.Input[bool] enable_node_public_ip: Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[str] eviction_policy: The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
:param pulumi.Input[bool] fips_enabled: Should the nodes in this Node Pool have Federal Information Processing Standard (FIPS) enabled? Changing this forces a new resource to be created.
:param pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs'] kubelet_config: A `kubelet_config` block as defined below.
:param pulumi.Input[str] kubelet_disk_type: The type of disk used by the kubelet. The only possible value is `OS`.
:param pulumi.Input[str] kubernetes_cluster_id: The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs'] linux_os_config: A `linux_os_config` block as defined below.
:param pulumi.Input[int] max_count: The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
:param pulumi.Input[int] max_pods: The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
:param pulumi.Input[int] min_count: The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
:param pulumi.Input[str] mode: Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
:param pulumi.Input[str] name: The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
:param pulumi.Input[int] node_count: The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools, and the value must lie between `min_count` and `max_count`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] node_labels: A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
:param pulumi.Input[str] node_public_ip_prefix_id: Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. This requires `enable_node_public_ip` to be `true`. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[str]]] node_taints: A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
:param pulumi.Input[str] orchestrator_version: Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
:param pulumi.Input[int] os_disk_size_gb: The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_disk_type: The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_sku: The `OsSKU` to be used for the nodes; only applicable when `os_type` is `Linux`. Possible values include `Ubuntu` and `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
:param pulumi.Input[str] pod_subnet_id: The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
:param pulumi.Input[float] spot_max_price: The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (pay up to the current on-demand price) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
:param pulumi.Input[bool] ultra_ssd_enabled: Specifies whether UltraSSD is enabled in this Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
:param pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs'] upgrade_settings: An `upgrade_settings` block as documented below.
:param pulumi.Input[str] vm_size: The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
:param pulumi.Input[str] vnet_subnet_id: The ID of the Subnet where this Node Pool should exist.
"""
if availability_zones is not None:
pulumi.set(__self__, "availability_zones", availability_zones)
if enable_auto_scaling is not None:
pulumi.set(__self__, "enable_auto_scaling", enable_auto_scaling)
if enable_host_encryption is not None:
pulumi.set(__self__, "enable_host_encryption", enable_host_encryption)
if enable_node_public_ip is not None:
pulumi.set(__self__, "enable_node_public_ip", enable_node_public_ip)
if eviction_policy is not None:
pulumi.set(__self__, "eviction_policy", eviction_policy)
if fips_enabled is not None:
pulumi.set(__self__, "fips_enabled", fips_enabled)
if kubelet_config is not None:
pulumi.set(__self__, "kubelet_config", kubelet_config)
if kubelet_disk_type is not None:
pulumi.set(__self__, "kubelet_disk_type", kubelet_disk_type)
if kubernetes_cluster_id is not None:
pulumi.set(__self__, "kubernetes_cluster_id", kubernetes_cluster_id)
if linux_os_config is not None:
pulumi.set(__self__, "linux_os_config", linux_os_config)
if max_count is not None:
pulumi.set(__self__, "max_count", max_count)
if max_pods is not None:
pulumi.set(__self__, "max_pods", max_pods)
if min_count is not None:
pulumi.set(__self__, "min_count", min_count)
if mode is not None:
pulumi.set(__self__, "mode", mode)
if name is not None:
pulumi.set(__self__, "name", name)
if node_count is not None:
pulumi.set(__self__, "node_count", node_count)
if node_labels is not None:
pulumi.set(__self__, "node_labels", node_labels)
if node_public_ip_prefix_id is not None:
pulumi.set(__self__, "node_public_ip_prefix_id", node_public_ip_prefix_id)
if node_taints is not None:
pulumi.set(__self__, "node_taints", node_taints)
if orchestrator_version is not None:
pulumi.set(__self__, "orchestrator_version", orchestrator_version)
if os_disk_size_gb is not None:
pulumi.set(__self__, "os_disk_size_gb", os_disk_size_gb)
if os_disk_type is not None:
pulumi.set(__self__, "os_disk_type", os_disk_type)
if os_sku is not None:
pulumi.set(__self__, "os_sku", os_sku)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if pod_subnet_id is not None:
pulumi.set(__self__, "pod_subnet_id", pod_subnet_id)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if proximity_placement_group_id is not None:
pulumi.set(__self__, "proximity_placement_group_id", proximity_placement_group_id)
if spot_max_price is not None:
pulumi.set(__self__, "spot_max_price", spot_max_price)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if ultra_ssd_enabled is not None:
pulumi.set(__self__, "ultra_ssd_enabled", ultra_ssd_enabled)
if upgrade_settings is not None:
pulumi.set(__self__, "upgrade_settings", upgrade_settings)
if vm_size is not None:
pulumi.set(__self__, "vm_size", vm_size)
if vnet_subnet_id is not None:
pulumi.set(__self__, "vnet_subnet_id", vnet_subnet_id)
@property
@pulumi.getter(name="availabilityZones")
def availability_zones(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "availability_zones")
@availability_zones.setter
def availability_zones(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "availability_zones", value)
@property
@pulumi.getter(name="enableAutoScaling")
def enable_auto_scaling(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
"""
return pulumi.get(self, "enable_auto_scaling")
@enable_auto_scaling.setter
def enable_auto_scaling(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_auto_scaling", value)
@property
@pulumi.getter(name="enableHostEncryption")
def enable_host_encryption(self) -> Optional[pulumi.Input[bool]]:
"""
Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
"""
return pulumi.get(self, "enable_host_encryption")
@enable_host_encryption.setter
def enable_host_encryption(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_host_encryption", value)
@property
@pulumi.getter(name="enableNodePublicIp")
def enable_node_public_ip(self) -> Optional[pulumi.Input[bool]]:
"""
Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "enable_node_public_ip")
@enable_node_public_ip.setter
def enable_node_public_ip(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_node_public_ip", value)
@property
@pulumi.getter(name="evictionPolicy")
def eviction_policy(self) -> Optional[pulumi.Input[str]]:
"""
The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "eviction_policy")
@eviction_policy.setter
def eviction_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "eviction_policy", value)
@property
@pulumi.getter(name="fipsEnabled")
def fips_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Should the nodes in this Node Pool have Federal Information Processing Standard (FIPS) enabled? Changing this forces a new resource to be created.
"""
return pulumi.get(self, "fips_enabled")
@fips_enabled.setter
def fips_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "fips_enabled", value)
@property
@pulumi.getter(name="kubeletConfig")
def kubelet_config(self) -> Optional[pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs']]:
"""
A `kubelet_config` block as defined below.
"""
return pulumi.get(self, "kubelet_config")
@kubelet_config.setter
def kubelet_config(self, value: Optional[pulumi.Input['KubernetesClusterNodePoolKubeletConfigArgs']]):
pulumi.set(self, "kubelet_config", value)
@property
@pulumi.getter(name="kubeletDiskType")
def kubelet_disk_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of disk used by the kubelet. The only possible value is `OS`.
"""
return pulumi.get(self, "kubelet_disk_type")
@kubelet_disk_type.setter
def kubelet_disk_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kubelet_disk_type", value)
@property
@pulumi.getter(name="kubernetesClusterId")
def kubernetes_cluster_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "kubernetes_cluster_id")
@kubernetes_cluster_id.setter
def kubernetes_cluster_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kubernetes_cluster_id", value)
@property
@pulumi.getter(name="linuxOsConfig")
def linux_os_config(self) -> Optional[pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs']]:
"""
A `linux_os_config` block as defined below.
"""
return pulumi.get(self, "linux_os_config")
@linux_os_config.setter
def linux_os_config(self, value: Optional[pulumi.Input['KubernetesClusterNodePoolLinuxOsConfigArgs']]):
pulumi.set(self, "linux_os_config", value)
@property
@pulumi.getter(name="maxCount")
def max_count(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
"""
return pulumi.get(self, "max_count")
@max_count.setter
def max_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_count", value)
@property
@pulumi.getter(name="maxPods")
def max_pods(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "max_pods")
@max_pods.setter
def max_pods(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_pods", value)
@property
@pulumi.getter(name="minCount")
def min_count(self) -> Optional[pulumi.Input[int]]:
"""
The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
"""
return pulumi.get(self, "min_count")
@min_count.setter
def min_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_count", value)
@property
@pulumi.getter
def mode(self) -> Optional[pulumi.Input[str]]:
"""
Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
"""
return pulumi.get(self, "mode")
@mode.setter
def mode(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mode", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="nodeCount")
def node_count(self) -> Optional[pulumi.Input[int]]:
"""
The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools, and the value must lie between `min_count` and `max_count`.
"""
return pulumi.get(self, "node_count")
@node_count.setter
def node_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "node_count", value)
@property
@pulumi.getter(name="nodeLabels")
def node_labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_labels")
@node_labels.setter
def node_labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "node_labels", value)
@property
@pulumi.getter(name="nodePublicIpPrefixId")
def node_public_ip_prefix_id(self) -> Optional[pulumi.Input[str]]:
"""
Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. This requires `enable_node_public_ip` to be `true`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_public_ip_prefix_id")
@node_public_ip_prefix_id.setter
def node_public_ip_prefix_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "node_public_ip_prefix_id", value)
@property
@pulumi.getter(name="nodeTaints")
def node_taints(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_taints")
@node_taints.setter
def node_taints(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "node_taints", value)
@property
@pulumi.getter(name="orchestratorVersion")
def orchestrator_version(self) -> Optional[pulumi.Input[str]]:
"""
Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
"""
return pulumi.get(self, "orchestrator_version")
@orchestrator_version.setter
def orchestrator_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "orchestrator_version", value)
@property
@pulumi.getter(name="osDiskSizeGb")
def os_disk_size_gb(self) -> Optional[pulumi.Input[int]]:
"""
The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_disk_size_gb")
@os_disk_size_gb.setter
def os_disk_size_gb(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "os_disk_size_gb", value)
@property
@pulumi.getter(name="osDiskType")
def os_disk_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_disk_type")
@os_disk_type.setter
def os_disk_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_disk_type", value)
@property
@pulumi.getter(name="osSku")
def os_sku(self) -> Optional[pulumi.Input[str]]:
"""
The `OsSKU` to be used for the nodes; only applicable when `os_type` is `Linux`. Possible values include `Ubuntu` and `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_sku")
@os_sku.setter
def os_sku(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_sku", value)
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[pulumi.Input[str]]:
"""
The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
"""
return pulumi.get(self, "os_type")
@os_type.setter
def os_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "os_type", value)
@property
@pulumi.getter(name="podSubnetId")
def pod_subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "pod_subnet_id")
@pod_subnet_id.setter
def pod_subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "pod_subnet_id", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[str]]:
"""
The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter(name="proximityPlacementGroupId")
def proximity_placement_group_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "proximity_placement_group_id")
@proximity_placement_group_id.setter
def proximity_placement_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "proximity_placement_group_id", value)
@property
@pulumi.getter(name="spotMaxPrice")
def spot_max_price(self) -> Optional[pulumi.Input[float]]:
"""
The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (pay up to the current on-demand price) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "spot_max_price")
@spot_max_price.setter
def spot_max_price(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "spot_max_price", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="ultraSsdEnabled")
def ultra_ssd_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies whether UltraSSD is enabled in this Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
"""
return pulumi.get(self, "ultra_ssd_enabled")
@ultra_ssd_enabled.setter
def ultra_ssd_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "ultra_ssd_enabled", value)
@property
@pulumi.getter(name="upgradeSettings")
def upgrade_settings(self) -> Optional[pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs']]:
"""
An `upgrade_settings` block as documented below.
"""
return pulumi.get(self, "upgrade_settings")
@upgrade_settings.setter
def upgrade_settings(self, value: Optional[pulumi.Input['KubernetesClusterNodePoolUpgradeSettingsArgs']]):
pulumi.set(self, "upgrade_settings", value)
@property
@pulumi.getter(name="vmSize")
def vm_size(self) -> Optional[pulumi.Input[str]]:
"""
The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "vm_size")
@vm_size.setter
def vm_size(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vm_size", value)
@property
@pulumi.getter(name="vnetSubnetId")
def vnet_subnet_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the Subnet where this Node Pool should exist.
"""
return pulumi.get(self, "vnet_subnet_id")
@vnet_subnet_id.setter
def vnet_subnet_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vnet_subnet_id", value)
class KubernetesClusterNodePool(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
availability_zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enable_auto_scaling: Optional[pulumi.Input[bool]] = None,
enable_host_encryption: Optional[pulumi.Input[bool]] = None,
enable_node_public_ip: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
fips_enabled: Optional[pulumi.Input[bool]] = None,
kubelet_config: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolKubeletConfigArgs']]] = None,
kubelet_disk_type: Optional[pulumi.Input[str]] = None,
kubernetes_cluster_id: Optional[pulumi.Input[str]] = None,
linux_os_config: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolLinuxOsConfigArgs']]] = None,
max_count: Optional[pulumi.Input[int]] = None,
max_pods: Optional[pulumi.Input[int]] = None,
min_count: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
node_count: Optional[pulumi.Input[int]] = None,
node_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
node_public_ip_prefix_id: Optional[pulumi.Input[str]] = None,
node_taints: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
orchestrator_version: Optional[pulumi.Input[str]] = None,
os_disk_size_gb: Optional[pulumi.Input[int]] = None,
os_disk_type: Optional[pulumi.Input[str]] = None,
os_sku: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
pod_subnet_id: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[str]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
spot_max_price: Optional[pulumi.Input[float]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ultra_ssd_enabled: Optional[pulumi.Input[bool]] = None,
upgrade_settings: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolUpgradeSettingsArgs']]] = None,
vm_size: Optional[pulumi.Input[str]] = None,
vnet_subnet_id: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
## Import
Kubernetes Cluster Node Pools can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:containerservice/kubernetesClusterNodePool:KubernetesClusterNodePool pool1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.ContainerService/managedClusters/cluster1/agentPools/pool1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] availability_zones: A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
:param pulumi.Input[bool] enable_auto_scaling: Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
:param pulumi.Input[bool] enable_host_encryption: Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
:param pulumi.Input[bool] enable_node_public_ip: Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[str] eviction_policy: The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
:param pulumi.Input[bool] fips_enabled: Should the nodes in this Node Pool have Federal Information Processing Standard (FIPS) enabled? Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolKubeletConfigArgs']] kubelet_config: A `kubelet_config` block as defined below.
:param pulumi.Input[str] kubelet_disk_type: The type of disk used by the kubelet. The only possible value is `OS`.
:param pulumi.Input[str] kubernetes_cluster_id: The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolLinuxOsConfigArgs']] linux_os_config: A `linux_os_config` block as defined below.
:param pulumi.Input[int] max_count: The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
:param pulumi.Input[int] max_pods: The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
:param pulumi.Input[int] min_count: The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
:param pulumi.Input[str] mode: Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
:param pulumi.Input[str] name: The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
:param pulumi.Input[int] node_count: The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools, and the value must lie between `min_count` and `max_count`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] node_labels: A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
:param pulumi.Input[str] node_public_ip_prefix_id: Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. This requires `enable_node_public_ip` to be `true`. Changing this forces a new resource to be created.
:param pulumi.Input[Sequence[pulumi.Input[str]]] node_taints: A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
:param pulumi.Input[str] orchestrator_version: Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
:param pulumi.Input[int] os_disk_size_gb: The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_disk_type: The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_sku: The `OsSKU` to be used for the nodes; only applicable when `os_type` is `Linux`. Possible values include `Ubuntu` and `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
:param pulumi.Input[str] pod_subnet_id: The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
:param pulumi.Input[float] spot_max_price: The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
        :param pulumi.Input[bool] ultra_ssd_enabled: Whether UltraSSD storage is enabled for the Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
        :param pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolUpgradeSettingsArgs']] upgrade_settings: An `upgrade_settings` block as documented below.
:param pulumi.Input[str] vm_size: The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
:param pulumi.Input[str] vnet_subnet_id: The ID of the Subnet where this Node Pool should exist.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: KubernetesClusterNodePoolArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
## Import
Kubernetes Cluster Node Pools can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:containerservice/kubernetesClusterNodePool:KubernetesClusterNodePool pool1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.ContainerService/managedClusters/cluster1/agentPools/pool1
```
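        Alternatively, an existing Node Pool can be adopted from within a program via the `import_` resource option. The snippet below is an illustrative sketch (the `cluster` variable and the resource ID are placeholders, and it only runs inside a Pulumi program):
        ```python
        import pulumi
        import pulumi_azure as azure

        # Adopt the existing agent pool into Pulumi state instead of creating a new one.
        pool1 = azure.containerservice.KubernetesClusterNodePool("pool1",
            kubernetes_cluster_id=cluster.id,
            vm_size="Standard_DS2_v2",
            opts=pulumi.ResourceOptions(import_="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.ContainerService/managedClusters/cluster1/agentPools/pool1"))
        ```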
:param str resource_name: The name of the resource.
:param KubernetesClusterNodePoolArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(KubernetesClusterNodePoolArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
availability_zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enable_auto_scaling: Optional[pulumi.Input[bool]] = None,
enable_host_encryption: Optional[pulumi.Input[bool]] = None,
enable_node_public_ip: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
fips_enabled: Optional[pulumi.Input[bool]] = None,
kubelet_config: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolKubeletConfigArgs']]] = None,
kubelet_disk_type: Optional[pulumi.Input[str]] = None,
kubernetes_cluster_id: Optional[pulumi.Input[str]] = None,
linux_os_config: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolLinuxOsConfigArgs']]] = None,
max_count: Optional[pulumi.Input[int]] = None,
max_pods: Optional[pulumi.Input[int]] = None,
min_count: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
node_count: Optional[pulumi.Input[int]] = None,
node_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
node_public_ip_prefix_id: Optional[pulumi.Input[str]] = None,
node_taints: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
orchestrator_version: Optional[pulumi.Input[str]] = None,
os_disk_size_gb: Optional[pulumi.Input[int]] = None,
os_disk_type: Optional[pulumi.Input[str]] = None,
os_sku: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
pod_subnet_id: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[str]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
spot_max_price: Optional[pulumi.Input[float]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ultra_ssd_enabled: Optional[pulumi.Input[bool]] = None,
upgrade_settings: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolUpgradeSettingsArgs']]] = None,
vm_size: Optional[pulumi.Input[str]] = None,
vnet_subnet_id: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = KubernetesClusterNodePoolArgs.__new__(KubernetesClusterNodePoolArgs)
__props__.__dict__["availability_zones"] = availability_zones
__props__.__dict__["enable_auto_scaling"] = enable_auto_scaling
__props__.__dict__["enable_host_encryption"] = enable_host_encryption
__props__.__dict__["enable_node_public_ip"] = enable_node_public_ip
__props__.__dict__["eviction_policy"] = eviction_policy
__props__.__dict__["fips_enabled"] = fips_enabled
__props__.__dict__["kubelet_config"] = kubelet_config
__props__.__dict__["kubelet_disk_type"] = kubelet_disk_type
if kubernetes_cluster_id is None and not opts.urn:
raise TypeError("Missing required property 'kubernetes_cluster_id'")
__props__.__dict__["kubernetes_cluster_id"] = kubernetes_cluster_id
__props__.__dict__["linux_os_config"] = linux_os_config
__props__.__dict__["max_count"] = max_count
__props__.__dict__["max_pods"] = max_pods
__props__.__dict__["min_count"] = min_count
__props__.__dict__["mode"] = mode
__props__.__dict__["name"] = name
__props__.__dict__["node_count"] = node_count
__props__.__dict__["node_labels"] = node_labels
__props__.__dict__["node_public_ip_prefix_id"] = node_public_ip_prefix_id
__props__.__dict__["node_taints"] = node_taints
__props__.__dict__["orchestrator_version"] = orchestrator_version
__props__.__dict__["os_disk_size_gb"] = os_disk_size_gb
__props__.__dict__["os_disk_type"] = os_disk_type
__props__.__dict__["os_sku"] = os_sku
__props__.__dict__["os_type"] = os_type
__props__.__dict__["pod_subnet_id"] = pod_subnet_id
__props__.__dict__["priority"] = priority
__props__.__dict__["proximity_placement_group_id"] = proximity_placement_group_id
__props__.__dict__["spot_max_price"] = spot_max_price
__props__.__dict__["tags"] = tags
__props__.__dict__["ultra_ssd_enabled"] = ultra_ssd_enabled
__props__.__dict__["upgrade_settings"] = upgrade_settings
if vm_size is None and not opts.urn:
raise TypeError("Missing required property 'vm_size'")
__props__.__dict__["vm_size"] = vm_size
__props__.__dict__["vnet_subnet_id"] = vnet_subnet_id
super(KubernetesClusterNodePool, __self__).__init__(
'azure:containerservice/kubernetesClusterNodePool:KubernetesClusterNodePool',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
availability_zones: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enable_auto_scaling: Optional[pulumi.Input[bool]] = None,
enable_host_encryption: Optional[pulumi.Input[bool]] = None,
enable_node_public_ip: Optional[pulumi.Input[bool]] = None,
eviction_policy: Optional[pulumi.Input[str]] = None,
fips_enabled: Optional[pulumi.Input[bool]] = None,
kubelet_config: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolKubeletConfigArgs']]] = None,
kubelet_disk_type: Optional[pulumi.Input[str]] = None,
kubernetes_cluster_id: Optional[pulumi.Input[str]] = None,
linux_os_config: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolLinuxOsConfigArgs']]] = None,
max_count: Optional[pulumi.Input[int]] = None,
max_pods: Optional[pulumi.Input[int]] = None,
min_count: Optional[pulumi.Input[int]] = None,
mode: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
node_count: Optional[pulumi.Input[int]] = None,
node_labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
node_public_ip_prefix_id: Optional[pulumi.Input[str]] = None,
node_taints: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
orchestrator_version: Optional[pulumi.Input[str]] = None,
os_disk_size_gb: Optional[pulumi.Input[int]] = None,
os_disk_type: Optional[pulumi.Input[str]] = None,
os_sku: Optional[pulumi.Input[str]] = None,
os_type: Optional[pulumi.Input[str]] = None,
pod_subnet_id: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[str]] = None,
proximity_placement_group_id: Optional[pulumi.Input[str]] = None,
spot_max_price: Optional[pulumi.Input[float]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ultra_ssd_enabled: Optional[pulumi.Input[bool]] = None,
upgrade_settings: Optional[pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolUpgradeSettingsArgs']]] = None,
vm_size: Optional[pulumi.Input[str]] = None,
vnet_subnet_id: Optional[pulumi.Input[str]] = None) -> 'KubernetesClusterNodePool':
"""
Get an existing KubernetesClusterNodePool resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
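        For example (an illustrative sketch assuming a running Pulumi program; the resource ID is a placeholder):
        ```python
        # Look up an existing Node Pool by its provider ID without managing it further.
        existing = azure.containerservice.KubernetesClusterNodePool.get("existing-pool",
            id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.ContainerService/managedClusters/cluster1/agentPools/pool1")
        ```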
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] availability_zones: A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
:param pulumi.Input[bool] enable_auto_scaling: Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
:param pulumi.Input[bool] enable_host_encryption: Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
:param pulumi.Input[bool] enable_node_public_ip: Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
:param pulumi.Input[str] eviction_policy: The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
:param pulumi.Input[bool] fips_enabled: Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolKubeletConfigArgs']] kubelet_config: A `kubelet_config` block as defined below.
        :param pulumi.Input[str] kubelet_disk_type: The type of disk used by kubelet. The only possible value is `OS`.
:param pulumi.Input[str] kubernetes_cluster_id: The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolLinuxOsConfigArgs']] linux_os_config: A `linux_os_config` block as defined below.
:param pulumi.Input[int] max_count: The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
:param pulumi.Input[int] max_pods: The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
:param pulumi.Input[int] min_count: The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
:param pulumi.Input[str] mode: Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
:param pulumi.Input[str] name: The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
:param pulumi.Input[int] node_count: The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools and must be a value in the range `min_count` - `max_count`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] node_labels: A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
        :param pulumi.Input[str] node_public_ip_prefix_id: Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. Requires `enable_node_public_ip` to be set to `true`. Changing this forces a new resource to be created.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] node_taints: A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
        :param pulumi.Input[str] orchestrator_version: Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
:param pulumi.Input[int] os_disk_size_gb: The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_disk_type: The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
        :param pulumi.Input[str] os_sku: The OS SKU to use for Linux nodes. Not applicable when the OS type is Windows. Possible values include `Ubuntu` and `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
:param pulumi.Input[str] os_type: The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
:param pulumi.Input[str] pod_subnet_id: The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] priority: The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
:param pulumi.Input[str] proximity_placement_group_id: The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
:param pulumi.Input[float] spot_max_price: The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags to assign to the resource.
        :param pulumi.Input[bool] ultra_ssd_enabled: Whether UltraSSD storage is enabled for the Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
        :param pulumi.Input[pulumi.InputType['KubernetesClusterNodePoolUpgradeSettingsArgs']] upgrade_settings: An `upgrade_settings` block as documented below.
:param pulumi.Input[str] vm_size: The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
:param pulumi.Input[str] vnet_subnet_id: The ID of the Subnet where this Node Pool should exist.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _KubernetesClusterNodePoolState.__new__(_KubernetesClusterNodePoolState)
__props__.__dict__["availability_zones"] = availability_zones
__props__.__dict__["enable_auto_scaling"] = enable_auto_scaling
__props__.__dict__["enable_host_encryption"] = enable_host_encryption
__props__.__dict__["enable_node_public_ip"] = enable_node_public_ip
__props__.__dict__["eviction_policy"] = eviction_policy
__props__.__dict__["fips_enabled"] = fips_enabled
__props__.__dict__["kubelet_config"] = kubelet_config
__props__.__dict__["kubelet_disk_type"] = kubelet_disk_type
__props__.__dict__["kubernetes_cluster_id"] = kubernetes_cluster_id
__props__.__dict__["linux_os_config"] = linux_os_config
__props__.__dict__["max_count"] = max_count
__props__.__dict__["max_pods"] = max_pods
__props__.__dict__["min_count"] = min_count
__props__.__dict__["mode"] = mode
__props__.__dict__["name"] = name
__props__.__dict__["node_count"] = node_count
__props__.__dict__["node_labels"] = node_labels
__props__.__dict__["node_public_ip_prefix_id"] = node_public_ip_prefix_id
__props__.__dict__["node_taints"] = node_taints
__props__.__dict__["orchestrator_version"] = orchestrator_version
__props__.__dict__["os_disk_size_gb"] = os_disk_size_gb
__props__.__dict__["os_disk_type"] = os_disk_type
__props__.__dict__["os_sku"] = os_sku
__props__.__dict__["os_type"] = os_type
__props__.__dict__["pod_subnet_id"] = pod_subnet_id
__props__.__dict__["priority"] = priority
__props__.__dict__["proximity_placement_group_id"] = proximity_placement_group_id
__props__.__dict__["spot_max_price"] = spot_max_price
__props__.__dict__["tags"] = tags
__props__.__dict__["ultra_ssd_enabled"] = ultra_ssd_enabled
__props__.__dict__["upgrade_settings"] = upgrade_settings
__props__.__dict__["vm_size"] = vm_size
__props__.__dict__["vnet_subnet_id"] = vnet_subnet_id
return KubernetesClusterNodePool(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="availabilityZones")
def availability_zones(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
        A list of Availability Zones in which the Nodes in this Node Pool should be created. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "availability_zones")
@property
@pulumi.getter(name="enableAutoScaling")
def enable_auto_scaling(self) -> pulumi.Output[Optional[bool]]:
"""
Whether to enable [auto-scaler](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler). Defaults to `false`.
"""
return pulumi.get(self, "enable_auto_scaling")
@property
@pulumi.getter(name="enableHostEncryption")
def enable_host_encryption(self) -> pulumi.Output[Optional[bool]]:
"""
Should the nodes in this Node Pool have host encryption enabled? Defaults to `false`.
"""
return pulumi.get(self, "enable_host_encryption")
@property
@pulumi.getter(name="enableNodePublicIp")
def enable_node_public_ip(self) -> pulumi.Output[Optional[bool]]:
"""
Should each node have a Public IP Address? Defaults to `false`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "enable_node_public_ip")
@property
@pulumi.getter(name="evictionPolicy")
def eviction_policy(self) -> pulumi.Output[Optional[str]]:
"""
The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "eviction_policy")
@property
@pulumi.getter(name="fipsEnabled")
def fips_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created.
"""
return pulumi.get(self, "fips_enabled")
@property
@pulumi.getter(name="kubeletConfig")
def kubelet_config(self) -> pulumi.Output[Optional['outputs.KubernetesClusterNodePoolKubeletConfig']]:
"""
A `kubelet_config` block as defined below.
"""
return pulumi.get(self, "kubelet_config")
@property
@pulumi.getter(name="kubeletDiskType")
def kubelet_disk_type(self) -> pulumi.Output[str]:
"""
        The type of disk used by kubelet. The only possible value is `OS`.
"""
return pulumi.get(self, "kubelet_disk_type")
@property
@pulumi.getter(name="kubernetesClusterId")
def kubernetes_cluster_id(self) -> pulumi.Output[str]:
"""
The ID of the Kubernetes Cluster where this Node Pool should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "kubernetes_cluster_id")
@property
@pulumi.getter(name="linuxOsConfig")
def linux_os_config(self) -> pulumi.Output[Optional['outputs.KubernetesClusterNodePoolLinuxOsConfig']]:
"""
A `linux_os_config` block as defined below.
"""
return pulumi.get(self, "linux_os_config")
@property
@pulumi.getter(name="maxCount")
def max_count(self) -> pulumi.Output[Optional[int]]:
"""
The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.
"""
return pulumi.get(self, "max_count")
@property
@pulumi.getter(name="maxPods")
def max_pods(self) -> pulumi.Output[int]:
"""
The maximum number of pods that can run on each agent. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "max_pods")
@property
@pulumi.getter(name="minCount")
def min_count(self) -> pulumi.Output[Optional[int]]:
"""
The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.
"""
return pulumi.get(self, "min_count")
@property
@pulumi.getter
def mode(self) -> pulumi.Output[Optional[str]]:
"""
Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
"""
return pulumi.get(self, "mode")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="nodeCount")
def node_count(self) -> pulumi.Output[int]:
"""
The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools and must be a value in the range `min_count` - `max_count`.
"""
return pulumi.get(self, "node_count")
@property
@pulumi.getter(name="nodeLabels")
def node_labels(self) -> pulumi.Output[Mapping[str, str]]:
"""
A map of Kubernetes labels which should be applied to nodes in this Node Pool. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_labels")
@property
@pulumi.getter(name="nodePublicIpPrefixId")
def node_public_ip_prefix_id(self) -> pulumi.Output[Optional[str]]:
"""
        Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. Requires `enable_node_public_ip` to be set to `true`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_public_ip_prefix_id")
@property
@pulumi.getter(name="nodeTaints")
def node_taints(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
        A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
"""
return pulumi.get(self, "node_taints")
@property
@pulumi.getter(name="orchestratorVersion")
def orchestrator_version(self) -> pulumi.Output[str]:
"""
        Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade).
"""
return pulumi.get(self, "orchestrator_version")
@property
@pulumi.getter(name="osDiskSizeGb")
def os_disk_size_gb(self) -> pulumi.Output[int]:
"""
The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_disk_size_gb")
@property
@pulumi.getter(name="osDiskType")
def os_disk_type(self) -> pulumi.Output[Optional[str]]:
"""
The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_disk_type")
@property
@pulumi.getter(name="osSku")
def os_sku(self) -> pulumi.Output[str]:
"""
        The OS SKU to use for Linux nodes. Not applicable when the OS type is Windows. Possible values include `Ubuntu` and `CBLMariner`. Defaults to `Ubuntu`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "os_sku")
@property
@pulumi.getter(name="osType")
def os_type(self) -> pulumi.Output[Optional[str]]:
"""
The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="podSubnetId")
def pod_subnet_id(self) -> pulumi.Output[Optional[str]]:
"""
The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "pod_subnet_id")
@property
@pulumi.getter
def priority(self) -> pulumi.Output[Optional[str]]:
"""
The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "priority")
@property
@pulumi.getter(name="proximityPlacementGroupId")
def proximity_placement_group_id(self) -> pulumi.Output[Optional[str]]:
"""
The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "proximity_placement_group_id")
@property
@pulumi.getter(name="spotMaxPrice")
def spot_max_price(self) -> pulumi.Output[Optional[float]]:
"""
The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "spot_max_price")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A mapping of tags to assign to the resource.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="ultraSsdEnabled")
def ultra_ssd_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
        Whether UltraSSD storage is enabled for the Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/en-us/azure/aks/use-ultra-disks) for more information.
"""
return pulumi.get(self, "ultra_ssd_enabled")
@property
@pulumi.getter(name="upgradeSettings")
def upgrade_settings(self) -> pulumi.Output[Optional['outputs.KubernetesClusterNodePoolUpgradeSettings']]:
"""
        An `upgrade_settings` block as documented below.
"""
return pulumi.get(self, "upgrade_settings")
@property
@pulumi.getter(name="vmSize")
def vm_size(self) -> pulumi.Output[str]:
"""
The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "vm_size")
@property
@pulumi.getter(name="vnetSubnetId")
def vnet_subnet_id(self) -> pulumi.Output[Optional[str]]:
"""
The ID of the Subnet where this Node Pool should exist.
"""
return pulumi.get(self, "vnet_subnet_id")
# telas.py (mjanibelli/not-a-hero, MIT license)
from PPlay.window import *
from PPlay.gameimage import *
from PPlay.sound import *
import cenas
import fase1_plus
def tela_vitoria_fase1_boa():
janela = Window(800, 600)
janela.set_title("Not a Hero!")
mouse = Mouse()
fundo = GameImage("imagens/fase1/sky_cloud.jpg")
texto = GameImage("imagens/fase1/texto_vitoria_bom.png")
botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
botao_continuar.x = janela.width / 2 - 80
botao_continuar.y = janela.height - 100
while True:
if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
cenas.cena_1_boa()
fundo.draw()
botao_continuar.draw()
texto.draw()
janela.update()
def tela_vitoria_fase1_ruim():
janela = Window(800, 600)
janela.set_title("Not a Hero!")
mouse = Mouse()
fundo = GameImage("imagens/fase1/sky_cloud.jpg")
texto = GameImage("imagens/fase1/texto_vitoria_ruim.png")
botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
botao_continuar.x = janela.width / 2 - 80
botao_continuar.y = janela.height - 100
while True:
if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
cenas.cena_1_ruim()
fundo.draw()
botao_continuar.draw()
texto.draw()
janela.update()
def tela_vitoria_fase2_boa():
janela = Window(800, 600)
janela.set_title("Not a Hero!")
mouse = Mouse()
back1 = GameImage("imagens/fase2/background1.png")
back2 = GameImage("imagens/fase2/background2.png")
back3 = GameImage("imagens/fase2/background3.png")
back4 = GameImage("imagens/fase2/background4.png")
texto = GameImage("imagens/fase2/texto_vitoria_bom.png")
botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
botao_continuar.x = janela.width / 2 - 80
botao_continuar.y = janela.height - 100
while True:
if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
cenas.cena_2_boa()
back1.draw()
back2.draw()
back3.draw()
back4.draw()
botao_continuar.draw()
texto.draw()
janela.update()
def tela_vitoria_fase2_ruim():
janela = Window(800, 600)
janela.set_title("Not a Hero!")
mouse = Mouse()
back1 = GameImage("imagens/fase2/background1.png")
back2 = GameImage("imagens/fase2/background2.png")
back3 = GameImage("imagens/fase2/background3.png")
back4 = GameImage("imagens/fase2/background4.png")
texto = GameImage("imagens/fase2/texto_vitoria_ruim.png")
botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
botao_continuar.x = janela.width / 2 - 80
botao_continuar.y = janela.height - 100
while True:
if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
cenas.cena_2_ruim()
back1.draw()
back2.draw()
back3.draw()
back4.draw()
botao_continuar.draw()
texto.draw()
janela.update()
def tela_vitoria_fase3_boa():
janela = Window(800, 600)
janela.set_title("Not a Hero!")
mouse = Mouse()
fundo = GameImage("imagens/fase3/background.jpg")
texto = GameImage("imagens/fase3/texto_vitoria_bom.png")
botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
botao_continuar.x = janela.width / 2 - 80
botao_continuar.y = janela.height - 100
while True:
if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
cenas.cena_3_boa()
fundo.draw()
botao_continuar.draw()
texto.draw()
janela.update()
def tela_vitoria_fase3_ruim():
janela = Window(800, 600)
janela.set_title("Not a Hero!")
mouse = Mouse()
fundo = GameImage("imagens/fase3/background.jpg")
texto = GameImage("imagens/fase3/texto_vitoria_ruim.png")
botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
botao_continuar.x = janela.width / 2 - 80
botao_continuar.y = janela.height - 100
while True:
if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
cenas.cena_3_ruim()
fundo.draw()
botao_continuar.draw()
texto.draw()
janela.update()
def tela_jogo_nao_salvo():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    teclado = janela.get_keyboard()
    mouse = Mouse()
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    aviso = GameImage("imagens/telas/aviso_save_nao_encontrado.png")
    botao_voltar = GameImage("imagens/telas/botao_voltar.png")
    aviso.x = 200
    aviso.y = 150
    botao_voltar.x = 325
    botao_voltar.y = janela.height - 110
    while True:
        # Input handling
        if teclado.key_pressed("esc"):
            break
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_voltar):
            break
        # Drawing
        fundo.draw()
        aviso.draw()
        botao_voltar.draw()
        # Update
        janela.update()


def tela_creditos():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    teclado = janela.get_keyboard()
    mouse = Mouse()
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    texto_creditos = GameImage("imagens/telas/texto_creditos.png")
    botao_voltar = GameImage("imagens/telas/botao_voltar.png")
    botao_voltar.x = janela.width / 2 - 80
    botao_voltar.y = 480
    while True:
        # Input handling
        if teclado.key_pressed("esc"):
            break
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_voltar):
            break
        # Drawing
        fundo.draw()
        texto_creditos.draw()
        botao_voltar.draw()
        # Update
        janela.update()
def tela_creditos_final():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    mouse = Mouse()
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    texto_creditos = GameImage("imagens/telas/texto_creditos.png")
    botao_sair = GameImage("imagens/menu_inicial/botao_sair.png")
    botao_sair.x = janela.width / 2 - 80
    botao_sair.y = 480
    while True:
        # Input handling
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_sair):
            janela.close()
            break
        # Drawing
        fundo.draw()
        texto_creditos.draw()
        botao_sair.draw()
        # Update
        janela.update()


def tela_controles():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    teclado = janela.get_keyboard()
    mouse = Mouse()
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    texto_controles = GameImage("imagens/telas/texto_controles.png")
    botao_voltar = GameImage("imagens/telas/botao_voltar.png")
    botao_voltar.x = janela.width / 2 - 80
    botao_voltar.y = 480
    while True:
        # Input handling
        if teclado.key_pressed("esc"):
            break
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_voltar):
            break
        # Drawing
        fundo.draw()
        texto_controles.draw()
        botao_voltar.draw()
        # Update
        janela.update()
def tela_jogo_nao_fechado():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    teclado = janela.get_keyboard()
    mouse = Mouse()
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    texto_aviso = GameImage("imagens/telas/texto_aviso_new_game_plus.png")
    botao_voltar = GameImage("imagens/telas/botao_voltar.png")
    botao_voltar.x = janela.width / 2 - 80
    botao_voltar.y = 480
    while True:
        # Input handling
        if teclado.key_pressed("esc"):
            break
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_voltar):
            break
        # Drawing
        fundo.draw()
        texto_aviso.draw()
        botao_voltar.draw()
        # Update
        janela.update()


def tela_aviso_ngplus():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    mouse = Mouse()
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    texto_aviso = GameImage("imagens/telas/aviso_ngplus.png")
    botao_continuar = GameImage("imagens/menu_inicial/botao_continuar.png")
    botao_continuar.x = janela.width / 2 - 80
    botao_continuar.y = 480
    while True:
        # Input handling
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_continuar):
            fase1_plus.iniciar_fase1()
        # Drawing
        fundo.draw()
        texto_aviso.draw()
        botao_continuar.draw()
        # Update
        janela.update()
def tela_final_boa():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    mouse = Mouse()
    musica = Sound("musicas/musica_final.ogg")
    musica.set_volume(5)
    musica.set_repeat(True)
    fundo = GameImage("imagens/cenas/fundo_final.jpg")
    texto = GameImage("imagens/cenas/texto_cena_final_boa.png")
    botao_finalizar_jogo = GameImage("imagens/menu_inicial/botao_finalizar_jogo.png")
    botao_finalizar_jogo.x = janela.width / 2 - 80
    botao_finalizar_jogo.y = janela.height - 50
    while True:
        musica.play()
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_finalizar_jogo):
            tela_creditos_final()
        fundo.draw()
        texto.draw()
        botao_finalizar_jogo.draw()
        janela.update()


def tela_final_ruim():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    mouse = Mouse()
    musica = Sound("musicas/musica_final.ogg")
    musica.set_volume(5)
    musica.set_repeat(True)
    fundo = GameImage("imagens/cenas/fundo_inicio.jpg")
    texto = GameImage("imagens/cenas/texto_cena_final_ruim.png")
    botao_finalizar_jogo = GameImage("imagens/menu_inicial/botao_finalizar_jogo.png")
    botao_finalizar_jogo.x = janela.width / 2 - 80
    botao_finalizar_jogo.y = janela.height - 50
    while True:
        musica.play()
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_finalizar_jogo):
            tela_creditos_final()
        fundo.draw()
        texto.draw()
        botao_finalizar_jogo.draw()
        janela.update()
def tela_final_ngplus():
    janela = Window(800, 600)
    janela.set_title("Not a Hero!")
    mouse = Mouse()
    musica = Sound("musicas/musica_final.ogg")
    musica.set_volume(5)
    musica.set_repeat(True)
    fundo = GameImage("imagens/menu_inicial/fundo.png")
    texto = GameImage("imagens/telas/texto_ngplus.png")
    botao_finalizar_jogo = GameImage("imagens/menu_inicial/botao_finalizar_jogo.png")
    botao_finalizar_jogo.x = janela.width / 2 - 80
    botao_finalizar_jogo.y = janela.height - 50
    while True:
        musica.play()
        if mouse.is_button_pressed(1) and mouse.is_over_object(botao_finalizar_jogo):
            tela_creditos_final()
        fundo.draw()
        texto.draw()
        botao_finalizar_jogo.draw()
        janela.update()
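Every `tela_*` screen above positions its button with the same inline arithmetic (`janela.width / 2 - 80`, plus a fixed offset from the bottom edge). As a minimal, framework-free sketch, that placement could be factored into one helper; the function name and the default half-width are hypothetical, not part of the game code:

```python
def posicao_botao(largura_janela, altura_janela,
                  meia_largura_botao=80, margem_inferior=100):
    # Mirrors the pattern used by every screen above:
    # x centred on the window, y a fixed margin above the bottom edge.
    x = largura_janela / 2 - meia_largura_botao
    y = altura_janela - margem_inferior
    return x, y
```

For the 800x600 window used throughout, this yields the same (320, 500) position the screens compute inline.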
# venv/Lib/site-packages/fbs/installer/fedora.py
from fbs.installer.linux import generate_installer_files, run_fpm
def create_installer_fedora():
    generate_installer_files()
    run_fpm('rpm')
#!/usr/bin/env python3
# utils/indigo-service/service/test/api/indigo_test.py
# -*- coding: utf-8 -*-
import json
import os
import unittest
import requests
# @unittest.skip("Skip libraries test case")
class IndigoTestCase(unittest.TestCase):
    def setUp(self):
        service_url = "http://front/v2"
        if (
            "INDIGO_SERVICE_URL" in os.environ
            and len(os.environ["INDIGO_SERVICE_URL"]) > 0
        ):
            service_url = os.environ["INDIGO_SERVICE_URL"]
        self.url_prefix = "{}/indigo".format(service_url)

    def tearDown(self):
        pass

    @staticmethod
    def get_headers(d):
        headers = {
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
        data = json.dumps(d)
        headers["Content-Length"] = len(data)
        return headers, data
    formats = (
        "chemical/x-mdl-molfile",
        "chemical/x-daylight-smiles",
        "chemical/x-cml",
        "chemical/x-inchi",
    )

    aromatized_mols = {
        "chemical/x-mdl-molfile": (
            """
-INDIGO-01000000002D
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 4 0 0 0 0
2 3 4 0 0 0 0
3 4 4 0 0 0 0
4 5 4 0 0 0 0
5 6 4 0 0 0 0
6 1 4 0 0 0 0
M END
""",
"""
-INDIGO-07261614562D
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 4 0 0 0 0
1 3 4 0 0 0 0
2 4 4 0 0 0 0
3 5 4 0 0 0 0
4 6 4 0 0 0 0
5 6 4 0 0 0 0
M END
""",
        ),
        "chemical/x-daylight-smiles": ("c1ccccc1",),
        "chemical/x-cml": (
            """<?xml version="1.0" ?>
<cml>
<molecule title="">
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="A" />
<bond atomRefs2="a1 a2" order="A" />
<bond atomRefs2="a2 a3" order="A" />
<bond atomRefs2="a3 a4" order="A" />
<bond atomRefs2="a4 a5" order="A" />
<bond atomRefs2="a5 a0" order="A" />
</bondArray>
</molecule>
</cml>
""",
"""<?xml version="1.0" ?>
<cml>
<molecule>
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="A" />
<bond atomRefs2="a1 a2" order="A" />
<bond atomRefs2="a2 a3" order="A" />
<bond atomRefs2="a3 a4" order="A" />
<bond atomRefs2="a4 a5" order="A" />
<bond atomRefs2="a5 a0" order="A" />
</bondArray>
</molecule>
</cml>
""",
"""<?xml version="1.0" ?>
<cml>
<molecule>
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="A" />
<bond atomRefs2="a0 a2" order="A" />
<bond atomRefs2="a1 a3" order="A" />
<bond atomRefs2="a2 a4" order="A" />
<bond atomRefs2="a3 a5" order="A" />
<bond atomRefs2="a4 a5" order="A" />
</bondArray>
</molecule>
</cml>
""",
        ),
        "chemical/x-inchi": ("InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H",),
    }

    dearomatized_mols = {
        "chemical/x-mdl-molfile": (
            """
-INDIGO-01000000002D
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
2 3 1 0 0 0 0
3 4 2 0 0 0 0
4 5 1 0 0 0 0
5 6 2 0 0 0 0
6 1 1 0 0 0 0
M END
""",
"""
-INDIGO-07261614382D
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 2 0 0 0 0
3 4 1 0 0 0 0
4 5 2 0 0 0 0
5 6 1 0 0 0 0
6 1 2 0 0 0 0
M END
""",
"""
-INDIGO-07261614432D
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
1 3 1 0 0 0 0
2 4 1 0 0 0 0
3 5 2 0 0 0 0
4 6 2 0 0 0 0
5 6 1 0 0 0 0
M END
""",
        ),
        "chemical/x-daylight-smiles": ("C1C=CC=CC=1", "C1=CC=CC=C1"),
        "chemical/x-cml": (
            """<?xml version="1.0" ?>
<cml>
<molecule title="">
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="2" />
<bond atomRefs2="a1 a2" order="1" />
<bond atomRefs2="a2 a3" order="2" />
<bond atomRefs2="a3 a4" order="1" />
<bond atomRefs2="a4 a5" order="2" />
<bond atomRefs2="a5 a0" order="1" />
</bondArray>
</molecule>
</cml>
""",
"""<?xml version="1.0" ?>
<cml>
<molecule>
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="1" />
<bond atomRefs2="a1 a2" order="2" />
<bond atomRefs2="a2 a3" order="1" />
<bond atomRefs2="a3 a4" order="2" />
<bond atomRefs2="a4 a5" order="1" />
<bond atomRefs2="a5 a0" order="2" />
</bondArray>
</molecule>
</cml>
""",
"""<?xml version="1.0" ?>
<cml>
<molecule>
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="2" />
<bond atomRefs2="a0 a2" order="1" />
<bond atomRefs2="a1 a3" order="1" />
<bond atomRefs2="a2 a4" order="2" />
<bond atomRefs2="a3 a5" order="2" />
<bond atomRefs2="a4 a5" order="1" />
</bondArray>
</molecule>
</cml>
""",
"""<?xml version="1.0" ?>
<cml>
<molecule>
<atomArray>
<atom id="a0" elementType="C" />
<atom id="a1" elementType="C" />
<atom id="a2" elementType="C" />
<atom id="a3" elementType="C" />
<atom id="a4" elementType="C" />
<atom id="a5" elementType="C" />
</atomArray>
<bondArray>
<bond atomRefs2="a0 a1" order="2" />
<bond atomRefs2="a1 a2" order="1" />
<bond atomRefs2="a2 a3" order="2" />
<bond atomRefs2="a3 a4" order="1" />
<bond atomRefs2="a4 a5" order="2" />
<bond atomRefs2="a5 a0" order="1" />
</bondArray>
</molecule>
</cml>
""",
        ),
        "chemical/x-inchi": ("InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H",),
    }

    def test_info(self):
        headers, data = self.get_headers({})
        result = requests.get(
            self.url_prefix + "/info", headers=headers, data=data
        )
        self.assertEqual(200, result.status_code)
        result_data = json.loads(result.text)
        self.assertIn("Indigo", result_data)
    def test_aromatize_correct(self):
        formats = (
            "chemical/x-mdl-molfile",
            "chemical/x-daylight-smiles",
            "chemical/x-cml",
            "chemical/x-inchi",
        )
        for input_format in formats:
            for output_format in formats:
                result = requests.post(
                    self.url_prefix + "/aromatize",
                    headers={
                        "Content-Type": input_format,
                        "Accept": output_format,
                    },
                    data=self.dearomatized_mols[input_format][0],
                )
                self.assertEqual(200, result.status_code)
                self.assertEqual(output_format, result.headers["Content-Type"])
                if output_format in (
                    "chemical/x-mdl-molfile",
                    "chemical/x-mdl-rxnfile",
                ):  # Skip Molfile date
                    self.assertIn(
                        "\n".join(result.text.splitlines()[2:]).strip(),
                        [
                            "\n".join(m.splitlines()[2:]).strip()
                            for m in self.aromatized_mols[output_format]
                        ],
                    )
                else:
                    self.assertIn(
                        result.text, self.aromatized_mols[output_format]
                    )
    def test_aromatize_selected(self):
        headers, data = self.get_headers(
            {
                "struct": "C1C=CC=CC=1.C1C=CC=CC=1",
                "output_format": "chemical/x-daylight-smiles",
                "selected": [0, 1, 2, 3, 4, 5],
            }
        )
        result = requests.post(
            self.url_prefix + "/aromatize", headers=headers, data=data
        )
        self.assertEqual(200, result.status_code)
        result_data = json.loads(result.text)
        self.assertEqual("c1ccccc1.c1ccccc1", result_data["struct"])
    def test_aromatize_selected_2(self):
        headers, data = self.get_headers(
            {
"struct": "\n Ketcher 10071619282D 1 1.00000 0.00000 0\n\n 13 13 0 0 0 999 V2000\n 0.0000 0.8660 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0\n 0.9996 0.8661 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 1.4996 1.7321 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 2.4996 1.7322 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 2.9997 0.8661 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 2.4997 0.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 1.4997 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 5.8651 2.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 4.9990 1.5001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 4.9990 0.5001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 5.8650 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 6.7311 0.5000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 6.7311 1.5000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0\n 1 2 1 0 0 0\n 2 3 4 0 0 0\n 3 4 4 0 0 0\n 4 5 4 0 0 0\n 5 6 4 0 0 0\n 6 7 4 0 0 0\n 7 2 4 0 0 0\n 8 9 2 0 0 0\n 9 10 1 0 0 0\n 10 11 2 0 0 0\n 11 12 1 0 0 0\n 12 13 2 0 0 0\n 13 8 1 0 0 0\nM END\n",
"selected": [7, 8, 9, 10, 11, 12],
"output_format": "chemical/x-daylight-smiles",
"options": {
"smart-layout": True,
"ignore-stereochemistry-errors": True,
},
}
)
result = requests.post(
self.url_prefix + "/aromatize", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual(200, result.status_code)
self.assertEqual("Nc1ccccc1.c1ccccc1", result_data["struct"])
    def test_smiles_wrong(self):
        result = requests.post(
            self.url_prefix + "/aromatize",
            headers={
                "Content-Type": "chemical/x-daylight-smiles",
                "Accept": "chemical/x-daylight-smiles",
            },
            data="c1ccccc2",
        )
        self.assertEqual(400, result.status_code)
        self.assertEqual(
            "IndigoException: molecule auto loader: SMILES loader: unexpected end of input",
            result.text,
        )
    def test_headers_is_rxn(self):
        result = requests.post(
            self.url_prefix + "/aromatize",
            headers={"Content-Type": "chemical/x-daylight-smiles"},
            data="CC",
        )
        self.assertEqual(200, result.status_code)
        self.assertEqual(
            "chemical/x-mdl-molfile", result.headers["Content-Type"]
        )
        result = requests.post(
            self.url_prefix + "/aromatize",
            headers={"Content-Type": "chemical/x-daylight-smiles"},
            data="CC>>CC",
        )
        self.assertEqual(200, result.status_code)
        self.assertEqual(
            "chemical/x-mdl-rxnfile", result.headers["Content-Type"]
        )
    def test_headers_wrong(self):
        # Missing both Content-Type and Accept headers
        result = requests.post(
            self.url_prefix + "/aromatize", headers={}, data="c1ccccc1"
        )
        self.assertEqual(400, result.status_code)
        self.assertIn("'input_format': ['Not a valid choice.']", result.text)
        # Missing Accept header
        result = requests.post(
            self.url_prefix + "/aromatize",
            headers={"Content-Type": "chemical/x-daylight-smiles"},
            data="c1ccccc1",
        )
        self.assertEqual(200, result.status_code)
        # Wrong Content-Type header
        result = requests.post(
            self.url_prefix + "/aromatize",
            headers={
                "Content-Type": "chemical/x-daylight-smiles1",
                "Accept": "chemical/x-daylight-smiles",
            },
            data="c1ccccc1",
        )
        self.assertEqual(400, result.status_code)
        self.assertEqual(
            "ValidationError: {'input_format': ['Not a valid choice.']}",
            result.text,
        )
        # Wrong Accept header
        result = requests.post(
            self.url_prefix + "/aromatize",
            headers={
                "Content-Type": "chemical/x-daylight-smiles",
                "Accept": "chemical/x-daylight-smiles2",
            },
            data="c1ccccc1",
        )
        self.assertEqual(400, result.status_code)
        self.assertEqual(
            "ValidationError: {'output_format': ['Not a valid choice.']}",
            result.text,
        )
    def test_dearomatize_correct(self):
        formats = (
            "chemical/x-mdl-molfile",
            "chemical/x-daylight-smiles",
            "chemical/x-cml",
            "chemical/x-inchi",
        )
        for input_format in formats:
            for output_format in formats:
                result = requests.post(
                    self.url_prefix + "/dearomatize",
                    headers={
                        "Content-Type": input_format,
                        "Accept": output_format,
                    },
                    data=self.aromatized_mols[input_format][0],
                )
                self.assertEqual(200, result.status_code)
                self.assertEqual(output_format, result.headers["Content-Type"])
                if output_format in (
                    "chemical/x-mdl-molfile",
                    "chemical/x-mdl-rxnfile",
                ):  # Skip Molfile date
                    self.assertIn(
                        "\n".join(result.text.splitlines()[2:]),
                        [
                            "\n".join(m.splitlines()[2:])
                            for m in self.dearomatized_mols[output_format]
                        ],
                    )
                else:
                    self.assertIn(
                        result.text, self.dearomatized_mols[output_format]
                    )
    def test_dearomatize_selected(self):
        headers, data = self.get_headers(
            {
                "struct": "c1ccccc1.c1ccccc1",
                "output_format": "chemical/x-daylight-smiles",
                "selected": [0, 1, 2, 3, 4, 5],
            }
        )
        result = requests.post(
            self.url_prefix + "/dearomatize", headers=headers, data=data
        )
        self.assertEqual(200, result.status_code)
        result_data = json.loads(result.text)
        self.assertEqual("C1C=CC=CC=1.C1C=CC=CC=1", result_data["struct"])
    def test_dearomatize_query_molecule(self):
        result = requests.post(
            self.url_prefix + "/dearomatize",
            headers={
                "Content-Type": "chemical/x-daylight-smiles",
                "Accept": "chemical/x-daylight-smiles",
            },
            data="CX",
        )
        self.assertEqual(400, result.status_code)
        self.assertEqual(
            "Structures with query features cannot be dearomatized yet",
            result.text,
        )
    def test_convert_correct(self):
        formats = (
            "chemical/x-mdl-molfile",
            "chemical/x-daylight-smiles",
            "chemical/x-cml",
            "chemical/x-inchi",
        )
        # Test for POST request
        for input_format in formats:
            for output_format in formats:
                result = requests.post(
                    self.url_prefix + "/convert",
                    headers={
                        "Content-Type": input_format,
                        "Accept": output_format,
                    },
                    data=self.dearomatized_mols[input_format][0],
                )
                self.assertEqual(200, result.status_code)
                self.assertEqual(output_format, result.headers["Content-Type"])
                if output_format in (
                    "chemical/x-mdl-molfile",
                    "chemical/x-mdl-rxnfile",
                ):  # Skip Molfile date
                    self.assertIn(
                        "\n".join(result.text.splitlines()[2:]),
                        [
                            "\n".join(m.splitlines()[2:])
                            for m in self.dearomatized_mols[output_format]
                        ],
                    )
                else:
                    self.assertIn(
                        result.text, self.dearomatized_mols[output_format]
                    )
            for output_format in formats:
                # Test for GET request
                result = requests.get(
                    self.url_prefix + "/convert",
                    params={
                        "struct": self.dearomatized_mols[input_format][0],
                        "output_format": output_format,
                    },
                )
                self.assertEqual(200, result.status_code)
                if output_format in (
                    "chemical/x-mdl-molfile",
                    "chemical/x-mdl-rxnfile",
                ):  # Skip Molfile date
                    self.assertIn(
                        "\n".join(result.text.splitlines()[2:]),
                        [
                            "\n".join(m.splitlines()[2:])
                            for m in self.dearomatized_mols[output_format]
                        ],
                    )
                else:
                    self.assertIn(
                        result.text, self.dearomatized_mols[output_format]
                    )
    def test_convert_canonical_smiles(self):
        headers, data = self.get_headers(
            {
                "struct": "C1=CC=CC=C1O",
                "output_format": "chemical/x-daylight-smiles",
            }
        )
        result_standard = requests.post(
            self.url_prefix + "/convert", headers=headers, data=data
        )
        headers, data = self.get_headers(
            {
                "struct": "C1=CC=CC=C1O",
                "output_format": "chemical/x-daylight-smiles",
                "options": {"smiles": "canonical"},
            }
        )
        result_canonical = requests.post(
            self.url_prefix + "/convert", headers=headers, data=data
        )
        self.assertNotEqual(
            json.loads(result_canonical.text)["struct"],
            json.loads(result_standard.text)["struct"],
        )
    def test_convert_smarts(self):
        smarts = [
            "[#8;A]-[!#1]-[#6;A](-[#9])(-[#9])-[#9]",
            "[#6,#1]",
            "[#1,#1]",
            "[#9,#17,#35,#53,#7&A&+,$([OH]-*=[!#6]),+;!#1]",
        ]
        results = []
        results_get = []
        for mol in smarts:
            params = {
                "struct": mol,
                "input_format": "chemical/x-daylight-smarts",
                "output_format": "chemical/x-daylight-smarts",
            }
            headers, data = self.get_headers(params)
            result = requests.post(
                self.url_prefix + "/convert", headers=headers, data=data
            )
            self.assertEqual(200, result.status_code)
            result_data = json.loads(result.text)
            results.append(result_data["struct"])
            result = requests.get(self.url_prefix + "/convert", params=params)
            self.assertEqual(200, result.status_code)
            results_get.append(result.text)
        self.assertEqual(smarts, results)
        self.assertEqual(smarts, results_get)
    def test_convert_name_to_structure(self):
        names = [
            "methane",
            "ethane",
            "propane",
            "butane",
            "ethene",
            "propene",
            "butene",
            "ethyne",
            "propyne",
            "butyne",
            "oct-3-ene",
            "oct-5,3-diene",
            "oct-3-yne",
            "oct-3,5-diyne",
            "3-ethyl-octane",
            "3,5-diethyl-octane",
            "3-methyl-5-ethyl-octane",
            "3-(2,4-dimethyl-pentyl)-octane",
            "3-methyl-5-ethyl-octane",
            "3-(2,4-dimethyl-pentyl)-octane",
            "cyclooctane",
            "cyclooctene",
            "cyclooctyne",
            "3-methyl-5-ethyl-cyclooctane",
            "cyclotetradecane",
            "cyclododeca-1,3,5,7,9,11-hexaene",
        ]
        smiles = [
            "C",
            "CC",
            "CCC",
            "CCCC",
            "C=C",
            "C=CC",
            "C=CCC",
            "C#C",
            "C#CC",
            "C#CCC",
            "CCC=CCCCC",
            "CCC=CC=CCC",
            "CCC#CCCCC",
            "CCC#CC#CCC",
            "CCC(CCCCC)CC",
            "CCC(CC(CCC)CC)CC",
            "CCC(CC(CCC)CC)C",
            "CCC(CCCCC)CC(CC(C)C)C",
            "CCC(CC(CCC)CC)C",
            "CCC(CCCCC)CC(CC(C)C)C",
            "C1CCCCCCC1",
            "C1CCCCCCC=1",
            "C1CCCCCCC#1",
            "C1CCCC(CC)CC(C)C1",
            "C1CCCCCCCCCCCCC1",
            "C1C=CC=CC=CC=CC=CC=1",
        ]
        results = []
        results_get = []
        for name in names:
            params = {
                "struct": name,
                "input_format": "chemical/x-iupac",
                "output_format": "chemical/x-daylight-smiles",
            }
            # POST
            headers, data = self.get_headers(params)
            result = requests.post(
                self.url_prefix + "/convert", headers=headers, data=data
            )
            self.assertEqual(200, result.status_code)
            result_data = json.loads(result.text)
            results.append(result_data["struct"])
            # GET
            result = requests.get(self.url_prefix + "/convert", params=params)
            results_get.append(result.text)
        self.assertEqual(smiles, results)
        self.assertEqual(smiles, results_get)
    def test_convert_utf8(self):
        text = """
Ketcher 02051318482D 1 1.00000 0.00000 0
5 4 0 0 0 999 V2000
-4.1250 -8.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.2590 -8.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.3929 -8.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.5269 -8.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.6609 -8.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
4 5 1 0 0 0
M STY 1 1 DAT
M SLB 1 1 1
M SAL 1 2 4 5
M SDT 1 single-name F
M SDD 1 1.6314 -1.1000 DR ALL 1 1
M SED 1 single-value-бензол
M END
"""
answ = """$MDL REV 1
$MOL
$HDR
$END HDR
$CTAB
15 14 0 0 0 999 V2000
6.6250 -7.3500 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
7.4910 -7.8500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.3571 -7.3500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.2231 -7.8500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.0891 -7.3500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.9551 -7.8500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.8212 -7.3500 0.0000 C 0 0 0 4 0 0 0 0 0 0 0 0
12.6872 -7.8500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
13.5532 -7.3500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
14.4192 -7.8500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
15.2853 -7.3500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
16.1513 -7.8500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 1
17.0173 -7.3500 0.0000 C 0 0 0 0 0 0 0 0 0 0 1 0
17.8833 -7.8500 0.0000 L 0 0 0 0 0 0 0 0 0 0 0 0
15.2853 -6.3500 0.0000 L 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
4 5 1 0 0 1
5 6 1 0 0 0
6 7 1 0 1 0
7 8 1 0 0 0
8 9 1 0 2 0
9 10 1 0 0 0
10 11 1 0 0 0
11 12 1 0 0 0
12 13 1 0 0 0
13 14 1 0 0 0
11 15 1 0 0 0
M RGP 1 1 1
M RBC 1 5 -2
M SUB 1 9 -2
M UNS 1 3 1
M ALS 14 3 F B Si As
M ALS 15 3 T N P As
M END
$END CTAB
$RGP
1
$CTAB
6 6 0 0 0 999 V2000
7.9000 -10.9000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.7660 -11.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.7660 -12.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.9000 -12.9000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.0340 -12.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.0340 -11.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 2 0 0 0
3 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
6 1 2 0 0 0
M END
$END CTAB
$END RGP
$END MOL
"""
        # POST
        result = requests.post(
            self.url_prefix + "/convert",
            headers={
                "Content-Type": "chemical/x-mdl-molfile",
                "Accept": "chemical/x-cml",
            },
            data=text.encode("utf-8"),
        )
        self.assertEqual(200, result.status_code)
        result = requests.post(
            self.url_prefix + "/convert",
            headers={
                "Content-Type": "chemical/x-mdl-molfile",
                "Accept": "chemical/x-cml",
            },
            data=answ,
        )
        self.assertEqual(200, result.status_code)
        # GET
        result = requests.get(
            self.url_prefix + "/convert",
            params={"struct": text.encode("utf-8")},
        )
        self.assertEqual(200, result.status_code)
        result = requests.get(
            self.url_prefix + "/convert", params={"struct": answ}
        )
        self.assertEqual(200, result.status_code)
    def test_layout(self):
        result = requests.post(
            self.url_prefix + "/layout",
            headers={
                "Content-Type": "chemical/x-daylight-smiles",
                "Accept": "chemical/x-mdl-molfile",
            },
            data="C1=CC=CC=C1",
        )
        self.assertEqual(200, result.status_code)
        self.assertEqual(
            """
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.3856 -0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.3856 -2.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 -3.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.3856 -2.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.3856 -0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
2 3 1 0 0 0 0
3 4 2 0 0 0 0
4 5 1 0 0 0 0
5 6 2 0 0 0 0
6 1 1 0 0 0 0
M END""",
"\n".join(result.text.splitlines()[2:]),
)
    def test_layout_selective(self):
        headers, data = self.get_headers(
            {
                "struct": "CCC",
                "output_format": "chemical/x-mdl-molfile",
                "selected": [1, 2],
            }
        )
        result = requests.post(
            self.url_prefix + "/layout", headers=headers, data=data
        )
        result_data = json.loads(result.text)
        self.assertEqual(200, result.status_code)
        self.assertEqual(
            """
3 2 0 0 0 0 0 0 0 0999 V2000
-1.3856 0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.3856 0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
M END""",
"\n".join(result_data["struct"].splitlines()[2:]),
)
    def test_layout_selective_reaction(self):
        headers, data = self.get_headers(
            {
                "struct": """$RXN
2 1 0
$MOL
Ketcher 10071615322D 1 1.00000 0.00000 0
6 6 0 0 0 999 V2000
0.5450 0.6292 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0001 0.3146 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 -0.3146 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.5449 -0.6292 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.0898 -0.3146 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.0898 0.3146 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 2 0 0 0
3 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
6 1 2 0 0 0
M END
$MOL
Ketcher 10071615322D 1 1.00000 0.00000 0
12 13 0 0 0 999 V2000
3.0898 -0.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.7135 1.0801 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.9608 1.0803 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5845 0.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.9609 -1.0803 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.3636 -3.8303 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.8313 0.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.3519 0.9017 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.3931 0.9017 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.9137 0.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
12.0931 -4.9516 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.3519 -0.9016 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 2 0 0 0
3 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
6 1 2 0 0 0
4 7 1 0 0 0
7 8 1 0 0 0
8 9 2 0 0 0
9 10 1 0 0 0
10 11 2 0 0 0
11 12 1 0 0 0
12 7 2 0 0 0
M END
$MOL
Ketcher 10071615322D 1 1.00000 0.00000 0
6 6 0 0 0 999 V2000
16.4754 0.9017 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
15.4343 0.9017 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
14.9137 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
15.4343 -0.9017 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
16.4754 -0.9017 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
16.9960 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 2 0 0 0
3 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
6 1 2 0 0 0
M END
""",
"selected": [5, 6],
"output_format": "chemical/x-mdl-rxnfile",
"options": {"molfile-saving-skip-date": "1"},
}
)
result = requests.post(
self.url_prefix + "/layout", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual(200, result.status_code)
self.assertEqual(
"""$RXN
-INDIGO- 0100000000
2 1
$MOL
-INDIGO-01000000002D
6 6 0 0 0 0 0 0 0 0999 V2000
1.3856 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 -0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 -2.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.3856 -3.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.7713 -2.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.7713 -0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 2 0 0 0 0
3 4 1 0 0 0 0
4 5 2 0 0 0 0
5 6 1 0 0 0 0
6 1 2 0 0 0 0
M END
$MOL
-INDIGO-01000000002D
12 13 0 0 0 0 0 0 0 0999 V2000
8.2513 -1.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.0513 -0.2144 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.6513 -0.2144 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.4513 -1.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.6513 -2.9856 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.0513 -2.9856 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
13.0513 -1.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
13.8513 -0.2144 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
15.4513 -0.2144 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
16.2513 -1.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
15.4513 -2.9856 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
13.8513 -2.9856 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 2 0 0 0 0
3 4 1 0 0 0 0
4 5 2 0 0 0 0
5 6 1 0 0 0 0
6 1 2 0 0 0 0
4 7 1 0 0 0 0
7 8 1 0 0 0 0
8 9 2 0 0 0 0
9 10 1 0 0 0 0
10 11 2 0 0 0 0
11 12 1 0 0 0 0
12 7 2 0 0 0 0
M END
$MOL
-INDIGO-01000000002D
6 6 0 0 0 0 0 0 0 0999 V2000
23.1169 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.7313 -0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.7313 -2.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
23.1169 -3.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
24.5026 -2.4000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
24.5026 -0.8000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 2 0 0 0 0
3 4 1 0 0 0 0
4 5 2 0 0 0 0
5 6 1 0 0 0 0
6 1 2 0 0 0 0
M END
""",
result_data["struct"],
)
def test_clean(self):
result = requests.post(
self.url_prefix + "/clean",
headers={
"Content-Type": "chemical/x-daylight-smiles",
"Accept": "chemical/x-mdl-molfile",
},
data="C1=CC=CC=C1",
)
self.assertEqual(200, result.status_code)
self.assertEqual(
"""
6 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
2 3 1 0 0 0 0
3 4 2 0 0 0 0
4 5 1 0 0 0 0
5 6 2 0 0 0 0
6 1 1 0 0 0 0
M END""",
"\n".join(result.text.splitlines()[2:]),
)
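The comparison above joins `result.text.splitlines()[2:]` because the second line of a V2000 molfile carries the writing program's name and a timestamp, which differ between writers and runs. A minimal, self-contained illustration (using a hypothetical two-atom molfile, not one of the fixtures above):

```python
# The second line of a V2000 molfile embeds the program name and a
# timestamp, so byte-for-byte comparisons must skip the header lines.
molfile = """ethane
  -INDIGO-01000000002D

  2  1  0  0  0  0  0  0  0  0999 V2000
    0.0000    0.0000    0.0000 C   0  0
    1.0000    0.0000    0.0000 C   0  0
  1  2  1  0
M  END"""

# The same structure as written by another program at another time.
other = molfile.replace("-INDIGO-01000000002D", "Ketcher 10071615322D")

def strip_header(text):
    # Drop the title line and the program/timestamp line.
    return "\n".join(text.splitlines()[2:])

assert molfile != other                              # headers differ
assert strip_header(molfile) == strip_header(other)  # bodies match
```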
def test_automap_no_header(self):
result = requests.post(
self.url_prefix + "/automap",
headers={
"Content-Type": "chemical/x-daylight-smiles",
"Accept": "chemical/x-daylight-smiles",
},
data="C>>C",
)
self.assertEqual(200, result.status_code)
self.assertEqual("[CH4:1]>>[CH4:1]", result.text)
def test_automap_correct_header(self):
result = requests.post(
self.url_prefix + "/automap",
headers={
"Content-Type": "chemical/x-daylight-smiles",
"Accept": "chemical/x-daylight-smiles",
},
data="C>>C",
)
self.assertEqual(200, result.status_code)
self.assertEqual("[CH4:1]>>[CH4:1]", result.text)
headers, data = self.get_headers(
{
"struct": "C>>C",
"output_format": "chemical/x-daylight-smiles",
"mode": "discard",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("[CH4:1]>>[CH4:1]", result_data["struct"])
headers, data = self.get_headers(
{
"struct": "C>>C",
"output_format": "chemical/x-daylight-smiles",
"mode": "keep",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("[CH4:1]>>[CH4:1]", result_data["struct"])
headers, data = self.get_headers(
{
"struct": "C>>C",
"output_format": "chemical/x-daylight-smiles",
"mode": "alter",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("[CH4:1]>>[CH4:1]", result_data["struct"])
headers, data = self.get_headers(
{
"struct": "C>>C",
"output_format": "chemical/x-daylight-smiles",
"mode": "clear",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C>>C", result_data["struct"])
def test_automap_wrong_header(self):
headers, data = self.get_headers(
{
"struct": "C>>C",
"output_format": "chemical/x-daylight-smiles",
"mode": "wrong_mode",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(400, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Not a valid choice.", "".join(result_data["error"]["mode"])
)
def test_automap_wrong_reaction(self):
headers, data = self.get_headers(
{
"struct": "C>C",
"output_format": "chemical/x-daylight-smiles",
"mode": "discard",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(400, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"IndigoException: molecule auto loader: SMILES loader: invalid character within atom description: '>'",
result_data["error"],
)
def test_automap_molecule_instead_of_reaction(self):
headers, data = self.get_headers(
{
"struct": "C",
"output_format": "chemical/x-daylight-smiles",
"mode": "discard",
}
)
result = requests.post(
self.url_prefix + "/automap", headers=headers, data=data
)
self.assertEqual(400, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"IndigoException: core: <molecule> is not a base reaction",
result_data["error"],
)
def test_calculate_cip_correct(self):
result = requests.post(
self.url_prefix + "/calculate_cip",
headers={
"Content-Type": "chemical/x-mdl-molfile",
"Accept": "chemical/x-mdl-molfile",
},
data="""
Ketcher 07261618302D 1 1.00000 0.00000 0
12 12 0 0 0 999 V2000
9.3770 -13.9546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.2427 -13.4548 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.2427 -12.4552 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.1091 -13.9546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9749 -13.4548 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9749 -12.4552 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.1091 -11.9546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.1091 -14.9550 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9749 -15.4548 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
10.2427 -11.4546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.1091 -10.9548 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
9.2427 -12.4552 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0
2 3 1 0 0 0
3 10 1 1 0 0
3 7 1 0 0 0
7 11 1 6 0 0
2 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
6 7 1 0 0 0
4 8 1 0 0 0
8 9 1 0 0 0
3 12 1 0 0 0
M END
""",
)
self.assertEqual(200, result.status_code)
self.assertEqual(
"""
12 12 0 0 0 0 0 0 0 0999 V2000
9.3770 -13.9546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.2427 -13.4548 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.2427 -12.4552 0.0000 C 0 0 2 0 0 0 0 0 0 0 0 0
11.1091 -13.9546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9749 -13.4548 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9749 -12.4552 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.1091 -11.9546 0.0000 C 0 0 1 0 0 0 0 0 0 0 0 0
11.1091 -14.9550 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9749 -15.4548 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
10.2427 -11.4546 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.1091 -10.9548 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
9.2427 -12.4552 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
2 3 1 0 0 0 0
3 10 1 1 0 0 0
3 7 1 0 0 0 0
7 11 1 6 0 0 0
2 4 1 0 0 0 0
4 5 2 0 0 0 0
5 6 1 0 0 0 0
6 7 1 0 0 0 0
4 8 1 0 0 0 0
8 9 1 0 0 0 0
3 12 1 0 0 0 0
M STY 2 1 DAT 2 DAT
M SLB 2 1 1 2 2
M SAL 1 1 3
M SDT 1 INDIGO_CIP_DESC
M SDD 1 0.0000 0.0000 DR ALL 1 1
M SED 1 (s)
M SAL 2 1 7
M SDT 2 INDIGO_CIP_DESC
M SDD 2 0.0000 0.0000 DR ALL 1 1
M SED 2 (R)
M END""",
"\n".join([s.rstrip() for s in result.text.splitlines()[2:]]),
)
def test_render(self):
result = requests.post(
self.url_prefix + "/render",
headers={
"Content-Type": "chemical/x-daylight-smiles",
"Accept": "image/svg+xml",
},
data="C",
)
self.assertEqual(200, result.status_code)
self.assertEqual("image/svg+xml", result.headers["Content-Type"])
headers, data = self.get_headers({"struct": "C"})
result = requests.post(
self.url_prefix + "/render", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
self.assertEqual("image/svg+xml", result.headers["Content-Type"])
headers, data = self.get_headers(
{"struct": "C", "output_format": "application/pdf"}
)
result = requests.post(
self.url_prefix + "/render", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
self.assertEqual("application/pdf", result.headers["Content-Type"])
headers, data = self.get_headers(
{"struct": "C", "output_format": "image/png"}
)
result = requests.post(
self.url_prefix + "/render", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
self.assertEqual("image/png", result.headers["Content-Type"])
# GET
data = {"struct": "c1ccccc1", "output_format": "image/png"}
result = requests.get(self.url_prefix + "/render", params=data)
self.assertEqual(200, result.status_code)
data = {"struct": "c1ccccc1", "output_format": "image/svg+xml"}
result = requests.get(self.url_prefix + "/render", params=data)
self.assertEqual(200, result.status_code)
data = {"struct": "c1ccccc1", "output_format": "application/pdf"}
result = requests.get(self.url_prefix + "/render", params=data)
self.assertEqual(200, result.status_code)
def test_renderhighlight(self):
params = {"struct": "C1=CC=CC=C1", "query": "C"}
headers, data = self.get_headers(params)
# POST
result = requests.post(
self.url_prefix + "/render", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
self.assertEqual("image/svg+xml", result.headers["Content-Type"])
# GET
result = requests.get(self.url_prefix + "/render", params=params)
self.assertEqual(200, result.status_code)
def test_render_exceptions(self):
# either query or structure should be present
headers, data = self.get_headers({})
result = requests.post(
self.url_prefix + "/render", headers=headers, data=data
)
self.assertEqual(400, result.status_code)
result_data = json.loads(result.text)
self.assertIn("_schema", result_data["error"])
# GET
result = requests.get(self.url_prefix + "/render", params={})
self.assertEqual(400, result.status_code)
# render format is wrong
headers, data = self.get_headers(
{"struct": "C", "output_format": "foo"}
)
result = requests.post(
self.url_prefix + "/render", headers=headers, data=data
)
self.assertEqual(400, result.status_code)
result_data = json.loads(result.text)
self.assertIn("output_format", result_data["error"])
# GET
result = requests.get(
self.url_prefix + "/render",
params={"struct": "C", "output_format": "foo"},
)
self.assertEqual(400, result.status_code)
def test_json_aromatize_correct(self):
formats = (
"chemical/x-mdl-molfile",
"chemical/x-daylight-smiles",
"chemical/x-cml",
"chemical/x-inchi",
)
for input_format in formats:
for output_format in formats:
headers, data = self.get_headers(
{
"struct": self.dearomatized_mols[input_format][0],
"input_format": input_format,
"output_format": output_format,
}
)
result = requests.post(
self.url_prefix + "/aromatize", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
result_struct = result_data["struct"]
result_format = result_data["format"]
self.assertEqual(result_format, output_format)
if output_format in (
"chemical/x-mdl-molfile",
"chemical/x-mdl-rxnfile",
): # Skip Molfile date
self.assertIn(
"\n".join(result_struct.splitlines()[2:]).strip(),
[
"\n".join(m.splitlines()[2:]).strip()
for m in self.aromatized_mols[output_format]
],
)
else:
self.assertIn(
result_struct, self.aromatized_mols[output_format]
)
def test_json_check(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 08151618402D 1 1.00000 0.00000 0
13 13 0 0 0 999 V2000
-0.8662 1.5003 0.0000 C 0 0 0 0 0 7 0 0 0 0 0 0
-1.7324 1.0003 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.7324 0.0001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.8662 -0.5001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0001 1.0002 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.5982 1.5002 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8659 1.5001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.8663 -1.4999 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.5876 -0.8089 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.9943 -0.1045 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.2079 -0.9779 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.7431 0.6690 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
1 6 2 0 0 0
2 3 2 0 0 0
3 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
2 7 1 0 0 0
6 8 1 0 0 0
4 9 1 0 0 0
5 10 1 0 0 0
5 11 1 0 0 0
5 12 1 0 0 0
5 13 1 0 0 0
M CHG 1 3 -40
M RAD 1 3 3
M STY 1 1 DAT
M SLB 1 1 1
M SAL 1 1 7
M SDT 1 INDIGO_ALIAS
M SDD 1 -2.5982 1.5002 AA ALL 1 1
M SED 1 Psd
M STY 1 2 DAT
M SLB 1 2 2
M SAL 2 1 8
M SDT 2 INDIGO_ALIAS
M SDD 2 0.8659 1.5001 AA ALL 1 1
M SED 2 Pol
M END
""",
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Structure contains 2 atoms with bad valence",
result_data["valence"],
)
self.assertEqual(
"Structure contains 1 atom with radical electrons",
result_data["radicals"],
)
self.assertEqual("Structure has SGroups", result_data["sgroups"])
def test_check(self):
result = requests.post(
self.url_prefix + "/check",
headers={"Content-Type": "chemical/x-mdl-molfile"},
data="""
Ketcher 08121615592D 1 1.00000 0.00000 0
13 12 0 0 0 999 V2000
22.2500 -10.8750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
23.2500 -10.8750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.7500 -10.0090 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.7500 -11.7410 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.2500 -10.8750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
22.7500 -10.0090 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
22.7500 -11.7410 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
23.1160 -10.3750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
23.1160 -11.3750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.3840 -11.3750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.3840 -10.3750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
22.2500 -9.8750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
22.2500 -11.8750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
1 3 1 0 0 0
1 4 1 0 0 0
1 5 1 0 0 0
1 6 1 0 0 0
1 7 1 0 0 0
1 8 1 0 0 0
1 9 1 0 0 0
1 10 1 0 0 0
1 11 1 0 0 0
1 12 1 0 0 0
1 13 1 0 0 0
M END
""",
)
self.assertEqual(200, result.status_code)
result_data = result.text
self.assertEqual(
"valence: Structure contains 1 atom with bad valence", result_data
)
def test_check_overlap(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 08221617222D 1 1.00000 0.00000 0
6 6 0 0 0 999 V2000
2.5908 -10.5562 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.4568 -11.0562 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.4568 -12.0563 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.5908 -12.5563 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.7248 -11.0563 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.7248 -11.0562 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 2 0 0 0
3 4 1 0 0 0
4 5 2 0 0 0
5 6 1 0 0 0
6 1 2 0 0 0
M END
""",
"types": ["overlapping_atoms"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Structure contains overlapping atoms",
result_data["overlapping_atoms"],
)
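Atoms 5 and 6 in the fixture above are only 0.0001 apart, far below the ~1.0 bond length used throughout this file, which is why the service reports overlapping atoms. A sketch of such a proximity check (the threshold and criterion here are illustrative assumptions, not Indigo's actual rule):

```python
import math

# Coordinates of atoms 5 and 6 from the structure above: they differ
# only in the last decimal of y, so they effectively coincide.
a5 = (1.7248, -11.0563)
a6 = (1.7248, -11.0562)

def overlapping(p, q, bond_length=1.0, ratio=0.25):
    # Illustrative criterion: flag two atoms as overlapping when they
    # are closer than a fraction of the typical bond length.
    return math.dist(p, q) < bond_length * ratio

assert overlapping(a5, a6)
assert not overlapping((0.0, 0.0), (0.866, 0.5))  # a normal bond apart
```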
def test_check_stereo(self):
# up
headers, data = self.get_headers(
{
"struct": """
-INDIGO-10201021542D
10 11 0 0 0 0 0 0 0 0999 V2000
10.2958 -8.7041 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.9326 -8.7062 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
12.9333 -7.9249 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
12.9500 -9.5749 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.2625 -7.8874 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.2625 -9.6499 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
13.5541 -7.1333 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
13.5750 -10.3708 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.2625 -6.9041 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.4708 -10.7124 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4 2 1 0 0 0 0
5 1 1 0 0 0 0
6 1 1 0 0 0 0
3 4 1 0 0 0 0
5 6 1 0 0 0 0
7 3 1 1 0 0 0
4 8 1 1 0 0 0
5 9 1 1 0 0 0
10 6 1 1 0 0 0
1 2 2 0 0 0 0
3 2 1 0 0 0 0
M END
""",
"types": [
"stereo",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Structure has stereochemistry errors", result_data["stereo"]
)
# cis
headers, data = self.get_headers(
{
"struct": r"F/C=C\F",
"types": [
"stereo",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
# trans
headers, data = self.get_headers(
{
"struct": "F/C=C/F",
"types": [
"stereo",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
# normal
headers, data = self.get_headers(
{
"struct": """
Ketcher 10171617062D 1 1.00000 0.00000 0
6 6 0 0 0 999 V2000
20.5253 -4.7001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.3913 -5.2001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
21.3913 -6.2001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
20.5253 -6.7001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
19.6593 -6.2001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
19.6593 -5.2001 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5 6 2 0 0 0
2 3 1 0 0 0
3 4 2 0 0 0
4 5 1 0 0 0
6 1 1 0 0 0
1 2 2 0 0 0
M END
""",
"types": [
"stereo",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
def test_check_query(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 09201617322D 1 1.00000 0.00000 0
27 27 0 0 0 999 V2000
5.4529 -1.9710 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.4249 -4.3553 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.5034 -8.4509 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.8029 -12.0523 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.6382 -2.9124 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.7353 -2.5317 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.9475 -2.9655 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.7705 -4.0012 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8167 -4.4748 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5503 -3.0363 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5503 -3.9835 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.3559 -5.5946 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5503 -6.2408 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5326 -7.2234 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.5478 -7.7943 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.7527 -6.2231 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.8944 -7.2234 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.7707 -7.6925 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.6737 -7.3119 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.8333 -8.7637 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.8778 -9.0646 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.1346 -10.0650 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.1968 -10.4102 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.4623 -11.3929 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.3425 -4.6034 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.5399 -2.4419 0.0000 C 0 0 0 0 0 3 0 0 0 0 0 0
0.9503 -5.7709 0.0000 Q 0 0 0 0 0 0 0 0 0 0 0 0
5 6 1 0 0 0
6 7 2 0 0 0
7 8 1 0 0 0
8 9 2 0 0 0
2 9 1 0 0 0
10 11 2 0 0 0
12 13 1 0 0 0
13 14 2 0 0 0
14 15 1 0 0 0
16 17 1 0 0 0
17 18 2 0 0 0
18 19 1 0 0 0
15 19 2 0 0 0
15 20 1 0 0 0
20 21 1 0 0 0
3 21 1 0 0 0
21 22 1 0 0 0
22 23 1 0 0 0
23 24 1 0 0 0
4 24 1 0 0 0
11 25 1 0 0 0
12 25 2 0 0 0
1 26 1 0 0 0
5 26 2 0 0 0
10 26 1 0 0 0
9 27 1 0 0 0
16 27 2 0 0 0
M END
""",
"types": [
"valence",
"ambiguous_h",
"query",
"pseudoatoms",
"radicals",
"stereo",
"overlapping_atoms",
"overlapping_bonds",
"3d",
"sgroups",
"v3000",
"rgroups",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Structure contains query features, so valency could not be checked",
result_data["valence"],
)
self.assertEqual(
"Structure contains query features, so ambiguous H could not be checked",
result_data["ambiguous_h"],
)
self.assertEqual(
"Structure contains query features", result_data["query"]
)
def test_check_pseudoatom(self):
headers, data = self.get_headers(
{
"struct": """
Marvin 02121015302D
9 9 0 0 0 0 999 V2000
-1.8857 2.4750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.6002 2.0625 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.6002 1.2375 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.8857 0.8250 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.1712 1.2375 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.1712 2.0625 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.3147 2.4750 0.0000 Psd 0 0 0 0 0 0 0 0 0 0 0 0
-0.4568 2.4750 0.0000 Pol 0 0 0 0 0 0 0 0 0 0 0 0
-1.8857 -0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
1 6 2 0 0 0 0
2 3 2 0 0 0 0
3 4 1 0 0 0 0
4 5 2 0 0 0 0
5 6 1 0 0 0 0
2 7 1 0 0 0 0
6 8 1 0 0 0 0
4 9 1 0 0 0 0
M END
""",
"types": [
"valence",
"ambiguous_h",
"query",
"pseudoatoms",
"radicals",
"stereo",
"overlapping_atoms",
"overlapping_bonds",
"3d",
"sgroups",
"v3000",
"rgroups",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Structure contains pseudoatoms, so radicals could not be checked",
result_data["radicals"],
)
self.assertEqual(
"Structure contains 2 pseudoatoms", result_data["pseudoatoms"]
)
def test_check_empty(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 09221617072D 1 1.00000 0.00000 0
0 0 0 0 0 999 V2000
M END
""",
"types": [
"valence",
"ambiguous_h",
"query",
"pseudoatoms",
"radicals",
"stereo",
"overlapping_atoms",
"overlapping_bonds",
"3d",
"sgroups",
"v3000",
"rgroups",
],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
def test_check_overlapping_bonds(self):
# intersecting bonds
headers, data = self.get_headers(
{
"struct": """
Ketcher 09271616302D 1 1.00000 0.00000 0
4 2 0 0 0 999 V2000
-0.3081 -0.0005 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.6917 -0.0225 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.1818 -0.5124 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.2038 0.4874 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
3 4 1 0 0 0
M END
""",
"types": ["overlapping_bonds"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Structure contains overlapping bonds",
result_data["overlapping_bonds"],
)
# two bonds from one atom:
headers, data = self.get_headers(
{
"struct": """
Ketcher 09271617112D 1 1.00000 0.00000 0
3 2 0 0 0 999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.8660 0.5000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8660 0.5000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
1 3 1 0 0 0
M END
""",
"types": ["overlapping_bonds"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
# bonds on the same line
headers, data = self.get_headers(
{
"struct": """
Ketcher 09271617122D 1 1.00000 0.00000 0
4 2 0 0 0 999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
3 4 1 0 0 0
M END
""",
"types": ["overlapping_bonds"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
# parallel bonds
headers, data = self.get_headers(
{
"struct": """
Ketcher 09271617122D 1 1.00000 0.00000 0
4 2 0 0 0 999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 1.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.0000 1.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
3 4 1 0 0 0
M END
""",
"types": ["overlapping_bonds"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
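The three cases above distinguish bonds that genuinely cross from bonds that merely share an atom, lie on the same line, or run in parallel. A standard orientation-based segment-intersection test (illustrative only, not necessarily Indigo's implementation) reproduces the expected answers for the fixture coordinates:

```python
def ccw(a, b, c):
    # Sign of the cross product: >0 counter-clockwise, <0 clockwise,
    # 0 when the three points are collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    # Proper intersection: the endpoints of each segment must lie on
    # strictly opposite sides of the other segment's supporting line.
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

# Bonds 1-2 and 3-4 of the first test structure cross each other.
assert segments_cross((-0.3081, -0.0005), (0.6917, -0.0225),
                      (0.1818, -0.5124), (0.2038, 0.4874))
# Collinear but disjoint bonds do not.
assert not segments_cross((0, 0), (1, 0), (3, 0), (4, 0))
# Parallel bonds do not.
assert not segments_cross((0, 0), (1, 0), (0, 1), (1, 1))
```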
def test_check_reaction_queries(self):
headers, data = self.get_headers(
{
"struct": """$RXN
2 2 0
$MOL
Ketcher 10051614552D 1 1.00000 0.00000 0
1 0 0 0 0 999 V2000
15.8500 -8.0750 0.0000 Q 0 0 0 0 0 0 0 0 0 0 0 0
M END
$MOL
Ketcher 10051614552D 1 1.00000 0.00000 0
1 0 0 0 0 999 V2000
18.1250 -8.1750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
M END
$MOL
Ketcher 10051614552D 1 1.00000 0.00000 0
1 0 0 0 0 999 V2000
23.0750 -8.0250 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
M END
$MOL
Ketcher 10051614552D 1 1.00000 0.00000 0
1 0 0 0 0 999 V2000
26.1250 -7.9750 0.0000 Q 0 0 0 0 0 0 0 0 0 0 0 0
M END
""",
"types": ["query", "valence"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
{
"valence": "Structure contains query features, so valency could not be checked",
"query": "Structure contains query features",
},
result_data,
)
# TODO: Uncomment when Ketcher supports JSON format
# self.assertEqual({'query': {'reactants': {'0': 'Query'}, 'products': {'1': 'Query'}}, 'valence': {'reactants': {'0': 'Structure contains query features, so valency could not be checked'}, 'products': {'1': 'Structure contains query features, so valency could not be checked'}}}, result_data)
def test_check_atoms(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 10311615312D 1 1.00000 0.00000 0
2 0 0 0 0 999 V2000
0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
1.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
M END
"""
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual({}, result_data)
# TODO: Add validation checks for /check
def test_json_calculate(self):
headers, data = self.get_headers(
{"struct": "C", "properties": ("molecular-weight", "gross")}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C H4", result_data["gross"])
self.assertGreater(17, float(result_data["molecular-weight"]))
self.assertLess(16, float(result_data["molecular-weight"]))
def test_calculate(self):
result = requests.post(
self.url_prefix + "/calculate",
headers={"Content-Type": "chemical/x-daylight-smiles"},
data="C",
)
self.assertEqual(200, result.status_code)
result_data = result.text
self.assertEqual("molecular-weight: 16.0424604", result_data)
def test_calculate_components_mol(self):
headers, data = self.get_headers(
{
"struct": "C.CC",
"properties": (
"molecular-weight",
"gross",
"mass-composition",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"16.0424604; 30.0690408", result_data["molecular-weight"]
)
self.assertEqual("C H4; C2 H6", result_data["gross"])
self.assertEqual(
"C 74.87 H 25.13; C 79.89 H 20.11", result_data["mass-composition"]
)
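The molecular weights and mass-composition strings asserted above follow directly from the standard atomic weights (C 12.0107, H 1.00794); reproducing them is a quick sanity check on the expected values:

```python
# Standard atomic weights: these reproduce the service's values to
# within rounding of the last digit.
C, H = 12.0107, 1.00794

def mass_composition(n_c, n_h):
    # Percentage of the total mass contributed by each element,
    # formatted the way the /calculate endpoint reports it.
    total = n_c * C + n_h * H
    return "C %.2f H %.2f" % (100 * n_c * C / total, 100 * n_h * H / total)

assert abs(C + 4 * H - 16.0424604) < 1e-3       # CH4 molecular weight
assert abs(2 * C + 6 * H - 30.0690408) < 1e-3   # C2H6 molecular weight
assert mass_composition(1, 4) == "C 74.87 H 25.13"
assert mass_composition(2, 6) == "C 79.89 H 20.11"
```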
def test_calculate_polymer(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 11071615122D 1 1.00000 0.00000 0
7 6 0 0 0 999 V2000
6.8750 -6.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.7410 -6.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.6071 -6.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.4731 -6.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.3391 -6.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
11.2051 -6.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
12.0712 -6.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
4 5 1 0 0 0
5 6 1 0 0 0
6 7 1 0 0 0
M STY 1 1 SRU
M SLB 1 1 1
M SCN 1 1 HT
M SMT 1 n
M SAL 1 1 3
M SBL 1 2 2 3
M SDI 1 4 8.1740 -7.1000 8.1740 -5.6000
M SDI 1 4 9.0401 -5.6000 9.0401 -7.1000
M STY 1 2 SRU
M SLB 1 2 2
M SCN 1 2 HT
M SMT 2 kk
M SAL 2 2 4 5
M SBL 2 2 3 5
M SDI 2 4 9.0401 -7.1000 9.0401 -5.6000
M SDI 2 4 10.7721 -5.6000 10.7721 -7.1000
M END
""",
"properties": (
"molecular-weight",
"gross",
"mass-composition",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"error: Cannot calculate mass for structure with repeating units",
result_data["molecular-weight"],
)
self.assertEqual("C4 H10(C H2)n(C2 H4)kk", result_data["gross"])
self.assertEqual(
"error: Cannot calculate mass for structure with repeating units",
result_data["mass-composition"],
)
def test_calculate_components_rxn(self):
headers, data = self.get_headers(
{
"struct": """$RXN
1 1 0
$MOL
Ketcher 10271616252D 1 1.00000 0.00000 0
3 1 0 0 0 999 V2000
7.8420 -6.5250 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.7080 -6.0250 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5500 -6.3250 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
M END
$MOL
Ketcher 10271616252D 1 1.00000 0.00000 0
3 1 0 0 0 999 V2000
19.9670 -6.1000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
20.8330 -5.6000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
24.9250 -5.6250 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
M END
""",
"properties": (
"molecular-weight",
"gross",
"mass-composition",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"[30.0690408; 16.0424604] > [30.0690408; 16.0424604]",
result_data["molecular-weight"],
)
self.assertEqual("[C2 H6; C H4] > [C2 H6; C H4]", result_data["gross"])
self.assertEqual(
"[C 79.89 H 20.11; C 74.87 H 25.13] > [C 79.89 H 20.11; C 74.87 H 25.13]",
result_data["mass-composition"],
)
def test_calculate_rxn(self):
headers, data = self.get_headers(
{
"struct": "C.CC>>CC.C",
"properties": (
"molecular-weight",
"gross",
"mass-composition",
"most-abundant-mass",
"monoisotopic-mass",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"[16.0424604] + [30.0690408] > [30.0690408] + [16.0424604]",
result_data["molecular-weight"],
)
self.assertEqual(
"[16.0313001] + [30.0469501] > [30.0469501] + [16.0313001]",
result_data["most-abundant-mass"],
)
self.assertEqual(
"[16.0313001] + [30.0469501] > [30.0469501] + [16.0313001]",
result_data["monoisotopic-mass"],
)
self.assertEqual(
"[C H4] + [C2 H6] > [C2 H6] + [C H4]", result_data["gross"]
)
self.assertEqual(
"[C 74.87 H 25.13] + [C 79.89 H 20.11] > [C 79.89 H 20.11] + [C 74.87 H 25.13]",
result_data["mass-composition"],
)
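The monoisotopic figures asserted above differ from the molecular weights because they use principal-isotope masses (12C is exactly 12 by definition, 1H is 1.007825) rather than abundance-weighted averages; for CH4 and C2H6 the most abundant isotopologue is also the monoisotopic one, which is why both properties agree. A quick check of the arithmetic:

```python
# Principal-isotope masses: 12C is exactly 12 by definition of the
# atomic mass unit; 1H is 1.007825.
C12, H1 = 12.0, 1.007825

ch4 = C12 + 4 * H1        # monoisotopic mass of methane
c2h6 = 2 * C12 + 6 * H1   # monoisotopic mass of ethane

assert abs(ch4 - 16.0313001) < 1e-4
assert abs(c2h6 - 30.0469501) < 1e-4
# Compare the averaged molecular weights asserted above (16.0424604,
# 30.0690408): the gap comes from 13C and 2H abundance.
```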
def test_calculate_selected(self):
headers, data = self.get_headers(
{
"struct": "CC",
"input_format": "chemical/x-mdl-molfile",
"selected": [
0,
],
"properties": (
"molecular-weight",
"gross",
"mass-composition",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("15.0345204", result_data["molecular-weight"])
self.assertEqual("C H3", result_data["gross"])
self.assertEqual("C 79.89 H 20.11", result_data["mass-composition"])
def test_calculate_selected_benzene(self):
headers, data = self.get_headers(
{
"struct": "C1(N)=CC=CC=C1",
"selected": [0, 2, 3, 4, 5, 6],
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C6 H5", result_data["gross"])
self.assertEqual("77.1039016", result_data["molecular-weight"])
self.assertEqual("C 93.46 H 6.54", result_data["mass-composition"])
def test_calculate_empty(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 10211616132D 1 1.00000 0.00000 0
0 0 0 0 0 999 V2000
M END
""",
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("", result_data["gross"])
self.assertEqual("", result_data["molecular-weight"])
self.assertEqual("", result_data["most-abundant-mass"])
self.assertEqual("", result_data["monoisotopic-mass"])
self.assertEqual("", result_data["mass-composition"])
def test_calculate_selected_benzene_2(self):
headers, data = self.get_headers(
{
"struct": "C1=CC=CC=C1",
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
"selected": [
0,
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C H", result_data["gross"])
self.assertEqual("13.0186403", result_data["molecular-weight"])
self.assertEqual("13.007825", result_data["most-abundant-mass"])
self.assertEqual("13.007825", result_data["monoisotopic-mass"])
self.assertEqual("C 92.26 H 7.74", result_data["mass-composition"])
def test_calculate_query_mol(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 11081614202D 1 1.00000 0.00000 0
3 2 0 0 0 999 V2000
6.4500 -4.5500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.4500 -4.5500 0.0000 Q 0 0 0 0 0 0 0 0 0 0 0 0
7.9500 -5.4160 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 8 0 0 0
2 3 1 0 0 0
M END
""",
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["gross"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["molecular-weight"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["most-abundant-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["monoisotopic-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["mass-composition"],
)
def test_calculate_query_mol_selected(self):
mol = """
Ketcher 11081614252D 1 1.00000 0.00000 0
3 2 0 0 0 999 V2000
6.4500 -4.5500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.4500 -4.5500 0.0000 Q 0 0 0 0 0 0 0 0 0 0 0 0
7.9500 -5.4160 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 8 0 0 0
2 3 1 0 0 0
M END
"""
headers, data = self.get_headers(
{
"struct": mol,
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
"selected": [
0,
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["gross"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["molecular-weight"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["most-abundant-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["monoisotopic-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["mass-composition"],
)
headers, data = self.get_headers(
{
"struct": mol,
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
"selected": [
2,
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C H3", result_data["gross"])
self.assertEqual("15.0345204", result_data["molecular-weight"])
self.assertEqual("15.0234751", result_data["most-abundant-mass"])
self.assertEqual("15.0234751", result_data["monoisotopic-mass"])
self.assertEqual("C 79.89 H 20.11", result_data["mass-composition"])
def test_calculate_query_rxn(self):
headers, data = self.get_headers(
{
"struct": """$RXN
1 1 0
$MOL
Ketcher 11081614262D 1 1.00000 0.00000 0
3 2 0 0 0 999 V2000
6.4500 -4.5500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.4500 -4.5500 0.0000 Q 0 0 0 0 0 0 0 0 0 0 0 0
7.9500 -5.4160 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 8 0 0 0
2 3 1 0 0 0
M END
$MOL
Ketcher 11081614262D 1 1.00000 0.00000 0
3 2 0 0 0 999 V2000
13.1000 -4.4920 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
14.1000 -4.4920 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
14.6000 -5.3580 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
M END
""",
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["gross"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["molecular-weight"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["most-abundant-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["monoisotopic-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["mass-composition"],
)
def test_calculate_query_rxn_selected(self):
rxn = """$RXN
1 1 0
$MOL
Ketcher 11081614532D 1 1.00000 0.00000 0
3 2 0 0 0 999 V2000
0.0000 0.2500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8660 -0.2500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.7321 0.2500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2 3 1 0 0 0
1 2 8 0 0 0
M END
$MOL
Ketcher 11081614532D 1 1.00000 0.00000 0
4 3 0 0 0 999 V2000
7.7321 0.2500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.5980 -0.2500 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
9.4641 0.2500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
10.3301 -0.2500 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
M END
"""
headers, data = self.get_headers(
{
"struct": rxn,
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
"selected": [
0,
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["gross"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["molecular-weight"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["most-abundant-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["monoisotopic-mass"],
)
self.assertEqual(
"Cannot calculate properties for structures with query features",
result_data["mass-composition"],
)
headers, data = self.get_headers(
{
"struct": rxn,
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
"selected": [
2,
],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C H3", result_data["gross"])
self.assertEqual("15.0345204", result_data["molecular-weight"])
self.assertEqual("15.0234751", result_data["most-abundant-mass"])
self.assertEqual("15.0234751", result_data["monoisotopic-mass"])
self.assertEqual("C 79.89 H 20.11", result_data["mass-composition"])
headers, data = self.get_headers(
{
"struct": rxn,
"properties": [
"molecular-weight",
"most-abundant-mass",
"monoisotopic-mass",
"gross",
"mass-composition",
],
"selected": [2, 3, 4, 5],
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual("C H3; C2 H6 N", result_data["gross"])
self.assertEqual(
"15.0345204; 44.0757403", result_data["molecular-weight"]
)
self.assertEqual(
"15.0234751; 44.0500238", result_data["most-abundant-mass"]
)
self.assertEqual(
"15.0234751; 44.0500238", result_data["monoisotopic-mass"]
)
self.assertEqual(
"C 79.89 H 20.11; C 54.50 H 13.72 N 31.78",
result_data["mass-composition"],
)
def test_calculate_selected_components_mol(self):
headers, data = self.get_headers(
{
"struct": "CC.CC",
"input_format": "chemical/x-mdl-molfile",
"selected": [0, 2, 3],
"properties": (
"molecular-weight",
"gross",
"mass-composition",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"15.0345204; 30.0690408", result_data["molecular-weight"]
)
self.assertEqual("C H3; C2 H6", result_data["gross"])
self.assertEqual(
"C 79.89 H 20.11; C 79.89 H 20.11", result_data["mass-composition"]
)
def test_calculate_selected_components_rxn(self):
headers, data = self.get_headers(
{
"struct": "CC>>CC.CC",
"input_format": "chemical/x-mdl-rxnfile",
"selected": [0, 2, 3],
"properties": (
"molecular-weight",
"gross",
"mass-composition",
),
}
)
result = requests.post(
self.url_prefix + "/calculate", headers=headers, data=data
)
self.assertEqual(200, result.status_code)
result_data = json.loads(result.text)
self.assertEqual(
"16.0424604; 30.0690408", result_data["molecular-weight"]
)
self.assertEqual("C H4; C2 H6", result_data["gross"])
self.assertEqual(
"C 74.87 H 25.13; C 79.89 H 20.11", result_data["mass-composition"]
)
def test_convert_inchi_aux(self):
params = {
"struct": "c1ccccc1",
"output_format": "chemical/x-inchi-aux",
}
headers, data = self.get_headers(params)
result = requests.post(
self.url_prefix + "/convert", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual("chemical/x-inchi-aux", result_data["format"])
self.assertIn("AuxInfo=", result_data["struct"])
result = requests.get(self.url_prefix + "/convert", params=params)
self.assertIn("AuxInfo=", result.text)
def test_convert_chemaxon_smiles(self):
params = {
"struct": "CC[*]",
"output_format": "chemical/x-chemaxon-cxsmiles",
}
headers, data = self.get_headers(params)
result = requests.post(
self.url_prefix + "/convert", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual("chemical/x-chemaxon-cxsmiles", result_data["format"])
self.assertEqual("CC%91.[*]%91", result_data["struct"])
result = requests.get(self.url_prefix + "/convert", params=params)
self.assertEqual("CC%91.[*]%91", result.text)
# TODO: Add validation checks for /calculate
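The tests above all POST the same request shape to `/calculate`: a `struct` field, a `properties` list, and optionally `selected` atom indices and an `input_format` MIME type. A minimal, server-independent sketch of assembling and round-tripping that payload (the helper name `build_calculate_payload` is illustrative, not part of the service API):

```python
import json


def build_calculate_payload(struct, properties, selected=None, input_format=None):
    """Assemble a /calculate request body mirroring the fields used in the
    tests above. Purely illustrative: field names follow the requests sent
    by this test suite, not a documented client API."""
    payload = {"struct": struct, "properties": list(properties)}
    if selected is not None:
        payload["selected"] = list(selected)
    if input_format is not None:
        payload["input_format"] = input_format
    return json.dumps(payload)


# Round-trip check: the serialized body decodes back to the same fields.
body = build_calculate_payload(
    "CC.CC", ("gross", "molecular-weight"), selected=[0, 2, 3]
)
decoded = json.loads(body)
assert decoded["struct"] == "CC.CC"
assert decoded["selected"] == [0, 2, 3]
assert decoded["properties"] == ["gross", "molecular-weight"]
```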
def test_stereo(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 03071819152D 1 1.00000 0.00000 0
7 6 0 0 0 999 V2000
7.2750 -2.8750 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.5338 -3.8409 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.4997 -4.0997 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.9997 -4.9658 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
9.9997 -4.9658 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.6678 -4.3409 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
8.3998 -3.3409 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
4 5 1 0 0 0
2 6 1 0 0 0
2 7 1 0 0 0
M END
""",
"types": ["query", "stereo"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual(
"Structure contains one or more stereogenic atom(s) with unspecified stereochemistry",
result_data["stereo"],
)
headers, data = self.get_headers(
{
"struct": """
Ketcher 03071820162D 1 1.00000 0.00000 0
7 7 0 0 0 999 V2000
6.1956 -7.4385 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5784 -6.6550 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.8007 -5.6826 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.6960 -5.2488 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.2021 -7.4385 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.6025 -5.6819 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.8249 -6.6550 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6 7 1 1 0 0
5 7 1 1 0 0
1 5 1 1 0 0
4 6 1 1 0 0
3 4 1 1 0 0
2 3 1 1 0 0
1 2 1 1 0 0
M END
""",
"types": ["query", "stereo"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual(
"Structure has stereochemistry errors", result_data["stereo"]
)
def test_chiral(self):
headers, data = self.get_headers(
{
"struct": """
Ketcher 12021612452D 1 1.00000 0.00000 0
45 42 0 0 0 999 V2000
7.3640 -2.5680 9.2346 Ca 0 0 0 0 0 0 0 0 0 0 0 0
8.0647 -3.0537 10.1222 C 0 0 0 0 0 0 0 0 0 0 0 0
8.6967 -2.3175 10.8796 C 0 0 0 0 0 0 0 0 0 0 0 0
9.3974 -2.8034 11.7673 Nb 0 0 0 0 0 0 0 0 0 0 0 0
10.0293 -2.0671 12.5246 C 0 0 0 0 0 0 0 0 0 0 0 0
10.7301 -2.5529 13.4123 Se 0 0 0 0 0 0 0 0 0 0 0 0
11.3620 -1.8167 14.1698 C 0 0 0 0 0 0 0 0 0 0 0 0
12.0628 -2.3024 15.0573 Ge 0 0 0 0 0 0 0 0 0 0 0 0
12.6946 -1.5662 15.8148 C 0 0 0 0 0 0 0 0 0 0 0 0
13.3954 -2.0521 16.7023 C 0 0 0 0 0 0 0 0 0 0 0 0
14.0273 -1.3158 17.4598 C 0 0 0 0 0 0 0 0 0 0 0 0
14.7281 -1.8016 18.3475 C 0 0 0 0 0 0 0 0 0 0 0 0
15.3600 -1.0654 19.1048 S 0 0 0 0 0 0 0 0 0 0 0 0
16.0608 -1.5511 19.9924 C 0 0 0 0 0 0 0 0 0 0 0 0
16.6927 -0.8149 20.7499 C 0 0 0 0 0 0 0 0 0 0 0 0
17.3934 -1.3007 21.6375 C 0 0 0 0 0 0 0 0 0 0 0 0
18.0254 -0.5645 22.3948 Ca 0 0 0 0 0 0 0 0 0 0 0 0
17.4624 -2.5227 21.7677 F 0 0 0 0 0 0 0 0 0 0 0 0
16.1296 -2.7732 20.1226 F 0 0 0 0 0 0 0 0 0 0 0 0
14.7970 -3.0235 18.4776 F 0 0 0 0 0 0 0 0 0 0 0 0
13.9584 -0.0938 17.3296 Cl 0 0 0 0 0 0 0 0 0 0 0 0
12.6258 -0.3443 15.6845 Cl 0 0 0 0 0 0 0 0 0 0 0 0
11.2932 -0.5947 14.0395 Cl 0 0 0 0 0 0 0 0 0 0 0 0
9.9604 -0.8451 12.3944 Cl 0 0 0 0 0 0 0 0 0 0 0 0
15.0153 -6.2242 18.8656 Ca 0 0 0 0 0 0 0 0 0 0 0 0
15.7847 -6.0797 19.8155 O 0 0 0 0 0 0 0 0 0 0 0 0
6.7015 -10.4726 8.7015 Se 0 0 0 0 0 0 0 0 0 0 0 0
7.4024 -10.9583 9.5891 C 0 0 0 0 0 0 0 0 0 0 0 0
8.0343 -10.2221 10.3467 Ge 0 0 0 0 0 0 0 0 0 0 0 0
8.7351 -10.7079 11.2342 C 0 0 0 0 0 0 0 0 0 0 0 0
9.3669 -9.9716 11.9915 As 0 0 0 0 0 0 0 0 0 0 0 0
10.0677 -10.4574 12.8792 C 0 0 0 0 0 0 0 0 0 0 0 0
10.6997 -9.7213 13.6367 C 0 0 0 0 0 0 0 0 0 0 0 0
11.4004 -10.2070 14.5242 C 0 0 0 0 0 0 0 0 0 0 0 0
12.0323 -9.4708 15.2817 Ge 0 0 0 0 0 0 0 0 0 0 0 0
12.7330 -9.9566 16.1693 C 0 0 0 0 0 0 0 0 0 0 0 0
13.3650 -9.2203 16.9267 Se 0 0 0 0 0 0 0 0 0 0 0 0
14.0658 -9.7061 17.8144 Se 0 0 0 0 0 0 0 0 0 0 0 0
9.3423 -10.9133 11.9955 Ge 0 0 0 0 0 0 0 0 0 0 0 0
10.6307 -8.4993 13.5065 As 0 0 0 0 0 0 0 0 0 0 0 0
12.9987 -11.0994 16.5408 Se 0 0 0 0 0 0 0 0 0 0 0 0
7.4713 -12.1804 9.7193 Sb 0 0 0 0 0 0 0 0 0 0 0 0
7.9654 -9.0001 10.2164 Sb 0 0 0 0 0 0 0 0 0 0 0 0
11.4693 -11.4291 14.6545 Sb 0 0 0 0 0 0 0 0 0 0 0 0
10.3334 -11.6004 13.2508 Sb 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
4 5 1 0 0 0
5 6 1 0 0 0
6 7 1 0 0 0
7 8 1 0 0 0
8 9 1 0 0 0
9 10 1 0 0 0
10 11 1 0 0 0
11 12 1 0 0 0
12 13 1 0 0 0
13 14 1 0 0 0
14 15 1 0 0 0
15 16 1 0 0 0
16 17 1 0 0 0
16 18 1 0 0 0
14 19 1 0 0 0
12 20 1 0 0 0
11 21 1 0 0 0
9 22 1 0 0 0
7 23 1 0 0 0
5 24 1 0 0 0
25 26 2 0 0 0
27 28 1 0 0 0
28 29 1 0 0 0
29 30 1 0 0 0
30 31 1 0 0 0
31 32 1 0 0 0
32 33 1 0 0 0
33 34 1 0 0 0
34 35 1 0 0 0
35 36 1 0 0 0
36 37 1 0 0 0
37 38 1 0 0 0
32 39 1 0 0 0
33 40 1 0 0 0
36 41 1 0 0 0
28 42 1 0 0 0
29 43 1 0 0 0
34 44 1 0 0 0
32 45 1 0 0 0
M CHG 2 1 -1 17 -1
M END
""",
"types": ["query", "chiral"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual(
"Structure has 3D Chiral center", result_data["chiral"]
)
headers, data = self.get_headers(
{
"struct": """
Ketcher 03071819482D 1 1.00000 0.00000 0
6 6 0 1 1 999 V2000
5.1750 -4.7000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.0410 -5.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.0410 -6.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.1750 -6.7000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.3090 -6.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.3090 -5.2000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0
2 3 1 0 0 0
3 4 1 0 0 0
4 5 1 0 0 0
5 6 1 0 0 0
6 1 1 0 0 0
M END
""",
"types": ["query", "chiral"],
}
)
result = requests.post(
self.url_prefix + "/check", headers=headers, data=data
)
result_data = json.loads(result.text)
self.assertEqual(
"Structure has invalid Chiral flag", result_data["chiral"]
)
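The chiral-center structure above carries a V2000 charge property line (`M  CHG  2   1  -1  17  -1`): a count followed by (atom index, charge) pairs with 1-based atom indices. A small sketch of decoding such a line (the helper `parse_chg_line` is hypothetical, shown only to document the fixture format):

```python
def parse_chg_line(line):
    """Decode a molfile V2000 'M  CHG' property line into
    {atom_index: charge}. Atom indices are 1-based, per the CTfile
    format; e.g. 'M  CHG  2   1  -1  17  -1' marks atoms 1 and 17
    as carrying charge -1."""
    fields = line.split()
    assert fields[:2] == ["M", "CHG"], "not a charge property line"
    count = int(fields[2])
    pairs = fields[3:3 + 2 * count]
    return {int(pairs[i]): int(pairs[i + 1]) for i in range(0, len(pairs), 2)}


# The charge line from the test structure above:
charges = parse_chg_line("M  CHG  2   1  -1  17  -1")
assert charges == {1: -1, 17: -1}
```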
import time
import unittest
import warnings
from speechkit import RecognitionLongAudio, Session
from speechkit.auth import generate_jwt
test_data = b'RIFFl@\x08\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00\x80\xbb\x00\x00\x00w\x01\x00\x02\x00\x10\x00LIST@\x00\x00\x00INFOINAM\x1d\x00\x00\x00\xd1\x83\xd0\xbb\xd0\xb8\xd1\x86\xd0\xb0 8 \xd0\x9c\xd0\xb0\xd1\x80\xd1\x82\xd0\xb0, 6 9\x00\x00ISFT\x0e\x00\x00\x00Lavf58.76.100\x00data\x00@\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x00\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x02\x00\x02\x00\x01\x00\x01\x00\x01\x00\x01\x00\x02\x00\x02\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x01\x00\x02\x00\x02\x00\x02\x00\x02\x00\x02\x00\x01\x00\x00\x00\x00\x00\x01\x00\x02\x00\x02\x00\x03\x00\x02\x00\x02\x00\x01\x00\xff\xff\xff\xff\x00\x00\xff\xff\xfe\xff\xff\xff\x02\x00\x03\x00\x00\x00\xfd\xff\xfe\xff\x02\x00\x03\x00\x01\x00\xfe\xff\x01\x00\x04\x00\x03\x00\xfe\xff\xfb\xff\xfd\xff\x01\x00\x02\x00\x00\x00\xfc\xff\xfb\xff\xfc\xff\xff\xff\x02\x00\xff\xff\xf2\xff\xe0\xff\xd3\xff\xcf\xff\xd0\xff\xcc\xff\xc4\xff\xbd\xff\xb5\xff\xa5\xff\x8d\xfft\xffd\xff\\\xffX\xffV\xffV\xffV\xffN\xff?\xff1\xff/\xff8\xffA\xffG\xffJ\xffN\xffO\xffL\xffF\xffF\xffH\xffG\xff<\xff.\xff(\xff(\xff&\xff\x1e\xff\x16\xff\x18\xff#\xff0\xff8\xff<\xffA\xffF\xffH\xffH\xffL\xffV\xffb\xffm\xffs\xffv\xffy\xff}\xff\x85\xff\x8f\xff\x9a\xff\xa1\xff\xa8\xff\xaf\xff\xb4\xff\xb6\xff\xb7\xff\xbd\xff\xcc\xff
\xdd\xff\xec\xff\xf9\xff\x08\x00\x18\x00\'\x003\x00?\x00J\x00Q\x00T\x00T\x00T\x00S\x00S\x00Z\x00n\x00\x87\x00\x99\x00\xa2\x00\xab\x00\xbb\x00\xcb\x00\xd4\x00\xd3\x00\xce\x00\xc9\x00\xc1\x00\xb4\x00\xa6\x00\x99\x00\x8b\x00~\x00w\x00z\x00\x80\x00\x80\x00w\x00n\x00k\x00o\x00q\x00n\x00j\x00k\x00p\x00q\x00h\x00Y\x00K\x00G\x00G\x00I\x00M\x00V\x00`\x00b\x00]\x00Y\x00X\x00T\x00D\x001\x00(\x00\'\x00 \x00\x0f\x00\xff\xff\xff\xff\t\x00\x0e\x00\x08\x00\xff\xff\xff\xff\x03\x00\x07\x00\t\x00\t\x00\x07\x00\x04\x00\x08\x00\x18\x00,\x005\x007\x00>\x00P\x00_\x00`\x00Y\x00Y\x00e\x00s\x00z\x00\x7f\x00\x85\x00\x8c\x00\x8f\x00\x8e\x00\x8d\x00\x8a\x00\x80\x00r\x00h\x00h\x00n\x00r\x00n\x00j\x00l\x00s\x00w\x00u\x00m\x00f\x00_\x00U\x00F\x008\x001\x00/\x00+\x00"\x00\x1a\x00\x16\x00\x0e\x00\xf9\xff\xda\xff\xbd\xff\xa7\xff\x90\xffs\xffX\xffH\xffA\xff6\xff#\xff\x12\xff\x0c\xff\x0b\xff\x05\xff\xf8\xfe\xed\xfe\xec\xfe\xed\xfe\xea\xfe\xe2\xfe\xde\xfe\xe0\xfe\xe4\xfe\xe5\xfe\xe4\xfe\xe3\xfe\xe1\xfe\xde\xfe\xdb\xfe\xdc\xfe\xe1\xfe\xe3\xfe\xe0\xfe\xde\xfe\xe1\xfe\xe7\xfe\xec\xfe\xf0\xfe\xfc\xfe\x10\xff!\xff\'\xff\'\xff-\xff<\xffJ\xffQ\xffV\xff]\xffe\xfff\xffa\xff\\\xff\\\xff`\xffh\xffs\xff\x81\xff\x8d\xff\x9a\xff\xa9\xff\xbc\xff\xcf\xff\xdf\xff\xf1\xff\x06\x00\x18\x00\x1f\x00!\x00,\x00E\x00^\x00m\x00v\x00\x82\x00\x91\x00\x9b\x00\x9f\x00\xa7\x00\xb8\x00\xcb\x00\xd9\x00\xe2\x00\xee\x00\xfb\x00\x04\x01\x05\x01\x02\x01\x01\x01\x01\x01\xff\x00\xfa\x00\xf4\x00\xec\x00\xe5\x00\xe3\x00\xe6\x00\xed\x00\xf5\x00\x01\x01\x0e\x01\x14\x01\x14\x01\x11\x01\x14\x01\x19\x01\x19\x01\x11\x01\x06\x01\xfc\x00\xf4\x00\xee\x00\xeb\x00\xe9\x00\xe5\x00\xde\x00\xd8\x00\xd5\x00\xce\x00\xbe\x00\xae\x00\xa8\x00\xaa\x00\xa3\x00\x91\x00\x83\x00\x86\x00\x91\x00\x90\x00\x82\x00v\x00r\x00l\x00[\x00H\x00A\x00F\x00G\x00=\x000\x00)\x00\'\x00$\x00\x1b\x00\x11\x00\n\x00\x07\x00\x05\x00\x01\x00\xf8\xff\xe9\xff\xd8\xff\xcc\xff\xc3\xff\xb6\xff\xa2\xff\x91\xff\x8a\xff\x90\xff\x9b\xff\xa4\xff\xad\xff\xba\xff\xcb\xff\xda\xff\xe5\xff\xef\xff\xf9\x
ff\xff\xff\xfe\xff\xfa\xff\xfa\xff\x02\x00\x08\x00\n\x00\x0e\x00\x1e\x004\x00D\x00K\x00U\x00k\x00\x84\x00\x92\x00\x99\x00\xa9\x00\xc3\x00\xd9\x00\xe3\x00\xec\x00\x02\x01\x1c\x01.\x017\x01D\x01X\x01d\x01^\x01T\x01U\x01^\x01^\x01P\x01C\x01C\x01N\x01W\x01\\\x01b\x01l\x01w\x01\x7f\x01\x84\x01\x83\x01|\x01u\x01q\x01o\x01g\x01W\x01F\x01<\x017\x012\x01)\x01"\x01\x1d\x01\x1b\x01\x16\x01\n\x01\xf5\x00\xd7\x00\xb6\x00\x9c\x00\x87\x00o\x00S\x007\x00$\x00\x1c\x00\x14\x00\x07\x00\xf4\xff\xe1\xff\xd0\xff\xc1\xff\xb1\xff\x9f\xff\x8e\xff\x82\xff|\xffz\xfft\xffj\xff`\xffZ\xffT\xffL\xffC\xff:\xff4\xff-\xff%\xff\x1b\xff\x11\xff\x08\xff\xff\xfe\xfa\xfe\xf8\xfe\xf7\xfe\xf3\xfe\xec\xfe\xe4\xfe\xdf\xfe\xe0\xfe\xe5\xfe\xed\xfe\xf4\xfe\xfd\xfe\r\xff\x1f\xff,\xff.\xff,\xff3\xffE\xffY\xfff\xffq\xff\x80\xff\x98\xff\xb8\xff\xd9\xff\xf5\xff\x06\x00\x10\x00\x1d\x00.\x00<\x00<\x005\x004\x00:\x00:\x00&\x00\n\x00\xf8\xff\xf5\xff\xf6\xff\xf2\xff\xef\xff\xf3\xff\xfa\xff\xfc\xff\xf9\xff\xfb\xff\x00\x00\x03\x00\x01\x00\xff\xff\x04\x00\x0c\x00\x10\x00\x0e\x00\n\x00\x0b\x00\r\x00\x0f\x00\x0e\x00\x0c\x00\x0c\x00\x0c\x00\x0e\x00\r\x00\x0b\x00\t\x00\x0c\x00\x10\x00\x0f\x00\t\x00\x02\x00\x02\x00\x07\x00\n\x00\n\x00\t\x00\t\x00\t\x00\x07\x00\x06\x00\x05\x00\x04\x00\x01\x00\x00\x00\x01\x00\x00\x00\xfe\xff\xfb\xff\xfb\xff\xfc\xff\xfb\xff\xf8\xff\xf5\xff\xf7\xff\xf9\xff\xf7\xff\xf2\xff\xf1\xff\xf4\xff\xf7\xff\xf5\xff\xf2\xff\xf1\xff\xf2\xff\xf3\xff\xf3\xff\xf2\xff\xf2\xff\xf2\xff\xf1\xff\xef\xff\xed\xff\xec\xff\xed\xff\xef\xff\xf0\xff\xf1\xff\xf2\xff\xf3\xff\xf4\xff\xf5\xff\xf3\xff\xf0\xff\xee\xff\xed\xff\xee\xff\xed\xff\xeb\xff\xea\xff\xeb\xff\xec\xff\xeb\xff\xe9\xff\xe8\xff\xea\xff\xec\xff\xec\xff\xea\xff\xe9\xff\xea\xff\xeb\xff\xea\xff\xe9\xff\xe8\xff\xe9\xff\xe9\xff\xea\xff\xea\xff\xe8\xff\xe5\xff\xe2\xff\xe2\xff\xe4\xff\xe4\xff\xe2\xff\xdf\xff\xdf\xff\xe0\xff\xe0\xff\xde\xff\xdd\xff\xdc\xff\xda\xff\xd9\xff\xd9\xff\xdb\xff\xdd\xff\xdc\xff\xd9\xff\xd8\xff\xda\xff\xdb\xff\xdb\xff\xdc\xff\xde\xff\xdd\xff\xda\xff
\xd7\xff\xd7\xff\xd8\xff\xd8\xff\xd7\xff\xd7\xff\xd9\xff\xda\xff\xd8\xff\xd6\xff\xd5\xff\xd5\xff\xd5\xff\xd5\xff\xd5\xff\xd5\xff\xd5\xff\xd6\xff\xda\xff\xdd\xff\xde\xff\xde\xff\xde\xff\xe1\xff\xe2\xff\xe1\xff\xe0\xff\xe1\xff\xe3\xff\xe4\xff\xe2\xff\xe1\xff\xe3\xff\xe4\xff\xe4\xff\xe2\xff\xe2\xff\xe4\xff\xe6\xff\xe8\xff\xe9\xff\xe9\xff\xea\xff\xeb\xff\xec\xff\xec\xff\xec\xff\xed\xff\xef\xff\xf0\xff\xef\xff\xed\xff\xed\xff\xee\xff\xef\xff\xee\xff\xec\xff\xeb\xff\xeb\xff\xeb\xff\xea\xff\xea\xff\xe9\xff\xe9\xff\xe8\xff\xe8\xff\xe8\xff\xe7\xff\xe5\xff\xe4\xff\xe4\xff\xe6\xff\xe8\xff\xe8\xff\xe6\xff\xe4\xff\xe4\xff\xe5\xff\xe6\xff\xe6\xff\xe5\xff\xe4\xff\xe3\xff\xe4\xff\xe4\xff\xe3\xff\xe1\xff\xe0\xff\xe0\xff\xe0\xff\xdf\xff\xde\xff\xde\xff\xdd\xff\xdb\xff\xda\xff\xda\xff\xda\xff\xd9\xff\xd8\xff\xd7\xff\xd8\xff\xd8\xff\xd6\xff\xd3\xff\xd2\xff\xd3\xff\xd5\xff\xd7\xff\xda\xff\xdc\xff\xde\xff\xde\xff\xde\xff\xdf\xff\xe0\xff\xe0\xff\xe0\xff\xe1\xff\xe3\xff\xe5\xff\xe4\xff\xe0\xff\xdd\xff\xdd\xff\xe1\xff\xe4\xff\xe4\xff\xe2\xff\xe1\xff\xe2\xff\xe3\xff\xe4\xff\xe5\xff\xe5\xff\xe5\xff\xe4\xff\xe4\xff\xe5\xff\xe6\xff\xe7\xff\xe7\xff\xe6\xff\xe6\xff\xe7\xff\xe7\xff\xe8\xff\xe8\xff\xe8\xff\xe8\xff\xe9\xff\xea\xff\xea\xff\xea\xff\xe9\xff\xe9\xff\xe9\xff\xe9\xff\xe9\xff\xe9\xff\xe8\xff\xe8\xff\xe8\xff\xe8\xff\xe9\xff\xe9\xff\xe8\xff\xe7\xff\xe6\xff\xe7\xff\xe7\xff\xe6\xff\xe5\xff\xe5\xff\xe5\xff\xe6\xff\xe5\xff\xe4\xff\xe4\xff\xe5\xff\xe4\xff\xe4\xff\xe3\xff\xe2\xff\xe2\xff\xe2\xff\xe1\xff\xe0\xff\xe0\xff\xdf\xff\xde\xff\xde\xff\xdd\xff\xdd\xff\xdc\xff\xdc\xff\xdb\xff\xdb\xff\xdb\xff\xda\xff\xda\xff\xda\xff\xda\xff\xd9\xff\xd9\xff\xd9\xff\xd8\xff\xd8\xff\xd9\xff\xd9\xff\xd9\xff\xd9\xff\xd9\xff\xda\xff\xda\xff\xda\xff\xda\xff\xdb\xff\xdb\xff\xdb\xff\xdc\xff\xdc\xff\xdc\xff\xdd\xff\xdd\xff\xde\xff\xde\xff\xdf\xff\xdf\xff\xe0\xff\xe0\xff\xe1\xff\xe1\xff\xe1\xff\xe2\xff\xe2\xff\xe3\xff\xe3\xff\xe3\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff\xe4\xff
\xe4\xff\xe4\xff\xe3\xff\xe3\xff\xe3\xff\xe2\xff\xe2\xff\xe2\xff\xe2\xff\xe1\xff\xe1\xff\xe1\xff\xe1\xff'
class RecognizeLongAudio(unittest.TestCase):
    def setUp(self):
        warnings.filterwarnings("ignore", category=ResourceWarning, message="unclosed.*<ssl.SSLSocket.*>")

    def test_init_wrong_description(self):
        session = Session(auth_type=Session.IAM_TOKEN, credential='lol', folder_id='ad')
        with self.assertRaises(ValueError):
            RecognitionLongAudio(session, 'acid', 'bucket', aws_credentials_description='l' * 257)

    def test_init(self):
        bucket_name = os.environ.get('BUCKET_NAME')
        service_account_id = os.environ.get('SERVICE_ACCOUNT_ID')
        key_id = os.environ.get('YANDEX_KEY_ID')
        private_key = os.environ.get('YANDEX_PRIVATE_KEY').replace('\\n', '\n').encode()
        jwt = generate_jwt(service_account_id, key_id, private_key)
        session = Session.from_jwt(jwt)
        recognize_long_audio = RecognitionLongAudio(session, service_account_id, bucket_name)
        self.assertIsInstance(recognize_long_audio._headers, dict)

    def test_recognition(self):
        bucket_name = os.environ.get('BUCKET_NAME')
        service_account_id = os.environ.get('SERVICE_ACCOUNT_ID')
        key_id = os.environ.get('YANDEX_KEY_ID')
        private_key = os.environ.get('YANDEX_PRIVATE_KEY').replace('\\n', '\n').encode()
        jwt = generate_jwt(service_account_id, key_id, private_key)
        session = Session.from_jwt(jwt)
        recognize_long_audio = RecognitionLongAudio(session, service_account_id, bucket_name)
        self.path = os.path.join(os.path.dirname(__file__), 'test_rec.wav')
        with open(self.path, 'wb') as f:
            f.write(test_data)
        recognize_long_audio.send_for_recognition(
            self.path, audioEncoding='LINEAR16_PCM', sampleRateHertz='48000', audioChannelCount=1, rawResults=False
        )
        while True:
            time.sleep(2)
            if recognize_long_audio.get_recognition_results():
                break
        data = recognize_long_audio.get_data()
        self.assertIsInstance(data, (list, type(None)))
        text = recognize_long_audio.get_raw_text()
        self.assertIsInstance(text, str)
| 432.885246 | 24,105 | 0.744149 | 6,245 | 26,406 | 3.132906 | 0.051401 | 1.286174 | 1.918681 | 2.549042 | 0.803884 | 0.768157 | 0.768157 | 0.725019 | 0.725019 | 0.697879 | 0 | 0.382454 | 0.017534 | 26,406 | 60 | 24,106 | 440.1 | 0.3717 | 0 | 0 | 0.311111 | 1 | 0.088889 | 0.743581 | 0.736196 | 0 | 1 | 0 | 0 | 0.088889 | 1 | 0.088889 | false | 0 | 0.133333 | 0 | 0.244444 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17 |
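The `test_recognition` method above polls `get_recognition_results()` in a bare `while True` loop, so a stalled recognition operation hangs the test run forever. A bounded-wait helper is one way to cap that; this is a sketch, and `poll_until` with its parameters is an invented name, not part of the speechkit test suite:

```python
import time


def poll_until(predicate, interval=2.0, timeout=60.0,
               clock=time.monotonic, sleep=time.sleep):
    """Call `predicate` every `interval` seconds until it returns truthy.

    Raises TimeoutError once `timeout` seconds have elapsed, so a test
    cannot hang indefinitely the way a bare `while True` loop can.
    """
    deadline = clock() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        sleep(interval)


# Example: the predicate becomes truthy on the third poll.
attempts = []
result = poll_until(lambda: len(attempts) == 2 or attempts.append(None),
                    interval=0, timeout=5)
```

In the test above, the loop body would then shrink to something like `poll_until(recognize_long_audio.get_recognition_results, interval=2, timeout=600)`.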
13a15c8d9cd420746c8b04e5fc042eba79280469 | 2,326 | py | Python | documents/tests/factories/test_platform_invoice_producer.py | stanwood/traidoo-api | 83e8599f2eb54352988bac27e2d4acd30734816d | [
"MIT"
] | 3 | 2020-05-05T12:12:09.000Z | 2020-05-08T08:48:16.000Z | documents/tests/factories/test_platform_invoice_producer.py | stanwood/traidoo-api | 83e8599f2eb54352988bac27e2d4acd30734816d | [
"MIT"
] | 160 | 2020-05-19T13:03:43.000Z | 2022-03-12T00:35:28.000Z | documents/tests/factories/test_platform_invoice_producer.py | stanwood/traidoo-api | 83e8599f2eb54352988bac27e2d4acd30734816d | [
"MIT"
] | null | null | null | from documents import factories
def test_create_platform_invoice_not_cooperative_member(
    order,
    order_items,
    central_platform_user,
    traidoo_region,
    delivery_address,
    delivery_options,
    seller,
):
    seller.is_cooperative_member = False
    seller.save()

    order.recalculate_items_delivery_fee()

    document = factories.PlatformInvoiceFactory(
        order, region=traidoo_region, seller=seller
    ).compose()
    document.save()

    assert document.seller == factories.PlatformInvoiceFactory.as_dict(
        central_platform_user
    )
    assert document.buyer == factories.PlatformInvoiceFactory.as_company(seller)
    assert document.order_id == order.id
    assert len(document.lines) == 1
    assert document.lines[0] == {
        "amount": 1.0,
        "category": "",
        "count": 1.0,
        "name": "Plattformgebühr",
        "number": "",
        "price": 16.32,
        "producer": "Traidoo",
        "seller_user_id": central_platform_user.id,
        "unit": "",
        "vat_rate": 19.0,
    }


def test_create_platform_invoice_cooperative_member(
    order,
    order_items,
    central_platform_user,
    traidoo_region,
    delivery_address,
    delivery_options,
    seller,
):
    assert seller.is_cooperative_member

    document = factories.PlatformInvoiceFactory(
        order, region=traidoo_region, seller=seller
    ).compose()
    document.save()

    assert document.seller == factories.PlatformInvoiceFactory.as_dict(
        central_platform_user
    )
    assert document.buyer == factories.PlatformInvoiceFactory.as_company(seller)
    assert document.order_id == order.id
    assert len(document.lines) == 1
    assert document.lines[0] == {
        "amount": 1.0,
        "category": "",
        "count": 1.0,
        "name": "Plattformgebühr",
        "number": "",
        "price": 13.6,
        "producer": "Traidoo",
        "seller_user_id": central_platform_user.id,
        "unit": "",
        "vat_rate": 19.0,
    }


def test_include_paid_notice_in_seller_platform_invoice(
    order, traidoo_region, seller, order_items
):
    document = (
        factories.PlatformInvoiceFactory(order, region=traidoo_region, seller=seller)
        .compose()
        .render_html()
    )

    assert "Diese Rechnung ist bereits bezahlt" in document
| 25.844444 | 85 | 0.653912 | 241 | 2,326 | 6.049793 | 0.278008 | 0.076818 | 0.078189 | 0.090535 | 0.814129 | 0.780521 | 0.780521 | 0.780521 | 0.780521 | 0.780521 | 0 | 0.014108 | 0.238177 | 2,326 | 89 | 86 | 26.134831 | 0.808691 | 0 | 0 | 0.723684 | 0 | 0 | 0.092003 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 1 | 0.039474 | false | 0 | 0.013158 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
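The two expected platform fees in these tests differ by exactly a factor of 1.2 (13.6 × 1.2 = 16.32), which suggests non-cooperative sellers pay a 20% surcharge on the member fee. A sketch of that inferred rule — `platform_fee`, the surcharge constant, and the rounding are assumptions for illustration, not code taken from the invoice factory:

```python
def platform_fee(member_fee: float, is_cooperative_member: bool,
                 surcharge: float = 0.20) -> float:
    """Hypothetical pricing rule inferred from the test fixtures:
    cooperative members pay the base fee, non-members pay `surcharge` more."""
    if is_cooperative_member:
        return round(member_fee, 2)
    # Non-members: base fee plus the surcharge, rounded to cents.
    return round(member_fee * (1 + surcharge), 2)
```

Under this assumed rule, `platform_fee(13.6, True)` reproduces the member price and `platform_fee(13.6, False)` the 16.32 asserted for non-members.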
13b88f1b8cb46c7ca445a2284444b2dc1f6f6cc9 | 7,116 | py | Python | src/lod_api/apis/authority_provider.py | efre-lod/efre-lod-api | 07b1f7755df4785868d08b1a11921de410f3b25c | [
"Apache-2.0"
] | 4 | 2019-11-27T15:59:17.000Z | 2021-06-16T11:18:19.000Z | src/lod_api/apis/authority_provider.py | efre-lod/efre-lod-api | 07b1f7755df4785868d08b1a11921de410f3b25c | [
"Apache-2.0"
] | 3 | 2020-01-13T13:05:50.000Z | 2020-06-25T14:46:24.000Z | src/lod_api/apis/authority_provider.py | efre-lod/efre-lod-api | 07b1f7755df4785868d08b1a11921de410f3b25c | [
"Apache-2.0"
] | 1 | 2019-10-07T14:07:46.000Z | 2019-10-07T14:07:46.000Z | import flask
from flask_restx import Namespace
from flask_restx import reqparse
from elasticsearch import Elasticsearch

from lod_api import CONFIG
from lod_api.tools.resource import LodResource
from lod_api.tools.helper import ES_wrapper

api = Namespace(name="authority_search", path="/",
                description="Authority Provider Identifier Search")


# flaskREST+ BUG, which ignores the last element in <any([…])> list
# [see](https://github.com/noirbizarre/flask-restplus/issues/695)
# quickfix: add whitespace string as element
@api.route('/<any({}):authority_provider>/<string:id>'
           .format(CONFIG.get("authorities_list") + [" "]),
           methods=['GET']
           )
@api.param('authority_provider',
           'The name of the authority-provider to access. '
           'Allowed Values: {}.'.format(CONFIG.get("authorities_list")))
@api.param('id', 'The ID-String of the authority-identifier to access. '
           'Possible Values (examples): 208922695, 118695940, 20474817, Q1585819')
class AutSearch(LodResource):
    parser = reqparse.RequestParser()
    parser.add_argument(
        'format', type=str, help="set the Content-Type over this Query-Parameter. Allowed: nt, rdf, ttl, nq, jsonl, json", location="args")
    parser.add_argument(
        'size', type=int, help="Configure the maximum amount of hits to be returned", location="args", default=100)
    parser.add_argument(
        'from', type=int, help="Configure the offset from the first result you want to fetch", location="args", default=0)

    es_host, es_port, excludes, indices, authorities, auth_path = CONFIG.get("es_host", "es_port", "excludes", "indices", "authorities", "authority_path")
    es = Elasticsearch([{'host': es_host}], port=es_port, timeout=10)

    @api.response(200, 'Success')
    @api.response(404, 'Record(s) not found')
    @api.expect(parser)
    @api.doc('get record by authority-id')
    def get(self, authority_provider, id):
        """
        search for a given ID of a given authority-provider
        """
        print(type(self).__name__)
        retarray = []
        args = self.parser.parse_args()
        name = ""
        ending = ""
        if "." in id:
            dot_fields = id.split(".")
            name = dot_fields[0]
            ending = dot_fields[1]
        else:
            name = id
            ending = ""
        if authority_provider not in self.authorities:
            flask.abort(404)
        auth_url = self.authorities.get(authority_provider)
        # combine query fitting both protocols: http and https
        es_query = "{path}:\"http://{url}{id}\" OR {path}:\"https://{url}{id}\"".format(
            path=self.auth_path, url=auth_url, id=name)
        search = {
            "_source": {
                "excludes": self.excludes
            },
            "query": {
                "query_string": {
                    "query": es_query
                }
            }
        }
        res = ES_wrapper.call(self.es, action='search',
                              index=','.join(CONFIG.get("indices_list")),
                              body=search,
                              size=args.get("size"), from_=args.get("from"),
                              _source_excludes=self.excludes)
        if "hits" in res and "hits" in res["hits"]:
            for hit in res["hits"]["hits"]:
                retarray.append(hit.get("_source"))
        return self.response.parse(retarray, args.get("format"), ending, flask.request)
# flaskREST+ BUG, which ignores the last element in <any([…])> list
# [see](https://github.com/noirbizarre/flask-restplus/issues/695)
# quickfix: add whitespace string as element
@api.route('/<any({aut}):authority_provider>/<any({ent}):entity_type>'
           '/<string:id>'.format(aut=CONFIG.get("authorities_list") + [" "],
                                 ent=CONFIG.get("indices_list") + [" "]),
           methods=['GET'])
@api.param('authority_provider',
           'The name of the authority-provider to access. '
           'Allowed Values: {}.'.format(CONFIG.get("authorities_list")))
@api.param('entity_type', 'The name of the entity-index to access. '
           'Allowed Values: {}.'.format(CONFIG.get("indices_list")))
@api.param('id', 'The ID-String of the authority-identifier to access. '
           'Possible Values (examples): 208922695, 118695940, 20474817, Q1585819')
class AutEntSearch(LodResource):
    parser = reqparse.RequestParser()
    parser.add_argument(
        'format', type=str, help="set the Content-Type over this Query-Parameter. Allowed: nt, rdf, ttl, nq, jsonl, json", location="args")
    parser.add_argument(
        'size', type=int, help="Configure the maximum amount of hits to be returned", location="args", default=100)
    parser.add_argument(
        'from', type=int, help="Configure the offset from the first result you want to fetch", location="args", default=0)

    es_host, es_port, excludes, indices, authorities, auth_path = CONFIG.get("es_host", "es_port", "excludes", "indices", "authorities", "authority_path")
    es = Elasticsearch([{'host': es_host}], port=es_port, timeout=10)

    @api.response(200, 'Success')
    @api.response(404, 'Record(s) not found')
    @api.expect(parser)
    @api.doc('get record by authority-id and entity-id')
    def get(self, authority_provider, entity_type, id):
        """
        search for a given ID of a given authority-provider on a given entity-index
        """
        print(type(self).__name__)
        retarray = []
        args = self.parser.parse_args()
        name = ""
        ending = ""
        if "." in id:
            dot_fields = id.split(".")
            name = dot_fields[0]
            ending = dot_fields[1]
        else:
            name = id
            ending = ""
        if authority_provider not in self.authorities or entity_type not in CONFIG.get("indices_list"):
            flask.abort(404)
        auth_url = self.authorities.get(authority_provider)
        # combine query fitting both protocols: http and https
        es_query = "{path}:\"http://{url}{id}\" OR {path}:\"https://{url}{id}\"".format(
            path=self.auth_path, url=auth_url, id=name)
        search = {
            "_source": {
                "excludes": self.excludes
            },
            "query": {
                "query_string": {
                    "query": es_query
                }
            }
        }
        res = ES_wrapper.call(self.es, action='search',
                              index=entity_type, body=search,
                              size=args.get("size"), from_=args.get("from"),
                              _source_excludes=self.excludes)
        if "hits" in res and "hits" in res["hits"]:
            for hit in res["hits"]["hits"]:
                retarray.append(hit.get("_source"))
        return self.response.parse(retarray, args.get("format"), ending, flask.request)
| 45.909677 | 154 | 0.575885 | 821 | 7,116 | 4.886724 | 0.198538 | 0.063559 | 0.025424 | 0.023928 | 0.857428 | 0.850947 | 0.836989 | 0.828016 | 0.828016 | 0.828016 | 0 | 0.020932 | 0.288364 | 7,116 | 154 | 155 | 46.207792 | 0.770142 | 0.083193 | 0 | 0.707692 | 0 | 0.015385 | 0.256723 | 0.015147 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0 | 0.053846 | 0 | 0.130769 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
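Both resources in this module build the same dual-protocol `query_string`, matching the stored authority URI under both `http://` and `https://`. Factored out as a standalone helper for clarity — a sketch, since the project keeps this expression inline in each `get` method, and the GND path/URL values in the example are illustrative assumptions:

```python
def build_authority_query(path: str, auth_url: str, identifier: str) -> str:
    """Build an Elasticsearch query_string that matches the authority URI
    under both schemes, mirroring the inline expression in the resources."""
    return "{path}:\"http://{url}{id}\" OR {path}:\"https://{url}{id}\"".format(
        path=path, url=auth_url, id=identifier)


# Hypothetical example values: a GND identifier under a `sameAs.@id` field.
query = build_authority_query("sameAs.@id", "d-nb.info/gnd/", "118695940")
```

Quoting both scheme variants is necessary because the indexed documents may store either form of the URI, and `query_string` matches exact phrases only.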
13bf5219a0437ac8c831f972a9a3df508e63c1d7 | 11,075 | py | Python | cashbook/testing/test_integration/test_audit.py | rossm6/accounts | 74633ce4038806222048d85ef9dfe97a957a6a71 | [
"MIT"
] | 11 | 2021-01-23T01:09:54.000Z | 2021-01-25T07:16:30.000Z | cashbook/testing/test_integration/test_audit.py | rossm6/accounts | 74633ce4038806222048d85ef9dfe97a957a6a71 | [
"MIT"
] | 7 | 2021-04-06T18:19:10.000Z | 2021-09-22T19:45:03.000Z | cashbook/testing/test_integration/test_audit.py | rossm6/accounts | 74633ce4038806222048d85ef9dfe97a957a6a71 | [
"MIT"
] | 3 | 2021-01-23T18:55:32.000Z | 2021-02-16T17:47:59.000Z | from datetime import date
from accountancy.signals import audit_post_delete
from cashbook.models import (CashBook, CashBookHeader, CashBookLine,
                             CashBookTransaction)
from django.db import models
from django.test import TestCase
from nominals.models import Nominal
from simple_history.models import HistoricalRecords
class CashBookAuditTests(TestCase):

    def test_simple_history_post_delete_receiver_is_removed(self):
        """
        The ready method of the AppConfig calls simple_history_custom_set_up
        on the AuditMixin class which disconnects this receiver.
        """
        live_receivers = models.signals.post_delete._live_receivers(CashBook)
        for receiver in live_receivers:
            if receiver.__self__.__class__.__name__ == HistoricalRecords.__name__:
                self.fail(
                    """
                    Historical Records receiver not disconnected.
                    It should be because we are using our own custom signal
                    which is fired when we delete."""
                )

    def test_audit_post_delete_signal_is_added(self):
        """
        After registering the model and disconnecting the receiver from
        the post delete signal we add our receiver to a custom signal
        """
        live_receivers = audit_post_delete._live_receivers(CashBook)
        found = False
        for receiver in live_receivers:
            if str(receiver) == "<bound method AuditMixin.post_delete of <class 'cashbook.models.CashBook'>>":
                found = True
                break
        if not found:
            self.fail(
                "Failed to find the post_delete method of the AuditMixin class")

    """
    Create and update are taken care of by the app simple_history. Just check here that is working.
    """

    def test_audit_is_created(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        self.assertEqual(
            len(
                CashBook.history.all()
            ),
            1  # created audit
        )

    def test_audit_is_updated(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        c.name = "new name"
        c.save()
        self.assertEqual(
            len(
                CashBook.history.all()
            ),
            2  # created + updated audits
        )

    def test_instance_deleted(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        c.delete()
        self.assertEqual(
            len(
                CashBook.history.all()
            ),
            2  # created + deleted audits
        )

    def test_queryset_deleted(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        CashBook.objects.all().delete()
        self.assertEqual(
            len(
                CashBook.history.all()
            ),
            1  # created audit only
            # deleted audit is not created
            # use bulk_delete_with_history for deleted audits
        )
class CashBookHeaderAuditTests(TestCase):

    def test_simple_history_post_delete_receiver_is_removed(self):
        """
        The ready method of the AppConfig calls simple_history_custom_set_up
        on the AuditMixin class which disconnects this receiver.
        """
        live_receivers = models.signals.post_delete._live_receivers(
            CashBookHeader)
        for receiver in live_receivers:
            if receiver.__self__.__class__.__name__ == HistoricalRecords.__name__:
                self.fail(
                    """
                    Historical Records receiver not disconnected.
                    It should be because we are using our own custom signal
                    which is fired when we delete."""
                )

    def test_audit_post_delete_signal_is_added(self):
        """
        After registering the model and disconnecting the receiver from
        the post delete signal we add our receiver to a custom signal
        """
        live_receivers = audit_post_delete._live_receivers(CashBookHeader)
        found = False
        for receiver in live_receivers:
            if str(receiver) == "<bound method AuditMixin.post_delete of <class 'cashbook.models.CashBookHeader'>>":
                found = True
                break
        if not found:
            self.fail(
                "Failed to find the post_delete method of the AuditMixin class")

    """
    Create and update are taken care of by the app simple_history. Just check here that is working.
    """

    def test_audit_is_created(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        t = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        t.save()
        self.assertEqual(
            len(
                CashBookHeader.history.all()
            ),
            1  # created audit
        )

    def test_audit_is_updated(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        t = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        t.save()
        t.date = date.today()
        t.save()
        self.assertEqual(
            len(
                CashBookHeader.history.all()
            ),
            2  # created + updated audits
        )

    def test_instance_deleted(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        t = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        t.save()
        t.delete()
        self.assertEqual(
            len(
                CashBookHeader.history.all()
            ),
            2  # created + deleted audits
        )

    def test_queryset_deleted(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        t = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        t.save()
        CashBookHeader.objects.all().delete()
        self.assertEqual(
            len(
                CashBookHeader.history.all()
            ),
            1  # created audit only
            # deleted audit is not created
            # use bulk_delete_with_history for deleted audits
        )
class CashBookLineAuditTests(TestCase):

    def test_simple_history_post_delete_receiver_is_removed(self):
        """
        The ready method of the AppConfig calls simple_history_custom_set_up
        on the AuditMixin class which disconnects this receiver.
        """
        live_receivers = models.signals.post_delete._live_receivers(
            CashBookLine)
        for receiver in live_receivers:
            if receiver.__self__.__class__.__name__ == HistoricalRecords.__name__:
                self.fail(
                    """
                    Historical Records receiver not disconnected.
                    It should be because we are using our own custom signal
                    which is fired when we delete."""
                )

    def test_audit_post_delete_signal_is_added(self):
        """
        After registering the model and disconnecting the receiver from
        the post delete signal we add our receiver to a custom signal
        """
        live_receivers = audit_post_delete._live_receivers(CashBookLine)
        found = False
        for receiver in live_receivers:
            if str(receiver) == "<bound method AuditMixin.post_delete of <class 'cashbook.models.CashBookLine'>>":
                found = True
                break
        if not found:
            self.fail(
                "Failed to find the post_delete method of the AuditMixin class")

    """
    Create and update are taken care of by the app simple_history. Just check here that is working.
    """

    def test_audit_is_created(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        h = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        h.save()
        l = CashBookLine.objects.create(
            header=h, line_no=1, description="description")
        self.assertEqual(
            len(
                CashBookLine.history.all()
            ),
            1  # created audits
        )

    def test_audit_is_updated(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        h = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        h.save()
        l = CashBookLine.objects.create(
            header=h, line_no=1, description="description")
        l.description = "new description"
        l.save()
        self.assertEqual(
            len(
                CashBookLine.history.all()
            ),
            2  # created + updated audits
        )

    def test_instance_deleted(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        h = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        h.save()
        l = CashBookLine.objects.create(
            header=h, line_no=1, description="description")
        l.delete()
        self.assertEqual(
            len(
                CashBookLine.history.all()
            ),
            2  # created + deleted audits
        )

    def test_queryset_deleted(self):
        n = Nominal.objects.create(name="nominal")
        c = CashBook(
            name="current",
            nominal=n
        )
        c.save()
        h = CashBookHeader(
            date=date.today(),
            cash_book=c
        )
        h.save()
        l = CashBookLine.objects.create(
            header=h, line_no=1, description="description")
        CashBookLine.objects.all().delete()
        self.assertEqual(
            len(
                CashBookLine.history.all()
            ),
            1  # created audit only
            # deleted audit is not created
            # use bulk_delete_with_history for deleted audits
        )
class CashBookTransactionAuditTests(TestCase):

    def test_no_historical_model_exists(self):
        if hasattr(CashBookTransaction, "history"):
            self.fail("This model should not be audited")
| 30.425824 | 116 | 0.546546 | 1,131 | 11,075 | 5.17595 | 0.116711 | 0.037581 | 0.024599 | 0.038948 | 0.895969 | 0.884353 | 0.879228 | 0.873078 | 0.827981 | 0.808165 | 0 | 0.002306 | 0.373634 | 11,075 | 363 | 117 | 30.509642 | 0.841574 | 0.111693 | 0 | 0.711744 | 0 | 0 | 0.079613 | 0.018408 | 0 | 0 | 0 | 0 | 0.042705 | 1 | 0.067616 | false | 0 | 0.024911 | 0 | 0.106762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
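The `test_queryset_deleted` cases above all expect that a queryset-level `delete()` leaves no deletion audit, because bulk deletion never invokes each instance's own delete hook. A plain-Python illustration of that distinction — `AuditedRecord` and `bulk_delete` are invented for this sketch, not project code:

```python
class AuditedRecord:
    """Toy stand-in for an audited model: instance-level delete logs an audit."""
    audit_log = []

    def __init__(self, name):
        self.name = name

    def delete(self):
        # Per-instance deletion runs this hook, like a model's delete() override
        # that fires the custom audit signal.
        AuditedRecord.audit_log.append(("deleted", self.name))


def bulk_delete(records):
    """Queryset-style bulk delete: removes entries without calling each
    instance's delete(), so no deletion audits are written."""
    records.clear()


records = [AuditedRecord("a"), AuditedRecord("b")]
records[0].delete()   # audited: one "deleted" entry is logged
bulk_delete(records)  # not audited: the hook is bypassed entirely
```

This is why the tests expect exactly one history record (the creation audit) after `objects.all().delete()`, and why the comments recommend a bulk-delete-with-history helper when deletion audits matter.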
13d58bfd78b126d5f5210061a8c26b4bbdcd195a | 5,348 | py | Python | configs/trackMPNN_detector/libra_cascade_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py | mez/mmdetection | 79262d3db7452ab465466e8f11cd85dc866b3672 | [
"Apache-2.0"
] | null | null | null | configs/trackMPNN_detector/libra_cascade_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py | mez/mmdetection | 79262d3db7452ab465466e8f11cd85dc866b3672 | [
"Apache-2.0"
] | null | null | null | configs/trackMPNN_detector/libra_cascade_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py | mez/mmdetection | 79262d3db7452ab465466e8f11cd85dc866b3672 | [
"Apache-2.0"
] | null | null | null | _base_ = 'cascade_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py'
model = dict(
    neck=[
        dict(
            type='FPN',
            in_channels=[256, 512, 1024, 2048],
            out_channels=256,
            num_outs=5),
        dict(
            type='BFP',
            in_channels=256,
            num_levels=5,
            refine_level=2,
            refine_type='non_local')
    ],
    roi_head=dict(
        bbox_head=[
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=8,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(
                    type='BalancedL1Loss',
                    alpha=0.5,
                    gamma=1.5,
                    beta=1.0,
                    loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=8,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(
                    type='BalancedL1Loss',
                    alpha=0.5,
                    gamma=1.5,
                    beta=1.0,
                    loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=8,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(
                    type='BalancedL1Loss',
                    alpha=0.5,
                    gamma=1.5,
                    beta=1.0,
                    loss_weight=1.0))
        ]
    ),
    train_cfg=dict(
        rpn=dict(sampler=dict(neg_pos_ub=5), allowed_border=-1),
        rcnn=[
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='CombinedSampler',
                    num=512,
                    pos_fraction=0.25,
                    add_gt_as_proposals=True,
                    pos_sampler=dict(type='InstanceBalancedPosSampler'),
                    neg_sampler=dict(
                        type='IoUBalancedNegSampler',
                        floor_thr=-1,
                        floor_fraction=0,
                        num_bins=3)),
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.6,
                    neg_iou_thr=0.6,
                    min_pos_iou=0.6,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='CombinedSampler',
                    num=512,
                    pos_fraction=0.25,
                    add_gt_as_proposals=True,
                    pos_sampler=dict(type='InstanceBalancedPosSampler'),
                    neg_sampler=dict(
                        type='IoUBalancedNegSampler',
                        floor_thr=-1,
                        floor_fraction=0,
                        num_bins=3)),
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.7,
                    min_pos_iou=0.7,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='CombinedSampler',
                    num=512,
                    pos_fraction=0.25,
                    add_gt_as_proposals=True,
                    pos_sampler=dict(type='InstanceBalancedPosSampler'),
                    neg_sampler=dict(
                        type='IoUBalancedNegSampler',
                        floor_thr=-1,
                        floor_fraction=0,
                        num_bins=3)),
                pos_weight=-1,
                debug=False)
        ]
    )
)
| 35.184211 | 72 | 0.405011 | 483 | 5,348 | 4.21118 | 0.20911 | 0.102262 | 0.066372 | 0.035398 | 0.825959 | 0.825959 | 0.825959 | 0.825959 | 0.825959 | 0.804326 | 0 | 0.073235 | 0.504675 | 5,348 | 151 | 73 | 35.417219 | 0.694602 | 0 | 0 | 0.761589 | 0 | 0 | 0.091249 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b93168a5805cc96b6f22c471b4e766774dda93d0 | 635 | py | Python | tests/integrationtests/connection/testdata/TestConnectionQueueData.py | jedicontributors/pythondataintegrator | 3e877b367ab9b20185476128ec053db41087879f | [
"MIT"
] | 14 | 2020-12-19T15:06:13.000Z | 2022-01-12T19:52:17.000Z | tests/integrationtests/connection/testdata/TestConnectionQueueData.py | jedicontributors/pythondataintegrator | 3e877b367ab9b20185476128ec053db41087879f | [
"MIT"
] | 43 | 2021-01-06T22:05:22.000Z | 2022-03-10T10:30:30.000Z | tests/integrationtests/connection/testdata/TestConnectionQueueData.py | jedicontributors/pythondataintegrator | 3e877b367ab9b20185476128ec053db41087879f | [
"MIT"
] | 4 | 2020-12-18T23:10:09.000Z | 2021-04-02T13:03:12.000Z | class TestConnectionQueueData:
    test_insert_input = {
        "Name": "TestConnection",
        "ConnectorTypeName": "Kafka",
        "Servers": [{
            "Host": "string",
            "Port": 0
        }],
        "Protocol": "TEST",
        "Mechanism": "TEST",
        "User": "string",
        "Password": "string"
    }
    test_update_input = {
        "Name": "TestConnection",
        "ConnectorTypeName": "Kafka",
        "Servers": [{
            "Host": "string",
            "Port": 0
        }],
        "Protocol": "TEST",
        "Mechanism": "TEST",
        "User": "string",
        "Password": "string"
    }
| 24.423077 | 37 | 0.447244 | 42 | 635 | 6.666667 | 0.452381 | 0.064286 | 0.164286 | 0.285714 | 0.828571 | 0.828571 | 0.828571 | 0.828571 | 0.828571 | 0.828571 | 0 | 0.005063 | 0.377953 | 635 | 25 | 38 | 25.4 | 0.703797 | 0 | 0 | 0.8 | 0 | 0 | 0.346457 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.08 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
b934fe8923d8b92901f58cf185ad1ebf4abe0fa1 | 12,452 | py | Python | sdk/python/pulumi_azure/storage/blob.py | vijayraavi/pulumi-azure | d080c0f5258c18148d1ea95f77a49a04b0ffb208 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/storage/blob.py | vijayraavi/pulumi-azure | d080c0f5258c18148d1ea95f77a49a04b0ffb208 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/storage/blob.py | vijayraavi/pulumi-azure | d080c0f5258c18148d1ea95f77a49a04b0ffb208 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class Blob(pulumi.CustomResource):
    access_tier: pulumi.Output[str]
    """
    The access tier of the storage blob. Possible values are `Archive`, `Cool` and `Hot`.
    """
    attempts: pulumi.Output[float]
    """
    The number of attempts to make per page or block when uploading. Defaults to `1`.
    """
    content_type: pulumi.Output[str]
    """
    The content type of the storage blob. Cannot be defined if `source_uri` is defined. Defaults to `application/octet-stream`.
    """
    metadata: pulumi.Output[dict]
    """
    A map of custom blob metadata.
    """
    name: pulumi.Output[str]
    """
    The name of the storage blob. Must be unique within the storage container the blob is located.
    """
    parallelism: pulumi.Output[float]
    """
    The number of workers per CPU core to run for concurrent uploads. Defaults to `8`.
    """
    resource_group_name: pulumi.Output[str]
    """
    The name of the resource group in which to create the storage container.
    """
    size: pulumi.Output[float]
    """
    Used only for `page` blobs to specify the size in bytes of the blob to be created. Must be a multiple of 512. Defaults to 0.
    """
    source: pulumi.Output[str]
    """
    An absolute path to a file on the local system. This field cannot be specified for Append blobs and cannot be specified if `source_content` or `source_uri` is specified.
    """
    source_content: pulumi.Output[str]
    """
    The content for this blob which should be defined inline. This field can only be specified for Block blobs and cannot be specified if `source` or `source_uri` is specified.
    """
    source_uri: pulumi.Output[str]
    """
    The URI of an existing blob, or a file in the Azure File service, to use as the source contents
    for the blob to be created. Changing this forces a new resource to be created. This field cannot be specified for Append blobs and cannot be specified if `source` or `source_content` is specified.
    """
    storage_account_name: pulumi.Output[str]
    """
    Specifies the storage account in which to create the storage container.
    Changing this forces a new resource to be created.
    """
    storage_container_name: pulumi.Output[str]
    """
    The name of the storage container in which this blob should be created.
    """
    type: pulumi.Output[str]
    """
    The type of the storage blob to be created. Possible values are `Append`, `Block` or `Page`. Changing this forces a new resource to be created.
    """
    url: pulumi.Output[str]
    """
    The URL of the blob
    """
    def __init__(__self__, resource_name, opts=None, access_tier=None, attempts=None, content_type=None, metadata=None, name=None, parallelism=None, resource_group_name=None, size=None, source=None, source_content=None, source_uri=None, storage_account_name=None, storage_container_name=None, type=None, __props__=None, __name__=None, __opts__=None):
        """
        Manages a Blob within a Storage Container.

        :param str resource_name: The name of the resource.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] access_tier: The access tier of the storage blob. Possible values are `Archive`, `Cool` and `Hot`.
        :param pulumi.Input[float] attempts: The number of attempts to make per page or block when uploading. Defaults to `1`.
        :param pulumi.Input[str] content_type: The content type of the storage blob. Cannot be defined if `source_uri` is defined. Defaults to `application/octet-stream`.
        :param pulumi.Input[dict] metadata: A map of custom blob metadata.
        :param pulumi.Input[str] name: The name of the storage blob. Must be unique within the storage container the blob is located.
        :param pulumi.Input[float] parallelism: The number of workers per CPU core to run for concurrent uploads. Defaults to `8`.
        :param pulumi.Input[str] resource_group_name: The name of the resource group in which to create the storage container.
        :param pulumi.Input[float] size: Used only for `page` blobs to specify the size in bytes of the blob to be created. Must be a multiple of 512. Defaults to 0.
        :param pulumi.Input[str] source: An absolute path to a file on the local system. This field cannot be specified for Append blobs and cannot be specified if `source_content` or `source_uri` is specified.
        :param pulumi.Input[str] source_content: The content for this blob which should be defined inline. This field can only be specified for Block blobs and cannot be specified if `source` or `source_uri` is specified.
        :param pulumi.Input[str] source_uri: The URI of an existing blob, or a file in the Azure File service, to use as the source contents
               for the blob to be created. Changing this forces a new resource to be created. This field cannot be specified for Append blobs and cannot be specified if `source` or `source_content` is specified.
        :param pulumi.Input[str] storage_account_name: Specifies the storage account in which to create the storage container.
               Changing this forces a new resource to be created.
        :param pulumi.Input[str] storage_container_name: The name of the storage container in which this blob should be created.
        :param pulumi.Input[str] type: The type of the storage blob to be created. Possible values are `Append`, `Block` or `Page`. Changing this forces a new resource to be created.

        > This content is derived from https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/website/docs/r/storage_blob.html.markdown.
        """
        if __name__ is not None:
            warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
            resource_name = __name__
        if __opts__ is not None:
            warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
            opts = __opts__
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = dict()

            __props__['access_tier'] = access_tier
            __props__['attempts'] = attempts
            __props__['content_type'] = content_type
            __props__['metadata'] = metadata
            __props__['name'] = name
            __props__['parallelism'] = parallelism
            __props__['resource_group_name'] = resource_group_name
            __props__['size'] = size
            __props__['source'] = source
            __props__['source_content'] = source_content
            __props__['source_uri'] = source_uri
            if storage_account_name is None:
                raise TypeError("Missing required property 'storage_account_name'")
            __props__['storage_account_name'] = storage_account_name
            if storage_container_name is None:
                raise TypeError("Missing required property 'storage_container_name'")
            __props__['storage_container_name'] = storage_container_name
            if type is None:
                raise TypeError("Missing required property 'type'")
            __props__['type'] = type
            __props__['url'] = None
        super(Blob, __self__).__init__(
            'azure:storage/blob:Blob',
            resource_name,
            __props__,
            opts)

    @staticmethod
    def get(resource_name, id, opts=None, access_tier=None, attempts=None, content_type=None, metadata=None, name=None, parallelism=None, resource_group_name=None, size=None, source=None, source_content=None, source_uri=None, storage_account_name=None, storage_container_name=None, type=None, url=None):
        """
        Get an existing Blob resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param str id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] access_tier: The access tier of the storage blob. Possible values are `Archive`, `Cool` and `Hot`.
        :param pulumi.Input[float] attempts: The number of attempts to make per page or block when uploading. Defaults to `1`.
        :param pulumi.Input[str] content_type: The content type of the storage blob. Cannot be defined if `source_uri` is defined. Defaults to `application/octet-stream`.
        :param pulumi.Input[dict] metadata: A map of custom blob metadata.
        :param pulumi.Input[str] name: The name of the storage blob. Must be unique within the storage container the blob is located.
        :param pulumi.Input[float] parallelism: The number of workers per CPU core to run for concurrent uploads. Defaults to `8`.
        :param pulumi.Input[str] resource_group_name: The name of the resource group in which to create the storage container.
        :param pulumi.Input[float] size: Used only for `page` blobs to specify the size in bytes of the blob to be created. Must be a multiple of 512. Defaults to 0.
        :param pulumi.Input[str] source: An absolute path to a file on the local system. This field cannot be specified for Append blobs and cannot be specified if `source_content` or `source_uri` is specified.
        :param pulumi.Input[str] source_content: The content for this blob which should be defined inline. This field can only be specified for Block blobs and cannot be specified if `source` or `source_uri` is specified.
        :param pulumi.Input[str] source_uri: The URI of an existing blob, or a file in the Azure File service, to use as the source contents
               for the blob to be created. Changing this forces a new resource to be created. This field cannot be specified for Append blobs and cannot be specified if `source` or `source_content` is specified.
        :param pulumi.Input[str] storage_account_name: Specifies the storage account in which to create the storage container.
               Changing this forces a new resource to be created.
        :param pulumi.Input[str] storage_container_name: The name of the storage container in which this blob should be created.
        :param pulumi.Input[str] type: The type of the storage blob to be created. Possible values are `Append`, `Block` or `Page`. Changing this forces a new resource to be created.
        :param pulumi.Input[str] url: The URL of the blob

        > This content is derived from https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/website/docs/r/storage_blob.html.markdown.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = dict()

        __props__["access_tier"] = access_tier
        __props__["attempts"] = attempts
        __props__["content_type"] = content_type
        __props__["metadata"] = metadata
        __props__["name"] = name
        __props__["parallelism"] = parallelism
        __props__["resource_group_name"] = resource_group_name
        __props__["size"] = size
        __props__["source"] = source
        __props__["source_content"] = source_content
        __props__["source_uri"] = source_uri
        __props__["storage_account_name"] = storage_account_name
        __props__["storage_container_name"] = storage_container_name
        __props__["type"] = type
        __props__["url"] = url
        return Blob(resource_name, opts=opts, __props__=__props__)

    def translate_output_property(self, prop):
        return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

    def translate_input_property(self, prop):
        return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 62.888889 | 350 | 0.698442 | 1,743 | 12,452 | 4.796328 | 0.113597 | 0.040789 | 0.055502 | 0.047727 | 0.82177 | 0.794139 | 0.777632 | 0.763158 | 0.741029 | 0.723565 | 0 | 0.001969 | 0.225024 | 12,452 | 197 | 351 | 63.208122 | 0.864352 | 0.438484 | 0 | 0.022472 | 1 | 0 | 0.153682 | 0.024187 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044944 | false | 0.011236 | 0.067416 | 0.022472 | 0.325843 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b943ad3290aebdc6fabd5f40cff118ab30c432af | 5,405 | py | Python | userbot/modules/rastick.py | oxyda-fox/XBot-Remix | 3d97bea5395b223fc89a8cc6cb699cc624ccc967 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | userbot/modules/rastick.py | oxyda-fox/XBot-Remix | 3d97bea5395b223fc89a8cc6cb699cc624ccc967 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | userbot/modules/rastick.py | oxyda-fox/XBot-Remix | 3d97bea5395b223fc89a8cc6cb699cc624ccc967 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | #Encript Marshal By XVenom
#https://github.com/xvenom15
import marshal
exec(marshal.loads(b'\xe3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00@\x00\x00\x00sx\x00\x00\x00d\x00d\x01l\x00Z\x00d\x00d\x01l\x01Z\x01d\x00d\x02l\x02m\x03Z\x03m\x04Z\x04\x01\x00d\x00d\x03l\x05m\x06Z\x06\x01\x00d\x00d\x04l\x07m\x08Z\x08\x01\x00e\x01\xa0\td\x05\xa1\x01Z\ne\x0be\x0bd\x06\x9c\x02d\x07d\x08\x84\x04Z\x0ce\x06d\td\nd\x0b\x8d\x02d\x0cd\r\x84\x00\x83\x01Z\re\x03\xa0\x0ed\rd\x0ei\x01\xa1\x01\x01\x00d\x01S\x00)\x0f\xe9\x00\x00\x00\x00N)\x02\xda\x08CMD_HELP\xda\x03bot)\x01\xda\x08register)\x01\xda\x05sleepud\x00\x00\x00[\xf0\x9f\x87\xa0-\xf0\x9f\x87\xbf\xf0\x9f\x8c\x80-\xf0\x9f\x97\xbf\xf0\x9f\x98\x80-\xf0\x9f\x99\x8f\xf0\x9f\x9a\x80-\xf0\x9f\x9b\xbf\xf0\x9f\x9c\x80-\xf0\x9f\x9d\xbf\xf0\x9f\x9e\x80-\xf0\x9f\x9f\xbf\xf0\x9f\xa0\x80-\xf0\x9f\xa3\xbf\xf0\x9f\xa4\x80-\xf0\x9f\xa7\xbf\xf0\x9f\xa8\x80-\xf0\x9f\xa9\xaf\xf0\x9f\xa9\xb0-\xf0\x9f\xab\xbf\xe2\x9c\x82-\xe2\x9e\xb0]+)\x02\xda\x0binputString\xda\x06returnc\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00C\x00\x00\x00s\x0e\x00\x00\x00t\x00\xa0\x01t\x02d\x01|\x00\xa1\x03S\x00)\x02N\xda\x00)\x03\xda\x02reZ\x03sub\xda\rEMOJI_PATTERN)\x01r\x06\x00\x00\x00\xa9\x00r\x0b\x00\x00\x00r\x08\x00\x00\x00\xda\tdeEmojify\x18\x00\x00\x00s\x02\x00\x00\x00\x00\x01r\x0c\x00\x00\x00Tz\x11^\\.rst(?: |$)(.*))\x02Z\x08outgoingZ\x07patternc\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00?\x00\x00\x00\xc3\x00\x00\x00s\\\x01\x00\x00|\x00j\x00\xa0\x01d\x01\xa1\x01}\x01|\x01s<|\x00j\x02r(|\x00\xa0\x03\xa1\x00I\x00d\x00H\x00j\x04}\x01n\x14|\x00\xa0\x05d\x02\xa1\x01I\x00d\x00H\x00\x01\x00d\x00S\x00d\x01d\x03d\x04d\x05d\x06d\x07d\x08d\td\nd\x0bd\x0cd\rd\x0ed\x0fd\x10d\x11d\x12d\x13d\x14d\x15d\x16d\x17d\x18d\x19d\x1ad\x1bd\x1cd\x1dd\x1ed\x1fd 
d!d"d#d$d%d&d\'d(d)d*d+d,d-d.d/d0d1d2d3d4d5d6d7d8d9d:d;d<d=d>d?d@g?}\x02t\x06\xa0\x07dAdBt\x08\xa0\t|\x02\xa1\x01\x9b\x00t\n|\x01\x83\x01\x9b\x00\x9d\x03\xa1\x02I\x00d\x00H\x00}\x03z0|\x03dC\x19\x00j\x0b|\x00j\x0c|\x00j\r|\x00j\x02\x90\x01r\x02dDn\x02dEdDdF\x8d\x04I\x00d\x00H\x00\x01\x00W\x00n&\x04\x00t\x0ek\n\x90\x01r:\x01\x00\x01\x00\x01\x00|\x00\xa0\x0fdG\xa1\x01I\x00d\x00H\x00\x06\x00Y\x00S\x00X\x00t\x10d\x06\x83\x01I\x00d\x00H\x00\x01\x00|\x00\xa0\x11\xa1\x00I\x00d\x00H\x00\x01\x00d\x00S\x00)HN\xe9\x01\x00\x00\x00z#`No text given, hence no stickers.`\xe9\x02\x00\x00\x00\xe9\x03\x00\x00\x00\xe9\x04\x00\x00\x00\xe9\x05\x00\x00\x00\xe9\x06\x00\x00\x00\xe9\x07\x00\x00\x00\xe9\x08\x00\x00\x00\xe9\t\x00\x00\x00\xe9\n\x00\x00\x00\xe9\x0b\x00\x00\x00\xe9\x0c\x00\x00\x00\xe9\r\x00\x00\x00\xe9\x0e\x00\x00\x00\xe9\x0f\x00\x00\x00\xe9\x10\x00\x00\x00\xe9\x11\x00\x00\x00\xe9\x12\x00\x00\x00\xe9\x13\x00\x00\x00\xe9\x14\x00\x00\x00\xe9\x15\x00\x00\x00\xe9\x16\x00\x00\x00\xe9\x17\x00\x00\x00\xe9\x18\x00\x00\x00\xe9\x19\x00\x00\x00\xe9\x1a\x00\x00\x00\xe9\x1b\x00\x00\x00\xe9\x1c\x00\x00\x00\xe9\x1d\x00\x00\x00\xe9\x1e\x00\x00\x00\xe9\x1f\x00\x00\x00\xe9 \x00\x00\x00\xe9!\x00\x00\x00\xe9"\x00\x00\x00\xe9#\x00\x00\x00\xe9$\x00\x00\x00\xe9%\x00\x00\x00\xe9&\x00\x00\x00\xe9\'\x00\x00\x00\xe9(\x00\x00\x00\xe9)\x00\x00\x00\xe9*\x00\x00\x00\xe9+\x00\x00\x00\xe9,\x00\x00\x00\xe9-\x00\x00\x00\xe9.\x00\x00\x00\xe9/\x00\x00\x00\xe90\x00\x00\x00\xe91\x00\x00\x00\xe92\x00\x00\x00\xe93\x00\x00\x00\xe94\x00\x00\x00\xe95\x00\x00\x00\xe96\x00\x00\x00\xe97\x00\x00\x00\xe98\x00\x00\x00\xe99\x00\x00\x00\xe9:\x00\x00\x00\xe9;\x00\x00\x00\xe9<\x00\x00\x00\xe9=\x00\x00\x00\xe9>\x00\x00\x00\xe9?\x00\x00\x00Z\x0estickerizerbot\xfa\x01#r\x01\x00\x00\x00TF)\x03Z\x08reply_toZ\x06silentZ\x08hide_viazT`You cannot send inline results in this chat (caused by 
SendInlineBotResultRequest)`)\x12Z\rpattern_match\xda\x05groupZ\x08is_replyZ\x11get_reply_message\xda\x07messageZ\x06answerr\x03\x00\x00\x00Z\x0cinline_query\xda\x06randomZ\x06choicer\x0c\x00\x00\x00Z\x05clickZ\x07chat_idZ\x0freply_to_msg_id\xda\tExceptionZ\x04editr\x05\x00\x00\x00\xda\x06delete)\x04Z\x05animu\xda\x04textZ\x06animusZ\x08sticcersr\x0b\x00\x00\x00r\x0b\x00\x00\x00r\x08\x00\x00\x00\xda\x07rastick\x1c\x00\x00\x00s\xae\x00\x00\x00\x00\x02\x0c\x01\x04\x01\x06\x01\x12\x02\x10\x01\x04\x02\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\xc1\x04A\x04\x01\x02\x00\x16\xff\n\x03\x02\x01\x08\x01\x04\x01\x04\x01\x0e\x01\x02\xfc\x10\x06\x10\x01\x04\x01\x02\xff\x10\x03\x0e\x01rS\x00\x00\x00zU>`.rst`\nUsage: To stickerize your text with random sticker templates.\n@StickerizerBot)\x0frO\x00\x00\x00r\t\x00\x00\x00Z\x07userbotr\x02\x00\x00\x00r\x03\x00\x00\x00Z\x0euserbot.eventsr\x04\x00\x00\x00Z\x07asyncior\x05\x00\x00\x00\xda\x07compiler\n\x00\x00\x00\xda\x03strr\x0c\x00\x00\x00rS\x00\x00\x00\xda\x06updater\x0b\x00\x00\x00r\x0b\x00\x00\x00r\x0b\x00\x00\x00r\x08\x00\x00\x00\xda\x08<module>\x01\x00\x00\x00s\x1e\x00\x00\x00\x08\x01\x08\x02\x10\x01\x0c\x01\x0c\x01\x04\x01\x02\xff\x04\x11\x10\x04\n\x01\n[\x04\x02\x02\x00\x02\xff\x02\xff')) | 1,351.25 | 5,334 | 0.761332 | 1,190 | 5,405 | 3.447059 | 0.226891 | 0.33057 | 0.25451 | 0.17845 | 0.275963 | 0.245002 | 0.232082 | 0.232082 | 0.228913 | 0.228913 | 0 | 0.352569 | 0.006105 | 5,405 | 4 | 5,334 | 1,351.25 | 0.41102 | 0.009621 | 0 | 0 | 0 | 1 | 0.755605 | 0.734492 | 0 | 0 | 0 | 
0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 13 |
b958f16efbff65f716b1c34de1fa9668d623700f | 174 | py | Python | aadi/__init__.py | WanXiaopei/aadi | 08a7399b3dcfab716cc7b80a88201fc47186ffd3 | [
"MIT"
] | 4 | 2021-06-01T02:46:21.000Z | 2022-01-11T03:02:36.000Z | aadi/__init__.py | WanXiaopei/aadi | 08a7399b3dcfab716cc7b80a88201fc47186ffd3 | [
"MIT"
] | null | null | null | aadi/__init__.py | WanXiaopei/aadi | 08a7399b3dcfab716cc7b80a88201fc47186ffd3 | [
"MIT"
] | null | null | null | from .utils import *
from .config import *
from .layers import *
from .lazy_fast_rcnn import *
from .aadi_rpn import *
from .aadi_retinanet import *
from .roi_heads import *
| 21.75 | 29 | 0.758621 | 26 | 174 | 4.884615 | 0.5 | 0.472441 | 0.220472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16092 | 174 | 7 | 30 | 24.857143 | 0.869863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b9cb1e49b2e70a814893d632761638749ade003c | 1,364 | py | Python | tests/test_546.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | tests/test_546.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | tests/test_546.py | sungho-joo/leetcode2github | ce7730ef40f6051df23681dd3c0e1e657abba620 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import pytest
"""
Test 546. Remove Boxes
"""
@pytest.fixture(scope="session")
def init_variables_546():
    from src.leetcode_546_remove_boxes import Solution

    solution = Solution()

    def _init_variables_546():
        return solution

    yield _init_variables_546


class TestClass546:
    def test_solution_0(self, init_variables_546):
        assert init_variables_546().removeBoxes([1, 3, 2, 2, 2, 3, 4, 3, 1]) == 23

    def test_solution_1(self, init_variables_546):
        assert init_variables_546().removeBoxes([1, 1, 1]) == 9

    def test_solution_2(self, init_variables_546):
        assert init_variables_546().removeBoxes([1]) == 1
 | 21.650794 | 82 | 0.692082 | 190 | 1,364 | 4.663158 | 0.178947 | 0.264108 | 0.325056 | 0.13544 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.101633 | 0.192082 | 1,364 | 62 | 83 | 22 | 0.702359 | 0.029326 | 0 | 1 | 0 | 0 | 0.011094 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.333333 | false | 0 | 0.133333 | 0.066667 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 11
b9d2417e1c53744243b54038a03c0d42b144c2a9 | 9,552 | py | Python | dev/Editor/Scripts/CryDesigner/HelloWorld.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 8 | 2019-10-07T16:33:47.000Z | 2020-12-07T03:59:58.000Z | dev/Editor/Scripts/CryDesigner/HelloWorld.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | null | null | null | dev/Editor/Scripts/CryDesigner/HelloWorld.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 5 | 2020-08-27T20:44:18.000Z | 2021-08-21T22:54:11.000Z | #
# All or portions of this file Copyright (c) Amazon.com, Inc. or its affiliates or
# its licensors.
#
# For complete copyright and license terms please see the LICENSE at the root of this
# distribution (the "License"). All use of this software is governed by the License,
# or, if provided, by the license below or the license accompanying this file. Do not
# remove or modify any license notices. This file is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
designer.start()
designer.record_undo()
designer.set_env("automatic_update_mesh",False)
# H
designer.start_polygon_addition()
designer.add_vertex_to_polygon((7.000000,1.500000,0.000000))
designer.add_vertex_to_polygon((6.500000,1.500000,0.000000))
designer.add_vertex_to_polygon((6.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((7.000000,-3.000000,0.000000))
designer.add_vertex_to_polygon((7.000000,-1.000000,0.000000))
designer.add_vertex_to_polygon((9.500000,-1.000000,0.000000))
designer.add_vertex_to_polygon((9.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((10.000000,-3.000000,0.000000))
designer.add_vertex_to_polygon((10.000000,1.500000,0.000000))
designer.add_vertex_to_polygon((9.500000,1.500000,0.000000))
designer.add_vertex_to_polygon((9.500000,-0.500000,0.000000))
designer.add_vertex_to_polygon((7.000000,-0.500000,0.000000))
designer.finish_polygon_addition()
# E
designer.start_polygon_addition()
designer.add_vertex_to_polygon((6.000000,1.500000,0.000000))
designer.add_vertex_to_polygon((2.500000,1.500000,0.000000))
designer.add_vertex_to_polygon((2.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((5.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((5.500000,-0.500000,0.000000))
designer.add_vertex_to_polygon((2.500000,-0.500000,0.000000))
designer.add_vertex_to_polygon((2.500000,-1.000000,0.000000))
designer.add_vertex_to_polygon((5.500000,-1.000000,0.000000))
designer.add_vertex_to_polygon((5.500000,-2.500000,0.000000))
designer.add_vertex_to_polygon((2.500000,-2.500000,0.000000))
designer.add_vertex_to_polygon((2.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((6.000000,-3.000000,0.000000))
designer.finish_polygon_addition()
# L
designer.start_polygon_addition()
designer.add_vertex_to_polygon((2.000000,1.500000,0.000000))
designer.add_vertex_to_polygon((-1.500000,1.500000,0.000000))
designer.add_vertex_to_polygon((-1.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((1.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((1.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((2.000000,-3.000000,0.000000))
designer.finish_polygon_addition()
# L
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-2.000000,1.500000,0.000000))
designer.add_vertex_to_polygon((-5.500000,1.500000,0.000000))
designer.add_vertex_to_polygon((-5.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((-2.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((-2.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-2.000000,-3.000000,0.000000))
designer.finish_polygon_addition()
# O
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-9.500000,-2.000000,0.000000))
designer.add_vertex_to_polygon((-8.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-7.000000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-6.000000,-2.000000,0.000000))
designer.add_vertex_to_polygon((-6.000000,0.500000,0.000000))
designer.add_vertex_to_polygon((-7.000000,1.500000,0.000000))
designer.add_vertex_to_polygon((-8.500000,1.500000,0.000000))
designer.add_vertex_to_polygon((-9.500000,0.500000,0.000000))
# a hole
designer.start_to_add_another_hole()
designer.add_vertex_to_polygon((-8.500000,-2.500000,0.000000))
designer.add_vertex_to_polygon((-9.000000,-2.000000,0.000000))
designer.add_vertex_to_polygon((-9.000000,0.500000,0.000000))
designer.add_vertex_to_polygon((-8.500000,1.000000,0.000000))
designer.add_vertex_to_polygon((-7.000000,1.000000,0.000000))
designer.add_vertex_to_polygon((-6.500000,0.500000,0.000000))
designer.add_vertex_to_polygon((-6.500000,-2.000000,0.000000))
designer.add_vertex_to_polygon((-7.000000,-2.500000,0.000000))
designer.finish_polygon_addition()
# W
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-13.500000,1.000000,-0.000000))
designer.add_vertex_to_polygon((-13.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-13.000000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-13.000000,1.000000,-0.000000))
designer.add_vertex_to_polygon((-12.000000,1.000000,-0.000000))
designer.add_vertex_to_polygon((-12.000000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-11.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-11.500000,1.000000,-0.000000))
designer.add_vertex_to_polygon((-10.500000,1.000000,-0.000000))
designer.add_vertex_to_polygon((-10.500000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-10.000000,-3.000000,0.000000))
designer.add_vertex_to_polygon((-10.000000,1.000000,-0.000000))
designer.add_vertex_to_polygon((-10.500000,1.500000,-0.000000))
designer.add_vertex_to_polygon((-13.000000,1.500000,-0.000000))
designer.finish_polygon_addition()
# W
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-10.000000,6.500000,0.000000))
designer.add_vertex_to_polygon((-10.000000,2.500000,0.000000))
designer.add_vertex_to_polygon((-9.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-9.500000,6.500000,0.000000))
designer.add_vertex_to_polygon((-8.500000,6.500000,0.000000))
designer.add_vertex_to_polygon((-8.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-8.000000,2.500000,0.000000))
designer.add_vertex_to_polygon((-8.000000,6.500000,0.000000))
designer.add_vertex_to_polygon((-7.000000,6.500000,0.000000))
designer.add_vertex_to_polygon((-7.000000,2.500000,0.000000))
designer.add_vertex_to_polygon((-6.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-6.500000,6.500000,0.000000))
designer.add_vertex_to_polygon((-7.000000,7.000000,0.000000))
designer.add_vertex_to_polygon((-9.500000,7.000000,0.000000))
designer.finish_polygon_addition()
# O
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-14.000000,3.500000,0.000000))
designer.add_vertex_to_polygon((-13.000000,2.500000,0.000000))
designer.add_vertex_to_polygon((-11.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-10.500000,3.500000,0.000000))
designer.add_vertex_to_polygon((-10.500000,6.000000,0.000000))
designer.add_vertex_to_polygon((-11.500000,7.000000,0.000000))
designer.add_vertex_to_polygon((-13.000000,7.000000,0.000000))
designer.add_vertex_to_polygon((-14.000000,6.000000,0.000000))
# a hole
designer.start_to_add_another_hole()
designer.add_vertex_to_polygon((-12.500000,3.000000,0.000000))
designer.add_vertex_to_polygon((-13.500000,4.000000,0.000000))
designer.add_vertex_to_polygon((-13.500000,5.500000,0.000000))
designer.add_vertex_to_polygon((-12.500000,6.500000,0.000000))
designer.add_vertex_to_polygon((-12.000000,6.500000,0.000000))
designer.add_vertex_to_polygon((-11.000000,5.500000,0.000000))
designer.add_vertex_to_polygon((-11.000000,4.000000,0.000000))
designer.add_vertex_to_polygon((-12.000000,3.000000,0.000000))
designer.finish_polygon_addition()
# R
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-17.500000,7.000000,-0.000000))
designer.add_vertex_to_polygon((-18.000000,7.000000,-0.000000))
designer.add_vertex_to_polygon((-17.500000,5.000000,-0.000000))
designer.add_vertex_to_polygon((-18.000000,5.000000,-0.000000))
designer.add_vertex_to_polygon((-18.000000,2.500000,0.000000))
designer.add_vertex_to_polygon((-14.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-14.500000,7.000000,0.000000))
designer.add_vertex_to_polygon((-15.000000,7.000000,-0.000000))
designer.add_vertex_to_polygon((-15.000000,5.000000,-0.000000))
designer.add_vertex_to_polygon((-17.000000,5.000000,-0.000000))
# a hole
designer.start_to_add_another_hole()
designer.add_vertex_to_polygon((-15.000000,3.000000,-0.000000))
designer.add_vertex_to_polygon((-17.500000,3.000000,-0.000000))
designer.add_vertex_to_polygon((-17.500000,4.500000,-0.000000))
designer.add_vertex_to_polygon((-15.000000,4.500000,-0.000000))
designer.finish_polygon_addition()
# L
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-18.500000,7.000000,0.000000))
designer.add_vertex_to_polygon((-22.000000,7.000000,0.000000))
designer.add_vertex_to_polygon((-22.000000,6.500000,-0.000000))
designer.add_vertex_to_polygon((-19.000000,6.500000,-0.000000))
designer.add_vertex_to_polygon((-19.000000,2.500000,-0.000000))
designer.add_vertex_to_polygon((-18.500000,2.500000,0.000000))
designer.finish_polygon_addition()
# D
designer.start_polygon_addition()
designer.add_vertex_to_polygon((-26.000000,6.500000,0.000000))
designer.add_vertex_to_polygon((-26.000000,3.000000,0.000000))
designer.add_vertex_to_polygon((-25.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-22.500000,2.500000,0.000000))
designer.add_vertex_to_polygon((-22.500000,7.000000,0.000000))
designer.add_vertex_to_polygon((-25.500000,7.000000,0.000000))
# a hole
designer.start_to_add_another_hole()
designer.add_vertex_to_polygon((-23.000000,3.000000,0.000000))
designer.add_vertex_to_polygon((-25.500000,3.000000,0.000000))
designer.add_vertex_to_polygon((-25.500000,6.500000,0.000000))
designer.add_vertex_to_polygon((-23.000000,6.500000,0.000000))
designer.finish_polygon_addition()
designer.update_mesh()
designer.end()
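The `start_polygon_addition` / `start_to_add_another_hole` calls above describe each glyph as an outer ring plus optional hole rings. Independent of the (unspecified) designer API, the standard even-odd rule decides whether a point lies inside such a shape; a minimal sketch, reusing the 2-D vertices of the "O" glyph above:

```python
def point_in_ring(pt, ring):
    # even-odd ray casting against one closed ring of (x, y) vertices
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        (x1, y1), (x2, y2) = ring[i], ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def point_in_polygon_with_holes(pt, outer, holes):
    # inside the outer ring and outside every hole ring
    return point_in_ring(pt, outer) and not any(point_in_ring(pt, h) for h in holes)

# 2-D projections of the "O" glyph's outer ring and its hole
outer = [(-14, 3.5), (-13, 2.5), (-11.5, 2.5), (-10.5, 3.5),
         (-10.5, 6), (-11.5, 7), (-13, 7), (-14, 6)]
hole = [(-12.5, 3), (-13.5, 4), (-13.5, 5.5), (-12.5, 6.5),
        (-12, 6.5), (-11, 5.5), (-11, 4), (-12, 3)]
assert point_in_polygon_with_holes((-13.7, 4.5), outer, [hole])        # in the rim
assert not point_in_polygon_with_holes((-12.25, 4.75), outer, [hole])  # in the hole
```

Edges with equal endpoint y-values are skipped by the straddle test, so the division is always safe.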
# src/utils/mayorAMenor.py (repo: jisbruzzi/equipo-q-tp1, MIT license)

def mayor_a_menor(lista):
    # sort from greatest to least ("mayor a menor"); return a list instead of
    # a one-shot reversed iterator
    return sorted(lista, reverse=True)


def desordenar(lista):
    # "desordenar" ("unsort") just delegates to the descending sort above
    return mayor_a_menor(lista)
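A quick self-contained check of the two helpers, restated here with `sorted(..., reverse=True)` so a list comes back (the original's `reversed(sorted(...))` yields the same order, but as a one-shot iterator):

```python
def mayor_a_menor(lista):
    # "greatest to least": descending sort, returned as a list
    return sorted(lista, reverse=True)

def desordenar(lista):
    # delegates to the descending sort above
    return mayor_a_menor(lista)

nums = [3, 1, 4, 1, 5]
assert mayor_a_menor(nums) == [5, 4, 3, 1, 1]
assert desordenar(nums) == [5, 4, 3, 1, 1]
assert nums == [3, 1, 4, 1, 5]  # the input list is not mutated
```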
# map/generators/__init__.py (repo: Alerion/fantasy_map, MIT license)

from . import points, graph, biomes, elevation, land, rivers, regions
__all__ = ['points', 'graph', 'biomes', 'elevation', 'land', 'rivers', 'regions']
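The `__all__` list controls what a wildcard import exposes. A self-contained sketch of the mechanism using a throwaway in-memory module (`demo_generators` is a made-up name, not part of fantasy_map):

```python
import sys
import types

# build a module object directly in memory to illustrate the mechanism
mod = types.ModuleType("demo_generators")
mod.__all__ = ["points", "graph"]
mod.points, mod.graph, mod.internal_helper = 1, 2, 3
sys.modules["demo_generators"] = mod

ns = {}
exec("from demo_generators import *", ns)
# only the names listed in __all__ are bound; internal_helper is excluded
assert set(ns) - {"__builtins__"} == {"points", "graph"}
```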
# coding: utf-8
# test/test_default_api.py (repo: gabriel-samfira/client-python, Apache-2.0 license)
"""
KubeVirt API
    This is the KubeVirt API, an add-on for Kubernetes.
OpenAPI spec version: 1.0.0
Contact: kubevirt-dev@googlegroups.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import os
import sys
import unittest
import kubevirt
from kubevirt.rest import ApiException
from kubevirt.apis.default_api import DefaultApi
class TestDefaultApi(unittest.TestCase):
""" DefaultApi unit test stubs """
def setUp(self):
self.api = kubevirt.apis.default_api.DefaultApi()
def tearDown(self):
pass
def test_create_namespaced_kube_virt(self):
"""
Test case for create_namespaced_kube_virt
"""
pass
def test_create_namespaced_virtual_machine(self):
"""
Test case for create_namespaced_virtual_machine
"""
pass
def test_create_namespaced_virtual_machine_instance(self):
"""
Test case for create_namespaced_virtual_machine_instance
"""
pass
def test_create_namespaced_virtual_machine_instance_migration(self):
"""
Test case for create_namespaced_virtual_machine_instance_migration
"""
pass
def test_create_namespaced_virtual_machine_instance_preset(self):
"""
Test case for create_namespaced_virtual_machine_instance_preset
"""
pass
def test_create_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for create_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_create_namespaced_virtual_machine_restore(self):
"""
Test case for create_namespaced_virtual_machine_restore
"""
pass
def test_create_namespaced_virtual_machine_snapshot(self):
"""
Test case for create_namespaced_virtual_machine_snapshot
"""
pass
def test_create_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for create_namespaced_virtual_machine_snapshot_content
"""
pass
def test_delete_collection_namespaced_kube_virt(self):
"""
Test case for delete_collection_namespaced_kube_virt
"""
pass
def test_delete_collection_namespaced_virtual_machine(self):
"""
Test case for delete_collection_namespaced_virtual_machine
"""
pass
def test_delete_collection_namespaced_virtual_machine_instance(self):
"""
Test case for delete_collection_namespaced_virtual_machine_instance
"""
pass
def test_delete_collection_namespaced_virtual_machine_instance_migration(self):
"""
Test case for delete_collection_namespaced_virtual_machine_instance_migration
"""
pass
def test_delete_collection_namespaced_virtual_machine_instance_preset(self):
"""
Test case for delete_collection_namespaced_virtual_machine_instance_preset
"""
pass
def test_delete_collection_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for delete_collection_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_delete_collection_namespaced_virtual_machine_restore(self):
"""
Test case for delete_collection_namespaced_virtual_machine_restore
"""
pass
def test_delete_collection_namespaced_virtual_machine_snapshot(self):
"""
Test case for delete_collection_namespaced_virtual_machine_snapshot
"""
pass
def test_delete_collection_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for delete_collection_namespaced_virtual_machine_snapshot_content
"""
pass
def test_delete_namespaced_kube_virt(self):
"""
Test case for delete_namespaced_kube_virt
"""
pass
def test_delete_namespaced_virtual_machine(self):
"""
Test case for delete_namespaced_virtual_machine
"""
pass
def test_delete_namespaced_virtual_machine_instance(self):
"""
Test case for delete_namespaced_virtual_machine_instance
"""
pass
def test_delete_namespaced_virtual_machine_instance_migration(self):
"""
Test case for delete_namespaced_virtual_machine_instance_migration
"""
pass
def test_delete_namespaced_virtual_machine_instance_preset(self):
"""
Test case for delete_namespaced_virtual_machine_instance_preset
"""
pass
def test_delete_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for delete_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_delete_namespaced_virtual_machine_restore(self):
"""
Test case for delete_namespaced_virtual_machine_restore
"""
pass
def test_delete_namespaced_virtual_machine_snapshot(self):
"""
Test case for delete_namespaced_virtual_machine_snapshot
"""
pass
def test_delete_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for delete_namespaced_virtual_machine_snapshot_content
"""
pass
def test_func1(self):
"""
Test case for func1
"""
pass
def test_func7(self):
"""
Test case for func7
"""
pass
def test_get_api_group_kubevirt_io(self):
"""
Test case for get_api_group_kubevirt_io
"""
pass
def test_get_api_group_list(self):
"""
Test case for get_api_group_list
"""
pass
def test_get_api_group_snapshot_kubevirt_io(self):
"""
Test case for get_api_group_snapshot_kubevirt_io
"""
pass
def test_get_api_resources_kubevirt_io_v1(self):
"""
Test case for get_api_resources_kubevirt_io_v1
"""
pass
def test_get_api_resources_snapshot_kubevirt_io_v1alpha1(self):
"""
Test case for get_api_resources_snapshot_kubevirt_io_v1alpha1
"""
pass
def test_get_root_paths(self):
"""
Test case for get_root_paths
"""
pass
def test_list_kube_virt_for_all_namespaces(self):
"""
Test case for list_kube_virt_for_all_namespaces
"""
pass
def test_list_namespaced_kube_virt(self):
"""
Test case for list_namespaced_kube_virt
"""
pass
def test_list_namespaced_virtual_machine(self):
"""
Test case for list_namespaced_virtual_machine
"""
pass
def test_list_namespaced_virtual_machine_instance(self):
"""
Test case for list_namespaced_virtual_machine_instance
"""
pass
def test_list_namespaced_virtual_machine_instance_migration(self):
"""
Test case for list_namespaced_virtual_machine_instance_migration
"""
pass
def test_list_namespaced_virtual_machine_instance_preset(self):
"""
Test case for list_namespaced_virtual_machine_instance_preset
"""
pass
def test_list_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for list_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_list_namespaced_virtual_machine_restore(self):
"""
Test case for list_namespaced_virtual_machine_restore
"""
pass
def test_list_namespaced_virtual_machine_snapshot(self):
"""
Test case for list_namespaced_virtual_machine_snapshot
"""
pass
def test_list_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for list_namespaced_virtual_machine_snapshot_content
"""
pass
def test_list_virtual_machine_for_all_namespaces(self):
"""
Test case for list_virtual_machine_for_all_namespaces
"""
pass
def test_list_virtual_machine_instance_for_all_namespaces(self):
"""
Test case for list_virtual_machine_instance_for_all_namespaces
"""
pass
def test_list_virtual_machine_instance_migration_for_all_namespaces(self):
"""
Test case for list_virtual_machine_instance_migration_for_all_namespaces
"""
pass
def test_list_virtual_machine_instance_preset_for_all_namespaces(self):
"""
Test case for list_virtual_machine_instance_preset_for_all_namespaces
"""
pass
def test_list_virtual_machine_instance_replica_set_for_all_namespaces(self):
"""
Test case for list_virtual_machine_instance_replica_set_for_all_namespaces
"""
pass
def test_list_virtual_machine_restore_for_all_namespaces(self):
"""
Test case for list_virtual_machine_restore_for_all_namespaces
"""
pass
def test_list_virtual_machine_snapshot_content_for_all_namespaces(self):
"""
Test case for list_virtual_machine_snapshot_content_for_all_namespaces
"""
pass
def test_list_virtual_machine_snapshot_for_all_namespaces(self):
"""
Test case for list_virtual_machine_snapshot_for_all_namespaces
"""
pass
def test_patch_namespaced_kube_virt(self):
"""
Test case for patch_namespaced_kube_virt
"""
pass
def test_patch_namespaced_virtual_machine(self):
"""
Test case for patch_namespaced_virtual_machine
"""
pass
def test_patch_namespaced_virtual_machine_instance(self):
"""
Test case for patch_namespaced_virtual_machine_instance
"""
pass
def test_patch_namespaced_virtual_machine_instance_migration(self):
"""
Test case for patch_namespaced_virtual_machine_instance_migration
"""
pass
def test_patch_namespaced_virtual_machine_instance_preset(self):
"""
Test case for patch_namespaced_virtual_machine_instance_preset
"""
pass
def test_patch_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for patch_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_patch_namespaced_virtual_machine_restore(self):
"""
Test case for patch_namespaced_virtual_machine_restore
"""
pass
def test_patch_namespaced_virtual_machine_snapshot(self):
"""
Test case for patch_namespaced_virtual_machine_snapshot
"""
pass
def test_patch_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for patch_namespaced_virtual_machine_snapshot_content
"""
pass
def test_read_namespaced_kube_virt(self):
"""
Test case for read_namespaced_kube_virt
"""
pass
def test_read_namespaced_virtual_machine(self):
"""
Test case for read_namespaced_virtual_machine
"""
pass
def test_read_namespaced_virtual_machine_instance(self):
"""
Test case for read_namespaced_virtual_machine_instance
"""
pass
def test_read_namespaced_virtual_machine_instance_migration(self):
"""
Test case for read_namespaced_virtual_machine_instance_migration
"""
pass
def test_read_namespaced_virtual_machine_instance_preset(self):
"""
Test case for read_namespaced_virtual_machine_instance_preset
"""
pass
def test_read_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for read_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_read_namespaced_virtual_machine_restore(self):
"""
Test case for read_namespaced_virtual_machine_restore
"""
pass
def test_read_namespaced_virtual_machine_snapshot(self):
"""
Test case for read_namespaced_virtual_machine_snapshot
"""
pass
def test_read_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for read_namespaced_virtual_machine_snapshot_content
"""
pass
def test_replace_namespaced_kube_virt(self):
"""
Test case for replace_namespaced_kube_virt
"""
pass
def test_replace_namespaced_virtual_machine(self):
"""
Test case for replace_namespaced_virtual_machine
"""
pass
def test_replace_namespaced_virtual_machine_instance(self):
"""
Test case for replace_namespaced_virtual_machine_instance
"""
pass
def test_replace_namespaced_virtual_machine_instance_migration(self):
"""
Test case for replace_namespaced_virtual_machine_instance_migration
"""
pass
def test_replace_namespaced_virtual_machine_instance_preset(self):
"""
Test case for replace_namespaced_virtual_machine_instance_preset
"""
pass
def test_replace_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for replace_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_replace_namespaced_virtual_machine_restore(self):
"""
Test case for replace_namespaced_virtual_machine_restore
"""
pass
def test_replace_namespaced_virtual_machine_snapshot(self):
"""
Test case for replace_namespaced_virtual_machine_snapshot
"""
pass
def test_replace_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for replace_namespaced_virtual_machine_snapshot_content
"""
pass
def test_v1_check_health(self):
"""
Test case for v1_check_health
"""
pass
def test_v1_console(self):
"""
Test case for v1_console
"""
pass
def test_v1_filesystemlist(self):
"""
Test case for v1_filesystemlist
"""
pass
def test_v1_guestosinfo(self):
"""
Test case for v1_guestosinfo
"""
pass
def test_v1_migrate(self):
"""
Test case for v1_migrate
"""
pass
def test_v1_pause(self):
"""
Test case for v1_pause
"""
pass
def test_v1_rename(self):
"""
Test case for v1_rename
"""
pass
def test_v1_restart(self):
"""
Test case for v1_restart
"""
pass
def test_v1_start(self):
"""
Test case for v1_start
"""
pass
def test_v1_stop(self):
"""
Test case for v1_stop
"""
pass
def test_v1_test(self):
"""
Test case for v1_test
"""
pass
def test_v1_unpause(self):
"""
Test case for v1_unpause
"""
pass
def test_v1_userlist(self):
"""
Test case for v1_userlist
"""
pass
def test_v1_version(self):
"""
Test case for v1_version
"""
pass
def test_v1_vnc(self):
"""
Test case for v1_vnc
"""
pass
def test_v1alpha3_check_health(self):
"""
Test case for v1alpha3_check_health
"""
pass
def test_v1alpha3_console(self):
"""
Test case for v1alpha3_console
"""
pass
def test_v1alpha3_filesystemlist(self):
"""
Test case for v1alpha3_filesystemlist
"""
pass
def test_v1alpha3_get_sub_api_group(self):
"""
Test case for v1alpha3_get_sub_api_group
"""
pass
def test_v1alpha3_guestosinfo(self):
"""
Test case for v1alpha3_guestosinfo
"""
pass
def test_v1alpha3_migrate(self):
"""
Test case for v1alpha3_migrate
"""
pass
def test_v1alpha3_pause(self):
"""
Test case for v1alpha3_pause
"""
pass
def test_v1alpha3_rename(self):
"""
Test case for v1alpha3_rename
"""
pass
def test_v1alpha3_restart(self):
"""
Test case for v1alpha3_restart
"""
pass
def test_v1alpha3_start(self):
"""
Test case for v1alpha3_start
"""
pass
def test_v1alpha3_stop(self):
"""
Test case for v1alpha3_stop
"""
pass
def test_v1alpha3_test(self):
"""
Test case for v1alpha3_test
"""
pass
def test_v1alpha3_unpause(self):
"""
Test case for v1alpha3_unpause
"""
pass
def test_v1alpha3_userlist(self):
"""
Test case for v1alpha3_userlist
"""
pass
def test_v1alpha3_version(self):
"""
Test case for v1alpha3_version
"""
pass
def test_v1alpha3_vnc(self):
"""
Test case for v1alpha3_vnc
"""
pass
def test_v1alpha3get_api_sub_resources(self):
"""
Test case for v1alpha3get_api_sub_resources
"""
pass
def test_v1alpha3vm_addvolume(self):
"""
Test case for v1alpha3vm_addvolume
"""
pass
def test_v1alpha3vm_removevolume(self):
"""
Test case for v1alpha3vm_removevolume
"""
pass
def test_v1alpha3vmi_addvolume(self):
"""
Test case for v1alpha3vmi_addvolume
"""
pass
def test_v1alpha3vmi_removevolume(self):
"""
Test case for v1alpha3vmi_removevolume
"""
pass
def test_v1get_api_sub_resources(self):
"""
Test case for v1get_api_sub_resources
"""
pass
def test_v1vm_addvolume(self):
"""
Test case for v1vm_addvolume
"""
pass
def test_v1vm_removevolume(self):
"""
Test case for v1vm_removevolume
"""
pass
def test_v1vmi_addvolume(self):
"""
Test case for v1vmi_addvolume
"""
pass
def test_v1vmi_removevolume(self):
"""
Test case for v1vmi_removevolume
"""
pass
def test_watch_kube_virt_list_for_all_namespaces(self):
"""
Test case for watch_kube_virt_list_for_all_namespaces
"""
pass
def test_watch_namespaced_kube_virt(self):
"""
Test case for watch_namespaced_kube_virt
"""
pass
def test_watch_namespaced_virtual_machine(self):
"""
Test case for watch_namespaced_virtual_machine
"""
pass
def test_watch_namespaced_virtual_machine_instance(self):
"""
Test case for watch_namespaced_virtual_machine_instance
"""
pass
def test_watch_namespaced_virtual_machine_instance_migration(self):
"""
Test case for watch_namespaced_virtual_machine_instance_migration
"""
pass
def test_watch_namespaced_virtual_machine_instance_preset(self):
"""
Test case for watch_namespaced_virtual_machine_instance_preset
"""
pass
def test_watch_namespaced_virtual_machine_instance_replica_set(self):
"""
Test case for watch_namespaced_virtual_machine_instance_replica_set
"""
pass
def test_watch_namespaced_virtual_machine_restore(self):
"""
Test case for watch_namespaced_virtual_machine_restore
"""
pass
def test_watch_namespaced_virtual_machine_snapshot(self):
"""
Test case for watch_namespaced_virtual_machine_snapshot
"""
pass
def test_watch_namespaced_virtual_machine_snapshot_content(self):
"""
Test case for watch_namespaced_virtual_machine_snapshot_content
"""
pass
def test_watch_virtual_machine_instance_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_instance_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_instance_migration_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_instance_migration_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_instance_preset_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_instance_preset_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_instance_replica_set_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_instance_replica_set_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_restore_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_restore_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_snapshot_content_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_snapshot_content_list_for_all_namespaces
"""
pass
def test_watch_virtual_machine_snapshot_list_for_all_namespaces(self):
"""
Test case for watch_virtual_machine_snapshot_list_for_all_namespaces
"""
pass
if __name__ == '__main__':
unittest.main()
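Every stub above is swagger-generated boilerplate waiting to be filled in, typically by exercising the matching `DefaultApi` method against a cluster or a mock. Since no cluster is assumed here, a hedged sketch of the pattern uses a hypothetical stand-in class (`FakeDefaultApi` is not part of the kubevirt package) and just verifies the method surface:

```python
import unittest

class TestDefaultApiSurface(unittest.TestCase):
    def test_methods_present(self):
        class FakeDefaultApi:  # hypothetical stand-in for kubevirt's DefaultApi
            def list_namespaced_virtual_machine(self, namespace): ...
            def v1_version(self): ...
        api = FakeDefaultApi()
        for name in ("list_namespaced_virtual_machine", "v1_version"):
            # a real test would call the method and assert on the response
            self.assertTrue(callable(getattr(api, name)))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDefaultApiSurface)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```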
# PyQuante/AnalyticDerivatives.py (repo: certik/pyquante)

"""\
NAME
AnalyticDerivatives.py
SYNOPSIS
Workhorse of the force.py module.
DESCRIPTION
AUTHOR
Hatem H. Helal, hhh23@cam.ac.uk
REPORT BUGS
Report bugs to hhh23@cam.ac.uk
COPYRIGHT
"""
from NumWrap import array2string
from math import sqrt
from PGBF import PGBF,coulomb
from pyints import grad_nuc_att
def der_Hcore_element(a,bfi,bfj,atoms):
"""
Finds the derivative of the core-Hamiltonian matrix elements, which can
be written as
H_ij = T_ij + VNe_ij
Where T_ij is the kinetic energy integral and VNe_ij is the nuclear
attraction integral.
"""
dTij_dXa,dTij_dYa,dTij_dZa = der_kinetic_integral(a,bfi,bfj)
dVij_dXa,dVij_dYa,dVij_dZa = der_nuc_att(a,bfi,bfj,atoms)
dHij_dXa = dTij_dXa + dVij_dXa
dHij_dYa = dTij_dYa + dVij_dYa
dHij_dZa = dTij_dZa + dVij_dZa
return dHij_dXa,dHij_dYa,dHij_dZa
def der_kinetic_integral(a,bfi,bfj):
"""
The kinetic energy operator does not depend on the atomic position so we only
have to consider differentiating the Gaussian functions. There are 4 possible
cases we have to evaluate
Case 1: Neither of the basis functions depends on the position of atom A which gives:
dT_ij/dXa = 0
    Cases 2 and 3: Only one of the basis functions depends on the position of atom A, which
    gives us either of the following possible integrals to evaluate
dT_ij/dXa = integral{dr dg_i/dXa T g_j }
dT_ij/dXa = integral{dr g_i T dg_j/dXa }
Case 4: Both of the basis functions depend on the position of atom A which gives the
following integral to evaluate
dT_ij/dXa = integral{dr dg_i/dXa T g_j + g_i T dg_j/dXa }
"""
dTij_dXa,dTij_dYa,dTij_dZa = 0.0,0.0,0.0
    #we use atom ids on the CGBFs to evaluate which of the 4 above cases we have
#bfi is centered on atom a
if bfi.atid==a:
for upbf in bfj.prims():
for vpbf in bfi.prims():
alpha = vpbf.exp()
l,m,n = vpbf.powers()
origin = vpbf.origin()
coefs = upbf.coef()*vpbf.coef()
#x component
v = PGBF(alpha,origin,(l+1,m,n))
v.normalize()
terma = sqrt(alpha*(2.0*l+1.0))*coefs*v.kinetic(upbf)
if l>0:
v.reset_powers(l-1,m,n)
v.normalize()
termb = -2*l*sqrt(alpha/(2.0*l-1.0))*coefs*v.kinetic(upbf)
else: termb = 0.0
dTij_dXa += terma + termb
#y component
v.reset_powers(l,m+1,n)
v.normalize()
terma = sqrt(alpha*(2.0*m+1.0))*coefs*v.kinetic(upbf)
if m>0:
v.reset_powers(l,m-1,n)
v.normalize()
termb = -2*m*sqrt(alpha/(2.0*m-1.0))*coefs*v.kinetic(upbf)
else: termb = 0.0
dTij_dYa += terma + termb
#z component
v.reset_powers(l,m,n+1)
v.normalize()
terma = sqrt(alpha*(2.0*n+1.0))*coefs*v.kinetic(upbf)
if n>0:
v.reset_powers(l,m,n-1)
v.normalize()
termb = -2*n*sqrt(alpha/(2.0*n-1.0))*coefs*v.kinetic(upbf)
else: termb = 0.0
dTij_dZa += terma + termb
#bfj is centered on atom a
if bfj.atid==a:
for upbf in bfi.prims():
for vpbf in bfj.prims():
alpha = vpbf.exp()
l,m,n = vpbf.powers()
origin = vpbf.origin()
coefs = upbf.coef()*vpbf.coef()
#x component
v = PGBF(alpha,origin,(l+1,m,n))
v.normalize()
terma = sqrt(alpha*(2.0*l+1.0))*coefs*v.kinetic(upbf)
if l>0:
v.reset_powers(l-1,m,n)
v.normalize()
termb = -2*l*sqrt(alpha/(2.0*l-1.0))*coefs*v.kinetic(upbf)
else: termb = 0.0
dTij_dXa += terma + termb
#y component
v.reset_powers(l,m+1,n)
v.normalize()
terma = sqrt(alpha*(2.0*m+1.0))*coefs*v.kinetic(upbf)
if m>0:
v.reset_powers(l,m-1,n)
v.normalize()
termb = -2*m*sqrt(alpha/(2.0*m-1.0))*coefs*v.kinetic(upbf)
else: termb = 0.0
dTij_dYa += terma + termb
#z component
v.reset_powers(l,m,n+1)
v.normalize()
terma = sqrt(alpha*(2.0*n+1.0))*coefs*v.kinetic(upbf)
if n>0:
v.reset_powers(l,m,n-1)
v.normalize()
termb = -2*n*sqrt(alpha/(2.0*n-1.0))*coefs*v.kinetic(upbf)
else: termb = 0.0
dTij_dZa += terma + termb
return dTij_dXa,dTij_dYa,dTij_dZa
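The "raise l by one, lower l by one" pattern above comes from differentiating a Cartesian Gaussian with respect to its own center (the `sqrt` prefactors in the code additionally fold in normalization constants). A minimal 1-D sketch of the unnormalized relation, d/dX of (x-X)^l exp(-alpha (x-X)^2), checked against a central finite difference:

```python
import math

def g(x, X, alpha, l):
    # unnormalized 1-D Cartesian Gaussian centered at X
    return (x - X)**l * math.exp(-alpha * (x - X)**2)

def dg_dX(x, X, alpha, l):
    # analytic derivative with respect to the center X:
    # d/dX = [2*alpha*(x-X)^(l+1) - l*(x-X)^(l-1)] * exp(-alpha*(x-X)^2)
    term_up = 2.0 * alpha * g(x, X, alpha, l + 1)     # angular momentum raised
    term_dn = -l * g(x, X, alpha, l - 1) if l > 0 else 0.0  # lowered (absent for l=0)
    return term_up + term_dn

# central finite-difference check of the recursion
x, X, alpha, l, h = 0.7, 0.2, 1.3, 2, 1e-6
fd = (g(x, X + h, alpha, l) - g(x, X - h, alpha, l)) / (2 * h)
assert abs(fd - dg_dX(x, X, alpha, l)) < 1e-6
```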
def der_nuc_att(a,bfi,bfj,atoms):
"""
This function finds the atomic gradient of the nuclear attraction integrals. Since the
nuclear attraction operator explicitly depends on the atomic coordinates we find
grad <i|V|j> = <grad i|V|j> + <i|V|grad j> + <i| grad V |j>
The first two terms are straightforward to evaluate using the recursion relation for the
derivative of a Gaussian basis function. The last term found through the nuclear_gradient
function in the primitive Gaussian class.
"""
dVij_dXa,dVij_dYa,dVij_dZa = 0.0,0.0,0.0
if bfi.atid==a: #bfi is centered on atom a
for upbf in bfj.prims():
for vpbf in bfi.prims():
alpha = vpbf.exp()
l,m,n = vpbf.powers()
origin = vpbf.origin()
coefs = upbf.coef()*vpbf.coef()
#x component
v = PGBF(alpha,origin,(l+1,m,n))
v.normalize()
terma=0.0
for atom in atoms:
terma += atom.atno*sqrt(alpha*(2.0*l+1.0))*coefs*v.nuclear(upbf,atom.pos())
if l>0:
v.reset_powers(l-1,m,n)
v.normalize()
termb=0.0
for atom in atoms:
termb += -2*l*atom.atno*sqrt(alpha/(2.0*l-1.0))*coefs*v.nuclear(upbf,atom.pos())
else: termb = 0.0
dVij_dXa += terma + termb
#y component
v.reset_powers(l,m+1,n)
v.normalize()
terma=0.0
for atom in atoms:
terma += atom.atno*sqrt(alpha*(2.0*m+1.0))*coefs*v.nuclear(upbf,atom.pos())
if m>0:
v.reset_powers(l,m-1,n)
v.normalize()
termb=0.0
for atom in atoms:
termb += -2*m*atom.atno*sqrt(alpha/(2.0*m-1.0))*coefs*v.nuclear(upbf,atom.pos())
else: termb = 0.0
dVij_dYa += terma + termb
#z component
v.reset_powers(l,m,n+1)
v.normalize()
terma=0.0
for atom in atoms:
terma += atom.atno*sqrt(alpha*(2.0*n+1.0))*coefs*v.nuclear(upbf,atom.pos())
if n>0:
v.reset_powers(l,m,n-1)
v.normalize()
termb=0.0
for atom in atoms:
termb += -2*n*atom.atno*sqrt(alpha/(2.0*n-1.0))*coefs*v.nuclear(upbf,atom.pos())
else: termb = 0.0
dVij_dZa += terma + termb
#bfj is centered on atom a
if bfj.atid==a:
for upbf in bfi.prims():
for vpbf in bfj.prims():
alpha = vpbf.exp()
l,m,n = vpbf.powers()
origin = vpbf.origin()
coefs = upbf.coef()*vpbf.coef()
#x component
v = PGBF(alpha,origin,(l+1,m,n))
v.normalize()
terma=0.0
for atom in atoms:
terma += atom.atno*sqrt(alpha*(2.0*l+1.0))*coefs*v.nuclear(upbf,atom.pos())
if l>0:
v.reset_powers(l-1,m,n)
v.normalize()
termb=0.0
for atom in atoms:
termb += -2*l*atom.atno*sqrt(alpha/(2.0*l-1.0))*coefs*v.nuclear(upbf,atom.pos())
else: termb = 0.0
dVij_dXa += terma + termb
#y component
v.reset_powers(l,m+1,n)
v.normalize()
terma=0.0
for atom in atoms:
terma += atom.atno*sqrt(alpha*(2.0*m+1.0))*coefs*v.nuclear(upbf,atom.pos())
if m>0:
v.reset_powers(l,m-1,n)
v.normalize()
termb=0.0
for atom in atoms:
termb += -2*m*atom.atno*sqrt(alpha/(2.0*m-1.0))*coefs*v.nuclear(upbf,atom.pos())
else: termb = 0.0
dVij_dYa += terma + termb
#z component
v.reset_powers(l,m,n+1)
v.normalize()
terma=0.0
for atom in atoms:
terma += atom.atno*sqrt(alpha*(2.0*n+1.0))*coefs*v.nuclear(upbf,atom.pos())
if n>0:
v.reset_powers(l,m,n-1)
v.normalize()
termb = 0.0
for atom in atoms:
termb += -2*n*atom.atno*sqrt(alpha/(2.0*n-1.0))*coefs*v.nuclear(upbf,atom.pos())
else: termb = 0.0
dVij_dZa += terma + termb
#finally evaluate <i| grad V |j>
for atom in atoms:
if atom.atid==a:
for upbf in bfi.prims():
for vpbf in bfj.prims():
prefactor = upbf.coef()*vpbf.coef()*atom.atno
                    # avoid shadowing the builtin name "list"
                    grad = upbf.nuclear_gradient(vpbf, atom.pos())
                    dVij_dXa += prefactor*grad[0]
                    dVij_dYa += prefactor*grad[1]
                    dVij_dZa += prefactor*grad[2]
return dVij_dXa,dVij_dYa,dVij_dZa
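The final `<i| grad V |j>` block differentiates the operator itself: for a point nucleus of charge Z at R, the gradient of -Z/|r-R| with respect to the nuclear position R is -Z(r-R)/|r-R|^3. A standalone numerical check of that formula (Z = 1, independent of PyQuante):

```python
import math

def v(r, R):
    # nuclear attraction potential -1/|r-R| (Z = 1)
    return -1.0 / math.dist(r, R)

def grad_v_R(r, R):
    # analytic gradient with respect to the nuclear position R:
    # d/dR_k (-1/|r-R|) = -(r_k - R_k)/|r-R|^3
    d = math.dist(r, R)
    return [-(ri - Ri) / d**3 for ri, Ri in zip(r, R)]

# central finite-difference check, one nuclear coordinate at a time
r, R, h = (0.3, -0.2, 0.5), (1.0, 0.4, -0.1), 1e-6
for k in range(3):
    Rp, Rm = list(R), list(R)
    Rp[k] += h
    Rm[k] -= h
    fd = (v(r, Rp) - v(r, Rm)) / (2 * h)
    assert abs(fd - grad_v_R(r, R)[k]) < 1e-6
```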
def der_overlap_element(a,bfi, bfj):
"""
finds the derivative of the overlap integral with respect to the
atomic coordinate of atom "a". Note there are four possible cases
for evaluating this integral:
    1. Neither of the basis functions depends on the position of atom a,
       i.e. they are centered on atoms other than atom a
2 and 3. One of the basis functions depends on the position of atom a
so we need to evaluate the derivative of a Gaussian with the
    recursion relation derived on page 442 of Szabo.
4. Both of the basis functions are centered on atom a, which through the
recursion relation for the derivative of a Gaussian basis function will
require the evaluation of 4 overlap integrals...
    This function returns a 3-element tuple with the derivatives of the overlap
    integral with respect to the atomic coordinates Xa, Ya, Za.
"""
dSij_dXa,dSij_dYa,dSij_dZa = 0.0,0.0,0.0
    #we use atom ids on the CGBFs to evaluate which of the 4 above cases we have
if bfi.atid==a: #bfi is centered on atom a
for upbf in bfj.prims():
for vpbf in bfi.prims():
alpha = vpbf.exp()
l,m,n = vpbf.powers()
origin = vpbf.origin()
coefs = upbf.coef()*vpbf.coef()
#x component
v = PGBF(alpha,origin,(l+1,m,n))
v.normalize()
terma = sqrt(alpha*(2.0*l+1.0))*coefs*v.overlap(upbf)
if l>0:
v.reset_powers(l-1,m,n)
v.normalize()
termb = -2*l*sqrt(alpha/(2.0*l-1.0))*coefs*v.overlap(upbf)
else: termb = 0.0
dSij_dXa += terma + termb
#y component
v.reset_powers(l,m+1,n)
v.normalize()
terma = sqrt(alpha*(2.0*m+1.0))*coefs*v.overlap(upbf)
if m>0:
v.reset_powers(l,m-1,n)
v.normalize()
termb = -2*m*sqrt(alpha/(2.0*m-1.0))*coefs*v.overlap(upbf)
else: termb = 0.0
dSij_dYa += terma + termb
#z component
v.reset_powers(l,m,n+1)
v.normalize()
terma = sqrt(alpha*(2.0*n+1.0))*coefs*v.overlap(upbf)
if n>0:
v.reset_powers(l,m,n-1)
v.normalize()
termb = -2*n*sqrt(alpha/(2.0*n-1.0))*coefs*v.overlap(upbf)
else: termb = 0.0
dSij_dZa += terma + termb
#bfj is centered on atom a
if bfj.atid==a:
for upbf in bfi.prims():
for vpbf in bfj.prims():
alpha = vpbf.exp()
l,m,n = vpbf.powers()
origin = vpbf.origin()
coefs = upbf.coef()*vpbf.coef()
#x component
v = PGBF(alpha,origin,(l+1,m,n))
v.normalize()
terma = sqrt(alpha*(2.0*l+1.0))*coefs*v.overlap(upbf)
if l>0:
v.reset_powers(l-1,m,n)
v.normalize()
termb = -2*l*sqrt(alpha/(2.0*l-1.0))*coefs*v.overlap(upbf)
else: termb = 0.0
dSij_dXa += terma + termb
#y component
v.reset_powers(l,m+1,n)
v.normalize()
terma = sqrt(alpha*(2.0*m+1.0))*coefs*v.overlap(upbf)
if m>0:
v.reset_powers(l,m-1,n)
v.normalize()
termb = -2*m*sqrt(alpha/(2.0*m-1.0))*coefs*v.overlap(upbf)
else: termb = 0.0
dSij_dYa += terma + termb
#z component
v.reset_powers(l,m,n+1)
v.normalize()
terma = sqrt(alpha*(2.0*n+1.0))*coefs*v.overlap(upbf)
if n>0:
v.reset_powers(l,m,n-1)
v.normalize()
termb = -2*n*sqrt(alpha/(2.0*n-1.0))*coefs*v.overlap(upbf)
else: termb = 0.0
dSij_dZa += terma + termb
return dSij_dXa,dSij_dYa,dSij_dZa
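#The Szabo recurrence used above -- differentiating a Cartesian Gaussian with
#respect to its center raises and lowers the angular-momentum index -- can be
#checked numerically in one dimension. The sketch below uses unnormalized
#primitives; gto_1d and dgto_dA are illustrative names, not part of the
#PGBF API.

```python
import math

def gto_1d(x, A, alpha, l):
    """Unnormalized 1D Cartesian Gaussian: (x - A)**l * exp(-alpha*(x - A)**2)."""
    return (x - A)**l * math.exp(-alpha * (x - A)**2)

def dgto_dA(x, A, alpha, l):
    """Derivative w.r.t. the center A via the raise/lower recurrence:
    d/dA = 2*alpha*(x-A)**(l+1)*exp(...) - l*(x-A)**(l-1)*exp(...)."""
    up = 2.0 * alpha * gto_1d(x, A, alpha, l + 1)
    down = -l * gto_1d(x, A, alpha, l - 1) if l > 0 else 0.0
    return up + down

# Central finite-difference check of the analytic derivative
x, A, alpha, l = 0.7, 0.2, 1.3, 2
h = 1e-6
fd = (gto_1d(x, A + h, alpha, l) - gto_1d(x, A - h, alpha, l)) / (2 * h)
```

#The normalized version used in der_overlap_element folds the normalization
#constants into the sqrt(alpha*(2l+1)) and sqrt(alpha/(2l-1)) prefactors.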
def der_Jints(a, bfi, bfj, bfk, bfl):
    """
    Find the atomic gradient of the Coulomb integral over
    basis functions i, j, k, and l, as in
    grad_a <ij|kl> = <gi j|kl> + <i gj|kl> + <ij|gk l> + <ij|k gl>
    """
    dJint_dXa,dJint_dYa,dJint_dZa = 0.0,0.0,0.0
    if bfi.atid==a: #bfi is centered on atom a
        for tpbf in bfi.prims():
            for upbf in bfj.prims():
                for vpbf in bfk.prims():
                    for wpbf in bfl.prims():
                        alpha = tpbf.exp()
                        l,m,n = tpbf.powers()
                        origin = tpbf.origin()
                        coefs = tpbf.coef()*upbf.coef()*vpbf.coef()*wpbf.coef()
                        #x component
                        tmp = PGBF(alpha, origin,(l+1,m,n)) #temp pgbf
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*l+1.0))*coefs*coulomb(tmp,upbf,vpbf,wpbf)
                        if l>0:
                            tmp.reset_powers(l-1,m,n)
                            tmp.normalize()
                            termb = -2*l*sqrt(alpha/(2.*l-1))*coefs*coulomb(tmp,upbf,vpbf,wpbf)
                        else: termb = 0.0
                        dJint_dXa += terma+termb
                        #y component
                        tmp.reset_powers(l,m+1,n)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*m+1.0))*coefs*coulomb(tmp,upbf,vpbf,wpbf)
                        if m>0:
                            tmp.reset_powers(l,m-1,n)
                            tmp.normalize()
                            termb = -2*m*sqrt(alpha/(2.*m-1))*coefs*coulomb(tmp,upbf,vpbf,wpbf)
                        else: termb = 0.0
                        dJint_dYa += terma + termb
                        #z component
                        tmp.reset_powers(l,m,n+1)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*n+1.0))*coefs*coulomb(tmp,upbf,vpbf,wpbf)
                        if n>0:
                            tmp.reset_powers(l,m,n-1)
                            tmp.normalize()
                            termb = -2*n*sqrt(alpha/(2.*n-1))*coefs*coulomb(tmp,upbf,vpbf,wpbf)
                        else: termb = 0.0
                        dJint_dZa += terma + termb
    if bfj.atid==a: #bfj is centered on atom a
        for tpbf in bfi.prims():
            for upbf in bfj.prims():
                for vpbf in bfk.prims():
                    for wpbf in bfl.prims():
                        alpha = upbf.exp()
                        l,m,n = upbf.powers()
                        origin = upbf.origin()
                        coefs = tpbf.coef()*upbf.coef()*vpbf.coef()*wpbf.coef()
                        #x component
                        tmp = PGBF(alpha, origin,(l+1,m,n)) #temp pgbf
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*l+1.0))*coefs*coulomb(tpbf,tmp,vpbf,wpbf)
                        if l>0:
                            tmp.reset_powers(l-1,m,n)
                            tmp.normalize()
                            termb = -2*l*sqrt(alpha/(2.*l-1))*coefs*coulomb(tpbf,tmp,vpbf,wpbf)
                        else: termb = 0.0
                        dJint_dXa += terma+termb
                        #y component
                        tmp.reset_powers(l,m+1,n)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*m+1.0))*coefs*coulomb(tpbf,tmp,vpbf,wpbf)
                        if m>0:
                            tmp.reset_powers(l,m-1,n)
                            tmp.normalize()
                            termb = -2*m*sqrt(alpha/(2.*m-1))*coefs*coulomb(tpbf,tmp,vpbf,wpbf)
                        else: termb = 0.0
                        dJint_dYa += terma + termb
                        #z component
                        tmp.reset_powers(l,m,n+1)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*n+1.0))*coefs*coulomb(tpbf,tmp,vpbf,wpbf)
                        if n>0:
                            tmp.reset_powers(l,m,n-1)
                            tmp.normalize()
                            termb = -2*n*sqrt(alpha/(2.*n-1))*coefs*coulomb(tpbf,tmp,vpbf,wpbf)
                        else: termb = 0.0
                        dJint_dZa += terma + termb
    if bfk.atid==a: #bfk is centered on atom a
        for tpbf in bfi.prims():
            for upbf in bfj.prims():
                for vpbf in bfk.prims():
                    for wpbf in bfl.prims():
                        alpha = vpbf.exp()
                        l,m,n = vpbf.powers()
                        origin = vpbf.origin()
                        coefs = tpbf.coef()*upbf.coef()*vpbf.coef()*wpbf.coef()
                        #x component
                        tmp = PGBF(alpha, origin,(l+1,m,n)) #temp pgbf
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*l+1.0))*coefs*coulomb(tpbf,upbf,tmp,wpbf)
                        if l>0:
                            tmp.reset_powers(l-1,m,n)
                            tmp.normalize()
                            termb = -2*l*sqrt(alpha/(2.*l-1))*coefs*coulomb(tpbf,upbf,tmp,wpbf)
                        else: termb = 0.0
                        dJint_dXa += terma+termb
                        #y component
                        tmp.reset_powers(l,m+1,n)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*m+1.0))*coefs*coulomb(tpbf,upbf,tmp,wpbf)
                        if m>0:
                            tmp.reset_powers(l,m-1,n)
                            tmp.normalize()
                            termb = -2*m*sqrt(alpha/(2.*m-1))*coefs*coulomb(tpbf,upbf,tmp,wpbf)
                        else: termb = 0.0
                        dJint_dYa += terma + termb
                        #z component
                        tmp.reset_powers(l,m,n+1)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*n+1.0))*coefs*coulomb(tpbf,upbf,tmp,wpbf)
                        if n>0:
                            tmp.reset_powers(l,m,n-1)
                            tmp.normalize()
                            termb = -2*n*sqrt(alpha/(2.*n-1))*coefs*coulomb(tpbf,upbf,tmp,wpbf)
                        else: termb = 0.0
                        dJint_dZa += terma + termb
    if bfl.atid==a: #bfl is centered on atom a
        for tpbf in bfi.prims():
            for upbf in bfj.prims():
                for vpbf in bfk.prims():
                    for wpbf in bfl.prims():
                        alpha = wpbf.exp()
                        l,m,n = wpbf.powers()
                        origin = wpbf.origin()
                        coefs = tpbf.coef()*upbf.coef()*vpbf.coef()*wpbf.coef()
                        #x component
                        tmp = PGBF(alpha, origin,(l+1,m,n)) #temp pgbf
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*l+1.0))*coefs*coulomb(tpbf,upbf,vpbf,tmp)
                        if l>0:
                            tmp.reset_powers(l-1,m,n)
                            tmp.normalize()
                            termb = -2*l*sqrt(alpha/(2.*l-1))*coefs*coulomb(tpbf,upbf,vpbf,tmp)
                        else: termb = 0.0
                        dJint_dXa += terma+termb
                        #y component
                        tmp.reset_powers(l,m+1,n)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*m+1.0))*coefs*coulomb(tpbf,upbf,vpbf,tmp)
                        if m>0:
                            tmp.reset_powers(l,m-1,n)
                            tmp.normalize()
                            termb = -2*m*sqrt(alpha/(2.*m-1))*coefs*coulomb(tpbf,upbf,vpbf,tmp)
                        else: termb = 0.0
                        dJint_dYa += terma + termb
                        #z component
                        tmp.reset_powers(l,m,n+1)
                        tmp.normalize()
                        terma = sqrt(alpha*(2.0*n+1.0))*coefs*coulomb(tpbf,upbf,vpbf,tmp)
                        if n>0:
                            tmp.reset_powers(l,m,n-1)
                            tmp.normalize()
                            termb = -2*n*sqrt(alpha/(2.*n-1))*coefs*coulomb(tpbf,upbf,vpbf,tmp)
                        else: termb = 0.0
                        dJint_dZa += terma + termb
    return dJint_dXa,dJint_dYa,dJint_dZa
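#The four-term sum in the der_Jints docstring is the product rule applied to
#the four basis-function centers. A toy scalar version (illustrative only --
#four ordinary functions stand in for the basis functions, no real integrals
#are computed) makes the decomposition explicit:

```python
def product_rule_gradient(fs, dfs, x):
    """d/dx of fs[0](x)*fs[1](x)*...*fs[-1](x), written as a sum of terms,
    each with exactly one factor differentiated -- the same shape as
    grad_a <ij|kl> = <gi j|kl> + <i gj|kl> + <ij|gk l> + <ij|k gl>."""
    total = 0.0
    for k in range(len(fs)):
        term = dfs[k](x)           # the one differentiated factor
        for j in range(len(fs)):
            if j != k:
                term *= fs[j](x)   # the remaining undifferentiated factors
        total += term
    return total
```

#With all four factors equal to x, this reproduces d/dx x**4 = 4*x**3.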
| 38.933432 | 105 | 0.39462 | 3,025 | 26,319 | 3.380496 | 0.070083 | 0.012126 | 0.058674 | 0.051633 | 0.819871 | 0.802367 | 0.765793 | 0.758948 | 0.755721 | 0.749853 | 0 | 0.03584 | 0.502793 | 26,319 | 675 | 106 | 38.991111 | 0.745606 | 0.128462 | 0 | 0.841584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012376 | false | 0 | 0.009901 | 0 | 0.034653 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e0660cf78fb1d72dc050f040085d6d58cba255d7 | 132 | py | Python | reconnect.py | frostbite07/GRIDLOCK | 44d455068beb2caf15e67bb853f8e7daa053f9d1 | [
"BSD-2-Clause"
] | null | null | null | reconnect.py | frostbite07/GRIDLOCK | 44d455068beb2caf15e67bb853f8e7daa053f9d1 | [
"BSD-2-Clause"
] | null | null | null | reconnect.py | frostbite07/GRIDLOCK | 44d455068beb2caf15e67bb853f8e7daa053f9d1 | [
"BSD-2-Clause"
] | null | null | null | import os
def reconnect():
    return "Needs Set up"
    #os.system("nmcli d wifi connect <SSID> password <Password> ifname wlan0")
| 26.4 | 78 | 0.69697 | 19 | 132 | 4.842105 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009346 | 0.189394 | 132 | 4 | 79 | 33 | 0.850467 | 0.55303 | 0 | 0 | 0 | 0 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
0edd341919eb714d1c214d6258619a0c209a2464 | 1,271 | py | Python | P0008.py | sebastianaldi17/ProjectEuler | 19562fba3456ec904bcc264fb786a92610e42622 | [
"MIT"
] | null | null | null | P0008.py | sebastianaldi17/ProjectEuler | 19562fba3456ec904bcc264fb786a92610e42622 | [
"MIT"
] | null | null | null | P0008.py | sebastianaldi17/ProjectEuler | 19562fba3456ec904bcc264fb786a92610e42622 | [
"MIT"
] | null | null | null | # Largest product in a series
# https://projecteuler.net/problem=8
Q = "7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450"
N = 13
from math import prod

def solve(Q, N):
    ans = 0
    for i in range(len(Q) - N + 1):  # +1 so the final window of N digits is included
        sub = Q[i:i+N]
        if "0" in sub: continue
        ans = max(prod(int(d) for d in sub), ans)
    print(ans)
solve(Q, N) | 105.916667 | 1,006 | 0.902439 | 49 | 1,271 | 23.408163 | 0.612245 | 0.005231 | 0.012206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.839599 | 0.058222 | 1,271 | 12 | 1,007 | 105.916667 | 0.11863 | 0.04878 | 0 | 0 | 0 | 0 | 0.830157 | 0.8285 | 0 | 1 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.1 | 0.1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1652499ff6dcb6e127d272b03ed3884bde4c5623 | 2,156 | py | Python | test/integration/030_statement_test/test_statements.py | ClaySheffler/dbt | 588851ac1cd2ef9856706f14cee573dedbc46d8c | [
"Apache-2.0"
] | 1 | 2020-08-11T08:44:33.000Z | 2020-08-11T08:44:33.000Z | test/integration/030_statement_test/test_statements.py | ClaySheffler/dbt | 588851ac1cd2ef9856706f14cee573dedbc46d8c | [
"Apache-2.0"
] | null | null | null | test/integration/030_statement_test/test_statements.py | ClaySheffler/dbt | 588851ac1cd2ef9856706f14cee573dedbc46d8c | [
"Apache-2.0"
] | 1 | 2019-04-16T10:51:10.000Z | 2019-04-16T10:51:10.000Z | from test.integration.base import DBTIntegrationTest, use_profile
class TestStatements(DBTIntegrationTest):
@property
def schema(self):
return "statements_030"
@staticmethod
def dir(path):
return "test/integration/030_statement_test/" + path.lstrip("/")
@property
def models(self):
return self.dir("models")
@use_profile("postgres")
def test_postgres_statements(self):
self.use_default_project({"data-paths": [self.dir("seed")]})
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 2)
results = self.run_dbt()
self.assertEqual(len(results), 1)
self.assertTablesEqual("statement_actual","statement_expected")
@use_profile("snowflake")
def test_snowflake_statements(self):
self.use_default_project({"data-paths": [self.dir("seed")]})
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 2)
results = self.run_dbt()
self.assertEqual(len(results), 1)
self.assertManyTablesEqual(["STATEMENT_ACTUAL", "STATEMENT_EXPECTED"])
@use_profile("presto")
def test_presto_statements(self):
self.use_default_project({"data-paths": [self.dir("seed")]})
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 2)
results = self.run_dbt()
self.assertEqual(len(results), 1)
self.assertTablesEqual("statement_actual","statement_expected")
class TestStatementsBigquery(DBTIntegrationTest):
@property
def schema(self):
return "statements_030"
@staticmethod
def dir(path):
return "test/integration/030_statement_test/" + path.lstrip("/")
@property
def models(self):
return self.dir("models-bq")
@use_profile("bigquery")
def test_bigquery_statements(self):
self.use_default_project({"data-paths": [self.dir("seed")]})
results = self.run_dbt(["seed"])
self.assertEqual(len(results), 2)
results = self.run_dbt()
self.assertEqual(len(results), 1)
self.assertTablesEqual("statement_actual","statement_expected")
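#The per-profile tests above repeat the same seed/run/compare body. A hedged
#sketch of factoring it into a mixin -- assuming only the DBTIntegrationTest
#helpers already used in this file (use_default_project, run_dbt, dir,
#assertEqual, assertTablesEqual); the mixin name is hypothetical:

```python
class StatementsAssertionsMixin:
    """Shared seed/run/compare body for the per-profile statement tests.

    A sketch only: it assumes the DBTIntegrationTest helper API used above
    and would be mixed into each profile-specific test class."""

    def _run_and_check(self, expected=("statement_actual", "statement_expected")):
        # Seed the two fixture tables, build the one model, compare results
        self.use_default_project({"data-paths": [self.dir("seed")]})
        results = self.run_dbt(["seed"])
        self.assertEqual(len(results), 2)
        results = self.run_dbt()
        self.assertEqual(len(results), 1)
        self.assertTablesEqual(*expected)
```

#Each @use_profile test then reduces to a one-line call to _run_and_check
#(Snowflake passing its uppercase table names).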
| 28.368421 | 78 | 0.653061 | 237 | 2,156 | 5.759494 | 0.189873 | 0.064469 | 0.082051 | 0.099634 | 0.827106 | 0.827106 | 0.789011 | 0.789011 | 0.789011 | 0.789011 | 0 | 0.011723 | 0.20872 | 2,156 | 75 | 79 | 28.746667 | 0.788394 | 0 | 0 | 0.735849 | 0 | 0 | 0.165197 | 0.033411 | 0 | 0 | 0 | 0 | 0.226415 | 1 | 0.188679 | false | 0 | 0.018868 | 0.113208 | 0.358491 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 8 |
166cecaebaa077b9ce2024ab16f94b018b7659fa | 24,860 | py | Python | src/transformers/utils/dummy_flax_objects.py | bhavika/transformers | 65cf33e7e53cd46313f3655f274b3f6ca0fd679d | [
"Apache-2.0"
] | 1 | 2022-03-16T13:02:15.000Z | 2022-03-16T13:02:15.000Z | src/transformers/utils/dummy_flax_objects.py | bhavika/transformers | 65cf33e7e53cd46313f3655f274b3f6ca0fd679d | [
"Apache-2.0"
] | 2 | 2022-03-14T10:13:16.000Z | 2022-03-14T11:50:27.000Z | src/transformers/utils/dummy_flax_objects.py | bhavika/transformers | 65cf33e7e53cd46313f3655f274b3f6ca0fd679d | [
"Apache-2.0"
] | 2 | 2022-03-21T04:32:39.000Z | 2022-03-22T01:02:49.000Z | # This file is autogenerated by the command `make fix-copies`, do not edit.
# flake8: noqa
from ..file_utils import DummyObject, requires_backends

class FlaxForcedBOSTokenLogitsProcessor(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxForcedEOSTokenLogitsProcessor(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxLogitsProcessor(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxLogitsProcessorList(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxLogitsWarper(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMinLengthLogitsProcessor(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxTemperatureLogitsWarper(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxTopKLogitsWarper(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxTopPLogitsWarper(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertForPreTraining(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAlbertPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

FLAX_MODEL_FOR_CAUSAL_LM_MAPPING = None
FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING = None
FLAX_MODEL_FOR_MASKED_LM_MAPPING = None
FLAX_MODEL_FOR_MULTIPLE_CHOICE_MAPPING = None
FLAX_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING = None
FLAX_MODEL_FOR_PRETRAINING_MAPPING = None
FLAX_MODEL_FOR_QUESTION_ANSWERING_MAPPING = None
FLAX_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING = None
FLAX_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING = None
FLAX_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING = None
FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING = None
FLAX_MODEL_MAPPING = None

class FlaxAutoModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForCausalLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForImageClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForNextSentencePrediction(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForPreTraining(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForSeq2SeqLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxAutoModelForVision2Seq(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartDecoderPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartForCausalLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBartPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBeitForImageClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBeitForMaskedImageModeling(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBeitModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBeitPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForNextSentencePrediction(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForPreTraining(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBertPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])


class FlaxBigBirdForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdForPreTraining(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBigBirdPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBlenderbotForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBlenderbotModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBlenderbotPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBlenderbotSmallForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBlenderbotSmallModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxBlenderbotSmallPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxCLIPModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxCLIPPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxCLIPTextModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxCLIPTextPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxCLIPVisionModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxCLIPVisionPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxDistilBertPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraForPreTraining(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxElectraPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxEncoderDecoderModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPT2LMHeadModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPT2Model(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPT2PreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])


class FlaxGPTNeoForCausalLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPTNeoModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPTNeoPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPTJForCausalLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPTJModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxGPTJPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMarianModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMarianMTModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMarianPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMBartForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMBartForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMBartForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMBartModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMBartPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMT5ForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxMT5Model(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxPegasusForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxPegasusModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxPegasusPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRobertaPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerForMaskedLM(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerForMultipleChoice(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerForQuestionAnswering(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerForSequenceClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerForTokenClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxRoFormerPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxSpeechEncoderDecoderModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxT5ForConditionalGeneration(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxT5Model(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxT5PreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxVisionEncoderDecoderModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxVisionTextDualEncoderModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxViTForImageClassification(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxViTModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

class FlaxViTPreTrainedModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxWav2Vec2ForCTC(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxWav2Vec2ForPreTraining(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxWav2Vec2Model(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxWav2Vec2PreTrainedModel(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXGLMForCausalLM(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXGLMModel(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXGLMPreTrainedModel(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXLMRobertaForMaskedLM(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXLMRobertaForMultipleChoice(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXLMRobertaForQuestionAnswering(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXLMRobertaForSequenceClassification(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXLMRobertaForTokenClassification(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
class FlaxXLMRobertaModel(metaclass=DummyObject):
_backends = ["flax"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["flax"])
from main import Context
from query import polaris
def handle(context: Context):
# TODO query data based on config and the request body (optional)
return polaris.handle(context)
| 23.5 | 69 | 0.760638 | 27 | 188 | 5.296296 | 0.703704 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18617 | 188 | 7 | 70 | 26.857143 | 0.934641 | 0.335106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
import typing
import ui
import ecs
import components
def create_orc(x: int, y: int) -> ecs.Entity:
e = ecs.Entity()
e.add_component(components.Name('Orc'))
e.add_component(components.Ai(move_or_attack=True))
e.add_component(components.Position(x, y))
e.add_component(components.Sight(4))
e.add_component(components.Moveable())
c = components.CombatStats()
c.ATK = 10
c.DEF = 10
c.HPM = 1
c.HP = c.HPM
e.add_component(c)
e.add_component(components.Blocking())
e.add_component(components.Graphics('O', ui.GREEN, ui.BLACK))
return e
def create_snake(x: int, y: int) -> ecs.Entity:
e = ecs.Entity()
e.add_component(components.Name('Snake'))
e.add_component(components.Ai(move_or_attack=True))
e.add_component(components.Position(x, y))
e.add_component(components.Sight(4))
e.add_component(components.Moveable())
c = components.CombatStats()
c.ATK = 10
c.DEF = 10
c.HPM = 1
c.HP = c.HPM
e.add_component(c)
e.add_component(components.Blocking())
e.add_component(components.Graphics('S', ui.GREEN, ui.BLACK))
return e
| 25.046512 | 63 | 0.710306 | 172 | 1,077 | 4.319767 | 0.232558 | 0.086137 | 0.279946 | 0.433378 | 0.890983 | 0.890983 | 0.834455 | 0.834455 | 0.834455 | 0.834455 | 0 | 0.012945 | 0.139276 | 1,077 | 42 | 64 | 25.642857 | 0.788565 | 0 | 0 | 0.722222 | 0 | 0 | 0.009302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
# Python
import datetime as dt
import os
from datetime import timedelta, datetime
import pandas as pd
import time
# Django
from django.db.models import Q, F, Sum
from django.shortcuts import render
from django.utils import timezone
from django.views.generic import TemplateView, DetailView
# Templated Docs
from templated_docs import fill_template
from templated_docs.http import FileResponse
# Utils
from eda5.core.utils import zaokrozen_zmin, pretvori_v_ure
# Models
from eda5.deli.models import Podskupina, DelStavbe
from eda5.delovninalogi.models import DelovniNalog, Delo, DeloVrsta
from eda5.etaznalastnina.models import LastniskaSkupina, Program
from eda5.moduli.models import Zavihek
from eda5.narocila.models import Narocilo
from eda5.planiranje.models import PlaniranoOpravilo
from eda5.racunovodstvo.models import VrstaStroska
from eda5.skladisce.models import Dnevnik, Artikel
# Forms
from ..forms import ObracunIzrednaDelaForm
from ..forms import DeliSeznamFilterForm, ObracunFilterForm, ObracunIzpisVrstaForm
class ObracunZbirniDelovniNalogView(TemplateView):
    form_class = ObracunIzrednaDelaForm
template_name = "reports/obracuni/obracuni_zbirni_delovni_nalog.html"
def get_context_data(self, *args, **kwargs):
context = super(ObracunZbirniDelovniNalogView, self).get_context_data(*args, **kwargs)
context['obracun_izpis_vrsta_form'] = ObracunIzpisVrstaForm
return context
def get_form_kwargs(self):
return {
'initial': self.get_initial(),
'prefix': self.get_prefix(),
'data': self.request.GET or None
}
def get(self, request, *args, **kwargs):
#form = self.get_form(self.get_form_class())
obracun_izredna_dela_form = ObracunIzrednaDelaForm(request.GET or None)
if obracun_izredna_dela_form.is_valid():
obdobje_od = obracun_izredna_dela_form.cleaned_data['datum_od']
obdobje_do = obracun_izredna_dela_form.cleaned_data['datum_do']
vrsta_stroska = obracun_izredna_dela_form.cleaned_data['vrsta_stroska']
narocilo = obracun_izredna_dela_form.cleaned_data['narocilo']
            ''' fill in the computed (rounded) working-time data for the work order '''
stroskovnomesto = vrsta_stroska
            # work orders matching the base filter
delovninalog_filtered_list = DelovniNalog.objects.filter(
                opravilo__narocilo=narocilo,  # work orders under this purchase order
                # opravilo__planirano_opravilo__isnull=True,  # unplanned tasks only
opravilo__vrsta_stroska=vrsta_stroska,
datum_stop__isnull=False)
#.filter(
#Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
#)
delovninalog_filtered_list = delovninalog_filtered_list.filter(
Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
)
for delovninalog in delovninalog_filtered_list:
                # rounding unit
zmin = delovninalog.opravilo.zmin
                # work entries under this work order
delo_list = Delo.objects.filter(delovninalog=delovninalog).annotate(delo_cas=F('time_stop')-F('time_start'))
                '''
                Compute the time spent on each work entry
                in hours and store it in the database as delo_cas_rac
                '''
for delo in delo_list:
                    # raw duration
delo_cas = delo.delo_cas
                    # duration rounded up to the zmin unit
delo_cas_rac = zaokrozen_zmin(delo_cas, zmin, '+')
                    # converted to a decimal number of hours
delo_cas_rac = pretvori_v_ure(delo_cas_rac)
                    # save to the database
delo.delo_cas_rac = delo_cas_rac
delo.save()
            '''
            From the computed work durations,
            calculate the total time spent per work type (VRSTA_DELA).
            '''
dn_list = []
for dn in delovninalog_filtered_list:
                # total time per work type (ZUN is excluded because it is not billed)
dela_filtered_list = Delo.objects.filter(delovninalog=dn).exclude(vrsta_dela__oznaka="ZUN").order_by('datum', 'time_start')
vrstadel_cas_list = dela_filtered_list.values('vrsta_dela__oznaka', 'vrsta_dela__naziv').order_by('vrsta_dela').annotate(vrstadela_cas_rac_sum=Sum('delo_cas_rac'))
                # log of consumed material
material = Dnevnik.objects.filter(delovninalog=dn)
                # skip work orders that have no billable work and no material
if dela_filtered_list or material:
dn_list.append((dn, dela_filtered_list, vrstadel_cas_list, material))
naslov_data = {
'st_dokumenta': "DN-ZBIRNI-" + vrsta_stroska.oznaka + str(obdobje_do),
'tip_dokumenta': "ZBIRNI DELOVNI NALOG",
'obdobje_od': str(obdobje_od),
'obdobje_do': str(obdobje_do),
'stroskovnomesto': stroskovnomesto,
'narocnik': narocilo.narocnik.kratko_ime + " (" + narocilo.narocnik.davcna_st + ")",
'narocilo': narocilo.oznaka + "|" + narocilo.predmet,
'delovninalog_filtered_list': delovninalog_filtered_list,
}
context = self.get_context_data(
obracun_izredna_dela_form=obracun_izredna_dela_form,
naslov_data=naslov_data,
dn_list=dn_list,
)
else:
context = self.get_context_data(
obracun_izredna_dela_form=obracun_izredna_dela_form,
)
return self.render_to_response(context)
def post(self, request, *args, **kwargs):
###########################################################################
# FORMS
###########################################################################
obracun_izpis_vrsta_form = ObracunIzpisVrstaForm(request.POST or None)
obracun_izredna_dela_form = ObracunIzrednaDelaForm(request.POST or None)
obracun_izpis_vrsta_form_is_valid = False
obracun_izredna_dela_form_is_valid = False
###########################################################################
        # RETRIEVE DATA
###########################################################################
if obracun_izpis_vrsta_form.is_valid():
vrsta_izpisa = obracun_izpis_vrsta_form.cleaned_data['vrsta_izpisa_field']
obracun_izpis_vrsta_form_is_valid = True
if obracun_izredna_dela_form.is_valid():
obdobje_od = obracun_izredna_dela_form.cleaned_data['datum_od']
obdobje_do = obracun_izredna_dela_form.cleaned_data['datum_do']
vrsta_stroska = obracun_izredna_dela_form.cleaned_data['vrsta_stroska']
narocilo = obracun_izredna_dela_form.cleaned_data['narocilo']
obracun_izredna_dela_form_is_valid = True
            ''' fill in the computed (rounded) working-time data for the work order '''
stroskovnomesto = vrsta_stroska
            # work orders matching the base filter
delovninalog_filtered_list = DelovniNalog.objects.filter(
                opravilo__narocilo=narocilo,  # work orders under this purchase order
                # opravilo__planirano_opravilo__isnull=True,  # unplanned tasks only
opravilo__vrsta_stroska=vrsta_stroska,
datum_stop__isnull=False)
#.filter(
#Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
#)
delovninalog_filtered_list = delovninalog_filtered_list.filter(
Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
)
            '''
            From the computed work durations,
            calculate the total time spent per work type (VRSTA_DELA).
            '''
dn_list = []
for dn in delovninalog_filtered_list:
                # total time per work type (ZUN is excluded because it is not billed)
dela_filtered_list = Delo.objects.filter(delovninalog=dn).exclude(vrsta_dela__oznaka="ZUN").order_by('datum', 'time_start')
vrstadel_cas_list = dela_filtered_list.values('vrsta_dela__oznaka', 'vrsta_dela__naziv').order_by('vrsta_dela').annotate(vrstadela_cas_rac_sum=Sum('delo_cas_rac'))
                # log of consumed material
material = Dnevnik.objects.filter(delovninalog=dn)
                # skip work orders that have no billable work and no material
if dela_filtered_list or material:
dn_list.append((dn, dela_filtered_list, vrstadel_cas_list, material))
        # if both forms are valid
if obracun_izpis_vrsta_form_is_valid == True and obracun_izredna_dela_form_is_valid == True:
###########################################################################
# UKAZI
###########################################################################
            ''' fill in the computed (rounded) working-time data for the work order '''
stroskovnomesto = vrsta_stroska
            # work orders matching the base filter
delovninalog_filtered_list = DelovniNalog.objects.filter(
                opravilo__narocilo=narocilo,  # work orders under this purchase order
                # opravilo__planirano_opravilo__isnull=True,  # unplanned tasks only
opravilo__vrsta_stroska=vrsta_stroska,
datum_stop__isnull=False)
#.filter(
#Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
#)
delovninalog_filtered_list = delovninalog_filtered_list.filter(
Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
)
            '''
            From the computed work durations,
            calculate the total time spent per work type (VRSTA_DELA).
            '''
dn_list = []
for dn in delovninalog_filtered_list:
                # total time per work type (ZUN is excluded because it is not billed)
dela_filtered_list = Delo.objects.filter(delovninalog=dn).exclude(vrsta_dela__oznaka="ZUN").order_by('datum', 'time_start')
vrstadel_cas_list = dela_filtered_list.values('vrsta_dela__oznaka', 'vrsta_dela__naziv').order_by('vrsta_dela').annotate(vrstadela_cas_rac_sum=Sum('delo_cas_rac'))
                # log of consumed material
material = Dnevnik.objects.filter(delovninalog=dn)
                # skip work orders that have no billable work and no material
if dela_filtered_list or material:
dn_list.append((dn, dela_filtered_list, vrstadel_cas_list, material))
naslov_data = {
'st_dokumenta': "DN-ZBIRNI-" + vrsta_stroska.oznaka + str(obdobje_do),
'tip_dokumenta': "ZBIRNI DELOVNI NALOG",
'obdobje_od': str(obdobje_od),
'obdobje_do': str(obdobje_do),
'stroskovnomesto': stroskovnomesto,
'narocnik': narocilo.narocnik.kratko_ime + " (" + narocilo.narocnik.davcna_st + ")",
'narocilo': narocilo.oznaka + "|" + narocilo.predmet,
'delovninalog_filtered_list': delovninalog_filtered_list,
}
            # hand the data to the templated_docs application
if vrsta_izpisa == "neplanirano":
filename = fill_template(
'reports/obracuni/obracun_neplanirana.ods',
{'naslov_data': naslov_data, 'dn_list': dn_list},
output_format="xlsx"
)
visible_filename = 'zbirni_obracun.{}'.format("xlsx")
return FileResponse(filename, visible_filename)
            # hand the data to the templated_docs application
if vrsta_izpisa == "neplanirano_delitev":
filename = fill_template(
'reports/obracuni/obracun_neplanirana_delitev.ods',
{'naslov_data': naslov_data, 'dn_list': dn_list},
output_format="xlsx"
)
visible_filename = 'zbirni_obracun.{}'.format("xlsx")
return FileResponse(filename, visible_filename)
        # in case the forms above are NOT filled in correctly,
        # execute the commands below
else:
return render(request, self.template_name, {
'obracun_izpis_vrsta_form': obracun_izpis_vrsta_form,
'obracun_izredna_dela_form': obracun_izredna_dela_form,
}
)
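The views above rely on `eda5.core.utils.zaokrozen_zmin` and `pretvori_v_ure` to round each work duration up to the billing unit `zmin` and convert it to decimal hours. Those helpers are not shown in this file; a minimal self-contained sketch of the rounding logic their call sites imply (function names and exact behaviour are assumptions, not the real eda5 implementation) could look like:

```python
import math
from datetime import timedelta

def round_up_to_zmin(duration: timedelta, zmin: int) -> timedelta:
    """Round a duration up to the nearest multiple of zmin minutes (assumed behaviour)."""
    step = zmin * 60  # billing step in seconds
    steps = math.ceil(duration.total_seconds() / step)
    return timedelta(seconds=steps * step)

def to_decimal_hours(duration: timedelta) -> float:
    """Convert a duration to a decimal number of hours."""
    return duration.total_seconds() / 3600

# 40 minutes of work billed in 15-minute steps -> 45 minutes -> 0.75 h
billed = to_decimal_hours(round_up_to_zmin(timedelta(minutes=40), zmin=15))
```

Rounding up (rather than to the nearest step) matches the `'+'` argument passed to `zaokrozen_zmin` above, i.e. partial billing units are always charged in full.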
class ObracunZbirniDelovniNalogPlaniranaView(TemplateView):
    form_class = ObracunIzrednaDelaForm
template_name = "reports/obracuni/obracuni_zbirni_delovni_nalog_planirano.html"
def get_context_data(self, *args, **kwargs):
context = super(ObracunZbirniDelovniNalogPlaniranaView, self).get_context_data(*args, **kwargs)
context['obracun_izpis_vrsta_form'] = ObracunIzpisVrstaForm
return context
def get_form_kwargs(self):
return {
'initial': self.get_initial(),
'prefix': self.get_prefix(),
'data': self.request.GET or None
}
def get(self, request, *args, **kwargs):
        ### DATA ################################################################################
        # load the form for filtering the list of work orders
        # that are taken into account when building the report
obracun_izredna_dela_form = ObracunIzrednaDelaForm(request.GET or None)
        # assume initially that the form is not valid
obracun_izredna_dela_form_is_valid = False
        # once the form above is valid,
        # read from it the values that are used for the filter
if obracun_izredna_dela_form.is_valid():
            # read the individual values from the form
obdobje_od = obracun_izredna_dela_form.cleaned_data['datum_od']
obdobje_do = obracun_izredna_dela_form.cleaned_data['datum_do']
vrsta_stroska = obracun_izredna_dela_form.cleaned_data['vrsta_stroska']
narocilo = obracun_izredna_dela_form.cleaned_data['narocilo']
prikazi_izpis_del = obracun_izredna_dela_form.cleaned_data['prikazi_izpis_del']
prikazi_izpis_dn = obracun_izredna_dela_form.cleaned_data['prikazi_izpis_dn']
            # mark the form as valid;
            # the commands below run based on this flag
obracun_izredna_dela_form_is_valid = True
        ### COMMANDS ##############################################################################
if obracun_izredna_dela_form_is_valid == True:
            # build the list of work orders that match the
            # selected filter and will be used to build the report
delovninalog_filtered_list = DelovniNalog.objects.filter(
                # work orders under this purchase order
opravilo__narocilo=narocilo,
                # for the selected cost centre
opravilo__vrsta_stroska=vrsta_stroska,
                # completed work orders only
datum_stop__isnull=False,
)
            # additional filtering by the selected period
delovninalog_filtered_list = delovninalog_filtered_list.filter(
Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
)
            # only work orders of planned tasks
delovninalog_filtered_list = delovninalog_filtered_list.filter(
opravilo__planirano_opravilo__isnull=False,
)
            # for the filtered work orders, compute the rounded
            # execution time according to the zmin parameter
for dn in delovninalog_filtered_list:
                # rounding unit
zmin = dn.opravilo.zmin
                # work entries under this work order
delo_list = Delo.objects.filter(delovninalog=dn).annotate(delo_cas=F('time_stop')-F('time_start'))
                # compute the time spent on each work entry
                # in hours and store it in the database as delo_cas_rac
for delo in delo_list:
                    # raw delo_cas computed above
delo_cas = delo.delo_cas
                    # round delo_cas with the rounding helper
delo_cas_rac = zaokrozen_zmin(delo_cas, zmin, '+')
                    # convert delo_cas_rac to hours
delo_cas_rac = pretvori_v_ure(delo_cas_rac)
                    # store delo_cas_rac in the database
delo.delo_cas_rac = delo_cas_rac
delo.save()
            '''
            From the computed work durations,
            calculate the total time spent per work type (VRSTA_DELA).
            '''
vrstadel_cas_list = Delo.objects.filter(delovninalog__in=delovninalog_filtered_list).exclude(vrsta_dela__oznaka="ZUN").values(
'delovninalog__opravilo__planirano_opravilo__id', 'delovninalog__opravilo__planirano_opravilo__oznaka', 'vrsta_dela__id').annotate(
vrstadela_cas_rac_sum=Sum('delo_cas_rac')).order_by('delovninalog__opravilo__planirano_opravilo__oznaka')
planiranoopravilo_vrstadel_sum_list = []
for vrsta_del_sum in vrstadel_cas_list:
planirano_opravilo_id = vrsta_del_sum['delovninalog__opravilo__planirano_opravilo__id']
vrsta_dela_id = vrsta_del_sum['vrsta_dela__id']
vrstadela_cas_rac_sum = vrsta_del_sum['vrstadela_cas_rac_sum']
                # fetch the planned-task instance
                # so that all entries can be filtered
                # by this task
planiranoopravilo = PlaniranoOpravilo.objects.filter(id=planirano_opravilo_id).first()
                # ids of the work orders that are
                # used to build the report
delovninalog_filtered_id_list = delovninalog_filtered_list.values_list('id', flat=True)
                # fetch the work-type instance so that its
                # attributes can be printed in the report
vrstadela = DeloVrsta.objects.get(id=vrsta_dela_id)
                # build the list of work orders with the computed
                # rounded execution time delo_cas_rac
dn_list = []
dela_delovninalog_list = DelovniNalog.objects.filter(
id__in=delovninalog_filtered_id_list, opravilo__planirano_opravilo=planiranoopravilo).values(
'id').annotate(dn_rac_sum=Sum('delo__delo_cas_rac')).order_by('datum_start')
for dn in dela_delovninalog_list:
dn_id = dn['id']
delovninalog = DelovniNalog.objects.get(id=dn_id)
dn_rac_sum = dn['dn_rac_sum']
                    # final result:
                    # for each planned task we have the work order and its computed time
dn_list.append((delovninalog, dn_rac_sum))
                # build the list of material under the work order
material_dn_list = []
delovninalog_filtered_id_list = delovninalog_filtered_list.values_list('id', flat=True)
material_delovninalog_list = DelovniNalog.objects.filter(
id__in=delovninalog_filtered_id_list, opravilo__planirano_opravilo=planiranoopravilo).values(
'dnevnik__artikel__id').annotate(material_kom=Sum('dnevnik__kom'))
for i in material_delovninalog_list:
artikel_id = i['dnevnik__artikel__id']
artikel = Artikel.objects.filter(id=artikel_id).first()
if artikel:
artikel_dn_kom = i['material_kom']
material_dn_list.append((artikel, artikel_dn_kom))
                # build the output that is part of the template
vnos = {
'plan': planiranoopravilo.plan,
'planiranoopravilo': planiranoopravilo,
'vrstadela': vrstadela,
'dn_list':dn_list,
'material_dn_list': material_dn_list,
'planiranoopravilo_vrstadela_cas_rac_sum': vrstadela_cas_rac_sum,
}
planiranoopravilo_vrstadel_sum_list.append(vnos)
skupaj_ur = vrstadel_cas_list.aggregate(skupaj_ur=Sum('vrstadela_cas_rac_sum'))
naslov_data = {
'st_dokumenta': "DN-ZBIRNI-" + vrsta_stroska.oznaka + str(obdobje_do),
'tip_dokumenta': "ZBIRNI DELOVNI NALOG",
'obdobje_od': str(obdobje_od),
'obdobje_do': str(obdobje_do),
'stroskovnomesto': vrsta_stroska,
'narocnik': narocilo.narocnik.kratko_ime + " (" + narocilo.narocnik.davcna_st + ")",
'narocilo': narocilo.oznaka + "|" + narocilo.predmet,
'delovninalog_filtered_list': delovninalog_filtered_list,
'prikazi_izpis_del': prikazi_izpis_del,
'prikazi_izpis_dn': prikazi_izpis_dn,
}
context = self.get_context_data(
obracun_izredna_dela_form=obracun_izredna_dela_form,
naslov_data=naslov_data,
planiranoopravilo_vrstadel_sum_list=planiranoopravilo_vrstadel_sum_list,
skupaj_ur=skupaj_ur,
)
else:
context = self.get_context_data(
obracun_izredna_dela_form=obracun_izredna_dela_form,
)
return self.render_to_response(context)
def post(self, request, *args, **kwargs):
###########################################################################
# FORMS
###########################################################################
obracun_izpis_vrsta_form = ObracunIzpisVrstaForm(request.POST or None)
obracun_izredna_dela_form = ObracunIzrednaDelaForm(request.POST or None)
obracun_izpis_vrsta_form_is_valid = False
obracun_izredna_dela_form_is_valid = False
###########################################################################
        # RETRIEVE DATA
###########################################################################
if obracun_izpis_vrsta_form.is_valid():
vrsta_izpisa = obracun_izpis_vrsta_form.cleaned_data['vrsta_izpisa_field']
obracun_izpis_vrsta_form_is_valid = True
if obracun_izredna_dela_form.is_valid():
obdobje_od = obracun_izredna_dela_form.cleaned_data['datum_od']
obdobje_do = obracun_izredna_dela_form.cleaned_data['datum_do']
vrsta_stroska = obracun_izredna_dela_form.cleaned_data['vrsta_stroska']
narocilo = obracun_izredna_dela_form.cleaned_data['narocilo']
prikazi_izpis_del = obracun_izredna_dela_form.cleaned_data['prikazi_izpis_del']
prikazi_izpis_dn = obracun_izredna_dela_form.cleaned_data['prikazi_izpis_dn']
obracun_izredna_dela_form_is_valid = True
        # if both forms are valid
if obracun_izpis_vrsta_form_is_valid == True and obracun_izredna_dela_form_is_valid == True:
            # build the list of work orders that match the
            # selected filter and will be used to build the report
delovninalog_filtered_list = DelovniNalog.objects.filter(
                # work orders under this purchase order
opravilo__narocilo=narocilo,
                # for the selected cost centre
opravilo__vrsta_stroska=vrsta_stroska,
                # completed work orders only
datum_stop__isnull=False,
)
            # additional filtering by the selected period
delovninalog_filtered_list = delovninalog_filtered_list.filter(
Q(datum_stop__gte=obdobje_od) & Q(datum_stop__lte=obdobje_do)
)
            # only work orders of planned tasks
delovninalog_filtered_list = delovninalog_filtered_list.filter(
opravilo__planirano_opravilo__isnull=False,
)
            # for the filtered work orders, compute the rounded
            # execution time according to the zmin parameter
for dn in delovninalog_filtered_list:
                # rounding unit
zmin = dn.opravilo.zmin
                # work entries under this work order
delo_list = Delo.objects.filter(delovninalog=dn).annotate(delo_cas=F('time_stop')-F('time_start'))
                # compute the time spent on each work entry
                # in hours and store it in the database as delo_cas_rac
for delo in delo_list:
                    # raw delo_cas computed above
delo_cas = delo.delo_cas
                    # round delo_cas with the rounding helper
delo_cas_rac = zaokrozen_zmin(delo_cas, zmin, '+')
                    # convert delo_cas_rac to hours
delo_cas_rac = pretvori_v_ure(delo_cas_rac)
                    # store delo_cas_rac in the database
delo.delo_cas_rac = delo_cas_rac
delo.save()
            '''
            From the computed work durations,
            calculate the total time spent per work type (VRSTA_DELA).
            '''
vrstadel_cas_list = Delo.objects.filter(delovninalog__in=delovninalog_filtered_list).exclude(vrsta_dela__oznaka="ZUN").values(
'delovninalog__opravilo__planirano_opravilo__id', 'delovninalog__opravilo__planirano_opravilo__oznaka', 'vrsta_dela__id').annotate(
vrstadela_cas_rac_sum=Sum('delo_cas_rac')).order_by('delovninalog__opravilo__planirano_opravilo__oznaka')
planiranoopravilo_vrstadel_sum_list = []
for vrsta_del_sum in vrstadel_cas_list:
planirano_opravilo_id = vrsta_del_sum['delovninalog__opravilo__planirano_opravilo__id']
vrsta_dela_id = vrsta_del_sum['vrsta_dela__id']
vrstadela_cas_rac_sum = vrsta_del_sum['vrstadela_cas_rac_sum']
                # fetch the planned-task instance
                # so that all entries can be filtered
                # by this task
planiranoopravilo = PlaniranoOpravilo.objects.filter(id=planirano_opravilo_id).first()
                # ids of the work orders that are
                # used to build the report
delovninalog_filtered_id_list = delovninalog_filtered_list.values_list('id', flat=True)
                # fetch the work-type instance so that its
                # attributes can be printed in the report
vrstadela = DeloVrsta.objects.get(id=vrsta_dela_id)
                # build the list of work orders with the computed
                # rounded execution time delo_cas_rac
dn_list = []
dela_delovninalog_list = DelovniNalog.objects.filter(
id__in=delovninalog_filtered_id_list, opravilo__planirano_opravilo=planiranoopravilo).values(
'id').annotate(dn_rac_sum=Sum('delo__delo_cas_rac')).order_by('datum_start')
for dn in dela_delovninalog_list:
dn_id = dn['id']
delovninalog = DelovniNalog.objects.get(id=dn_id)
dn_rac_sum = dn['dn_rac_sum']
                    # final result:
                    # for each planned task we have the work order and its computed time
dn_list.append((delovninalog, dn_rac_sum))
                # build the list of material under the work order
material_dn_list = []
delovninalog_filtered_id_list = delovninalog_filtered_list.values_list('id', flat=True)
material_delovninalog_list = DelovniNalog.objects.filter(
id__in=delovninalog_filtered_id_list, opravilo__planirano_opravilo=planiranoopravilo).values(
'dnevnik__artikel__id').annotate(material_kom=Sum('dnevnik__kom'))
for i in material_delovninalog_list:
artikel_id = i['dnevnik__artikel__id']
artikel = Artikel.objects.filter(id=artikel_id).first()
if artikel:
artikel_dn_kom = i['material_kom']
material_dn_list.append((artikel, artikel_dn_kom))
                # build the output that is part of the template
vnos = {
'plan': planiranoopravilo.plan,
'planiranoopravilo': planiranoopravilo,
'vrstadela': vrstadela,
'dn_list':dn_list,
'material_dn_list': material_dn_list,
'planiranoopravilo_vrstadela_cas_rac_sum': vrstadela_cas_rac_sum,
}
planiranoopravilo_vrstadel_sum_list.append(vnos)
skupaj_ur = vrstadel_cas_list.aggregate(skupaj_ur=Sum('vrstadela_cas_rac_sum'))
naslov_data = {
'st_dokumenta': "DN-ZBIRNI-" + vrsta_stroska.oznaka + str(obdobje_do),
'tip_dokumenta': "ZBIRNI DELOVNI NALOG",
'obdobje_od': str(obdobje_od),
'obdobje_do': str(obdobje_do),
'stroskovnomesto': vrsta_stroska,
'narocnik': narocilo.narocnik.kratko_ime + " (" + narocilo.narocnik.davcna_st + ")",
'narocilo': narocilo.oznaka + "|" + narocilo.predmet,
'delovninalog_filtered_list': delovninalog_filtered_list,
'prikazi_izpis_del': prikazi_izpis_del,
'prikazi_izpis_dn': prikazi_izpis_dn,
}
        # hand the data over to the templated_docs application
        if vrsta_izpisa == "planirano":
            filename = fill_template(
                'reports/obracuni/obracun_planirano.ods',
                {
                    'naslov_data': naslov_data,
                    'planiranoopravilo_vrstadel_sum_list': planiranoopravilo_vrstadel_sum_list,
                    'skupaj_ur': skupaj_ur,
                },
                output_format="xlsx"
            )
            visible_filename = 'zbirni_obracun.{}'.format("xlsx")
            return FileResponse(filename, visible_filename)
        # hand the data over to the templated_docs application
        if vrsta_izpisa == "planirano_delitev":
            filename = fill_template(
                # fill_template() requires a template path as its first argument; the
                # path below is assumed by analogy with the "planirano" branch above
                'reports/obracuni/obracun_planirano_delitev.ods',
                {
                    'naslov_data': naslov_data,
                    'planiranoopravilo_vrstadel_sum_list': planiranoopravilo_vrstadel_sum_list,
                    'skupaj_ur': skupaj_ur,
                },
                output_format="xlsx"
            )
            visible_filename = 'zbirni_obracun.{}'.format("xlsx")
            return FileResponse(filename, visible_filename)
        # if the forms above are NOT filled in correctly,
        # fall through and re-render the page
        else:
            return render(request, self.template_name, {
                'obracun_izpis_vrsta_form': obracun_izpis_vrsta_form,
                'obracun_izredna_dela_form': obracun_izredna_dela_form,
            })
| 42.861979 | 179 | 0.598669 | 3,335 | 32,918 | 5.528936 | 0.096852 | 0.034492 | 0.047833 | 0.058463 | 0.920495 | 0.917946 | 0.915668 | 0.902706 | 0.895168 | 0.887467 | 0 | 0.000395 | 0.307734 | 32,918 | 767 | 180 | 42.917862 | 0.808759 | 0.139164 | 0 | 0.811881 | 0 | 0 | 0.114565 | 0.043736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019802 | false | 0 | 0.054455 | 0.00495 | 0.118812 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
16a13189012a38bfd591cf960fd894b02aa31a9a | 3,482 | py | Python | mkdocs/tests/toc_tests.py | davidhrbac/mkdocs | 3c8a1fccca29272ce327e89c398a55771a7f5635 | [
"BSD-2-Clause"
] | 57 | 2016-09-28T01:19:35.000Z | 2022-01-07T13:59:21.000Z | mkdocs/tests/toc_tests.py | hufyhang/mkdocs | 4c4ef7fa7224713e17d479742c2df1b2fc78edcb | [
"BSD-2-Clause"
] | 16 | 2017-02-06T15:48:03.000Z | 2018-02-28T21:40:10.000Z | mkdocs/tests/toc_tests.py | hufyhang/mkdocs | 4c4ef7fa7224713e17d479742c2df1b2fc78edcb | [
"BSD-2-Clause"
] | 81 | 2016-09-06T04:21:06.000Z | 2022-03-10T06:32:45.000Z | #!/usr/bin/env python
# coding: utf-8
from __future__ import unicode_literals

import unittest

from mkdocs.tests.base import dedent, markdown_to_toc

class TableOfContentsTests(unittest.TestCase):

    def test_indented_toc(self):
        md = dedent("""
        # Heading 1
        ## Heading 2
        ### Heading 3
        """)
        expected = dedent("""
        Heading 1 - #heading-1
            Heading 2 - #heading-2
                Heading 3 - #heading-3
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_indented_toc_html(self):
        md = dedent("""
        # Heading 1
        ## <code>Heading</code> 2
        ## Heading 3
        """)
        expected = dedent("""
        Heading 1 - #heading-1
            Heading 2 - #heading-2
            Heading 3 - #heading-3
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_flat_toc(self):
        md = dedent("""
        # Heading 1
        # Heading 2
        # Heading 3
        """)
        expected = dedent("""
        Heading 1 - #heading-1
        Heading 2 - #heading-2
        Heading 3 - #heading-3
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_flat_h2_toc(self):
        md = dedent("""
        ## Heading 1
        ## Heading 2
        ## Heading 3
        """)
        expected = dedent("""
        Heading 1 - #heading-1
        Heading 2 - #heading-2
        Heading 3 - #heading-3
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_mixed_toc(self):
        md = dedent("""
        # Heading 1
        ## Heading 2
        # Heading 3
        ### Heading 4
        ### Heading 5
        """)
        expected = dedent("""
        Heading 1 - #heading-1
            Heading 2 - #heading-2
        Heading 3 - #heading-3
            Heading 4 - #heading-4
            Heading 5 - #heading-5
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_mixed_html(self):
        md = dedent("""
        # Heading 1
        ## Heading 2
        # Heading 3
        ### Heading 4
        ### <a>Heading 5</a>
        """)
        expected = dedent("""
        Heading 1 - #heading-1
            Heading 2 - #heading-2
        Heading 3 - #heading-3
            Heading 4 - #heading-4
            Heading 5 - #heading-5
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_nested_anchor(self):
        md = dedent("""
        # Heading 1
        ## Heading 2
        # Heading 3
        ### Heading 4
        ### <a href="/">Heading 5</a>
        """)
        expected = dedent("""
        Heading 1 - #heading-1
            Heading 2 - #heading-2
        Heading 3 - #heading-3
            Heading 4 - #heading-4
            Heading 5 - #heading-5
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)

    def test_entityref(self):
        md = dedent("""
        # Heading & 1
        ## Heading > 2
        ### Heading < 3
        """)
        expected = dedent("""
        Heading & 1 - #heading-1
            Heading > 2 - #heading-2
                Heading < 3 - #heading-3
        """)
        toc = markdown_to_toc(md)
        self.assertEqual(str(toc).strip(), expected)
| 25.792593 | 53 | 0.487651 | 378 | 3,482 | 4.386243 | 0.134921 | 0.115802 | 0.199035 | 0.177322 | 0.841375 | 0.837153 | 0.820265 | 0.820265 | 0.820265 | 0.820265 | 0 | 0.042771 | 0.382252 | 3,482 | 134 | 54 | 25.985075 | 0.728033 | 0.009765 | 0 | 0.791667 | 0 | 0 | 0.52989 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.025 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
16d2b35f97c0837810d64d2e8f133c126d6fe41b | 157 | py | Python | src/pandas_frequency_distribution/__init__.py | davidghobson1/martian | ae51cb1ea5488d4d8849b9d6cee0dbab88251352 | [
"MIT"
] | null | null | null | src/pandas_frequency_distribution/__init__.py | davidghobson1/martian | ae51cb1ea5488d4d8849b9d6cee0dbab88251352 | [
"MIT"
] | null | null | null | src/pandas_frequency_distribution/__init__.py | davidghobson1/martian | ae51cb1ea5488d4d8849b9d6cee0dbab88251352 | [
"MIT"
] | null | null | null | from pandas_frequency_distribution.pandas_freq_dist import PandasFreqDist
from pandas_frequency_distribution.pandas_word_freq_dist import PandasWordFreqDist | 78.5 | 82 | 0.936306 | 19 | 157 | 7.263158 | 0.526316 | 0.144928 | 0.275362 | 0.449275 | 0.536232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050955 | 157 | 2 | 82 | 78.5 | 0.926175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
bc6d95c065e289b3617d7e131b12eca61b6a7551 | 50,323 | py | Python | api/client/swagger_client/api/model_service_api.py | krishnakumar27/mlx | dce67d58dffa24ca7a6a4d6b5fd8d4eb94e35215 | [
"Apache-2.0"
] | 98 | 2021-05-03T23:27:53.000Z | 2022-03-13T02:29:12.000Z | api/client/swagger_client/api/model_service_api.py | krishnakumar27/mlx | dce67d58dffa24ca7a6a4d6b5fd8d4eb94e35215 | [
"Apache-2.0"
] | 296 | 2021-05-03T22:44:26.000Z | 2022-03-31T11:50:16.000Z | api/client/swagger_client/api/model_service_api.py | krishnakumar27/mlx | dce67d58dffa24ca7a6a4d6b5fd8d4eb94e35215 | [
"Apache-2.0"
] | 38 | 2021-05-03T22:52:59.000Z | 2022-03-31T03:58:34.000Z | # Copyright 2021 The MLX Contributors
#
# SPDX-License-Identifier: Apache-2.0
# coding: utf-8
"""
MLX API
MLX API Extension for Kubeflow Pipelines # noqa: E501
OpenAPI spec version: 0.1.30-upload-catalog-from-url
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import

import re  # noqa: F401

# python 2 and python 3 compatibility library
import six

from swagger_client.api_client import ApiClient

class ModelServiceApi(object):
    """NOTE: This class is auto generated by the swagger code generator program.

    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client

    def approve_models_for_publishing(self, model_ids, **kwargs):  # noqa: E501
        """approve_models_for_publishing  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.approve_models_for_publishing(model_ids, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param list[str] model_ids: Array of model IDs to be approved for publishing. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.approve_models_for_publishing_with_http_info(model_ids, **kwargs)  # noqa: E501
        else:
            (data) = self.approve_models_for_publishing_with_http_info(model_ids, **kwargs)  # noqa: E501
            return data

    def approve_models_for_publishing_with_http_info(self, model_ids, **kwargs):  # noqa: E501
        """approve_models_for_publishing  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.approve_models_for_publishing_with_http_info(model_ids, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param list[str] model_ids: Array of model IDs to be approved for publishing. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['model_ids']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method approve_models_for_publishing" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'model_ids' is set
        if ('model_ids' not in params or
                params['model_ids'] is None):
            raise ValueError("Missing the required parameter `model_ids` when calling `approve_models_for_publishing`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'model_ids' in params:
            body_params = params['model_ids']
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/publish_approved', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def create_model(self, body, **kwargs):  # noqa: E501
        """create_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_model(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param ApiModel body: (required)
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.create_model_with_http_info(body, **kwargs)  # noqa: E501
        else:
            (data) = self.create_model_with_http_info(body, **kwargs)  # noqa: E501
            return data

    def create_model_with_http_info(self, body, **kwargs):  # noqa: E501
        """create_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.create_model_with_http_info(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param ApiModel body: (required)
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['body']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_model" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `create_model`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiModel',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def delete_model(self, id, **kwargs):  # noqa: E501
        """delete_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_model(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_model_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_model_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def delete_model_with_http_info(self, id, **kwargs):  # noqa: E501
        """delete_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_model_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_model" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `delete_model`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/{id}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def download_model_files(self, id, **kwargs):  # noqa: E501
        """Returns the model artifacts compressed into a .tgz (.tar.gz) file.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.download_model_files(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :param bool include_generated_code: Include generated run scripts in download
        :return: urllib3.response.HTTPResponse (assuming _preload_content=False)
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.download_model_files_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.download_model_files_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def download_model_files_with_http_info(self, id, **kwargs):  # noqa: E501
        """Returns the model artifacts compressed into a .tgz (.tar.gz) file.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.download_model_files_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :param bool include_generated_code: Include generated run scripts in download
        :return: urllib3.response.HTTPResponse (assuming _preload_content=False)
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id', 'include_generated_code']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method download_model_files" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `download_model_files`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []
        if 'include_generated_code' in params:
            query_params.append(('include_generated_code', params['include_generated_code']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/gzip'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/{id}/download', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='file',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', False),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def generate_model_code(self, id, **kwargs):  # noqa: E501
        """generate_model_code  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.generate_model_code(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: ApiGenerateModelCodeResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.generate_model_code_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.generate_model_code_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def generate_model_code_with_http_info(self, id, **kwargs):  # noqa: E501
        """generate_model_code  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.generate_model_code_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: ApiGenerateModelCodeResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method generate_model_code" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `generate_model_code`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/{id}/generate_code', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiGenerateModelCodeResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_model(self, id, **kwargs):  # noqa: E501
        """get_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_model(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_model_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.get_model_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def get_model_with_http_info(self, id, **kwargs):  # noqa: E501
        """get_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_model_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_model" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `get_model`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/{id}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiModel',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_model_template(self, id, **kwargs):  # noqa: E501
        """get_model_template  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_model_template(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: ApiGetTemplateResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_model_template_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.get_model_template_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def get_model_template_with_http_info(self, id, **kwargs):  # noqa: E501
        """get_model_template  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_model_template_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :return: ApiGetTemplateResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_model_template" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `get_model_template`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/{id}/templates', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiGetTemplateResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def list_models(self, **kwargs):  # noqa: E501
        """list_models  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_models(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str page_token:
        :param int page_size:
        :param str sort_by: Can be format of \"field_name\", \"field_name asc\" or \"field_name desc\" Ascending by default.
        :param str filter: A string-serialized JSON dictionary with key-value pairs that correspond to the Model's attribute names and their respective values to be filtered for.
        :return: ApiListModelsResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.list_models_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.list_models_with_http_info(**kwargs)  # noqa: E501
            return data

    def list_models_with_http_info(self, **kwargs):  # noqa: E501
        """list_models  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_models_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str page_token:
        :param int page_size:
        :param str sort_by: Can be format of \"field_name\", \"field_name asc\" or \"field_name desc\" Ascending by default.
        :param str filter: A string-serialized JSON dictionary with key-value pairs that correspond to the Model's attribute names and their respective values to be filtered for.
        :return: ApiListModelsResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['page_token', 'page_size', 'sort_by', 'filter']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_models" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'page_token' in params:
            query_params.append(('page_token', params['page_token']))  # noqa: E501
        if 'page_size' in params:
            query_params.append(('page_size', params['page_size']))  # noqa: E501
        if 'sort_by' in params:
            query_params.append(('sort_by', params['sort_by']))  # noqa: E501
        if 'filter' in params:
            query_params.append(('filter', params['filter']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiListModelsResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def run_model(self, id, pipeline_stage, execution_platform, **kwargs):  # noqa: E501
        """run_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.run_model(id, pipeline_stage, execution_platform, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :param str pipeline_stage: pipeline stage, either 'train' or 'serve' (required)
        :param str execution_platform: execution platform, i.e. 'kubernetes', 'knative' (required)
        :param str run_name: name to identify the run on the Kubeflow Pipelines UI, defaults to model identifier
        :param Dictionary parameters: optional run parameters, must include 'github_url' and 'github_token' if credentials are required
        :return: ApiRunCodeResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.run_model_with_http_info(id, pipeline_stage, execution_platform, **kwargs)  # noqa: E501
        else:
            (data) = self.run_model_with_http_info(id, pipeline_stage, execution_platform, **kwargs)  # noqa: E501
            return data

    def run_model_with_http_info(self, id, pipeline_stage, execution_platform, **kwargs):  # noqa: E501
        """run_model  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.run_model_with_http_info(id, pipeline_stage, execution_platform, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: (required)
        :param str pipeline_stage: pipeline stage, either 'train' or 'serve' (required)
        :param str execution_platform: execution platform, i.e. 'kubernetes', 'knative' (required)
        :param str run_name: name to identify the run on the Kubeflow Pipelines UI, defaults to model identifier
        :param Dictionary parameters: optional run parameters, must include 'github_url' and 'github_token' if credentials are required
        :return: ApiRunCodeResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id', 'pipeline_stage', 'execution_platform', 'run_name', 'parameters']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method run_model" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `run_model`")  # noqa: E501
        # verify the required parameter 'pipeline_stage' is set
        if ('pipeline_stage' not in params or
                params['pipeline_stage'] is None):
            raise ValueError("Missing the required parameter `pipeline_stage` when calling `run_model`")  # noqa: E501
        # verify the required parameter 'execution_platform' is set
        if ('execution_platform' not in params or
                params['execution_platform'] is None):
            raise ValueError("Missing the required parameter `execution_platform` when calling `run_model`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []
        if 'pipeline_stage' in params:
            query_params.append(('pipeline_stage', params['pipeline_stage']))  # noqa: E501
        if 'execution_platform' in params:
            query_params.append(('execution_platform', params['execution_platform']))  # noqa: E501
        if 'run_name' in params:
            query_params.append(('run_name', params['run_name']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'parameters' in params:
            body_params = params['parameters']
        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/models/{id}/run', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiRunCodeResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def set_featured_models(self, model_ids, **kwargs): # noqa: E501
        """set_featured_models # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_featured_models(model_ids, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param list[str] model_ids: Array of model IDs to be featured. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.set_featured_models_with_http_info(model_ids, **kwargs) # noqa: E501
        else:
            (data) = self.set_featured_models_with_http_info(model_ids, **kwargs) # noqa: E501
            return data

    def set_featured_models_with_http_info(self, model_ids, **kwargs): # noqa: E501
        """set_featured_models # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_featured_models_with_http_info(model_ids, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param list[str] model_ids: Array of model IDs to be featured. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['model_ids'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method set_featured_models" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'model_ids' is set
        if ('model_ids' not in params or
                params['model_ids'] is None):
            raise ValueError("Missing the required parameter `model_ids` when calling `set_featured_models`") # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'model_ids' in params:
            body_params = params['model_ids']
        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/models/featured', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None, # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def upload_model(self, uploadfile, **kwargs): # noqa: E501
        """upload_model # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_model(uploadfile, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param file uploadfile: The model YAML file to upload. Can be a GZip-compressed TAR file (.tgz, .tar.gz) or a YAML file (.yaml, .yml). Maximum size is 32MB. (required)
        :param str name:
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.upload_model_with_http_info(uploadfile, **kwargs) # noqa: E501
        else:
            (data) = self.upload_model_with_http_info(uploadfile, **kwargs) # noqa: E501
            return data

    def upload_model_with_http_info(self, uploadfile, **kwargs): # noqa: E501
        """upload_model # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_model_with_http_info(uploadfile, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param file uploadfile: The model YAML file to upload. Can be a GZip-compressed TAR file (.tgz, .tar.gz) or a YAML file (.yaml, .yml). Maximum size is 32MB. (required)
        :param str name:
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['uploadfile', 'name'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method upload_model" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'uploadfile' is set
        if ('uploadfile' not in params or
                params['uploadfile'] is None):
            raise ValueError("Missing the required parameter `uploadfile` when calling `upload_model`") # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'name' in params:
            query_params.append(('name', params['name'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'uploadfile' in params:
            local_var_files['uploadfile'] = params['uploadfile'] # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['multipart/form-data']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/models/upload', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiModel', # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def upload_model_file(self, id, uploadfile, **kwargs): # noqa: E501
        """upload_model_file # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_model_file(id, uploadfile, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: The model identifier. (required)
        :param file uploadfile: The file to upload, overwriting existing. Can be a GZip-compressed TAR file (.tgz), a YAML file (.yaml), Python script (.py), or Markdown file (.md) (required)
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.upload_model_file_with_http_info(id, uploadfile, **kwargs) # noqa: E501
        else:
            (data) = self.upload_model_file_with_http_info(id, uploadfile, **kwargs) # noqa: E501
            return data

    def upload_model_file_with_http_info(self, id, uploadfile, **kwargs): # noqa: E501
        """upload_model_file # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_model_file_with_http_info(id, uploadfile, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: The model identifier. (required)
        :param file uploadfile: The file to upload, overwriting existing. Can be a GZip-compressed TAR file (.tgz), a YAML file (.yaml), Python script (.py), or Markdown file (.md) (required)
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id', 'uploadfile'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method upload_model_file" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `upload_model_file`") # noqa: E501
        # verify the required parameter 'uploadfile' is set
        if ('uploadfile' not in params or
                params['uploadfile'] is None):
            raise ValueError("Missing the required parameter `uploadfile` when calling `upload_model_file`") # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id'] # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'uploadfile' in params:
            local_var_files['uploadfile'] = params['uploadfile'] # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['multipart/form-data']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/models/{id}/upload', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiModel', # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def upload_model_from_url(self, url, **kwargs): # noqa: E501
        """upload_model_from_url # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_model_from_url(url, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str url: URL pointing to the model YAML file. (required)
        :param str name: Optional, the name of the model to be created overriding the name in the YAML file.
        :param str access_token: Optional, the Bearer token to access the 'url'.
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.upload_model_from_url_with_http_info(url, **kwargs) # noqa: E501
        else:
            (data) = self.upload_model_from_url_with_http_info(url, **kwargs) # noqa: E501
            return data

    def upload_model_from_url_with_http_info(self, url, **kwargs): # noqa: E501
        """upload_model_from_url # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.upload_model_from_url_with_http_info(url, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str url: URL pointing to the model YAML file. (required)
        :param str name: Optional, the name of the model to be created overriding the name in the YAML file.
        :param str access_token: Optional, the Bearer token to access the 'url'.
        :return: ApiModel
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['url', 'name', 'access_token'] # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method upload_model_from_url" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'url' is set
        if ('url' not in params or
                params['url'] is None):
            raise ValueError("Missing the required parameter `url` when calling `upload_model_from_url`") # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'name' in params:
            query_params.append(('name', params['name'])) # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}
        if 'url' in params:
            form_params.append(('url', params['url'])) # noqa: E501
        if 'access_token' in params:
            form_params.append(('access_token', params['access_token'])) # noqa: E501

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json']) # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
            ['multipart/form-data']) # noqa: E501

        # Authentication setting
        auth_settings = [] # noqa: E501

        return self.api_client.call_api(
            '/models/upload_from_url', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ApiModel', # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
| 39.345582 | 191 | 0.604972 | 5,844 | 50,323 | 4.960472 | 0.043635 | 0.045259 | 0.025113 | 0.032288 | 0.946393 | 0.926282 | 0.914899 | 0.908414 | 0.897582 | 0.891166 | 0 | 0.014658 | 0.303162 | 50,323 | 1,278 | 192 | 39.376369 | 0.812017 | 0.333625 | 0 | 0.770982 | 0 | 0 | 0.180655 | 0.042143 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038407 | false | 0 | 0.00569 | 0 | 0.100996 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bca20a0a30cb7db24b19f2b6e4485cdf6b92507e | 249 | py | Python | telemetry/clients/default.py | trevorgrayson/telemetry | efeca1b2062f40a2557f2c030dc687f546f3c60b | [
"MIT"
] | 3 | 2020-03-20T20:04:37.000Z | 2021-08-23T20:11:10.000Z | telemetry/clients/default.py | trevorgrayson/telemetry | efeca1b2062f40a2557f2c030dc687f546f3c60b | [
"MIT"
] | 2 | 2019-06-10T08:02:05.000Z | 2019-06-10T08:02:22.000Z | telemetry/clients/default.py | trevorgrayson/telemetry | efeca1b2062f40a2557f2c030dc687f546f3c60b | [
"MIT"
] | null | null | null | class Default:
    def __init__(*args, **kwargs):
        pass

    def gauge(*args, **kwargs):
        pass

    def incr(*args, **kwargs):
        pass

    def decr(*args, **kwargs):
        pass

    def timing(*args, **kwargs):
        pass
| 13.105263 | 34 | 0.497992 | 27 | 249 | 4.444444 | 0.407407 | 0.416667 | 0.583333 | 0.566667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.35743 | 249 | 18 | 35 | 13.833333 | 0.75 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | true | 0.454545 | 0 | 0 | 0.545455 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
bcb785216baaddfcfc0ac2e335cd902424676df3 | 10,189 | py | Python | mpf/tests/test_BallDeviceTriggerEvents.py | cloudjor/mpf | 1cf6bf18b0d81120383b0b128b0ebbfa1c62717c | [
"MIT"
] | null | null | null | mpf/tests/test_BallDeviceTriggerEvents.py | cloudjor/mpf | 1cf6bf18b0d81120383b0b128b0ebbfa1c62717c | [
"MIT"
] | null | null | null | mpf/tests/test_BallDeviceTriggerEvents.py | cloudjor/mpf | 1cf6bf18b0d81120383b0b128b0ebbfa1c62717c | [
"MIT"
] | null | null | null | from mpf.tests.MpfTestCase import MpfTestCase
from unittest.mock import MagicMock


class TestBallDeviceTriggerEvents(MpfTestCase):

    def __init__(self, test_map):
        super().__init__(test_map)
        self._captured = 0
        self._enter = 0
        self._missing = 0
        self._requesting = 0
        self._queue = False

    def getConfigFile(self):
        return 'test_ball_device_trigger_events.yaml'

    def getMachinePath(self):
        return 'tests/machine_files/ball_device/'

    def _missing_ball(self, **kwargs):
        del kwargs
        self._missing += 1

    def _requesting_ball(self, balls, **kwargs):
        del kwargs
        self._requesting += balls

    def _ball_enter(self, new_balls, unclaimed_balls, **kwargs):
        del unclaimed_balls
        del kwargs
        self._enter += new_balls

    def _captured_from_pf(self, balls, **kwargs):
        del kwargs
        self._captured += balls

    def test_manual_successful_eject_to_pf(self):
        coil1 = self.machine.coils['eject_coil1']
        coil2 = self.machine.coils['eject_coil2']
        device1 = self.machine.ball_devices['test_trough']
        device2 = self.machine.ball_devices['test_launcher']
        playfield = self.machine.ball_devices['playfield']

        self.machine.events.add_handler('balldevice_captured_from_playfield',
                                        self._captured_from_pf)
        self.machine.events.add_handler('balldevice_ball_missing',
                                        self._missing_ball)

        self._enter = 0
        self._captured = 0
        self._missing = 0

        # add an initial ball to trough
        self.machine.switch_controller.process_switch("s_ball_switch1", 1)
        self.advance_time_and_run(1)
        self.assertEqual(1, self._captured)
        self._captured = 0
        self.assertEqual(0, playfield.balls)

        # it should keep the ball
        coil1.pulse = MagicMock()
        coil2.pulse = MagicMock()
        self.assertEqual(1, device1.balls)
        assert not coil1.pulse.called
        assert not coil2.pulse.called

        # request an ball
        playfield.add_ball(player_controlled=True)
        self.advance_time_and_run(1)

        # trough eject
        self.assertTrue(coil1.pulse.called)
        assert not coil2.pulse.called

        self.machine.switch_controller.process_switch("s_ball_switch1", 0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device1.balls)

        # launcher receives but waits for player to eject
        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      1)
        self.advance_time_and_run(1)
        self.assertEqual(1, device2.balls)
        self.assertTrue(coil1.pulse.called)
        assert not coil2.pulse.called

        # player shoots the ball
        self.machine.switch_controller.process_switch("s_launch", 1)
        self.advance_time_and_run(1)
        self.machine.switch_controller.process_switch("s_launch", 0)
        self.advance_time_and_run(1)
        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device2.balls)

        self.machine.switch_controller.process_switch("s_playfield", 1)
        self.advance_time_and_run(0.1)
        self.machine.switch_controller.process_switch("s_playfield", 0)
        self.advance_time_and_run(1)

        self.assertEqual(1, playfield.balls)
        self.assertEqual(0, self._captured)
        self.assertEqual(0, self._missing)

    def test_eject_without_trigger_events(self):
        coil1 = self.machine.coils['eject_coil1']
        coil2 = self.machine.coils['eject_coil2']
        device1 = self.machine.ball_devices['test_trough']
        device2 = self.machine.ball_devices['test_launcher']
        playfield = self.machine.ball_devices['playfield']

        self.machine.events.add_handler('balldevice_captured_from_playfield',
                                        self._captured_from_pf)
        self.machine.events.add_handler('balldevice_ball_missing',
                                        self._missing_ball)

        self._enter = 0
        self._captured = 0
        self._missing = 0

        # add an initial ball to trough
        self.machine.switch_controller.process_switch("s_ball_switch1", 1)
        self.advance_time_and_run(1)
        self.assertEqual(1, self._captured)
        self._captured = 0
        self.assertEqual(0, playfield.balls)

        # it should keep the ball
        coil1.pulse = MagicMock()
        coil2.pulse = MagicMock()
        self.assertEqual(1, device1.balls)
        assert not coil1.pulse.called
        assert not coil2.pulse.called

        # request an ball without player_controlled eject
        playfield.add_ball(player_controlled=False)
        self.advance_time_and_run(1)

        # trough eject
        self.assertTrue(coil1.pulse.called)
        assert not coil2.pulse.called

        self.machine.switch_controller.process_switch("s_ball_switch1", 0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device1.balls)

        # launcher receives and should just eject
        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      1)
        self.advance_time_and_run(1)
        self.assertEqual(1, device2.balls)
        self.assertTrue(coil1.pulse.called)
        self.assertTrue(coil2.pulse.called)
        coil1.pulse = MagicMock()
        coil2.pulse = MagicMock()

        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device2.balls)

        self.machine.switch_controller.process_switch("s_playfield", 1)
        self.advance_time_and_run(0.1)
        self.machine.switch_controller.process_switch("s_playfield", 0)
        self.advance_time_and_run(1)

        self.assertEqual(1, playfield.balls)
        self.assertEqual(0, self._captured)
        self.assertEqual(0, self._missing)

    def test_manual_with_retry_to_pf(self):
        coil1 = self.machine.coils['eject_coil1']
        coil2 = self.machine.coils['eject_coil2']
        device1 = self.machine.ball_devices['test_trough']
        device2 = self.machine.ball_devices['test_launcher']
        playfield = self.machine.ball_devices['playfield']

        self.machine.events.add_handler('balldevice_captured_from_playfield',
                                        self._captured_from_pf)
        self.machine.events.add_handler('balldevice_ball_missing',
                                        self._missing_ball)

        self._enter = 0
        self._captured = 0
        self._missing = 0

        # add an initial ball to trough
        self.machine.switch_controller.process_switch("s_ball_switch1", 1)
        self.advance_time_and_run(1)
        self.assertEqual(1, self._captured)
        self._captured = 0
        self.assertEqual(0, playfield.balls)

        # it should keep the ball
        coil1.pulse = MagicMock()
        coil2.pulse = MagicMock()
        self.assertEqual(1, device1.balls)
        assert not coil1.pulse.called
        assert not coil2.pulse.called

        # request an ball
        playfield.add_ball(player_controlled=True)
        self.advance_time_and_run(1)

        # trough eject
        self.assertTrue(coil1.pulse.called)
        assert not coil2.pulse.called

        self.machine.switch_controller.process_switch("s_ball_switch1", 0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device1.balls)

        # launcher receives but waits for player to eject
        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      1)
        self.advance_time_and_run(1)
        self.assertEqual(1, device2.balls)
        self.assertTrue(coil1.pulse.called)
        assert not coil2.pulse.called

        # player shoots the ball
        self.machine.switch_controller.process_switch("s_launch", 1)
        self.advance_time_and_run(1)
        self.machine.switch_controller.process_switch("s_launch", 0)
        self.advance_time_and_run(1)
        self.assertTrue(coil1.pulse.called)
        self.assertTrue(coil2.pulse.called)
        coil1.pulse = MagicMock()
        coil2.pulse = MagicMock()

        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device2.balls)
        self.advance_time_and_run(3)

        # too soft and it comes back
        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      1)
        self.advance_time_and_run(3)
        self.assertEqual(1, device2.balls)

        # player drinks his coffee
        self.advance_time_and_run(300)
        assert not coil1.pulse.called
        assert not coil2.pulse.called

        # player shoots the ball again
        self.machine.switch_controller.process_switch("s_launch", 1)
        self.advance_time_and_run(1)
        self.machine.switch_controller.process_switch("s_launch", 0)
        self.advance_time_and_run(1)
        assert not coil1.pulse.called
        self.assertTrue(coil2.pulse.called)

        self.machine.switch_controller.process_switch("s_ball_switch_launcher",
                                                      0)
        self.advance_time_and_run(1)
        self.assertEqual(0, device2.balls)
        assert not coil1.pulse.called
        self.assertTrue(coil2.pulse.called)

        self.machine.switch_controller.process_switch("s_playfield", 1)
        self.advance_time_and_run(0.1)
        self.machine.switch_controller.process_switch("s_playfield", 0)
        self.advance_time_and_run(1)

        self.assertEqual(1, playfield.balls)
        self.assertEqual(0, self._captured)
        self.assertEqual(0, self._missing)
| 36.389286 | 79 | 0.634312 | 1,200 | 10,189 | 5.110833 | 0.084167 | 0.084298 | 0.075819 | 0.090983 | 0.888961 | 0.875754 | 0.863525 | 0.863525 | 0.863525 | 0.863525 | 0 | 0.024796 | 0.279615 | 10,189 | 279 | 80 | 36.519713 | 0.810763 | 0.053293 | 0 | 0.876238 | 0 | 0 | 0.080831 | 0.043117 | 0 | 0 | 0 | 0 | 0.272277 | 1 | 0.049505 | false | 0 | 0.009901 | 0.009901 | 0.074257 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4c48310ba07df153dcb1c9a1376bcc6c8d53b7f2 | 10,562 | py | Python | _unittests/ut_sphinxext/test_epkg_extension.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 18 | 2015-11-10T08:09:23.000Z | 2022-02-16T11:46:45.000Z | _unittests/ut_sphinxext/test_epkg_extension.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 321 | 2015-06-14T21:34:28.000Z | 2021-11-28T17:10:03.000Z | _unittests/ut_sphinxext/test_epkg_extension.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 10 | 2015-06-20T01:35:00.000Z | 2022-01-19T15:54:32.000Z | """
@brief test log(time=4s)
@author Xavier Dupre
"""
import sys
import os
import unittest
import warnings
from pyquickhelper.pycode import get_temp_folder
from pyquickhelper.helpgen import rst2html
from pyquickhelper.cli.cli_helper import clean_documentation_for_cli
class TestEpkgExtension(unittest.TestCase):
def test_epkg_module(self):
from docutils import nodes as skip_
content = """
test a directive
================
abeforea :epkg:`pandas` aaftera
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
html = rst2html(content, writer="custom", keep_warnings=True,
directives=None, layout="sphinx",
epkg_dictionary={'pandas': ('http://pandas.pydata.org/pandas-docs/stable/generated/',
('http://pandas.pydata.org/pandas-docs/stable/generated/{0}.html', 1))
})
t1 = "abeforea"
if t1 not in html:
raise Exception(html)
t1 = "aftera"
if t1 not in html:
raise Exception(html)
t1 = "http://pandas.pydata.org/pandas-docs/stable/generated/"
if t1 not in html:
raise Exception(html)
def test_epkg_module_twice(self):
from docutils import nodes as skip_
content = """
abeforea :epkg:`pandas` aaftera
test a directive
================
abeforea :epkg:`pandas` aaftera
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
html = rst2html(content, writer="custom", keep_warnings=True,
directives=None, layout="sphinx",
epkg_dictionary={'pandas': 'http://pandas.pydata.org/pandas-docs/stable/generated/', })
self.assertIn(
"http://pandas.pydata.org/pandas-docs/stable/generated/", html)
def test_epkg_sub(self):
from docutils import nodes as skip_
content = """
test a directive
================
abeforea :epkg:`pandas:DataFrame.to_html` aaftera
7za :epkg:`7z` 7zb
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
html = rst2html(content, writer="custom", keep_warnings=True,
directives=None, layout="sphinx",
epkg_dictionary={'pandas': ('http://pandas.pydata.org/pandas-docs/stable/generated/',
('http://pandas.pydata.org/pandas-docs/stable/generated/{0}.html', 1)),
'7z': "http://www.7-zip.org/", })
t1 = "abeforea"
if t1 not in html:
raise Exception(html)
t1 = "aftera"
if t1 not in html:
raise Exception(html)
spl = html.split("abeforea")[-1].split("aftera")[0]
t1 = "`"
if t1 in html:
raise Exception("\n**{0}**\n----\n{1}".format(spl, html))
t1 = 'href="http://www.7-zip.org/"'
if t1 not in html:
raise Exception(html)
t1 = 'href="http://pandas.pydata.org/pandas-docs/stable/generated/DataFrame.to_html.html"'
if t1 not in html:
raise Exception(html)
temp = get_temp_folder(__file__, "temp_epkg_inline")
with open(os.path.join(temp, "out_sharenet.html"), "w", encoding="utf8") as f:
f.write(html)
def test_epkg_function(self):
from docutils import nodes as skip_
content = """
test a directive
================
abeforea :epkg:`pandas:DataFrame:to_html` aaftera
7za :epkg:`7z` 7zb
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
def pandas_link(input):
return "MYA", "|".join(input.split(":"))
html = rst2html(content, writer="custom", keep_warnings=True,
directives=None, layout="sphinx",
epkg_dictionary={'pandas': ('http://pandas.pydata.org/pandas-docs/stable/generated/',
('http://pandas.pydata.org/pandas-docs/stable/generated/{0}.html', 1),
pandas_link),
'7z': "http://www.7-zip.org/", })
t1 = "abeforea"
if t1 not in html:
raise Exception(html)
t1 = "aftera"
if t1 not in html:
raise Exception(html)
spl = html.split("abeforea")[-1].split("aftera")[0]
t1 = "`"
if t1 in html:
raise Exception("\n**{0}**\n----\n{1}".format(spl, html))
t1 = 'href="http://www.7-zip.org/"'
if t1 not in html:
raise Exception(html)
t1 = 'href="pandas|DataFrame|to_html"'
if t1 not in html:
raise Exception(html)
temp = get_temp_folder(__file__, "temp_epkg_inline")
with open(os.path.join(temp, "out_sharenet.html"), "w", encoding="utf8") as f:
f.write(html)
def test_epkg_class(self):
from docutils import nodes as skip_
content = """
test a directive
================
abeforea :epkg:`pandas:DataFrame:to_html` aaftera
7za :epkg:`7z` 7zb
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
class pandas_link:
def __call__(self, input):
return "MYA", "|".join(input.split(":"))
html = rst2html(content, writer="custom", keep_warnings=True,
directives=None, layout="sphinx",
epkg_dictionary={'pandas': ('http://pandas.pydata.org/pandas-docs/stable/generated/',
('http://pandas.pydata.org/pandas-docs/stable/generated/{0}.html', 1),
pandas_link),
'7z': "http://www.7-zip.org/", })
t1 = "abeforea"
if t1 not in html:
raise Exception(html)
t1 = "aftera"
if t1 not in html:
raise Exception(html)
spl = html.split("abeforea")[-1].split("aftera")[0]
t1 = "`"
if t1 in html:
raise Exception("\n**{0}**\n----\n{1}".format(spl, html))
t1 = 'href="http://www.7-zip.org/"'
if t1 not in html:
raise Exception(html)
t1 = 'href="pandas|DataFrame|to_html"'
if t1 not in html:
raise Exception(html)
temp = get_temp_folder(__file__, "temp_epkg_inline")
with open(os.path.join(temp, "out_sharenet.html"), "w", encoding="utf8") as f:
f.write(html)
def test_epkg_function_string(self):
from docutils import nodes as skip_
content = """
test a directive
================
abeforea :epkg:`pandas:DataFrame:to_html` aaftera
7za :epkg:`7z` 7zb
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
html = rst2html(content, writer="custom", keep_warnings=True,
directives=None, layout="sphinx",
epkg_dictionary={'pandas': ('http://pandas.pydata.org/pandas-docs/stable/generated/',
('http://pandas.pydata.org/pandas-docs/stable/generated/{0}.html', 1),
('pyquickhelper.sphinxext._private_for_unittest._private_unittest', None)),
'7z': "http://www.7-zip.org/", })
t1 = "abeforea"
if t1 not in html:
raise Exception(html)
t1 = "aftera"
if t1 not in html:
raise Exception(html)
spl = html.split("abeforea")[-1].split("aftera")[0]
t1 = "`"
if t1 in html:
raise Exception("\n**{0}**\n----\n{1}".format(spl, html))
t1 = 'href="http://www.7-zip.org/"'
if t1 not in html:
raise Exception(html)
t1 = 'href="pandas|DataFrame|to_html"'
if t1 not in html:
raise Exception(html)
temp = get_temp_folder(__file__, "temp_epkg_inline")
with open(os.path.join(temp, "out_sharenet.html"), "w", encoding="utf8") as f:
f.write(html)
def test_epkg_function_long_link(self):
from docutils import nodes as skip_
content = """
test a directive
================
`one link on two lines <http://first.part/
second part>`_.
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
html = rst2html(content,
writer="custom", keep_warnings=True,
directives=None, layout="sphinx")
t1 = 'href="http://first.part/secondpart"'
if t1 not in html:
raise Exception(html)
t1 = '>one link on two lines</a>'
if t1 not in html:
raise Exception(html)
temp = get_temp_folder(__file__, "temp_epkg_inline")
with open(os.path.join(temp, "out_sharenet.html"), "w", encoding="utf8") as f:
f.write(html)
def test_epkg_module_clean(self):
from docutils import nodes as skip_
content = """
test a directive
================
abeforea :epkg:`pandas` aaftera
""".replace(" ", "")
res = clean_documentation_for_cli(content, cleandoc=("epkg", "link"))
self.assertIn('abeforea `pandas` aaftera', res)
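The `epkg_dictionary` used throughout these tests maps a key either to a plain URL string (the `7z` case) or to a tuple of fallback resolvers: a base URL, a `(template, argument_count)` pair, and an optional function. A minimal sketch of how an `:epkg:` spec could resolve against such a dictionary; `resolve_epkg` is a hypothetical helper for illustration, not part of pyquickhelper:

```python
def resolve_epkg(spec, epkg_dictionary):
    # Hypothetical sketch of the lookup the :epkg: role performs; the real
    # implementation lives in pyquickhelper.sphinxext.
    parts = spec.split(":")
    value = epkg_dictionary[parts[0]]
    if isinstance(value, str):
        return value  # plain URL form, e.g. :epkg:`7z`
    # tuple form: take the (template, nargs) entry and format the dotted name
    template, _nargs = value[1]
    return template.format(".".join(parts[1:]))


epkg = {
    'pandas': ('http://pandas.pydata.org/pandas-docs/stable/generated/',
               ('http://pandas.pydata.org/pandas-docs/stable/generated/'
                '{0}.html', 1)),
    '7z': "http://www.7-zip.org/",
}
```

With this dictionary, `resolve_epkg('pandas:DataFrame:to_html', epkg)` yields the templated `DataFrame.to_html.html` URL, while `resolve_epkg('7z', epkg)` returns the plain URL unchanged.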
if __name__ == "__main__":
unittest.main()
# --- src/test/data/pa1/AdditionalTestCase/correct_list_with_2_element.py
# --- repo: Leo-Enrique-Wu/chocopy_compiler_parser (BSD-2-Clause)
[1, 0]
# --- cinder/tests/unit/volume/drivers/ibm/test_xiv_proxy.py
# --- repo: liangintel/stx-cinder (Apache-2.0)
# Copyright (c) 2016 IBM Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import mock
import six
from xml.etree import ElementTree
from cinder import context
from cinder import exception
from cinder import objects
from cinder.objects import fields
from cinder import test
from cinder.tests.unit import fake_constants as fake
from cinder.tests.unit import utils as testutils
from cinder.tests.unit.volume.drivers.ibm import fake_pyxcli
import cinder.volume.drivers.ibm.ibm_storage as storage
from cinder.volume.drivers.ibm.ibm_storage import cryptish
from cinder.volume.drivers.ibm.ibm_storage.xiv_proxy import XIVProxy
from cinder.volume.drivers.ibm.ibm_storage import xiv_replication
from cinder.volume import group_types
errors = fake_pyxcli.pyxcli_client.errors
mirroring = fake_pyxcli.pyxcli_client.mirroring
test_mock = mock.MagicMock()
module_patcher = mock.MagicMock()
test_mock.cinder.exception = exception
TEST_LOG_PREFIX = storage.XIV_LOG_PREFIX
TEST_VOLUME = {
'name': 'BLA',
'id': 23,
'size': 17,
'group_id': fake.CONSISTENCY_GROUP_ID,
}
TEST_GROUP_SPECS = {
'group_replication_enabled': '<is> True',
'replication_type': 'sync',
}
TEST_EXTRA_SPECS = {
'replication_enabled': '<is> False',
}
TEST_EXTRA_SPECS_REPL = {
'replication_enabled': '<is> True',
'replication_type': 'sync',
}
TEST_WWPNS = ["50017380FE020160", "50017380FE020161", "50017380FE020162"]
TEST_INITIATOR = 'c5507606d5680e05'
TEST_CONNECTOR = {
'ip': '129.123.123.123',
'initiator': TEST_INITIATOR,
'wwpns': [TEST_INITIATOR],
}
TEST_TARGET_MAP = {TEST_INITIATOR: TEST_WWPNS}
TEST_HOST_ID = 11
TEST_HOST_NAME = 'WTF32'
TEST_CHAP_NAME = 'WTF64'
TEST_CHAP_SECRET = 'V1RGNjRfXw=='
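The fixture value above is plain base64; decoding it shows the underlying CHAP secret string the tests work with:

```python
import base64

# TEST_CHAP_SECRET is the base64 encoding of the raw secret b'WTF64__'.
secret = base64.b64decode('V1RGNjRfXw==')
```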
FC_TARGETS_OPTIMIZED = [
"50017380FE020160", "50017380FE020190", "50017380FE020192"]
FC_TARGETS_OPTIMIZED_WITH_HOST = [
"50017380FE020160", "50017380FE020192"]
FC_TARGETS_BEFORE_SORTING = [
"50017380FE020160", "50017380FE020161", "50017380FE020162",
"50017380FE020190", "50017380FE020191", "50017380FE020192"]
FC_TARGETS_AFTER_SORTING = [
"50017380FE020190", "50017380FE020160", "50017380FE020191",
"50017380FE020161", "50017380FE020162", "50017380FE020192"]
FC_PORT_LIST_OUTPUT = [
{'component_id': '1:FC_Port:4:1', 'port_state': 'Online', 'role': 'Target',
'wwpn': '50017380FE020160'},
{'component_id': '1:FC_Port:5:1', 'port_state': 'Link Problem',
'role': 'Target', 'wwpn': '50017380FE020161'},
{'component_id': '1:FC_Port:6:1', 'port_state': 'Online',
'role': 'Initiator', 'wwpn': '50017380FE020162'},
{'component_id': '1:FC_Port:7:1', 'port_state': 'Link Problem',
'role': 'Initiator', 'wwpn': '50017380FE020163'},
{'component_id': '1:FC_Port:8:1', 'port_state': 'Online', 'role': 'Target',
'wwpn': '50017380FE020190'},
{'component_id': '1:FC_Port:9:1', 'port_state': 'Link Problem',
'role': 'Target', 'wwpn': '50017380FE020191'},
{'component_id': '1:FC_Port:4:1', 'port_state': 'Online', 'role': 'Target',
'wwpn': '50017380FE020192'},
{'component_id': '1:FC_Port:5:1', 'port_state': 'Link Problem',
'role': 'Initiator', 'wwpn': '50017380FE020193'}]
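FC_TARGETS_OPTIMIZED above is simply the WWPNs from FC_PORT_LIST_OUTPUT whose port is both Online and a Target; a quick filter (a sketch of the selection rule, not the driver's code) reproduces it:

```python
# Subset of FC_PORT_LIST_OUTPUT, keeping only the fields the filter needs.
ports = [
    {'port_state': 'Online', 'role': 'Target', 'wwpn': '50017380FE020160'},
    {'port_state': 'Link Problem', 'role': 'Target',
     'wwpn': '50017380FE020161'},
    {'port_state': 'Online', 'role': 'Initiator',
     'wwpn': '50017380FE020162'},
    {'port_state': 'Online', 'role': 'Target', 'wwpn': '50017380FE020190'},
    {'port_state': 'Link Problem', 'role': 'Target',
     'wwpn': '50017380FE020191'},
    {'port_state': 'Online', 'role': 'Target', 'wwpn': '50017380FE020192'},
]
# Keep only ports that are usable as connection targets.
online_targets = [p['wwpn'] for p in ports
                  if p['port_state'] == 'Online' and p['role'] == 'Target']
```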
HOST_CONNECTIVITY_LIST = [
{'host': 'nova-compute-c5507606d5680e05', 'host_port': '10000000C97D26DB',
'local_fc_port': '1:FC_Port:4:1', 'local_iscsi_port': '',
'module': '1:Module:4', 'type': 'FC'}]
HOST_CONNECTIVITY_LIST_UNKNOWN_HOST = [
{'host': 'nova-compute-c5507606d5680f115', 'host_port': '10000000C97D26DE',
'local_fc_port': '1:FC_Port:3:1', 'local_iscsi_port': '',
'module': '1:Module:3', 'type': 'FC'}]
REPLICA_ID = 'WTF32'
REPLICA_IP = '1.2.3.4'
REPLICA_USER = 'WTF64'
REPLICA_PASSWORD = 'WTFWTF'
REPLICA_POOL = 'WTF64'
REPLICA_PARAMS = {
'san_ip': REPLICA_IP,
'san_login': REPLICA_USER,
'san_password': cryptish.encrypt(REPLICA_PASSWORD),
'san_clustername': REPLICA_POOL
}
class XIVProxyTest(test.TestCase):
"""Tests the main Proxy driver"""
def setUp(self):
"""import at setup to ensure module patchers are in place"""
super(XIVProxyTest, self).setUp()
self.proxy = XIVProxy
self.version = "cinder"
self.proxy.configuration = {}
self.ctxt = context.get_admin_context()
self.default_storage_info = {
'user': "WTF32",
'password': cryptish.encrypt("WTF32"),
'address': "WTF32",
'vol_pool': "WTF32",
'management_ips': "WTF32",
'system_id': "WTF32"
}
self.proxy.configuration['replication_device'] = {
'backend_id': REPLICA_ID,
'san_ip': REPLICA_IP,
'san_user': REPLICA_USER,
'san_password': REPLICA_PASSWORD,
}
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.pyxcli")
def test_wrong_pyxcli(self, mock_pyxcli):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
mock_pyxcli.version = '1.1.4'
self.assertRaises(test_mock.cinder.exception.CinderException,
p.setup, {})
@mock.patch("cinder.volume.drivers.ibm.ibm_storage"
".xiv_proxy.socket.getfqdn", new=mock.MagicMock(
return_value='test_hostname'))
def test_setup_should_fail_if_password_is_not_encrypted(self):
"""Passing an unencrypted password should raise an error"""
storage_info = self.default_storage_info.copy()
storage_info['password'] = "WTF32"
p = self.proxy(storage_info, mock.MagicMock(),
test_mock.cinder.exception)
self.assertRaises(test_mock.cinder.exception.InvalidParameterValue,
p.setup, {})
@mock.patch("cinder.volume.drivers.ibm.ibm_storage.xiv_proxy.client."
"XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage.xiv_proxy.socket."
"getfqdn", new=mock.MagicMock(
return_value='test_hostname'))
def test_setup_should_fail_if_credentials_are_invalid(self, mock_xcli):
"""Passing invalid credentials should raise an error"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
mock_xcli.connect_multiendpoint_ssl = mock.MagicMock(
side_effect=errors.CredentialsError(
'bla', 'bla', ElementTree.Element("bla")))
self.assertRaises(test_mock.cinder.exception.NotAuthorized,
p.setup, {})
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.socket.getfqdn", new=mock.MagicMock(
return_value='test_hostname'))
def test_setup_should_fail_if_connection_is_invalid(self, mock_xcli):
"""Passing an invalid host to the setup should raise an error"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
mock_xcli.connect_multiendpoint_ssl = mock.MagicMock(
side_effect=errors.ConnectionError(
'bla', 'bla', ElementTree.Element("bla")))
self.assertRaises(test_mock.cinder.exception.HostNotFound,
p.setup, {})
@mock.patch("cinder.volume.drivers.ibm.ibm_storage.xiv_proxy."
"client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.storage.get_online_iscsi_ports",
mock.MagicMock(return_value=['WTF32']))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.socket.getfqdn", new=mock.MagicMock(
return_value='test_hostname'))
def test_setup_should_set_iqn_and_portal(self, mock_xcli):
"""Test setup
Setup should retrieve values from xcli
and set the IQN and Portal
"""
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
cmd = mock_xcli.connect_multiendpoint_ssl.return_value.cmd
item = cmd.config_get.return_value.as_dict.return_value.__getitem__
item.return_value.value = "BLA"
p.setup({})
self.assertEqual("BLA", p.meta.get('ibm_storage_iqn'))
self.assertEqual("WTF32:3260", p.meta.get('ibm_storage_portal'))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage.xiv_proxy."
"client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.storage.get_online_iscsi_ports",
mock.MagicMock(return_value=['WTF32']))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.socket.getfqdn", new=mock.MagicMock(
return_value='test_hostname'))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value=REPLICA_PARAMS))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target",
mock.MagicMock(return_value="BLABLA"))
def test_setup_should_succeed_if_replica_is_set(self, mock_xcli):
"""Test setup
Setup should succeed if replica is set
"""
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
cmd = mock_xcli.connect_multiendpoint_ssl.return_value.cmd
item = cmd.config_get.return_value.as_dict.return_value.__getitem__
item.return_value.value = "BLA"
SCHEDULE_LIST_RESPONSE = {
'00:01:00': {'interval': 120},
'00:02:00': {'interval': 300},
'00:05:00': {'interval': 600},
'00:10:00': {'interval': 1200},
}
cmd = mock_xcli.connect_multiendpoint_ssl.return_value.cmd
cmd.schedule_list.return_value\
.as_dict.return_value = SCHEDULE_LIST_RESPONSE
p.setup({})
@mock.patch("cinder.volume.drivers.ibm.ibm_storage.xiv_proxy."
"client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.storage.get_online_iscsi_ports",
mock.MagicMock(return_value=['WTF32']))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.socket.getfqdn", new=mock.MagicMock(
return_value='test_hostname'))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value=REPLICA_PARAMS))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target",
mock.MagicMock(return_value="BLABLA"))
def test_setup_should_fail_if_schedule_create_fails(self, mock_xcli):
"""Test setup
Setup should fail if replica is set and schedule_create fails
"""
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
cmd = mock_xcli.connect_multiendpoint_ssl.return_value.cmd
item = cmd.config_get.return_value.as_dict.return_value.__getitem__
item.return_value.value = "BLA"
cmd.schedule_list.return_value.as_dict.return_value = {}
cmd.schedule_create.side_effect = (
errors.XCLIError('bla'))
self.assertRaises(exception.VolumeBackendAPIException, p.setup, {})
def test_create_volume_should_call_xcli(self):
"""Create volume should call xcli with the correct parameters"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
volume = testutils.create_volume(
self.ctxt, size=16, display_name='WTF32')
p.create_volume(volume)
p.ibm_storage_cli.cmd.vol_create.assert_called_once_with(
vol=volume.name,
size_blocks=storage.gigabytes_to_blocks(16),
pool='WTF32')
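The `vol_create` assertion depends on `storage.gigabytes_to_blocks`. A sketch of the conversion, assuming GiB input and the 512-byte block size XIV uses (the real helper lives in `cinder.volume.drivers.ibm.ibm_storage`):

```python
BLOCK_SIZE = 512  # bytes; assumed XIV block size


def gigabytes_to_blocks(size_gb):
    # Convert a GiB volume size to a count of 512-byte blocks.
    return size_gb * (1024 ** 3) // BLOCK_SIZE
```

Under these assumptions the 16 GiB volume in the test maps to 33554432 blocks.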
def test_create_volume_from_snapshot(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
volume = testutils.create_volume(
self.ctxt, size=16, display_name='WTF32')
snapshot = testutils.create_snapshot(self.ctxt, volume.id)
p.create_volume_from_snapshot(volume, snapshot)
p.ibm_storage_cli.cmd.vol_copy.assert_called_once_with(
vol_src=snapshot.name,
vol_trg=volume.name)
def test_create_volume_should_fail_if_no_pool_space(self):
"""Test create volume
Create volume should raise an error
if there's no pool space left
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_create.side_effect = (
errors.PoolOutOfSpaceError(
'bla', 'bla', ElementTree.Element('bla')))
volume = testutils.create_volume(
self.ctxt, size=16, display_name='WTF32',
volume_type_id='b3fcacb5-fbd8-4394-8c00-06853bc13929')
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_volume, volume)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.VolumeReplication.create_replication",
mock.MagicMock())
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.GroupReplication.create_replication",
mock.MagicMock())
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value=REPLICA_PARAMS))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target",
mock.MagicMock(return_value="BLABLA"))
def test_enable_replication(self):
"""Test enable_replication"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p._call_remote_xiv_xcli = mock.MagicMock()
p._update_consistencygroup = mock.MagicMock()
p.targets = {'tgt1': 'info1'}
group = self._create_test_group('WTF')
vol = testutils.create_volume(self.ctxt)
ret = p.enable_replication(self.ctxt, group, [vol])
self.assertEqual((
{'replication_status': fields.ReplicationStatus.ENABLED},
[{'id': vol['id'],
'replication_status': fields.ReplicationStatus.ENABLED}]), ret)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.VolumeReplication.delete_replication",
mock.MagicMock())
@mock.patch("cinder.volume.group_types.get_group_type_specs",
mock.MagicMock(return_value=TEST_GROUP_SPECS))
def test_disable_replication(self):
"""Test disable_replication"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p._call_remote_xiv_xcli = mock.MagicMock()
p._update_consistencygroup = mock.MagicMock()
group = self._create_test_group('WTF')
ret = p.disable_replication(self.ctxt, group, [])
self.assertEqual((
{'replication_status': fields.ReplicationStatus.DISABLED}, []),
ret)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._using_default_backend",
mock.MagicMock(return_value=False))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value={'san_clustername': "master"}))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._init_xcli",
mock.MagicMock())
@mock.patch("cinder.volume.group_types.get_group_type_specs",
mock.MagicMock(return_value=TEST_GROUP_SPECS))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.GroupReplication.failover",
mock.MagicMock(return_value=(True, 'good')))
def test_failover_replication_with_default(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
group = self._create_test_group('WTF')
group.replication_status = fields.ReplicationStatus.FAILED_OVER
vol = testutils.create_volume(self.ctxt)
group_update, vol_update = p.failover_replication(self.ctxt, group,
[vol], 'default')
updates = {'status': 'available'}
self.assertEqual(({'replication_status': 'enabled'},
[{'id': vol['id'],
'updates': updates}]), (group_update, vol_update))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._using_default_backend",
mock.MagicMock(return_value=True))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value={'san_clustername': "master"}))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._init_xcli",
mock.MagicMock())
@mock.patch("cinder.volume.group_types.get_group_type_specs",
mock.MagicMock(return_value=TEST_GROUP_SPECS))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.GroupReplication.failover",
mock.MagicMock(return_value=(True, 'good')))
def test_failover_replication(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
group = self._create_test_group('WTF')
failed_over = fields.ReplicationStatus.FAILED_OVER
group.replication_status = failed_over
vol = testutils.create_volume(self.ctxt)
group_update, vol_update = p.failover_replication(self.ctxt, group,
[vol],
'secondary_id')
failed_over = fields.ReplicationStatus.FAILED_OVER
updates = {'status': failed_over}
self.assertEqual(({'replication_status': failed_over},
[{'id': vol['id'],
'updates': updates}]), (group_update, vol_update))
def test_failover_resource_no_mirror(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
recovery_mgr = mock.MagicMock()
recovery_mgr.is_mirror_active = mock.MagicMock()
recovery_mgr.is_mirror_active.return_value = False
group = self._create_test_group('WTF')
ret = xiv_replication.Replication(p)._failover_resource(
group, recovery_mgr, mock.MagicMock, 'cg', True)
msg = ("%(rep_type)s %(res)s: no active mirroring and can not "
"failback" % {'rep_type': 'cg',
'res': group['name']})
self.assertEqual((False, msg), ret)
def test_failover_resource_mirror(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
recovery_mgr = mock.MagicMock()
recovery_mgr.is_mirror_active = mock.MagicMock()
recovery_mgr.is_mirror_active.return_value = True
group = self._create_test_group('WTF')
ret = xiv_replication.Replication(p)._failover_resource(
group, recovery_mgr, mock.MagicMock, 'cg', True)
self.assertEqual((True, None), ret)
def test_failover_resource_change_role(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
recovery_mgr = mock.MagicMock()
recovery_mgr.is_mirror_active = mock.MagicMock()
recovery_mgr.is_mirror_active.return_value = True
recovery_mgr.switch_roles.side_effect = (
errors.XCLIError(''))
failover_rep_mgr = mock.MagicMock()
failover_rep_mgr.change_role = mock.MagicMock()
group = self._create_test_group('WTF')
xiv_replication.Replication(p)._failover_resource(
group, recovery_mgr, failover_rep_mgr, 'cg', True)
failover_rep_mgr.change_role.assert_called_once_with(
resource_id=group['name'],
new_role='Slave')
@mock.patch("cinder.volume.utils.is_group_a_cg_snapshot_type",
mock.MagicMock(return_value=True))
def test_create_volume_with_consistency_group(self):
"""Test Create volume with consistency_group"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p._cg_name_from_volume = mock.MagicMock(return_value="cg")
vol_type = testutils.create_volume_type(self.ctxt, name='WTF')
volume = testutils.create_volume(
self.ctxt, size=16, volume_type_id=vol_type.id)
grp = self._create_test_group('WTF')
volume.group = grp
p.create_volume(volume)
p.ibm_storage_cli.cmd.vol_create.assert_called_once_with(
vol=volume['name'],
size_blocks=storage.gigabytes_to_blocks(16),
pool='WTF32')
p.ibm_storage_cli.cmd.cg_add_vol.assert_called_once_with(
vol=volume['name'],
cg='cg')
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.VolumeReplication.create_replication",
mock.MagicMock())
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_qos_specs",
mock.MagicMock(return_value=None))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_extra_specs",
mock.MagicMock(return_value=TEST_EXTRA_SPECS_REPL))
def test_create_volume_with_replication(self):
"""Test Create volume with replication"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
volume = testutils.create_volume(
self.ctxt, size=16, display_name='WTF32',
volume_type_id='b3fcacb5-fbd8-4394-8c00-06853bc13929')
volume.group = None
p.create_volume(volume)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.VolumeReplication.create_replication",
mock.MagicMock())
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_qos_specs",
mock.MagicMock(return_value=None))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_extra_specs",
mock.MagicMock(return_value=TEST_EXTRA_SPECS_REPL))
def test_create_volume_with_replication_and_cg(self):
"""Test Create volume with replication and CG"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
volume = testutils.create_volume(
self.ctxt, size=16, display_name='WTF32',
volume_type_id='b3fcacb5-fbd8-4394-8c00-06853bc13929')
grp = testutils.create_group(self.ctxt, name='bla', group_type_id='1')
volume.group = grp
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_volume, volume)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_qos_specs",
mock.MagicMock(return_value=None))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_extra_specs",
mock.MagicMock(return_value=TEST_EXTRA_SPECS_REPL))
def test_create_volume_with_replication_multiple_targets(self):
"""Test Create volume with replication and multiple targets"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
volume = testutils.create_volume(
self.ctxt, size=16, display_name='WTF32',
volume_type_id='b3fcacb5-fbd8-4394-8c00-06853bc13929')
volume.group = None
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_volume, volume)
def test_delete_volume_should_pass_the_correct_parameters(self):
"""Delete volume should call xcli with the correct parameters"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = ['aa']
p.delete_volume({'name': 'WTF32'})
p.ibm_storage_cli.cmd.vol_delete.assert_called_once_with(vol='WTF32')
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_replication.VolumeReplication.delete_replication",
mock.MagicMock())
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_extra_specs",
mock.MagicMock(return_value=TEST_EXTRA_SPECS_REPL))
def test_delete_volume_with_replication(self):
"""Test Delete volume with replication"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
volume = {'size': 16, 'name': 'WTF32', 'volume_type_id': 'WTF'}
p.delete_volume(volume)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_extra_specs",
mock.MagicMock(return_value=TEST_EXTRA_SPECS_REPL))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value=REPLICA_PARAMS))
def test_failover_host(self, mock_xcli):
"""Test failover_host with valid target"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock_xcli
mock_xcli.connect_multiendpoint_ssl.return_value = mock_xcli
volume = {'id': 'WTF64', 'size': 16,
'name': 'WTF32', 'volume_type_id': 'WTF'}
target = REPLICA_ID
p.failover_host({}, [volume], target, [])
def test_failover_host_invalid_target(self):
"""Test failover_host with invalid target"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
volume = {'id': 'WTF64', 'size': 16,
'name': 'WTF32', 'volume_type_id': 'WTF'}
target = 'Invalid'
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.failover_host, {}, [volume], target, [])
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value=REPLICA_PARAMS))
def test_failover_host_no_connection_to_target(self, mock_xcli):
"""Test failover_host that fails to connect to target"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock_xcli
mock_xcli.connect_multiendpoint_ssl.side_effect = errors.XCLIError('')
volume = {'id': 'WTF64', 'size': 16,
'name': 'WTF32', 'volume_type_id': 'WTF'}
target = REPLICA_ID
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.failover_host, {}, [volume], target, [])
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.client.XCLIClient")
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_target_params",
mock.MagicMock(return_value=REPLICA_PARAMS))
def test_failback_host(self, mock_xcli):
"""Test failing back after DR"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
volume = {'id': 'WTF64', 'size': 16,
'name': 'WTF32', 'volume_type_id': 'WTF'}
target = 'default'
p.failover_host(None, [volume], target, [])
# NOTE: the name below does not start with 'test_', so unittest
# discovery will not run this method automatically.
def qos_test_empty_name_if_no_specs(self):
"""Test empty name in case no specs are specified"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
perf_name = p._check_perf_class_on_backend({})
self.assertEqual('', perf_name)
def test_qos_class_name_contains_qos_type(self):
"""Test backend naming
Test if the naming convention is correct
when getting the right specs with qos type
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.perf_class_list.return_value.as_list = []
perf_name = p._check_perf_class_on_backend({'bw': '100',
'type': 'independent'})
self.assertEqual('cinder-qos_bw_100_type_independent', perf_name)
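The expected name encodes the QoS specs as `_key_value` segments after a `cinder-qos` prefix, with an empty name when there are no specs (matching `qos_test_empty_name_if_no_specs` below). A hypothetical reconstruction of that convention; `perf_class_name_from_specs` is illustrative, not the proxy's actual method:

```python
def perf_class_name_from_specs(specs):
    # 'cinder-qos' prefix plus one _key_value segment per spec, in
    # insertion order; no specs yields an empty name.
    if not specs:
        return ''
    suffix = "".join("_%s_%s" % (k, v) for k, v in specs.items())
    return "cinder-qos" + suffix
```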
def test_qos_called_with_type_parameter(self):
"""Test xcli call for qos creation with type"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.perf_class_list.return_value.as_list = []
perf_name = p._check_perf_class_on_backend({'bw': '100',
'type': 'independent'})
p.ibm_storage_cli.cmd.perf_class_create.assert_called_once_with(
perf_class=perf_name,
type='independent')
def test_qos_called_with_wrong_type_parameter(self):
"""Test xcli call for qos creation with wrong type"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.perf_class_list.return_value.as_list = []
p.ibm_storage_cli.cmd.perf_class_create.side_effect = (
errors.XCLIError('llegal value'))
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p._check_perf_class_on_backend,
{'bw': '100', 'type': 'BAD'})
def test_qos_class_on_backend_name_correct(self):
"""Test backend naming
Test if the naming convention is correct
when getting the right specs
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.perf_class_list.return_value.as_list = []
perf_name = p._check_perf_class_on_backend({'bw': '100'})
self.assertEqual('cinder-qos_bw_100', perf_name)
def test_qos_xcli_exception(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.perf_class_list.side_effect = (
errors.XCLIError(''))
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p._check_perf_class_on_backend, {'bw': '100'})
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._qos_create_kwargs_for_xcli",
mock.MagicMock(return_value={}))
def test_regex_from_perf_class_name(self):
"""Test type extraction from perf_class with Regex"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
perf_class_names_list = [
{'class_name': 'cinder-qos_iops_1000_type_independent_bw_1000',
'type': 'independent'},
{'class_name': 'cinder-qos_iops_1000_bw_1000_type_shared',
'type': 'shared'},
{'class_name': 'cinder-qos_type_badtype_bw_1000',
'type': None}]
for element in perf_class_names_list:
_type = p._get_type_from_perf_class_name(
perf_class_name=element['class_name'])
self.assertEqual(element['type'], _type)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._qos_create_kwargs_for_xcli",
mock.MagicMock(return_value={}))
def test_create_qos_class_with_type(self):
"""Test performance class creation with type"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.perf_class_set_rate.return_value = None
p.ibm_storage_cli.cmd.perf_class_create.return_value = None
perf_class_name = 'cinder-qos_iops_1000_type_independent_bw_1000'
p_class_name = p._create_qos_class(perf_class_name=perf_class_name,
specs=None)
p.ibm_storage_cli.cmd.perf_class_create.assert_called_once_with(
perf_class=perf_class_name,
type='independent')
self.assertEqual('cinder-qos_iops_1000_type_independent_bw_1000',
p_class_name)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._check_storage_version_for_qos_support",
mock.MagicMock(return_value=True))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_qos_specs",
mock.MagicMock(return_value='specs'))
def test_qos_specs_exist_if_type_exists(self):
"""Test a case where type was found and qos were found"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
volume = {'name': 'bla', 'volume_type_id': '7'}
specs = p._qos_specs_from_volume(volume)
self.assertEqual('specs', specs)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._check_storage_version_for_qos_support",
mock.MagicMock(return_value=True))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_qos_specs",
mock.MagicMock(return_value=None))
def test_no_qos_but_type_exists(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
volume = {'name': 'bla', 'volume_type_id': '7'}
specs = p._qos_specs_from_volume(volume)
self.assertIsNone(specs)
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._check_storage_version_for_qos_support",
mock.MagicMock(return_value=True))
@mock.patch("cinder.volume.drivers.ibm.ibm_storage."
"xiv_proxy.XIVProxy._get_qos_specs",
mock.MagicMock(return_value=None))
def test_qos_specs_doesnt_exist_if_no_type(self):
"""Test _qos_specs_from_volume
Test a case where no type was defined
and therefore no specs exist
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
volume = {'name': 'bla'}
specs = p._qos_specs_from_volume(volume)
self.assertIsNone(specs)
def test_manage_volume_should_call_xcli(self):
"""Manage volume should call xcli with the correct parameters"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = [
{'name': 'WTF64', 'size': 34}]
p.manage_volume(volume={'name': 'WTF32'},
reference={'source-name': 'WTF64'})
p.ibm_storage_cli.cmd.vol_list.assert_called_once_with(
vol='WTF64')
def test_manage_volume_should_return_volume_if_exists(self):
"""Manage volume should return with no errors"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = [
{'name': 'WTF64', 'size': 34}]
volume = {'name': 'WTF32'}
p.manage_volume(volume=volume,
reference={'source-name': 'WTF64'})
self.assertEqual(34, volume['size'])
def test_manage_volume_should_raise_exception_if_not_exists(self):
"""Test manage_volume
Manage volume should raise an exception
if volume does not exist
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = []
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.manage_volume, volume={'name': 'WTF32'},
reference={'source-name': 'WTF64'})
def test_manage_volume_get_size_if_volume_exists(self):
"""Manage volume get size should return size"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = [
{'name': 'WTF64', 'size': 34}]
volume = {'name': 'WTF32'}
size = p.manage_volume_get_size(volume=volume,
reference={'source-name': 'WTF64'})
self.assertEqual(34, size)
def test_retype_false_if_no_location(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
volume = {'display_name': 'vol'}
new_type = {'name': "type1"}
host = {'capabilities': ''}
diff = {}
ret = p.retype({}, volume, new_type, diff, host)
self.assertFalse(ret)
def test_retype_false_if_dest_not_xiv_backend(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
host = {'capabilities': {'location_info': "IBM-XIV:host:pool"}}
volume = {'display_name': 'vol', 'host': "origdest_orighost_origpool"}
new_type = {'name': "type1"}
diff = {}
ret = p.retype({}, volume, new_type, diff, host)
self.assertFalse(ret)
def test_retype_true_if_dest_is_xiv_backend(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.migrate_volume = mock.MagicMock()
p.migrate_volume.return_value = (True, None)
p._qos_specs_from_volume = mock.MagicMock()
p._get_qos_specs = mock.MagicMock()
p._qos_specs_from_volume.return_value = {}
p._get_qos_specs.return_value = {}
host = {'capabilities': {'location_info': "IBM-XIV:host:pool"}}
volume = {'display_name': 'vol', 'host': "IBM-XIV_host_pool"}
new_type = {'name': "type1"}
diff = {}
ret = p.retype({}, volume, new_type, diff, host)
self.assertTrue(ret)
def test_manage_volume_get_size_should_raise_exception_if_not_exists(self):
"""Test manage_volume
Manage volume get size should raise exception
if volume does not exist
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = []
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.manage_volume_get_size,
volume={'name': 'WTF32'},
reference={'source-name': 'WTF64'})
def test_initialize_connection(self):
"""Test initialize_connection
Ensure that initialize_connection returns
all the correct connection values
"""
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
p.ibm_storage_iqn = "BLAIQN"
p.ibm_storage_portal = "BLAPORTAL"
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = ['aa']
host = self._get_test_host()
setattr(
p, '_get_host_and_fc_targets', mock.MagicMock(return_value=(
[], host)))
setattr(
p, '_vol_map_and_get_lun_id', mock.MagicMock(return_value=100))
p.volume_exists = mock.MagicMock(return_value=True)
info = p.initialize_connection(TEST_VOLUME, {})
self.assertEqual(
p.meta.get('ibm_storage_portal'),
info['data']['target_portal'])
self.assertEqual(
p.meta.get('ibm_storage_iqn'),
info['data']['target_iqn'])
self.assertEqual(100, info['data']['target_lun'])
def test_initialize_connection_no_initiator(self):
"""Initialize connection raises exception on missing initiator"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
connector = TEST_CONNECTOR.copy()
connector['initiator'] = None
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.initialize_connection, TEST_VOLUME,
connector)
def test_initialize_connection_bad_iqn(self):
"""Initialize connection raises exception on bad formatted IQN"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
connector = TEST_CONNECTOR.copy()
# any string would pass validation, so use a non-string to force failure
connector['initiator'] = 5555
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.initialize_connection, TEST_VOLUME,
connector)
def test_get_fc_targets_returns_optimized_wwpns_list(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.fc_port_list.return_value = FC_PORT_LIST_OUTPUT
fc_targets = p._get_fc_targets(None)
six.assertCountEqual(self, FC_TARGETS_OPTIMIZED, fc_targets)
def test_get_fc_targets_returns_host_optimized_wwpns_list(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
hostname = storage.get_host_or_create_from_iqn(TEST_CONNECTOR)
host = {'name': hostname}
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.fc_port_list.return_value = FC_PORT_LIST_OUTPUT
p.ibm_storage_cli.cmd.host_connectivity_list.return_value = (
HOST_CONNECTIVITY_LIST)
fc_targets = p._get_fc_targets(host)
self.assertEqual(FC_TARGETS_OPTIMIZED_WITH_HOST, fc_targets,
"FC targets are different from the expected")
def test_get_fc_targets_returns_host_all_wwpns_list(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
hostname = storage.get_host_or_create_from_iqn(TEST_CONNECTOR)
host = {'name': hostname}
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.fc_port_list.return_value = FC_PORT_LIST_OUTPUT
p.ibm_storage_cli.cmd.host_connectivity_list.return_value = (
HOST_CONNECTIVITY_LIST_UNKNOWN_HOST)
fc_targets = p._get_fc_targets(host)
self.assertEqual(FC_TARGETS_OPTIMIZED, fc_targets,
"FC targets are different from the expected")
def test_define_ports_returns_sorted_wwpns_list(self):
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p._get_connection_type = mock.MagicMock(
return_value=storage.XIV_CONNECTION_TYPE_FC)
p._define_fc = mock.MagicMock(return_value=FC_TARGETS_BEFORE_SORTING)
fc_targets = p._define_ports(self._get_test_host())
fc_result = list(map(lambda x: x[-1:], fc_targets))
expected_result = list(map(lambda x: x[-1:], FC_TARGETS_AFTER_SORTING))
self.assertEqual(expected_result, fc_result,
"FC targets are different from the expected")
def test_get_host_and_fc_targets_if_host_not_defined(self):
"""Test host and FC targets
Tests that host and fc targets are provided
if the host is not defined
"""
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
p.meta = mock.MagicMock()
p.meta.ibm_storage_iqn = "BLAIQN"
p.meta.ibm_storage_portal = "BLAPORTAL"
p.meta.openstack_version = "cinder-2013.2"
pool = {'name': "WTF32", 'domain': 'pool_domain_bla'}
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.host_list.return_value.as_list = []
p.ibm_storage_cli.cmd.host_list_ports.return_value = []
p.ibm_storage_cli.cmd.pool_list.return_value.as_list = [pool]
p._get_bunch_from_host = mock.MagicMock()
p._get_bunch_from_host.return_value = {
'name': "nova-compute-%s" % TEST_INITIATOR,
'initiator': TEST_INITIATOR,
'id': 123, 'wwpns': 111, 'chap': 'chap', }
fc_targets, host = getattr(p, '_get_host_and_fc_targets')(
TEST_VOLUME, TEST_CONNECTOR)
hostname = storage.get_host_or_create_from_iqn(TEST_CONNECTOR)
p.ibm_storage_cli.cmd.host_define.assert_called_once_with(
host=hostname, domain=pool.get('domain'))
p.ibm_storage_cli.cmd.host_add_port.assert_called_once_with(
host=hostname, iscsi_name=TEST_CONNECTOR['initiator'])
def test_get_lun_id_if_host_already_mapped(self):
"""Test lun id
Tests that a LUN is provided if the host is already
mapped to other volumes
"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
vol_mapping_list = p.ibm_storage_cli.cmd.vol_mapping_list
vol_mapping_list.return_value.as_dict.return_value = {}
lun1 = {'lun': 1}
lun2 = {'lun': 2}
p.ibm_storage_cli.cmd.mapping_list.return_value.as_list = [lun1, lun2]
host = self._get_test_host()
self.assertEqual(
3, getattr(p, '_vol_map_and_get_lun_id')(
TEST_VOLUME, TEST_CONNECTOR, host))
def test_terminate_connection_should_call_unmap_vol(self):
"""Terminate connection should call unmap vol"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p._get_connection_type = mock.MagicMock(
return_value=storage.XIV_CONNECTION_TYPE_FC)
p._get_fc_targets = mock.MagicMock(return_value=TEST_WWPNS)
p.ibm_storage_cli = mock.MagicMock()
vol_mapping_ret = p.ibm_storage_cli.cmd.vol_mapping_list.return_value
vol_mapping_ret.as_dict.return_value.has_keys.return_value = True
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = ['aa']
hostname = storage.get_host_or_create_from_iqn(TEST_CONNECTOR)
host = {
'name': hostname,
'initiator': TEST_CONNECTOR['initiator'],
'id': 1
}
TEST_CONNECTOR['wwpns'] = [TEST_INITIATOR]
setattr(p, "_get_host", mock.MagicMock(return_value=host))
meta = p.terminate_connection(TEST_VOLUME, TEST_CONNECTOR)
self.assertEqual(
TEST_TARGET_MAP, meta['data']['initiator_target_map'])
p.ibm_storage_cli.cmd.unmap_vol.assert_called_once_with(
vol=TEST_VOLUME['name'], host=hostname)
def test_terminate_connection_multiple_connections(self):
# Terminate connection should not return meta if host is still
# connected
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
p.ibm_storage_cli = mock.MagicMock()
vol_dict = p.ibm_storage_cli.cmd.vol_mapping_list.return_value.as_dict
vol_dict.return_value.has_keys.return_value = True
p.ibm_storage_cli.cmd.vol_list.return_value.as_list = ['aa']
hostname = storage.get_host_or_create_from_iqn(TEST_CONNECTOR)
host = {
'name': hostname,
'initiator': TEST_CONNECTOR['initiator'],
'id': 1
}
TEST_CONNECTOR['wwpns'] = [TEST_INITIATOR]
map_dict = p.ibm_storage_cli.cmd.mapping_list.return_value.as_dict
map_dict.return_value.has_keys.return_value = host
setattr(p, "_get_host", mock.MagicMock(return_value=host))
meta = p.terminate_connection(TEST_VOLUME, TEST_CONNECTOR)
self.assertIsNone(meta)
p.ibm_storage_cli.cmd.unmap_vol.assert_called_once_with(
vol=TEST_VOLUME['name'], host=hostname)
def test_attach_deleted_volume_should_fail_with_info_to_log(self):
"""Test attach deleted volume should fail with info to log"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
mock_log = mock.MagicMock()
setattr(p, "_log", mock_log)
p.ibm_storage_cli.cmd.vol_mapping_list.side_effect = (
errors.VolumeBadNameError('bla', 'bla',
ElementTree.Element('Bla')))
p._define_host_according_to_chap = mock.MagicMock()
p._define_host_according_to_chap.return_value = dict(id=100)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.initialize_connection, TEST_VOLUME,
TEST_CONNECTOR)
def _get_test_host(self):
host = {
'name': TEST_HOST_NAME,
'initiator': TEST_INITIATOR,
'id': TEST_HOST_ID,
'wwpns': [TEST_INITIATOR],
'chap': (TEST_CHAP_NAME, TEST_CHAP_SECRET)
}
return host
def _create_test_group(self, g_name='group', is_cg=True):
extra_specs = {}
if is_cg:
extra_specs['consistent_group_snapshot_enabled'] = '<is> True'
group_type = group_types.create(self.ctxt, g_name, extra_specs)
return testutils.create_group(self.ctxt,
host=self._get_test_host()['name'],
group_type_id=group_type.id,
volume_type_ids=[])
def _create_test_cgsnapshot(self, group_id):
group_type = group_types.create(
self.ctxt, 'group_snapshot',
{'consistent_group_snapshot_enabled': '<is> True'})
return testutils.create_group_snapshot(self.ctxt, group_id=group_id,
group_type_id=group_type.id)
def test_create_generic_group(self):
"""test create generic group"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
group_obj = self._create_test_group(is_cg=False)
self.assertRaises(NotImplementedError,
p.create_group, {}, group_obj)
def test_create_consistencygroup(self):
"""test a successful cg create"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
group_obj = self._create_test_group()
model_update = p.create_group({}, group_obj)
p.ibm_storage_cli.cmd.cg_create.assert_called_once_with(
cg=p._cg_name_from_id(group_obj.id),
pool='WTF32')
self.assertEqual('available', model_update['status'])
def test_create_consistencygroup_already_exists(self):
"""test create_consistenygroup when cg already exists"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_create.side_effect = errors.CgNameExistsError(
'bla', 'bla', ElementTree.Element('bla'))
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group, {}, self._create_test_group())
def test_create_consistencygroup_reached_limit(self):
"""test create_consistenygroup when reached maximum CGs"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_create.side_effect = (
errors.CgLimitReachedError(
'bla', 'bla', ElementTree.Element('bla')))
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group, {}, self._create_test_group())
@mock.patch("cinder.volume.drivers.ibm.ibm_storage.xiv_proxy."
"client.XCLIClient")
def test_create_consistencygroup_with_replication(self, mock_xcli):
"""test create_consistenygroup when replication is set"""
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception)
p.ibm_storage_cli = mock.MagicMock()
group_obj = self._create_test_group()
vol_type = objects.VolumeType(context=self.ctxt,
name='volume_type_rep',
extra_specs=(
{'replication_enabled': '<is> True',
'replication_type': 'sync'}))
group_obj.volume_types = objects.VolumeTypeList(context=self.ctxt,
objects=[vol_type])
model_update = p.create_group({}, group_obj)
self.assertEqual('available', model_update['status'])
def test_create_consistencygroup_from_src_cgsnapshot(self):
"""test a successful cg create from cgsnapshot"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.create_volume_from_snapshot.return_value = []
group_obj = self._create_test_group()
cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
volume = testutils.create_volume(self.ctxt)
snapshot = testutils.create_snapshot(self.ctxt, volume.id)
model_update, vols_model_update = p.create_group_from_src(
{}, group_obj, [volume],
cgsnap_group_obj, [snapshot], None, None)
p.ibm_storage_cli.cmd.cg_create.assert_called_once_with(
cg=p._cg_name_from_id(group_obj.id), pool='WTF32')
self.assertEqual('available', model_update['status'])
def test_create_consistencygroup_from_src_cg(self):
"""test a successful cg create from consistencygroup"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.create_volume_from_snapshot.return_value = []
group_obj = self._create_test_group()
src_group_obj = self._create_test_group(g_name='src_group')
volume = testutils.create_volume(self.ctxt)
src_volume = testutils.create_volume(self.ctxt)
model_update, vols_model_update = p.create_group_from_src(
{}, group_obj, [volume],
None, None, src_group_obj, [src_volume])
p.ibm_storage_cli.cmd.cg_create.assert_called_once_with(
cg=p._cg_name_from_id(group_obj.id), pool='WTF32')
self.assertEqual('available', model_update['status'])
def test_create_consistencygroup_from_src_fails_cg_create_from_cgsnapshot(
self):
"""test cg create from cgsnapshot fails on cg_create"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_create.side_effect = errors.XCLIError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
volume = testutils.create_volume(self.ctxt)
snapshot = testutils.create_snapshot(self.ctxt, volume.id)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group_from_src, {},
group_obj, [volume], cgsnap_group_obj,
[snapshot], None, None)
def test_create_consistencygroup_from_src_fails_cg_create_from_cg(self):
"""test cg create from cg fails on cg_create"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_create.side_effect = errors.XCLIError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
src_group_obj = self._create_test_group(g_name='src_group')
volume = testutils.create_volume(self.ctxt)
src_volume = testutils.create_volume(self.ctxt)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group_from_src, {},
group_obj, [volume], None, None,
src_group_obj, [src_volume])
def test_create_consistencygroup_from_src_fails_vol_create_from_cgsnapshot(
self):
"""test cg create from cgsnapshot fails on vol_create"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_create.side_effect = errors.XCLIError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
volume = testutils.create_volume(self.ctxt)
snapshot = testutils.create_snapshot(self.ctxt, volume.id)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group_from_src, {},
group_obj, [volume], cgsnap_group_obj,
[snapshot], None, None)
def test_create_consistencygroup_from_src_fails_vol_create_from_cg(self):
"""test cg create from cg fails on vol_create"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_create.side_effect = errors.XCLIError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
src_group_obj = self._create_test_group(g_name='src_group')
volume = testutils.create_volume(self.ctxt)
src_volume = testutils.create_volume(self.ctxt)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group_from_src, {},
group_obj, [volume], None, None,
src_group_obj, [src_volume])
def test_create_consistencygroup_from_src_fails_vol_copy_from_cgsnapshot(
self):
"""test cg create from cgsnapshot fails on vol_copy"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_copy.side_effect = errors.XCLIError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
volume = testutils.create_volume(self.ctxt)
snapshot = testutils.create_snapshot(self.ctxt, volume.id)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group_from_src, {}, group_obj,
[volume], cgsnap_group_obj, [snapshot],
None, None)
def test_create_consistencygroup_from_src_fails_vol_copy_from_cg(self):
"""test cg create from cg fails on vol_copy"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.vol_copy.side_effect = errors.XCLIError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
src_group_obj = self._create_test_group(g_name='src_group')
volume = testutils.create_volume(self.ctxt)
src_volume = testutils.create_volume(self.ctxt)
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.create_group_from_src, {},
group_obj, [volume], None, None,
src_group_obj, [src_volume])
def test_delete_consistencygroup_with_no_volumes(self):
"""test a successful cg delete"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
group_obj = self._create_test_group()
model_update, volumes = p.delete_group({}, group_obj, [])
p.ibm_storage_cli.cmd.cg_delete.assert_called_once_with(
cg=p._cg_name_from_id(group_obj.id))
self.assertEqual('deleted', model_update['status'])
def test_delete_consistencygroup_not_exists(self):
"""test delete_consistenygroup when CG does not exist"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_delete.side_effect = (
errors.CgDoesNotExistError(
'bla', 'bla', ElementTree.Element('bla')))
group_obj = self._create_test_group()
model_update, volumes = p.delete_group({}, group_obj, [])
p.ibm_storage_cli.cmd.cg_delete.assert_called_once_with(
cg=p._cg_name_from_id(group_obj.id))
self.assertEqual('deleted', model_update['status'])
def test_delete_consistencygroup_not_exists_2(self):
"""test delete_consistenygroup when CG does not exist bad name"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_delete.side_effect = (
errors.CgBadNameError(
'bla', 'bla', ElementTree.Element('bla')))
group_obj = self._create_test_group()
model_update, volumes = p.delete_group({}, group_obj, [])
p.ibm_storage_cli.cmd.cg_delete.assert_called_once_with(
cg=p._cg_name_from_id(group_obj.id))
self.assertEqual('deleted', model_update['status'])
def test_delete_consistencygroup_not_empty(self):
"""test delete_consistenygroup when CG is not empty"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
p.ibm_storage_cli.cmd.cg_delete.side_effect = errors.CgNotEmptyError(
'bla', 'bla', ElementTree.Element('bla'))
group_obj = self._create_test_group()
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.delete_group, {}, group_obj, [])
def test_delete_consistencygroup_replicated(self):
"""test delete cg when CG is not empty and replicated"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
group_obj = self._create_test_group()
group_obj['replication_status'] = fields.ReplicationStatus.ENABLED
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.delete_group, {}, group_obj, [])
def test_delete_consistencygroup_failed_over(self):
"""test delete cg when CG is failed over"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
mock.MagicMock(),
test_mock.cinder.exception,
driver)
p.ibm_storage_cli = mock.MagicMock()
group_obj = self._create_test_group()
group_obj['replication_status'] = fields.ReplicationStatus.FAILED_OVER
ex = getattr(p, "_get_exception")()
self.assertRaises(ex, p.delete_group, {}, group_obj, [])
def test_delete_consistencygroup_is_mirrored(self):
"""test delete_consistenygroup when CG is mirroring"""
driver = mock.MagicMock()
driver.VERSION = "VERSION"
p = self.proxy(
self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        p.ibm_storage_cli.cmd.cg_delete.side_effect = errors.CgHasMirrorError(
            'bla', 'bla', ElementTree.Element('bla'))
        group_obj = self._create_test_group()
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.delete_group, {}, group_obj, [])
    def test_update_consistencygroup(self):
        """test update_consistencygroup"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        vol_add = testutils.create_volume(self.ctxt, display_name='WTF32')
        vol_remove = testutils.create_volume(self.ctxt, display_name='WTF64')
        model_update, add_model_update, remove_model_update = (
            p.update_group({}, group_obj, [vol_add], [vol_remove]))
        p.ibm_storage_cli.cmd.cg_add_vol.assert_called_once_with(
            vol=vol_add['name'], cg=p._cg_name_from_id(group_obj.id))
        p.ibm_storage_cli.cmd.cg_remove_vol.assert_called_once_with(
            vol=vol_remove['name'])
        self.assertEqual('available', model_update['status'])
    def test_update_consistencygroup_exception_in_add_vol(self):
        """test update_consistencygroup with exception in cg_add_vol"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        p.ibm_storage_cli.cmd.cg_add_vol.side_effect = errors.XCLIError(
            'bla', 'bla', ElementTree.Element('bla'))
        group_obj = self._create_test_group()
        vol_add = testutils.create_volume(self.ctxt, display_name='WTF32')
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.update_group, {}, group_obj, [vol_add], [])
    def test_update_consistencygroup_exception_in_remove_vol(self):
        """test update_consistencygroup with exception in cg_remove_vol"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        p.ibm_storage_cli.cmd.cg_remove_vol.side_effect = errors.XCLIError(
            'bla', 'bla', ElementTree.Element('bla'))
        group_obj = self._create_test_group()
        vol_remove = testutils.create_volume(self.ctxt)
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.update_group, {},
                          group_obj, [], [vol_remove])
    def test_update_consistencygroup_remove_non_exist_vol_(self):
        """test update_group with exception in cg_remove_vol"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        p.ibm_storage_cli.cmd.cg_remove_vol.side_effect = (
            errors.VolumeNotInConsGroup(
                'bla', 'bla', ElementTree.Element('bla')))
        group_obj = self._create_test_group()
        vol_remove = testutils.create_volume(self.ctxt)
        model_update, add_model_update, remove_model_update = (
            p.update_group({}, group_obj, [], [vol_remove]))
        p.ibm_storage_cli.cmd.cg_remove_vol.assert_called_once_with(
            vol=vol_remove['name'])
        self.assertEqual('available', model_update['status'])
    def test_create_cgsnapshot(self):
        """test a successful cgsnapshot create"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        model_update, snapshots_model_update = (
            p.create_group_snapshot({}, cgsnap_group_obj, []))
        p.ibm_storage_cli.cmd.cg_snapshots_create.assert_called_once_with(
            cg=p._cg_name_from_cgsnapshot(cgsnap_group_obj),
            snap_group=p._group_name_from_cgsnapshot_id(
                cgsnap_group_obj['id']))
        self.assertEqual('available', model_update['status'])
    def test_create_cgsnapshot_is_empty(self):
        """test create_cgsnapshot when CG is empty"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        p.ibm_storage_cli.cmd.cg_snapshots_create.side_effect = (
            errors.CgEmptyError('bla', 'bla', ElementTree.Element('bla')))
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.create_group_snapshot, {},
                          cgsnap_group_obj, [])
    def test_create_cgsnapshot_cg_not_exist(self):
        """test create_cgsnapshot when CG does not exist"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        p.ibm_storage_cli.cmd.cg_snapshots_create.side_effect = (
            errors.CgDoesNotExistError(
                'bla', 'bla', ElementTree.Element('bla')))
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.create_group_snapshot, {},
                          cgsnap_group_obj, [])
    def test_create_cgsnapshot_snapshot_limit(self):
        """test create_cgsnapshot when reached snapshot limit"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        p.ibm_storage_cli.cmd.cg_snapshots_create.side_effect = (
            errors.PoolSnapshotLimitReachedError(
                'bla', 'bla', ElementTree.Element('bla')))
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.create_group_snapshot, {},
                          cgsnap_group_obj, [])
    def test_delete_cgsnapshot(self):
        """test a successful cgsnapshot delete"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        model_update, snapshots_model_update = p.delete_group_snapshot(
            {}, cgsnap_group_obj, [])
        p.ibm_storage_cli.cmd.snap_group_delete.assert_called_once_with(
            snap_group=p._group_name_from_cgsnapshot_id(
                cgsnap_group_obj['id']))
        self.assertEqual('deleted', model_update['status'])
    def test_delete_cgsnapshot_cg_does_not_exist(self):
        """test delete_cgsnapshot with bad CG name"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        p.ibm_storage_cli.cmd.snap_group_delete.side_effect = (
            errors.CgDoesNotExistError(
                'bla', 'bla', ElementTree.Element('bla')))
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.delete_group_snapshot, {},
                          cgsnap_group_obj, [])
    def test_delete_cgsnapshot_no_space_left_for_snapshots(self):
        """test delete_cgsnapshot when no space left for snapshots"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        p.ibm_storage_cli.cmd.snap_group_delete.side_effect = (
            errors.PoolSnapshotLimitReachedError(
                'bla', 'bla', ElementTree.Element('bla')))
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.delete_group_snapshot, {},
                          cgsnap_group_obj, [])
    def test_delete_cgsnapshot_with_empty_consistency_group(self):
        """test delete_cgsnapshot with empty consistency group"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        group_obj = self._create_test_group()
        cgsnap_group_obj = self._create_test_cgsnapshot(group_obj.id)
        p.ibm_storage_cli.cmd.snap_group_delete.side_effect = (
            errors.CgEmptyError('bla', 'bla', ElementTree.Element('bla')))
        ex = getattr(p, "_get_exception")()
        self.assertRaises(ex, p.delete_group_snapshot, {},
                          cgsnap_group_obj, [])
    def test_silent_delete_volume(self):
        """test _silent_delete_volume fails silently without exception"""
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        p.ibm_storage_cli = mock.MagicMock()
        p.ibm_storage_cli.cmd.vol_delete.side_effect = errors.XCLIError(
            'bla', 'bla', ElementTree.Element('bla'))
        # check that no exception is raised
        p._silent_delete_volume(TEST_VOLUME)
    @mock.patch("cinder.volume.utils.group_get_by_id", mock.MagicMock())
    @mock.patch("cinder.volume.utils.is_group_a_cg_snapshot_type",
                mock.MagicMock(return_value=False))
    def test_create_cloned_volume_calls_vol_create_and_copy(self):
        """test create_cloned_volume
        check that the appropriate xiv_backend functions
        are being called
        """
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        vol_src = testutils.create_volume(self.ctxt, display_name='bla',
                                          size=17)
        vol_trg = testutils.create_volume(self.ctxt, display_name='bla',
                                          size=17)
        p.ibm_storage_cli = mock.MagicMock()
        p._cg_name_from_volume = mock.MagicMock(return_value="cg")
        p.create_cloned_volume(vol_trg, vol_src)
        p._create_volume = test_mock.MagicMock()
        p.ibm_storage_cli.cmd.vol_create.assert_called_once_with(
            pool='WTF32',
            size_blocks=storage.gigabytes_to_blocks(17),
            vol=vol_trg['name'])
        p.ibm_storage_cli.cmd.vol_copy.assert_called_once_with(
            vol_src=vol_src['name'],
            vol_trg=vol_trg['name'])
    @mock.patch("cinder.volume.utils.group_get_by_id", mock.MagicMock())
    @mock.patch("cinder.volume.utils.is_group_a_cg_snapshot_type",
                mock.MagicMock(return_value=False))
    def test_handle_created_vol_properties_returns_vol_update(self):
        """test handle_created_vol_props
        returns replication enabled if replication info is True
        """
        driver = mock.MagicMock()
        driver.VERSION = "VERSION"
        p = self.proxy(
            self.default_storage_info,
            mock.MagicMock(),
            test_mock.cinder.exception,
            driver)
        xiv_replication.VolumeReplication = mock.MagicMock()
        grp = testutils.create_group(self.ctxt, name='bla', group_type_id='1')
        volume = testutils.create_volume(self.ctxt, display_name='bla')
        volume.group = grp
        ret_val = p.handle_created_vol_properties({'enabled': True}, volume)
        self.assertEqual(ret_val, {'replication_status': 'enabled'})
| 36.566255 | 79 | 0.61818 | 10,161 | 88,856 | 5.085031 | 0.050192 | 0.085293 | 0.032573 | 0.040914 | 0.825408 | 0.789719 | 0.75585 | 0.734502 | 0.71898 | 0.703168 | 0 | 0.01402 | 0.273532 | 88,856 | 2,429 | 80 | 36.581309 | 0.786417 | 0.055303 | 0 | 0.729685 | 0 | 0 | 0.126972 | 0.063108 | 0 | 0 | 0 | 0 | 0.059149 | 1 | 0.053621 | false | 0.00387 | 0.008845 | 0 | 0.064677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
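The proxy tests above all follow one pattern: attach a `side_effect` to a mocked CLI command so it raises a backend error, then assert the proxy surfaces a driver-level exception via `assertRaises`. A minimal, self-contained sketch of that pattern (the `StorageProxy` class and its error mapping here are hypothetical stand-ins, not the real Cinder driver):

```python
import unittest
from unittest import mock


class CgHasMirrorError(Exception):
    """Stand-in for the XCLI error type raised by the storage backend."""


class StorageProxy:
    """Hypothetical proxy that converts backend errors into a
    driver-level exception, mirroring the structure under test above."""

    def __init__(self):
        self.ibm_storage_cli = mock.MagicMock()

    def delete_group(self, context, group):
        try:
            self.ibm_storage_cli.cmd.cg_delete(cg=group)
        except CgHasMirrorError:
            # The real proxy maps XCLI errors to a cinder exception type;
            # ValueError is just a placeholder here.
            raise ValueError("cannot delete a mirrored group")


class SideEffectPatternTest(unittest.TestCase):
    def test_delete_group_raises(self):
        p = StorageProxy()
        # Arrange: make the mocked CLI command raise, exactly as the tests
        # above do with errors.CgHasMirrorError / errors.XCLIError.
        p.ibm_storage_cli.cmd.cg_delete.side_effect = CgHasMirrorError('bla')
        # Assert the error surfaces as the proxy's public exception type.
        self.assertRaises(ValueError, p.delete_group, {}, 'cg0')
```

The key point is that `MagicMock` auto-creates the whole `cmd.cg_delete` attribute chain, so only the one call being exercised needs any setup.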
d5da48d89502249e1d0e44a15a551eb8877194c4 | 5,898 | py | Python | app/tests/test_quiz.py | Orbital-Knewbie/Knewbie | 930d3fbbfba9123e52b7c4c6f70b8a1e994e2883 | [
"MIT"
] | 2 | 2020-11-28T17:57:25.000Z | 2021-06-06T08:37:06.000Z | app/tests/test_quiz.py | Orbital-Knewbie/Knewbie | 930d3fbbfba9123e52b7c4c6f70b8a1e994e2883 | [
"MIT"
] | 63 | 2020-05-28T01:42:29.000Z | 2022-03-12T00:35:33.000Z | app/tests/test_quiz.py | Orbital-Knewbie/Knewbie | 930d3fbbfba9123e52b7c4c6f70b8a1e994e2883 | [
"MIT"
] | 1 | 2021-02-13T04:58:56.000Z | 2021-02-13T04:58:56.000Z | import unittest
from app.tests.basetest import BaseTest
from app.models import *
from flask import url_for
from flask_login import current_user
class QuizTest(BaseTest):
def test_quiz(self):
with self.app:
self.login('testes@test.com', 'testtest')
rv = self.app.get(url_for('quiz.quiz'))
self.assertEqual(rv.status_code, 200)
self.assertIn(b'TAILORED QUIZ', rv.data)
def test_edu_quiz(self):
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.get(url_for('quiz.createquizsuccess', quizID=1))
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Quiz was successfully created!', rv.data)
def test_delete_quiz(self):
'''Delete educator quiz'''
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.post(url_for('quiz.deletequiz', quizID=1), data={}, follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Quiz deleted', rv.data)
self.assertIn(b'Create A New Class', rv.data)
def test_create_quiz(self):
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.post(url_for('quiz.createquiz'), data={'quiz-title':'new quiz'}, follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Create A New Quiz', rv.data)
self.assertIn(b'Select Topic', rv.data)
self.assertIn(b'Input Question', rv.data)
quiz = Quiz.query.filter_by(name='new quiz').first()
rv = self.app.post(url_for('quiz.createqn', quizID=quiz.id), data={'topic':1, 'qn':'testqn', 'op1':'op1', 'op2':'op2','op3':'op3','op4':'op4', 'corrOp':1},follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Question added', rv.data)
self.assertIn(b'Create A New Quiz', rv.data)
self.assertIn(b'Select Topic', rv.data)
self.assertIn(b'Input Question', rv.data)
rv = self.app.post(url_for('quiz.createqn', quizID=quiz.id), data={'topic':1, 'qn':'testqn2', 'op1':'op1', 'op2':'op2','op3':'op3','op4':'op4','corrOp':1, 'complete' : True},follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Quiz was successfully created!', rv.data)
def test_preview_quiz(self):
'''Preview educator created quiz'''
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.get(url_for('quiz.preview_quiz',quizID=1), follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'testquestion', rv.data)
self.assertIn(b'testoption1', rv.data)
self.assertIn(b'Answer: ', rv.data)
def test_delete_qnquiz(self):
'''Delete question from educator quiz'''
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.post(url_for('quiz.deleteqn', quizID=1, qnID=1), follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Question removed from Quiz', rv.data)
self.assertIn(b'testquiz', rv.data)
def test_get_edit_qn(self):
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.get(url_for('quiz.editqn',qnID=1), follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'testquestion', rv.data)
self.assertIn(b'testoption1', rv.data)
self.assertIn(b'Edit Question', rv.data)
def test_edit_qn(self):
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.post(url_for('quiz.editqn', qnID=1), data={'topic':1, 'qn':'edited question', 'op1':'op1', 'op2':'op2','op3':'op3','op4':'op4','corrOp':1},follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'Question Edited Successfully!', rv.data)
rv = self.app.get(url_for('quiz.preview_quiz',quizID=1), follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'edited question', rv.data)
def test_tailored_qn(self):
'''Submitting a question in tailored quiz'''
with self.app:
self.login('testes@test.com', 'testtest')
rv = self.app.post(url_for('quiz.quiz'), data={'option':1}, follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'TAILORED QUIZ', rv.data)
def test_edu_qn(self):
'''Starting educator quiz'''
with self.app:
self.login('testes@test.com', 'testtest')
rv = self.app.get(url_for('quiz.edu_quiz', quizID=1, qnNum=1), follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'testquiz', rv.data)
def test_post_edu_qn(self):
'''Completing educator quiz'''
with self.app:
self.login('testes@test.com', 'testtest')
rv = self.app.post(url_for('quiz.edu_quiz', quizID=1, qnNum=1), data={'option':1}, follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'testquiz', rv.data)
self.assertIn(b'Score:', rv.data)
def test_create_quiz_same_name(self):
with self.app:
self.login('edutest@test.com', 'strongtest')
rv = self.app.post(url_for('quiz.createquiz'), data={'quiz-title':'testquiz'}, follow_redirects=True)
self.assertEqual(rv.status_code, 200)
self.assertIn(b'You have already created a Quiz with this name. Please choose a different name.', rv.data)
if __name__ == '__main__':
unittest.main()
| 45.369231 | 208 | 0.61292 | 787 | 5,898 | 4.484117 | 0.133418 | 0.053556 | 0.099462 | 0.097761 | 0.816662 | 0.792292 | 0.77274 | 0.770473 | 0.748937 | 0.737886 | 0 | 0.020449 | 0.237199 | 5,898 | 129 | 209 | 45.72093 | 0.763948 | 0.029162 | 0 | 0.568627 | 0 | 0 | 0.207024 | 0.003863 | 0 | 0 | 0 | 0 | 0.411765 | 1 | 0.117647 | false | 0 | 0.04902 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
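Every assertion in the quiz tests above compares against byte literals (`b'TAILORED QUIZ'`, `b'Quiz deleted'`) because the Flask test client exposes the response body as raw bytes in `rv.data`. A tiny self-contained illustration of why the `b` prefix is required (`FakeResponse` is a made-up stand-in with the same shape as a test-client response, not part of Flask):

```python
import unittest


class FakeResponse:
    """Hypothetical stand-in mimicking a Flask test-client response:
    an integer status_code and the body as *bytes* in .data."""

    def __init__(self, status_code, body):
        self.status_code = status_code
        self.data = body.encode('utf-8')


class ByteAssertionDemo(unittest.TestCase):
    def test_byte_assertions(self):
        rv = FakeResponse(200, 'Quiz was successfully created!')
        self.assertEqual(rv.status_code, 200)
        # bytes-in-bytes membership check, as the tests above do;
        # a plain str literal here would raise a TypeError.
        self.assertIn(b'successfully created', rv.data)
```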
9108b85f355d325b45421452da72fb43370bd41c | 15,113 | py | Python | infoblox_netmri/api/broker/v2_1_0/device_broker.py | infobloxopen/infoblox_netmri | aa1c744df7e439dbe163bb9edd165e4e85a9771b | [
"Apache-2.0"
] | 12 | 2016-02-19T12:37:54.000Z | 2022-03-04T20:11:08.000Z | infoblox_netmri/api/broker/v2_1_0/device_broker.py | azinfoblox/infoblox-netmri | 02372c5231e2677ab6299cb659a73c9a41b4b0f4 | [
"Apache-2.0"
] | 18 | 2015-11-12T18:37:00.000Z | 2021-05-19T07:59:55.000Z | infoblox_netmri/api/broker/v2_1_0/device_broker.py | azinfoblox/infoblox-netmri | 02372c5231e2677ab6299cb659a73c9a41b4b0f4 | [
"Apache-2.0"
] | 18 | 2016-01-07T12:04:34.000Z | 2022-03-31T11:05:41.000Z | from ..broker import Broker
class DeviceBroker(Broker):
controller = "devices"
def index(self, **kwargs):
"""Lists the available devices. Any of the inputs listed may be be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceFirstOccurrenceTime: No description is available for DeviceFirstOccurrenceTime.
:type DeviceFirstOccurrenceTime: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceFirstOccurrenceTime: No description is available for DeviceFirstOccurrenceTime.
:type DeviceFirstOccurrenceTime: Array of String
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Array of Integer
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format.
:type DeviceIPDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceIPDotted: The management IP address of the device, in dotted (or colon-delimited for IPv6) format.
:type DeviceIPDotted: Array of String
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceIPNumeric: The numerical value of the device IP address.
:type DeviceIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceIPNumeric: The numerical value of the device IP address.
:type DeviceIPNumeric: Array of Integer
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceMAC: The MAC of the interface corresponding to the management IP, if available. Otherwise, it is the lowest numbered non-zero MAC for any interface on the device. If no interface records are available for the device, the lowest non-zero MAC address corresponding to the management IP address found in the global ARP table will be used.
:type DeviceMAC: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceMAC: The MAC of the interface corresponding to the management IP, if available. Otherwise, it is the lowest numbered non-zero MAC for any interface on the device. If no interface records are available for the device, the lowest non-zero MAC address corresponding to the management IP address found in the global ARP table will be used.
:type DeviceMAC: Array of String
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceName: The NetMRI name of the device; this will be either the same as DeviceSysName or DeviceDNSName, depending on your NetMRI configuration.
:type DeviceName: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceName: The NetMRI name of the device; this will be either the same as DeviceSysName or DeviceDNSName, depending on your NetMRI configuration.
:type DeviceName: Array of String
| ``api version min:`` 2
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device.
:type ParentDeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ParentDeviceID: The internal NetMRI identifier for the device containing this virtual device.
:type ParentDeviceID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the devices as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of device methods. The listed methods will be called on each device returned and included in the output. Available methods are: data_source.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: data_source.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` DeviceID
:param sort: The data field(s) to use for sorting the output. Default is DeviceID. Valid values are DataSourceID, DeviceID, DeviceStartTime, DeviceEndTime, DeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceCommunity, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceDNSName, DeviceSNMPPolling, DeviceConfigPolling, DevicePortScanning, DeviceSNMPAnalysis, DeviceFingerPrint, DeviceCCSCollection, DeviceVendorDefaultCollection, DeviceConfigTimestamp, DeviceFirstOccurrence, DeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChange, DeviceSavedConfigLastChange, DeviceConfigLocked, DeviceConfigLockLastChangeBy, DeviceConfigLockLastChange, DeviceConfigLastChecked, DeviceLicensed, DevicePolicyScheduleMode, NetworkDeviceInd, RoutingInd, SwitchingInd, DeviceRank, DeviceAddlInfo, DeviceMAC, DeviceStandardsCompliance, ParentDeviceID, DeviceContextName, VirtualInd, DeviceSysContact, DeviceManagedInd, SPMLicensedInd, DeviceNetBIOSName, DeviceNetBIOSScanningInd, DeviceOUI, ARPCacheRefreshInd, FilteringInd.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each Device. Valid values are DataSourceID, DeviceID, DeviceStartTime, DeviceEndTime, DeviceChangedCols, DeviceIPDotted, DeviceIPNumeric, DeviceName, DeviceType, DeviceAssurance, DeviceVendor, DeviceModel, DeviceVersion, DeviceCommunity, DeviceSysName, DeviceSysDescr, DeviceSysLocation, DeviceDNSName, DeviceSNMPPolling, DeviceConfigPolling, DevicePortScanning, DeviceSNMPAnalysis, DeviceFingerPrint, DeviceCCSCollection, DeviceVendorDefaultCollection, DeviceConfigTimestamp, DeviceFirstOccurrence, DeviceTimestamp, DeviceSAAVersion, DeviceRebootTime, DeviceRunningConfigLastChange, DeviceSavedConfigLastChange, DeviceConfigLocked, DeviceConfigLockLastChangeBy, DeviceConfigLockLastChange, DeviceConfigLastChecked, DeviceLicensed, DevicePolicyScheduleMode, NetworkDeviceInd, RoutingInd, SwitchingInd, DeviceRank, DeviceAddlInfo, DeviceMAC, DeviceStandardsCompliance, ParentDeviceID, DeviceContextName, VirtualInd, DeviceSysContact, DeviceManagedInd, SPMLicensedInd, DeviceNetBIOSName, DeviceNetBIOSScanningInd, DeviceOUI, ARPCacheRefreshInd, FilteringInd. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network id to which results would be limited.
:type NetworkID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return devices: An array of the Device objects that match the specified input criteria.
:rtype devices: Array of DeviceConfig
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
def show(self, **kwargs):
"""Shows the details for the specified device.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of device methods. The listed methods will be called on each device returned and included in the output. Available methods are: data_source.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: data_source.
:type include: Array of String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return device: The device identified by the specified DeviceID.
:rtype device: DeviceConfig
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
def destroy(self, **kwargs):
"""Deletes the specified device from NetMRI.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param DeviceID: An internal NetMRI identifier for the device.
:type DeviceID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` False
:param exclude_ind: Set to true if you want the device to be excluded from the discovery process.
:type exclude_ind: Boolean
**Outputs**
"""
return self.api_request(self._get_method_fullname("destroy"), kwargs)
def delete(self, **kwargs):
"""Remove many devices, with the option to remove them from discovery
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param ids: The ids of the devices to delete comma separated ex: 1,2,3,4. When sending form encoded use ids[].
:type ids: Array
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` False
:param exclude_ind: Set to 1 if you want these devices to be excluded from the discovery process
:type exclude_ind: Boolean
**Outputs**
"""
return self.api_request(self._get_method_fullname("delete"), kwargs)
| 45.79697 | 1,175 | 0.599682 | 1,576 | 15,113 | 5.731599 | 0.18401 | 0.077494 | 0.050371 | 0.052696 | 0.797188 | 0.794088 | 0.794088 | 0.794088 | 0.789549 | 0.789549 | 0 | 0.006624 | 0.310726 | 15,113 | 329 | 1,176 | 45.93617 | 0.860516 | 0.784358 | 0 | 0 | 0 | 0 | 0.051237 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0.090909 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
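Each `DeviceBroker` method above only assembles a full method name from the class-level `controller` and forwards its keyword arguments to a generic request helper inherited from `Broker`. A rough sketch of that dispatch pattern with the transport stubbed out (the `FakeBroker` base and the echoing `api_request` are assumptions for illustration, not the real `infoblox_netmri` base class):

```python
class FakeBroker:
    """Hypothetical stand-in for the Broker base class: builds the full
    API method name from the controller and forwards the parameters."""

    controller = ""

    def _get_method_fullname(self, method):
        return "%s/%s" % (self.controller, method)

    def api_request(self, method_name, params):
        # The real implementation issues an HTTP call to the NetMRI API;
        # here we just echo what would be sent.
        return {"method": method_name, "params": params}


class DeviceBroker(FakeBroker):
    controller = "devices"

    def show(self, **kwargs):
        return self.api_request(self._get_method_fullname("show"), kwargs)


broker = DeviceBroker()
resp = broker.show(DeviceID=42)
# resp == {"method": "devices/show", "params": {"DeviceID": 42}}
```

This is why every broker method takes `**kwargs` and documents its parameters only in the docstring: the filtering, paging, and sorting inputs are passed through untouched to the server side.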
913f53fe5588755a9af28a4dec88fbe9b25b0f01 | 4,286 | py | Python | Sort.py | FishPain/DTRA-Assignment | 156c1e78526f3779a783006fff3c55d6d1e1e4ed | [
"MIT"
] | null | null | null | Sort.py | FishPain/DTRA-Assignment | 156c1e78526f3779a783006fff3c55d6d1e1e4ed | [
"MIT"
] | null | null | null | Sort.py | FishPain/DTRA-Assignment | 156c1e78526f3779a783006fff3c55d6d1e1e4ed | [
"MIT"
] | null | null | null | from Menu import Menu
class Sort(Menu):
def bubble(self, arr=None, seq="asc", filter_by="index"):
if arr is None:
arr = self.menu_dict
key = [*arr]
n = len(arr)
if seq == "asc":
# Perform n-1 bubble operations on the sequence
for i in range(n - 1, 0, -1):
noSwap = True
# Bubble the largest item to the end
for j in range(i):
if arr[key[j]]._record_dict[filter_by] > arr[key[j+1]]._record_dict[filter_by]:
# Swap the j and j+1 items
tmp = arr[key[j]]
arr[key[j]] = arr[key[j+1]]
arr[key[j+1]] = tmp
noSwap = False
if noSwap:
break
elif seq == "desc":
# Perform n-1 bubble operations on the sequence
for i in range(n - 1, 0, -1):
                # Bubble the smallest item to the end
for j in range(i):
if arr[key[j]]._record_dict[filter_by] < arr[key[j+1]]._record_dict[filter_by]:
# Swap the j and j+1 items
tmp = arr[key[j]]
arr[key[j]] = arr[key[j+1]]
arr[key[j+1]] = tmp
def selection(self, arr=None, seq="asc", filter_by="index"):
if arr is None:
arr = self.menu_dict
n = len(arr)
if seq == "asc":
for i in range(n - 1):
# Assume the ith element is the smallest.
smallNdx = i
# Determine if any other element contains a smaller value.
for j in range(i + 1, n):
if arr[j + 1]._record_dict[filter_by] < arr[smallNdx + 1]._record_dict[filter_by]:
smallNdx = j
# Swap the ith value and smallNdx value only if the smallest
# value is not already in its proper position.
if smallNdx != i:
tmp = arr[i + 1]
arr[i + 1] = arr[smallNdx + 1]
arr[smallNdx + 1] = tmp
elif seq == "desc":
for i in range(n - 1):
                # Assume the ith element is the largest.
smallNdx = i
                # Determine if any other element contains a larger value.
for j in range(i + 1, n):
if arr[j + 1]._record_dict[filter_by] > arr[smallNdx + 1]._record_dict[filter_by]:
smallNdx = j
                # Swap the ith value and smallNdx value only if the largest
                # value is not already in its proper position.
if smallNdx != i:
tmp = arr[i + 1]
arr[i + 1] = arr[smallNdx + 1]
arr[smallNdx + 1] = tmp
def insertion(self, arr=None, seq="asc", filter_by="index"):
if arr is None:
arr = self.menu_dict
n = len(arr)
if seq == "asc":
# Starts with the first item as the only sorted entry.
for i in range(2, n + 1):
# Save the value to be positioned
value = arr[i]
                # Find the position where value fits in the
                # ordered part of the list.
pos = i
while pos > 1 and value._record_dict[filter_by] < arr[pos - 1]._record_dict[filter_by]:
# Shift the items to the right during the search
arr[pos] = arr[pos - 1]
pos -= 1
# Put the saved value into the open slot.
arr[pos] = value
elif seq == "desc":
# Starts with the first item as the only sorted entry.
for i in range(2, n + 1):
# Save the value to be positioned
value = arr[i]
                # Find the position where value fits in the
                # ordered part of the list.
pos = i
while pos > 1 and value._record_dict[filter_by] > arr[pos - 1]._record_dict[filter_by]:
# Shift the items to the right during the search
arr[pos] = arr[pos - 1]
pos -= 1
# Put the saved value into the open slot.
arr[pos] = value
| 42.019608 | 103 | 0.458236 | 551 | 4,286 | 3.488203 | 0.168784 | 0.062435 | 0.043704 | 0.112383 | 0.92872 | 0.92872 | 0.920916 | 0.920916 | 0.920916 | 0.920916 | 0 | 0.018747 | 0.452403 | 4,286 | 101 | 104 | 42.435644 | 0.80017 | 0.235418 | 0 | 0.768116 | 0 | 0 | 0.013838 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.014493 | 0 | 0.072464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
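The `Sort` methods above run the classic algorithms over the menu's record dictionaries, comparing on a chosen field. Stripped of that wrapping, the same three algorithms reduce to the following on a plain Python list (a simplified sketch for comparison, not the class above):

```python
def bubble_sort(items, descending=False):
    """Repeatedly bubble the extreme value to the end; stop early
    once a full pass makes no swaps."""
    a = list(items)
    n = len(a)
    for i in range(n - 1, 0, -1):
        swapped = False
        for j in range(i):
            # XOR-style test: swap on '>' ascending, on '<=' descending.
            if (a[j] > a[j + 1]) != descending:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break
    return a


def selection_sort(items):
    """Select the smallest remaining value into each position."""
    a = list(items)
    n = len(a)
    for i in range(n - 1):
        small = i
        for j in range(i + 1, n):
            if a[j] < a[small]:
                small = j
        if small != i:
            a[i], a[small] = a[small], a[i]
    return a


def insertion_sort(items):
    """Grow a sorted prefix, shifting larger values right to make room."""
    a = list(items)
    for i in range(1, len(a)):
        value, pos = a[i], i
        while pos > 0 and value < a[pos - 1]:
            a[pos] = a[pos - 1]
            pos -= 1
        a[pos] = value
    return a
```

The class version differs mainly in that its containers are 1-indexed dictionaries of records (hence the `+ 1` offsets in `selection` and the `range(2, n + 1)` in `insertion`), and comparisons go through `_record_dict[filter_by]`.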
914270e29ed384923e260ff60697549a7e114b7a | 9,105 | py | Python | ivy_tests/test_array_api/array_api_tests/special_cases/test_dunder_add.py | djl11/ivy | 209f74b5a1a82ca69ad712788ae0469c3f8614d9 | [
"Apache-2.0"
] | null | null | null | ivy_tests/test_array_api/array_api_tests/special_cases/test_dunder_add.py | djl11/ivy | 209f74b5a1a82ca69ad712788ae0469c3f8614d9 | [
"Apache-2.0"
] | null | null | null | ivy_tests/test_array_api/array_api_tests/special_cases/test_dunder_add.py | djl11/ivy | 209f74b5a1a82ca69ad712788ae0469c3f8614d9 | [
"Apache-2.0"
] | null | null | null | """
Special cases tests for __add__.
These tests are generated from the special cases listed in the spec.
NOTE: This file is generated automatically by the generate_stubs.py script. Do
not modify it directly.
"""
from ..array_helpers import (NaN, assert_exactly_equal, exactly_equal, infinity, isfinite,
logical_and, logical_or, non_zero, zero)
from ..hypothesis_helpers import numeric_arrays
from hypothesis import given
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_either(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If either `x1_i` or `x2_i` is `NaN`, the result is `NaN`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_or(exactly_equal(arg1, NaN(arg1.shape, arg1.dtype)), exactly_equal(arg2, NaN(arg1.shape, arg1.dtype)))
# assert_exactly_equal(res[mask], (NaN(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_1(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `+infinity` and `x2_i` is `-infinity`, the result is `NaN`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, infinity(arg1.shape, arg1.dtype)), exactly_equal(arg2, -infinity(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (NaN(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_2(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `-infinity` and `x2_i` is `+infinity`, the result is `NaN`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, -infinity(arg1.shape, arg1.dtype)), exactly_equal(arg2, infinity(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (NaN(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_3(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `+infinity` and `x2_i` is `+infinity`, the result is `+infinity`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, infinity(arg1.shape, arg1.dtype)), exactly_equal(arg2, infinity(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (infinity(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_4(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `-infinity` and `x2_i` is `-infinity`, the result is `-infinity`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, -infinity(arg1.shape, arg1.dtype)), exactly_equal(arg2, -infinity(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (-infinity(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_5(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `+infinity` and `x2_i` is a finite number, the result is `+infinity`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, infinity(arg1.shape, arg1.dtype)), isfinite(arg2))
# assert_exactly_equal(res[mask], (infinity(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_6(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `-infinity` and `x2_i` is a finite number, the result is `-infinity`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, -infinity(arg1.shape, arg1.dtype)), isfinite(arg2))
# assert_exactly_equal(res[mask], (-infinity(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_7(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is a finite number and `x2_i` is `+infinity`, the result is `+infinity`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(isfinite(arg1), exactly_equal(arg2, infinity(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (infinity(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_8(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is a finite number and `x2_i` is `-infinity`, the result is `-infinity`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(isfinite(arg1), exactly_equal(arg2, -infinity(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (-infinity(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_9(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `-0` and `x2_i` is `-0`, the result is `-0`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, -zero(arg1.shape, arg1.dtype)), exactly_equal(arg2, -zero(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (-zero(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_10(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `-0` and `x2_i` is `+0`, the result is `+0`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, -zero(arg1.shape, arg1.dtype)), exactly_equal(arg2, zero(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (zero(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_11(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `+0` and `x2_i` is `-0`, the result is `+0`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, zero(arg1.shape, arg1.dtype)), exactly_equal(arg2, -zero(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (zero(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_12(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is `+0` and `x2_i` is `+0`, the result is `+0`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(exactly_equal(arg1, zero(arg1.shape, arg1.dtype)), exactly_equal(arg2, zero(arg2.shape, arg2.dtype)))
# assert_exactly_equal(res[mask], (zero(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__equal_13(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is a nonzero finite number and `x2_i` is `-x1_i`, the result is `+0`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(logical_and(isfinite(arg1), non_zero(arg1)), exactly_equal(arg2, -arg1))
# assert_exactly_equal(res[mask], (zero(arg1.shape, arg1.dtype))[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_either__equal(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is either `+0` or `-0` and `x2_i` is a nonzero finite number, the result is `x2_i`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(logical_or(exactly_equal(arg1, zero(arg1.shape, arg1.dtype)), exactly_equal(arg1, -zero(arg1.shape, arg1.dtype))), logical_and(isfinite(arg2), non_zero(arg2)))
# assert_exactly_equal(res[mask], (arg2)[mask])
#
#
# @given(numeric_arrays, numeric_arrays)
# def test_add_special_cases_two_args_equal__either(arg1, arg2):
# """
# Special case test for `__add__(self, other, /)`:
#
# - If `x1_i` is a nonzero finite number and `x2_i` is either `+0` or `-0`, the result is `x1_i`.
#
# """
# res = arg1.__add__(arg2)
# mask = logical_and(logical_and(isfinite(arg1), non_zero(arg1)), logical_or(exactly_equal(arg2, zero(arg2.shape, arg2.dtype)), exactly_equal(arg2, -zero(arg2.shape, arg2.dtype))))
# assert_exactly_equal(res[mask], (arg1)[mask])
#
# # TODO: Implement REMAINING test for:
# # - In the remaining cases, when neither `infinity`, `+0`, `-0`, nor a `NaN` is involved, and the operands have the same mathematical sign or have different magnitudes, the sum must be computed and rounded to the nearest representable value according to IEEE 754-2019 and a supported round mode. If the magnitude is too large to represent, the operation overflows and the result is an `infinity` of appropriate mathematical sign.
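Each commented-out generated test above follows one pattern: compute `res = x1 + x2`, build a boolean mask of the positions that trigger the special case, and check `res` on that mask. A standalone sketch of the "either operand is NaN" case using plain Python floats (an assumption for illustration; the real suite uses array-API arrays driven by hypothesis):

```python
import math


def check_add_nan_propagation(xs1, xs2):
    """If either x1_i or x2_i is NaN, the sum must be NaN."""
    res = [a + b for a, b in zip(xs1, xs2)]
    mask = [math.isnan(a) or math.isnan(b) for a, b in zip(xs1, xs2)]
    # Wherever an input was NaN, the elementwise sum must be NaN as well.
    return all(math.isnan(r) for r, m in zip(res, mask) if m)


nan, inf = float("nan"), float("inf")
print(check_add_nan_propagation([1.0, nan, inf], [nan, 2.0, -inf]))  # True
```

Note that the `inf + -inf` position is deliberately outside the mask: it also yields NaN, but that behavior belongs to the `equal__equal_1` case above, not the NaN-propagation case.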
#!/usr/bin/env python
# -*- coding: utf-8 -*-
##VERSION##
VERSION = '2.22a'
##VERSION##
"""
FILE TRANSLATOR *.GLADE FOR GENERATEDS_GUI.
Usage:
python forgui.py <The fully qualified name of the base file.glade> -c <The fully qualified name of the base file.dictionary>
python forgui.py <The fully qualified name of the base file.glade> -d <The fully qualified name of the language file.dictionary>
Example:
Step1: Create a base dictionary file from the generateds_gui.glade interface.
>>>python forgui.py d:\Python33\Scripts\generateds_gui.glade -c d:\Python33\Scripts\en.dictionary
Step2: Open the resulting file <..\en.dictionary>, translate it into Russian, and save it under the name <..\rus.dictionary>.
<English key> [<-|->] <Language value>
... User methods module:<-|->Пользовательский модуль методов:
... Validator bodies path:<-|->Путь корпусов контрольного устройства:
... _Capture CL<-|->_Capture CL
... _File<-|->_Файл
... _Generate<-|->_Произвести
... _Help<-|->_Помощь
... _Tools<-|->_Инструменты
............................
Step3: Create a new .glade file from generateds_gui.glade using the dictionary rus.dictionary. The output is the file generateds_gui_rus.glade.
>>>python forgui.py d:\Python33\Scripts\generateds_gui.glade -d d:\Python33\Scripts\rus.dictionary
"""
#
# Generated Tue Apr 19 17:15:20 2016 by generateDS.py version 2.22a.
#
# Command line options:
# ('--no-questions', '')
# ('-f', '')
# ('-o', 'D:\\Python33\\Scripts\\forGUI.py')
# ('--external-encoding', 'utf-8')
#
# Command line arguments:
# D:\Python33\Scripts\forGUI.xsd
#
# Command line:
# d:\Python33\Scripts\generateDS.py --no-questions -f -o "D:\Python33\Scripts\forGUI.py" --external-encoding="utf-8" D:\Python33\Scripts\forGUI.xsd
#
# Current working directory (os.getcwd()):
# Scripts
#
import sys
import re as re_
import base64
import datetime as datetime_
import warnings as warnings_
from lxml import etree as etree_
#----------------------------------USER FUNCTION----------------------------------------------
import os.path
LIST_HELPNAMES = {}
def create_dictfiles(args):
    s = list(LIST_HELPNAMES.keys())
    s.sort()
    start_dictfile = open(str(args[2]), 'w', encoding='utf-8')
    for e in s:
        print('''%s<-|->%s''' % (e.replace('\n', ' '), e), file=start_dictfile)
    start_dictfile.close()
def read_dictfiles(args):
    dict_file = open(args[2], 'r', encoding='utf-8')
    d = {}
    list_d = dict_file.readlines()
    for s in list_d:
        e = s.split("<-|->")
        d[e[0].replace('\\n', ' ')] = d.get(e[0].replace('\\n', ' '), re_.sub(r'\n$', '', e[1]))
    dict_file.close()
    return d
def appLIST_HELPNAMES(inst):
    re_n = re_.compile(r'(.*\n.*)')
    if len(inst.valueOf_) > 0:
        rs = inst.valueOf_.replace('\n', '\\n')
        rkey = rs.replace('\\n', ' ')
        LIST_HELPNAMES[rkey] = LIST_HELPNAMES.get(rkey, rs)
        inst.valueOf_ = re_.sub(r'\n$', '', LIST_HELPNAMES[rkey].replace('\\n', '\n'))
#----------------------------------------------------------------------------------------------
Validate_simpletypes_ = True
if sys.version_info.major == 2:
    BaseStrType_ = basestring
else:
    BaseStrType_ = str
def parsexml_(infile, parser=None, **kwargs):
    if parser is None:
        # Use the lxml ElementTree compatible parser so that, e.g.,
        # we ignore comments.
        parser = etree_.ETCompatXMLParser()
    doc = etree_.parse(infile, parser=parser, **kwargs)
    return doc
#
# User methods
#
# Calls to the methods in these classes are generated by generateDS.py.
# You can replace these methods by re-implementing the following class
# in a module named generatedssuper.py.
try:
    from generatedssuper import GeneratedsSuper
except ImportError as exp:
    class GeneratedsSuper(object):
        tzoff_pattern = re_.compile(r'(\+|-)((0\d|1[0-3]):[0-5]\d|14:00)$')
        class _FixedOffsetTZ(datetime_.tzinfo):
            def __init__(self, offset, name):
                self.__offset = datetime_.timedelta(minutes=offset)
                self.__name = name
            def utcoffset(self, dt):
                return self.__offset
            def tzname(self, dt):
                return self.__name
            def dst(self, dt):
                return None
        def gds_format_string(self, input_data, input_name=''):
            return input_data
        def gds_validate_string(self, input_data, node=None, input_name=''):
            if not input_data:
                return ''
            else:
                return input_data
        def gds_format_base64(self, input_data, input_name=''):
            return base64.b64encode(input_data)
        def gds_validate_base64(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_integer(self, input_data, input_name=''):
            return '%d' % input_data
        def gds_validate_integer(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_integer_list(self, input_data, input_name=''):
            return '%s' % ' '.join(input_data)
        def gds_validate_integer_list(
                self, input_data, node=None, input_name=''):
            values = input_data.split()
            for value in values:
                try:
                    int(value)
                except (TypeError, ValueError):
                    raise_parse_error(node, 'Requires sequence of integers')
            return values
        def gds_format_float(self, input_data, input_name=''):
            return ('%.15f' % input_data).rstrip('0')
        def gds_validate_float(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_float_list(self, input_data, input_name=''):
            return '%s' % ' '.join(input_data)
        def gds_validate_float_list(
                self, input_data, node=None, input_name=''):
            values = input_data.split()
            for value in values:
                try:
                    float(value)
                except (TypeError, ValueError):
                    raise_parse_error(node, 'Requires sequence of floats')
            return values
        def gds_format_double(self, input_data, input_name=''):
            return '%e' % input_data
        def gds_validate_double(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_double_list(self, input_data, input_name=''):
            return '%s' % ' '.join(input_data)
        def gds_validate_double_list(
                self, input_data, node=None, input_name=''):
            values = input_data.split()
            for value in values:
                try:
                    float(value)
                except (TypeError, ValueError):
                    raise_parse_error(node, 'Requires sequence of doubles')
            return values
        def gds_format_boolean(self, input_data, input_name=''):
            return ('%s' % input_data).lower()
        def gds_validate_boolean(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_boolean_list(self, input_data, input_name=''):
            return '%s' % ' '.join(input_data)
        def gds_validate_boolean_list(
                self, input_data, node=None, input_name=''):
            values = input_data.split()
            for value in values:
                if value not in ('true', '1', 'false', '0', ):
                    raise_parse_error(
                        node,
                        'Requires sequence of booleans '
                        '("true", "1", "false", "0")')
            return values
        def gds_validate_datetime(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_datetime(self, input_data, input_name=''):
            if input_data.microsecond == 0:
                _svalue = '%04d-%02d-%02dT%02d:%02d:%02d' % (
                    input_data.year,
                    input_data.month,
                    input_data.day,
                    input_data.hour,
                    input_data.minute,
                    input_data.second,
                )
            else:
                _svalue = '%04d-%02d-%02dT%02d:%02d:%02d.%s' % (
                    input_data.year,
                    input_data.month,
                    input_data.day,
                    input_data.hour,
                    input_data.minute,
                    input_data.second,
                    ('%f' % (float(input_data.microsecond) / 1000000))[2:],
                )
            if input_data.tzinfo is not None:
                tzoff = input_data.tzinfo.utcoffset(input_data)
                if tzoff is not None:
                    total_seconds = tzoff.seconds + (86400 * tzoff.days)
                    if total_seconds == 0:
                        _svalue += 'Z'
                    else:
                        if total_seconds < 0:
                            _svalue += '-'
                            total_seconds *= -1
                        else:
                            _svalue += '+'
                        hours = total_seconds // 3600
                        minutes = (total_seconds - (hours * 3600)) // 60
                        _svalue += '{0:02d}:{1:02d}'.format(hours, minutes)
            return _svalue
        @classmethod
        def gds_parse_datetime(cls, input_data):
            tz = None
            if input_data[-1] == 'Z':
                tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC')
                input_data = input_data[:-1]
            else:
                results = GeneratedsSuper.tzoff_pattern.search(input_data)
                if results is not None:
                    tzoff_parts = results.group(2).split(':')
                    tzoff = int(tzoff_parts[0]) * 60 + int(tzoff_parts[1])
                    if results.group(1) == '-':
                        tzoff *= -1
                    tz = GeneratedsSuper._FixedOffsetTZ(
                        tzoff, results.group(0))
                    input_data = input_data[:-6]
            time_parts = input_data.split('.')
            if len(time_parts) > 1:
                micro_seconds = int(float('0.' + time_parts[1]) * 1000000)
                input_data = '%s.%s' % (time_parts[0], micro_seconds, )
                dt = datetime_.datetime.strptime(
                    input_data, '%Y-%m-%dT%H:%M:%S.%f')
            else:
                dt = datetime_.datetime.strptime(
                    input_data, '%Y-%m-%dT%H:%M:%S')
            dt = dt.replace(tzinfo=tz)
            return dt
        def gds_validate_date(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_date(self, input_data, input_name=''):
            _svalue = '%04d-%02d-%02d' % (
                input_data.year,
                input_data.month,
                input_data.day,
            )
            try:
                if input_data.tzinfo is not None:
                    tzoff = input_data.tzinfo.utcoffset(input_data)
                    if tzoff is not None:
                        total_seconds = tzoff.seconds + (86400 * tzoff.days)
                        if total_seconds == 0:
                            _svalue += 'Z'
                        else:
                            if total_seconds < 0:
                                _svalue += '-'
                                total_seconds *= -1
                            else:
                                _svalue += '+'
                            hours = total_seconds // 3600
                            minutes = (total_seconds - (hours * 3600)) // 60
                            _svalue += '{0:02d}:{1:02d}'.format(
                                hours, minutes)
            except AttributeError:
                pass
            return _svalue
        @classmethod
        def gds_parse_date(cls, input_data):
            tz = None
            if input_data[-1] == 'Z':
                tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC')
                input_data = input_data[:-1]
            else:
                results = GeneratedsSuper.tzoff_pattern.search(input_data)
                if results is not None:
                    tzoff_parts = results.group(2).split(':')
                    tzoff = int(tzoff_parts[0]) * 60 + int(tzoff_parts[1])
                    if results.group(1) == '-':
                        tzoff *= -1
                    tz = GeneratedsSuper._FixedOffsetTZ(
                        tzoff, results.group(0))
                    input_data = input_data[:-6]
            dt = datetime_.datetime.strptime(input_data, '%Y-%m-%d')
            dt = dt.replace(tzinfo=tz)
            return dt.date()
        def gds_validate_time(self, input_data, node=None, input_name=''):
            return input_data
        def gds_format_time(self, input_data, input_name=''):
            if input_data.microsecond == 0:
                _svalue = '%02d:%02d:%02d' % (
                    input_data.hour,
                    input_data.minute,
                    input_data.second,
                )
            else:
                _svalue = '%02d:%02d:%02d.%s' % (
                    input_data.hour,
                    input_data.minute,
                    input_data.second,
                    ('%f' % (float(input_data.microsecond) / 1000000))[2:],
                )
            if input_data.tzinfo is not None:
                tzoff = input_data.tzinfo.utcoffset(input_data)
                if tzoff is not None:
                    total_seconds = tzoff.seconds + (86400 * tzoff.days)
                    if total_seconds == 0:
                        _svalue += 'Z'
                    else:
                        if total_seconds < 0:
                            _svalue += '-'
                            total_seconds *= -1
                        else:
                            _svalue += '+'
                        hours = total_seconds // 3600
                        minutes = (total_seconds - (hours * 3600)) // 60
                        _svalue += '{0:02d}:{1:02d}'.format(hours, minutes)
            return _svalue
        def gds_validate_simple_patterns(self, patterns, target):
            # pat is a list of lists of strings/patterns. We should:
            # - AND the outer elements
            # - OR the inner elements
            found1 = True
            for patterns1 in patterns:
                found2 = False
                for patterns2 in patterns1:
                    if re_.search(patterns2, target) is not None:
                        found2 = True
                        break
                if not found2:
                    found1 = False
                    break
            return found1
        @classmethod
        def gds_parse_time(cls, input_data):
            tz = None
            if input_data[-1] == 'Z':
                tz = GeneratedsSuper._FixedOffsetTZ(0, 'UTC')
                input_data = input_data[:-1]
            else:
                results = GeneratedsSuper.tzoff_pattern.search(input_data)
                if results is not None:
                    tzoff_parts = results.group(2).split(':')
                    tzoff = int(tzoff_parts[0]) * 60 + int(tzoff_parts[1])
                    if results.group(1) == '-':
                        tzoff *= -1
                    tz = GeneratedsSuper._FixedOffsetTZ(
                        tzoff, results.group(0))
                    input_data = input_data[:-6]
            if len(input_data.split('.')) > 1:
                dt = datetime_.datetime.strptime(input_data, '%H:%M:%S.%f')
            else:
                dt = datetime_.datetime.strptime(input_data, '%H:%M:%S')
            dt = dt.replace(tzinfo=tz)
            return dt.time()
        def gds_str_lower(self, instring):
            return instring.lower()
        def get_path_(self, node):
            path_list = []
            self.get_path_list_(node, path_list)
            path_list.reverse()
            path = '/'.join(path_list)
            return path
        Tag_strip_pattern_ = re_.compile(r'\{.*\}')
        def get_path_list_(self, node, path_list):
            if node is None:
                return
            tag = GeneratedsSuper.Tag_strip_pattern_.sub('', node.tag)
            if tag:
                path_list.append(tag)
            self.get_path_list_(node.getparent(), path_list)
        def get_class_obj_(self, node, default_class=None):
            class_obj1 = default_class
            if 'xsi' in node.nsmap:
                classname = node.get('{%s}type' % node.nsmap['xsi'])
                if classname is not None:
                    names = classname.split(':')
                    if len(names) == 2:
                        classname = names[1]
                    class_obj2 = globals().get(classname)
                    if class_obj2 is not None:
                        class_obj1 = class_obj2
            return class_obj1
        def gds_build_any(self, node, type_name=None):
            return None
        @classmethod
        def gds_reverse_node_mapping(cls, mapping):
            return dict(((v, k) for k, v in mapping.iteritems()))
        @staticmethod
        def gds_encode(instring):
            if sys.version_info.major == 2:
                return instring.encode(ExternalEncoding)
            else:
                return instring
def getSubclassFromModule_(module, class_):
    '''Get the subclass of a class from a specific module.'''
    name = class_.__name__ + 'Sub'
    if hasattr(module, name):
        return getattr(module, name)
    else:
        return None
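The comment inside `gds_validate_simple_patterns` describes an AND-of-ORs rule: the outer list of pattern groups is ANDed, and the regexes within each group are ORed. A standalone sketch of the same logic (the pattern group shown is a made-up example):

```python
import re


def matches_all_any(patterns, target):
    """True if every group contains at least one regex matching target."""
    return all(any(re.search(p, target) for p in group) for group in patterns)


# Hypothetical groups: starts with an uppercase letter OR a digit,
# AND ends with "glade".
patterns = [[r'^[A-Z]', r'^[0-9]'], [r'glade$']]
print(matches_all_any(patterns, "Generateds_gui.glade"))  # True
print(matches_all_any(patterns, "generateds_gui.glade"))  # False
```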
#
# If you have installed IPython you can uncomment and use the following.
# IPython is available from http://ipython.scipy.org/.
#
## from IPython.Shell import IPShellEmbed
## args = ''
## ipshell = IPShellEmbed(args,
## banner = 'Dropping into IPython',
## exit_msg = 'Leaving Interpreter, back to program.')
# Then use the following line where and when you want to drop into the
# IPython shell:
# ipshell('<some message> -- Entering ipshell.\nHit Ctrl-D to exit')
#
# Globals
#
ExternalEncoding = 'utf-8'
Tag_pattern_ = re_.compile(r'({.*})?(.*)')
String_cleanup_pat_ = re_.compile(r"[\n\r\s]+")
Namespace_extract_pat_ = re_.compile(r'{(.*)}(.*)')
CDATA_pattern_ = re_.compile(r"<!\[CDATA\[.*?\]\]>", re_.DOTALL)
# Change this to redirect the generated superclass module to use a
# specific subclass module.
CurrentSubclassModule_ = None
#
# Support/utility functions.
#
def showIndent(outfile, level, pretty_print=True):
    if pretty_print:
        for idx in range(level):
            outfile.write('    ')
def quote_xml(inStr):
    "Escape markup chars, but do not modify CDATA sections."
    if not inStr:
        return ''
    s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr)
    s2 = ''
    pos = 0
    matchobjects = CDATA_pattern_.finditer(s1)
    for mo in matchobjects:
        s3 = s1[pos:mo.start()]
        s2 += quote_xml_aux(s3)
        s2 += s1[mo.start():mo.end()]
        pos = mo.end()
    s3 = s1[pos:]
    s2 += quote_xml_aux(s3)
    return s2
def quote_xml_aux(inStr):
    s1 = inStr.replace('&', '&amp;')
    s1 = s1.replace('<', '&lt;')
    s1 = s1.replace('>', '&gt;')
    return s1
def quote_attrib(inStr):
    s1 = (isinstance(inStr, BaseStrType_) and inStr or '%s' % inStr)
    s1 = s1.replace('&', '&amp;')
    s1 = s1.replace('<', '&lt;')
    s1 = s1.replace('>', '&gt;')
    if '"' in s1:
        if "'" in s1:
            s1 = '"%s"' % s1.replace('"', "&quot;")
        else:
            s1 = "'%s'" % s1
    else:
        s1 = '"%s"' % s1
    return s1
def quote_python(inStr):
    s1 = inStr
    if s1.find("'") == -1:
        if s1.find('\n') == -1:
            return "'%s'" % s1
        else:
            return "'''%s'''" % s1
    else:
        if s1.find('"') != -1:
            s1 = s1.replace('"', '\\"')
        if s1.find('\n') == -1:
            return '"%s"' % s1
        else:
            return '"""%s"""' % s1
def get_all_text_(node):
    if node.text is not None:
        text = node.text
    else:
        text = ''
    for child in node:
        if child.tail is not None:
            text += child.tail
    return text
def find_attr_value_(attr_name, node):
    attrs = node.attrib
    attr_parts = attr_name.split(':')
    value = None
    if len(attr_parts) == 1:
        value = attrs.get(attr_name)
    elif len(attr_parts) == 2:
        prefix, name = attr_parts
        namespace = node.nsmap.get(prefix)
        if namespace is not None:
            value = attrs.get('{%s}%s' % (namespace, name, ))
    return value
class GDSParseError(Exception):
    pass
def raise_parse_error(node, msg):
    msg = '%s (element %s/line %d)' % (msg, node.tag, node.sourceline, )
    raise GDSParseError(msg)
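The hand-rolled `quote_xml` / `quote_attrib` helpers above behave much like the standard library's `xml.sax.saxutils` functions, apart from the CDATA special-casing in `quote_xml`. A quick comparison sketch:

```python
from xml.sax.saxutils import escape, quoteattr

# escape() replaces &, < and >, like quote_xml_aux above.
print(escape("a < b & c"))        # a &lt; b &amp; c
# quoteattr() also picks the quote character, like quote_attrib above.
print(quoteattr('say "hi"'))      # 'say "hi"'
```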
class MixedContainer:
    # Constants for category:
    CategoryNone = 0
    CategoryText = 1
    CategorySimple = 2
    CategoryComplex = 3
    # Constants for content_type:
    TypeNone = 0
    TypeText = 1
    TypeString = 2
    TypeInteger = 3
    TypeFloat = 4
    TypeDecimal = 5
    TypeDouble = 6
    TypeBoolean = 7
    TypeBase64 = 8
    def __init__(self, category, content_type, name, value):
        self.category = category
        self.content_type = content_type
        self.name = name
        self.value = value
    def getCategory(self):
        return self.category
    def getContenttype(self, content_type):
        return self.content_type
    def getValue(self):
        return self.value
    def getName(self):
        return self.name
    def export(self, outfile, level, name, namespace, pretty_print=True):
        if self.category == MixedContainer.CategoryText:
            # Prevent exporting empty content as empty lines.
            if self.value.strip():
                outfile.write(self.value)
        elif self.category == MixedContainer.CategorySimple:
            self.exportSimple(outfile, level, name)
        else:  # category == MixedContainer.CategoryComplex
            self.value.export(outfile, level, namespace, name, pretty_print)
    def exportSimple(self, outfile, level, name):
        if self.content_type == MixedContainer.TypeString:
            outfile.write('<%s>%s</%s>' % (
                self.name, self.value, self.name))
        elif self.content_type == MixedContainer.TypeInteger or \
                self.content_type == MixedContainer.TypeBoolean:
            outfile.write('<%s>%d</%s>' % (
                self.name, self.value, self.name))
        elif self.content_type == MixedContainer.TypeFloat or \
                self.content_type == MixedContainer.TypeDecimal:
            outfile.write('<%s>%f</%s>' % (
                self.name, self.value, self.name))
        elif self.content_type == MixedContainer.TypeDouble:
            outfile.write('<%s>%g</%s>' % (
                self.name, self.value, self.name))
        elif self.content_type == MixedContainer.TypeBase64:
            outfile.write('<%s>%s</%s>' % (
                self.name, base64.b64encode(self.value), self.name))
    def to_etree(self, element):
        if self.category == MixedContainer.CategoryText:
            # Prevent exporting empty content as empty lines.
            if self.value.strip():
                if len(element) > 0:
                    if element[-1].tail is None:
                        element[-1].tail = self.value
                    else:
                        element[-1].tail += self.value
                else:
                    if element.text is None:
                        element.text = self.value
                    else:
                        element.text += self.value
        elif self.category == MixedContainer.CategorySimple:
            subelement = etree_.SubElement(element, '%s' % self.name)
            subelement.text = self.to_etree_simple()
        else:  # category == MixedContainer.CategoryComplex
            self.value.to_etree(element)
    def to_etree_simple(self):
        if self.content_type == MixedContainer.TypeString:
            text = self.value
        elif (self.content_type == MixedContainer.TypeInteger or
                self.content_type == MixedContainer.TypeBoolean):
            text = '%d' % self.value
        elif (self.content_type == MixedContainer.TypeFloat or
                self.content_type == MixedContainer.TypeDecimal):
            text = '%f' % self.value
        elif self.content_type == MixedContainer.TypeDouble:
            text = '%g' % self.value
        elif self.content_type == MixedContainer.TypeBase64:
            text = '%s' % base64.b64encode(self.value)
        return text
    def exportLiteral(self, outfile, level, name):
        if self.category == MixedContainer.CategoryText:
            showIndent(outfile, level)
            outfile.write(
                'model_.MixedContainer(%d, %d, "%s", "%s"),\n' % (
                    self.category, self.content_type, self.name, self.value))
        elif self.category == MixedContainer.CategorySimple:
            showIndent(outfile, level)
            outfile.write(
                'model_.MixedContainer(%d, %d, "%s", "%s"),\n' % (
                    self.category, self.content_type, self.name, self.value))
        else:  # category == MixedContainer.CategoryComplex
            showIndent(outfile, level)
            outfile.write(
                'model_.MixedContainer(%d, %d, "%s",\n' % (
                    self.category, self.content_type, self.name,))
            self.value.exportLiteral(outfile, level + 1)
            showIndent(outfile, level)
            outfile.write(')\n')
class MemberSpec_(object):
    def __init__(self, name='', data_type='', container=0):
        self.name = name
        self.data_type = data_type
        self.container = container
    def set_name(self, name): self.name = name
    def get_name(self): return self.name
    def set_data_type(self, data_type): self.data_type = data_type
    def get_data_type_chain(self): return self.data_type
    def get_data_type(self):
        if isinstance(self.data_type, list):
            if len(self.data_type) > 0:
                return self.data_type[-1]
            else:
                return 'xs:string'
        else:
            return self.data_type
    def set_container(self, container): self.container = container
    def get_container(self): return self.container
def _cast(typ, value):
    if typ is None or value is None:
        return value
    return typ(value)
#
# Data representation classes.
#
class interface(GeneratedsSuper):
"""Generated with glade 3.18.3"""
subclass = None
superclass = None
def __init__(self, requires=None, object=None):
self.original_tagname_ = None
self.requires = requires
if object is None:
self.object = []
else:
self.object = object
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, interface)
if subclass is not None:
return subclass(*args_, **kwargs_)
if interface.subclass:
return interface.subclass(*args_, **kwargs_)
else:
return interface(*args_, **kwargs_)
factory = staticmethod(factory)
def get_requires(self): return self.requires
def set_requires(self, requires): self.requires = requires
def get_object(self): return self.object
def set_object(self, object): self.object = object
def add_object(self, value): self.object.append(value)
def insert_object_at(self, index, value): self.object.insert(index, value)
def replace_object_at(self, index, value): self.object[index] = value
def hasContent_(self):
if (
self.requires is not None or
self.object
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='interface', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='interface')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='interface', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='interface'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='interface', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.requires is not None:
self.requires.export(outfile, level, namespace_, name_='requires', pretty_print=pretty_print)
for object_ in self.object:
object_.export(outfile, level, namespace_, name_='object', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'requires':
obj_ = requiresType.factory()
obj_.build(child_)
self.requires = obj_
obj_.original_tagname_ = 'requires'
elif nodeName_ == 'object':
obj_ = objectType.factory()
obj_.build(child_)
self.object.append(obj_)
obj_.original_tagname_ = 'object'
# end class interface
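# A hedged usage sketch for the generated classes in this module (assumes the
# usual generateDS-style factory()/build()/export() behavior; the element
# values shown are illustrative, not taken from any real .ui file):
#
#     from io import StringIO
#     iface = interface.factory()
#     iface.set_requires(requiresType.factory(lib='gtk+', version=2.16))
#     buf = StringIO()
#     iface.export(buf, 0, name_='interface')
#     xml_text = buf.getvalue()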
class requiresType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, lib=None, version=None, valueOf_=None):
self.original_tagname_ = None
self.lib = _cast(None, lib)
self.version = _cast(float, version)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, requiresType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if requiresType.subclass:
return requiresType.subclass(*args_, **kwargs_)
else:
return requiresType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_lib(self): return self.lib
def set_lib(self, lib): self.lib = lib
def get_version(self): return self.version
def set_version(self, version): self.version = version
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='requiresType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='requiresType')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='requiresType', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='requiresType'):
if self.lib is not None and 'lib' not in already_processed:
already_processed.add('lib')
outfile.write(' lib=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.lib), input_name='lib')), ))
if self.version is not None and 'version' not in already_processed:
already_processed.add('version')
outfile.write(' version="%s"' % self.gds_format_float(self.version, input_name='version'))
def exportChildren(self, outfile, level, namespace_='', name_='requiresType', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('lib', node)
if value is not None and 'lib' not in already_processed:
already_processed.add('lib')
self.lib = value
value = find_attr_value_('version', node)
if value is not None and 'version' not in already_processed:
already_processed.add('version')
try:
self.version = float(value)
except ValueError as exp:
raise ValueError('Bad float/double attribute (version): %s' % exp)
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class requiresType
class objectType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, class_=None, id=None, property=None, accel_groups=None, signal=None, child=None, action_widgets=None, valueOf_=None, mixedclass_=None, content_=None):
self.original_tagname_ = None
self.class_ = _cast(None, class_)
self.id = _cast(None, id)
if property is None:
self.property = []
else:
self.property = property
self.accel_groups = accel_groups
self.signal = signal
self.child = child
self.action_widgets = action_widgets
self.valueOf_ = valueOf_
if mixedclass_ is None:
self.mixedclass_ = MixedContainer
else:
self.mixedclass_ = mixedclass_
if content_ is None:
self.content_ = []
else:
self.content_ = content_
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, objectType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if objectType.subclass:
return objectType.subclass(*args_, **kwargs_)
else:
return objectType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def get_accel_groups(self): return self.accel_groups
def set_accel_groups(self, accel_groups): self.accel_groups = accel_groups
def get_signal(self): return self.signal
def set_signal(self, signal): self.signal = signal
def get_child(self): return self.child
def set_child(self, child): self.child = child
def get_action_widgets(self): return self.action_widgets
def set_action_widgets(self, action_widgets): self.action_widgets = action_widgets
def get_class(self): return self.class_
def set_class(self, class_): self.class_ = class_
def get_id(self): return self.id
def set_id(self, id): self.id = id
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
self.property or
self.accel_groups is not None or
self.signal is not None or
self.child is not None or
self.action_widgets is not None or
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='objectType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='objectType')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='objectType', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='objectType'):
if self.class_ is not None and 'class_' not in already_processed:
already_processed.add('class_')
outfile.write(' class=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.class_), input_name='class')), ))
if self.id is not None and 'id' not in already_processed:
already_processed.add('id')
outfile.write(' id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.id), input_name='id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='objectType', fromsubclass_=False, pretty_print=True):
if not fromsubclass_:
for item_ in self.content_:
item_.export(outfile, level, item_.name, namespace_, pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
if node.text is not None:
obj_ = self.mixedclass_(MixedContainer.CategoryText,
MixedContainer.TypeNone, '', node.text)
self.content_.append(obj_)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('class', node)
if value is not None and 'class' not in already_processed:
already_processed.add('class')
self.class_ = value
value = find_attr_value_('id', node)
if value is not None and 'id' not in already_processed:
already_processed.add('id')
self.id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType.factory()
obj_.build(child_)
obj_ = self.mixedclass_(MixedContainer.CategoryComplex,
MixedContainer.TypeNone, 'property', obj_)
self.content_.append(obj_)
if hasattr(self, 'add_property'):
self.add_property(obj_.value)
elif hasattr(self, 'set_property'):
self.set_property(obj_.value)
elif nodeName_ == 'accel-groups':
obj_ = accel_groupsType.factory()
obj_.build(child_)
obj_ = self.mixedclass_(MixedContainer.CategoryComplex,
MixedContainer.TypeNone, 'accel-groups', obj_)
self.content_.append(obj_)
            if hasattr(self, 'add_accel_groups'):
                self.add_accel_groups(obj_.value)
            elif hasattr(self, 'set_accel_groups'):
                self.set_accel_groups(obj_.value)
elif nodeName_ == 'signal':
obj_ = signalType.factory()
obj_.build(child_)
obj_ = self.mixedclass_(MixedContainer.CategoryComplex,
MixedContainer.TypeNone, 'signal', obj_)
self.content_.append(obj_)
if hasattr(self, 'add_signal'):
self.add_signal(obj_.value)
elif hasattr(self, 'set_signal'):
self.set_signal(obj_.value)
elif nodeName_ == 'child':
obj_ = childType.factory()
obj_.build(child_)
obj_ = self.mixedclass_(MixedContainer.CategoryComplex,
MixedContainer.TypeNone, 'child', obj_)
self.content_.append(obj_)
if hasattr(self, 'add_child'):
self.add_child(obj_.value)
elif hasattr(self, 'set_child'):
self.set_child(obj_.value)
elif nodeName_ == 'action-widgets':
obj_ = action_widgetsType.factory()
obj_.build(child_)
obj_ = self.mixedclass_(MixedContainer.CategoryComplex,
MixedContainer.TypeNone, 'action-widgets', obj_)
self.content_.append(obj_)
            if hasattr(self, 'add_action_widgets'):
                self.add_action_widgets(obj_.value)
            elif hasattr(self, 'set_action_widgets'):
                self.set_action_widgets(obj_.value)
if not fromsubclass_ and child_.tail is not None:
obj_ = self.mixedclass_(MixedContainer.CategoryText,
MixedContainer.TypeNone, '', child_.tail)
self.content_.append(obj_)
# end class objectType
class propertyType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, translatable=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.translatable = _cast(None, translatable)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType.subclass:
return propertyType.subclass(*args_, **kwargs_)
else:
return propertyType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_translatable(self): return self.translatable
def set_translatable(self, translatable): self.translatable = translatable
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
if self.translatable is not None and 'translatable' not in already_processed:
already_processed.add('translatable')
outfile.write(' translatable=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.translatable), input_name='translatable')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
value = find_attr_value_('translatable', node)
if value is not None and 'translatable' not in already_processed:
already_processed.add('translatable')
self.translatable = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType
class accel_groupsType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, group=None):
self.original_tagname_ = None
self.group = group
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, accel_groupsType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if accel_groupsType.subclass:
return accel_groupsType.subclass(*args_, **kwargs_)
else:
return accel_groupsType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_group(self): return self.group
def set_group(self, group): self.group = group
def hasContent_(self):
if (
self.group is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='accel-groupsType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='accel-groupsType')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='accel-groupsType', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='accel-groupsType'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='accel-groupsType', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.group is not None:
self.group.export(outfile, level, namespace_, name_='group', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'group':
obj_ = groupType.factory()
obj_.build(child_)
self.group = obj_
obj_.original_tagname_ = 'group'
# end class accel_groupsType
class groupType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, groupType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if groupType.subclass:
return groupType.subclass(*args_, **kwargs_)
else:
return groupType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='groupType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='groupType')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='groupType', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='groupType'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='groupType', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class groupType
class signalType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, handler=None, swapped=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.handler = _cast(None, handler)
self.swapped = _cast(None, swapped)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, signalType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if signalType.subclass:
return signalType.subclass(*args_, **kwargs_)
else:
return signalType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_handler(self): return self.handler
def set_handler(self, handler): self.handler = handler
def get_swapped(self): return self.swapped
def set_swapped(self, swapped): self.swapped = swapped
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='signalType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='signalType')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='signalType', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='signalType'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
if self.handler is not None and 'handler' not in already_processed:
already_processed.add('handler')
outfile.write(' handler=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.handler), input_name='handler')), ))
if self.swapped is not None and 'swapped' not in already_processed:
already_processed.add('swapped')
outfile.write(' swapped=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.swapped), input_name='swapped')), ))
def exportChildren(self, outfile, level, namespace_='', name_='signalType', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
value = find_attr_value_('handler', node)
if value is not None and 'handler' not in already_processed:
already_processed.add('handler')
self.handler = value
value = find_attr_value_('swapped', node)
if value is not None and 'swapped' not in already_processed:
already_processed.add('swapped')
self.swapped = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class signalType
class childType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, internal_child=None, object=None):
self.original_tagname_ = None
self.internal_child = _cast(None, internal_child)
self.object = object
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, childType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if childType.subclass:
return childType.subclass(*args_, **kwargs_)
else:
return childType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_object(self): return self.object
def set_object(self, object): self.object = object
def get_internal_child(self): return self.internal_child
def set_internal_child(self, internal_child): self.internal_child = internal_child
def hasContent_(self):
if (
self.object is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='childType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
        outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='childType')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='childType', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='childType'):
if self.internal_child is not None and 'internal_child' not in already_processed:
already_processed.add('internal_child')
outfile.write(' internal-child=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.internal_child), input_name='internal-child')), ))
def exportChildren(self, outfile, level, namespace_='', name_='childType', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.object is not None:
self.object.export(outfile, level, namespace_, name_='object', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('internal-child', node)
if value is not None and 'internal-child' not in already_processed:
already_processed.add('internal-child')
self.internal_child = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'object':
obj_ = objectType1.factory()
obj_.build(child_)
self.object = obj_
obj_.original_tagname_ = 'object'
# end class childType
class objectType1(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, class_=None, id=None, property=None, child=None):
self.original_tagname_ = None
self.class_ = _cast(None, class_)
self.id = _cast(None, id)
if property is None:
self.property = []
else:
self.property = property
if child is None:
self.child = []
else:
self.child = child
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, objectType1)
if subclass is not None:
return subclass(*args_, **kwargs_)
if objectType1.subclass:
return objectType1.subclass(*args_, **kwargs_)
else:
return objectType1(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def get_child(self): return self.child
def set_child(self, child): self.child = child
def add_child(self, value): self.child.append(value)
def insert_child_at(self, index, value): self.child.insert(index, value)
def replace_child_at(self, index, value): self.child[index] = value
def get_class(self): return self.class_
def set_class(self, class_): self.class_ = class_
def get_id(self): return self.id
def set_id(self, id): self.id = id
def hasContent_(self):
if (
self.property or
self.child
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='objectType1', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='objectType1')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='objectType1', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='objectType1'):
if self.class_ is not None and 'class_' not in already_processed:
already_processed.add('class_')
outfile.write(' class=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.class_), input_name='class')), ))
if self.id is not None and 'id' not in already_processed:
already_processed.add('id')
outfile.write(' id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.id), input_name='id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='objectType1', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
for child_ in self.child:
child_.export(outfile, level, namespace_, name_='child', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('class', node)
if value is not None and 'class' not in already_processed:
already_processed.add('class')
self.class_ = value
value = find_attr_value_('id', node)
if value is not None and 'id' not in already_processed:
already_processed.add('id')
self.id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType2.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
elif nodeName_ == 'child':
obj_ = childType3.factory()
obj_.build(child_)
self.child.append(obj_)
obj_.original_tagname_ = 'child'
# end class objectType1
class propertyType2(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType2)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType2.subclass:
return propertyType2.subclass(*args_, **kwargs_)
else:
return propertyType2(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType2', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType2')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType2', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType2'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType2', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType2
class childType3(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, internal_child=None, object=None, packing=None, placeholder=None):
self.original_tagname_ = None
self.internal_child = _cast(None, internal_child)
self.object = object
self.packing = packing
self.placeholder = placeholder
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, childType3)
if subclass is not None:
return subclass(*args_, **kwargs_)
if childType3.subclass:
return childType3.subclass(*args_, **kwargs_)
else:
return childType3(*args_, **kwargs_)
factory = staticmethod(factory)
def get_object(self): return self.object
def set_object(self, object): self.object = object
def get_packing(self): return self.packing
def set_packing(self, packing): self.packing = packing
def get_placeholder(self): return self.placeholder
def set_placeholder(self, placeholder): self.placeholder = placeholder
def get_internal_child(self): return self.internal_child
def set_internal_child(self, internal_child): self.internal_child = internal_child
def hasContent_(self):
if (
self.object is not None or
self.packing is not None or
self.placeholder is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='childType3', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='childType3')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='childType3', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='childType3'):
if self.internal_child is not None and 'internal_child' not in already_processed:
already_processed.add('internal_child')
outfile.write(' internal-child=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.internal_child), input_name='internal-child')), ))
def exportChildren(self, outfile, level, namespace_='', name_='childType3', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.object is not None:
self.object.export(outfile, level, namespace_, name_='object', pretty_print=pretty_print)
if self.packing is not None:
self.packing.export(outfile, level, namespace_, name_='packing', pretty_print=pretty_print)
if self.placeholder is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%splaceholder>%s</%splaceholder>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.placeholder), input_name='placeholder')), namespace_, eol_))
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('internal-child', node)
if value is not None and 'internal-child' not in already_processed:
already_processed.add('internal-child')
self.internal_child = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'object':
obj_ = objectType4.factory()
obj_.build(child_)
self.object = obj_
obj_.original_tagname_ = 'object'
elif nodeName_ == 'packing':
obj_ = packingType21.factory()
obj_.build(child_)
self.packing = obj_
obj_.original_tagname_ = 'packing'
elif nodeName_ == 'placeholder':
placeholder_ = child_.text
placeholder_ = self.gds_validate_string(placeholder_, node, 'placeholder')
self.placeholder = placeholder_
# end class childType3
class objectType4(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, class_=None, id=None, property=None, child=None):
self.original_tagname_ = None
self.class_ = _cast(None, class_)
self.id = _cast(None, id)
if property is None:
self.property = []
else:
self.property = property
if child is None:
self.child = []
else:
self.child = child
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, objectType4)
if subclass is not None:
return subclass(*args_, **kwargs_)
if objectType4.subclass:
return objectType4.subclass(*args_, **kwargs_)
else:
return objectType4(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def get_child(self): return self.child
def set_child(self, child): self.child = child
def add_child(self, value): self.child.append(value)
def insert_child_at(self, index, value): self.child.insert(index, value)
def replace_child_at(self, index, value): self.child[index] = value
def get_class(self): return self.class_
def set_class(self, class_): self.class_ = class_
def get_id(self): return self.id
def set_id(self, id): self.id = id
def hasContent_(self):
if (
self.property or
self.child
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='objectType4', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='objectType4')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='objectType4', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='objectType4'):
if self.class_ is not None and 'class_' not in already_processed:
already_processed.add('class_')
outfile.write(' class=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.class_), input_name='class')), ))
if self.id is not None and 'id' not in already_processed:
already_processed.add('id')
outfile.write(' id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.id), input_name='id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='objectType4', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
for child_ in self.child:
child_.export(outfile, level, namespace_, name_='child', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('class', node)
if value is not None and 'class' not in already_processed:
already_processed.add('class')
self.class_ = value
value = find_attr_value_('id', node)
if value is not None and 'id' not in already_processed:
already_processed.add('id')
self.id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType5.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
elif nodeName_ == 'child':
obj_ = childType6.factory()
obj_.build(child_)
self.child.append(obj_)
obj_.original_tagname_ = 'child'
# end class objectType4
class propertyType5(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType5)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType5.subclass:
return propertyType5.subclass(*args_, **kwargs_)
else:
return propertyType5(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType5', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType5')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType5', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType5'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType5', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType5
class childType6(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, object=None, packing=None):
self.original_tagname_ = None
self.object = object
self.packing = packing
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, childType6)
if subclass is not None:
return subclass(*args_, **kwargs_)
if childType6.subclass:
return childType6.subclass(*args_, **kwargs_)
else:
return childType6(*args_, **kwargs_)
factory = staticmethod(factory)
def get_object(self): return self.object
def set_object(self, object): self.object = object
def get_packing(self): return self.packing
def set_packing(self, packing): self.packing = packing
def hasContent_(self):
if (
self.object is not None or
self.packing is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='childType6', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='childType6')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='childType6', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='childType6'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='childType6', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.object is not None:
self.object.export(outfile, level, namespace_, name_='object', pretty_print=pretty_print)
if self.packing is not None:
self.packing.export(outfile, level, namespace_, name_='packing', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'object':
obj_ = objectType7.factory()
obj_.build(child_)
self.object = obj_
obj_.original_tagname_ = 'object'
elif nodeName_ == 'packing':
obj_ = packingType19.factory()
obj_.build(child_)
self.packing = obj_
obj_.original_tagname_ = 'packing'
# end class childType6
class objectType7(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, class_=None, id=None, property=None, child=None, signal=None):
self.original_tagname_ = None
self.class_ = _cast(None, class_)
self.id = _cast(None, id)
if property is None:
self.property = []
else:
self.property = property
self.child = child
self.signal = signal
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, objectType7)
if subclass is not None:
return subclass(*args_, **kwargs_)
if objectType7.subclass:
return objectType7.subclass(*args_, **kwargs_)
else:
return objectType7(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def get_child(self): return self.child
def set_child(self, child): self.child = child
def get_signal(self): return self.signal
def set_signal(self, signal): self.signal = signal
def get_class(self): return self.class_
def set_class(self, class_): self.class_ = class_
def get_id(self): return self.id
def set_id(self, id): self.id = id
def hasContent_(self):
if (
self.property or
self.child is not None or
self.signal is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='objectType7', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='objectType7')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='objectType7', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='objectType7'):
if self.class_ is not None and 'class_' not in already_processed:
already_processed.add('class_')
outfile.write(' class=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.class_), input_name='class')), ))
if self.id is not None and 'id' not in already_processed:
already_processed.add('id')
outfile.write(' id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.id), input_name='id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='objectType7', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
if self.child is not None:
self.child.export(outfile, level, namespace_, name_='child', pretty_print=pretty_print)
if self.signal is not None:
self.signal.export(outfile, level, namespace_, name_='signal', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('class', node)
if value is not None and 'class' not in already_processed:
already_processed.add('class')
self.class_ = value
value = find_attr_value_('id', node)
if value is not None and 'id' not in already_processed:
already_processed.add('id')
self.id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType8.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
elif nodeName_ == 'child':
obj_ = childType9.factory()
obj_.build(child_)
self.child = obj_
obj_.original_tagname_ = 'child'
elif nodeName_ == 'signal':
obj_ = signalType18.factory()
obj_.build(child_)
self.signal = obj_
obj_.original_tagname_ = 'signal'
# end class objectType7
class propertyType8(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, translatable=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.translatable = _cast(None, translatable)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType8)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType8.subclass:
return propertyType8.subclass(*args_, **kwargs_)
else:
return propertyType8(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_translatable(self): return self.translatable
def set_translatable(self, translatable): self.translatable = translatable
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType8', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType8')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType8', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType8'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
if self.translatable is not None and 'translatable' not in already_processed:
already_processed.add('translatable')
outfile.write(' translatable=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.translatable), input_name='translatable')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType8', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
value = find_attr_value_('translatable', node)
if value is not None and 'translatable' not in already_processed:
already_processed.add('translatable')
self.translatable = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType8
class childType9(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, type_=None, object=None):
self.original_tagname_ = None
self.type_ = _cast(None, type_)
self.object = object
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, childType9)
if subclass is not None:
return subclass(*args_, **kwargs_)
if childType9.subclass:
return childType9.subclass(*args_, **kwargs_)
else:
return childType9(*args_, **kwargs_)
factory = staticmethod(factory)
def get_object(self): return self.object
def set_object(self, object): self.object = object
def get_type(self): return self.type_
def set_type(self, type_): self.type_ = type_
def hasContent_(self):
if (
self.object is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='childType9', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='childType9')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='childType9', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='childType9'):
if self.type_ is not None and 'type_' not in already_processed:
already_processed.add('type_')
outfile.write(' type=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.type_), input_name='type')), ))
def exportChildren(self, outfile, level, namespace_='', name_='childType9', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.object is not None:
self.object.export(outfile, level, namespace_, name_='object', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('type', node)
if value is not None and 'type' not in already_processed:
already_processed.add('type')
self.type_ = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'object':
obj_ = objectType10.factory()
obj_.build(child_)
self.object = obj_
obj_.original_tagname_ = 'object'
# end class childType9
class objectType10(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, class_=None, id=None, property=None, child=None):
self.original_tagname_ = None
self.class_ = _cast(None, class_)
self.id = _cast(None, id)
if property is None:
self.property = []
else:
self.property = property
if child is None:
self.child = []
else:
self.child = child
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, objectType10)
if subclass is not None:
return subclass(*args_, **kwargs_)
if objectType10.subclass:
return objectType10.subclass(*args_, **kwargs_)
else:
return objectType10(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def get_child(self): return self.child
def set_child(self, child): self.child = child
def add_child(self, value): self.child.append(value)
def insert_child_at(self, index, value): self.child.insert(index, value)
def replace_child_at(self, index, value): self.child[index] = value
def get_class(self): return self.class_
def set_class(self, class_): self.class_ = class_
def get_id(self): return self.id
def set_id(self, id): self.id = id
def hasContent_(self):
if (
self.property or
self.child
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='objectType10', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='objectType10')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='objectType10', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='objectType10'):
if self.class_ is not None and 'class_' not in already_processed:
already_processed.add('class_')
outfile.write(' class=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.class_), input_name='class')), ))
if self.id is not None and 'id' not in already_processed:
already_processed.add('id')
outfile.write(' id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.id), input_name='id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='objectType10', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
for child_ in self.child:
child_.export(outfile, level, namespace_, name_='child', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('class', node)
if value is not None and 'class' not in already_processed:
already_processed.add('class')
self.class_ = value
value = find_attr_value_('id', node)
if value is not None and 'id' not in already_processed:
already_processed.add('id')
self.id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType11.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
elif nodeName_ == 'child':
obj_ = childType12.factory()
obj_.build(child_)
self.child.append(obj_)
obj_.original_tagname_ = 'child'
# end class objectType10
class propertyType11(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType11)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType11.subclass:
return propertyType11.subclass(*args_, **kwargs_)
else:
return propertyType11(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType11', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType11')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType11', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType11'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType11', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType11
class childType12(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, placeholder=None, object=None, packing=None):
self.original_tagname_ = None
self.placeholder = placeholder
self.object = object
self.packing = packing
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, childType12)
if subclass is not None:
return subclass(*args_, **kwargs_)
if childType12.subclass:
return childType12.subclass(*args_, **kwargs_)
else:
return childType12(*args_, **kwargs_)
factory = staticmethod(factory)
def get_placeholder(self): return self.placeholder
def set_placeholder(self, placeholder): self.placeholder = placeholder
def get_object(self): return self.object
def set_object(self, object): self.object = object
def get_packing(self): return self.packing
def set_packing(self, packing): self.packing = packing
def hasContent_(self):
if (
self.placeholder is not None or
self.object is not None or
self.packing is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='childType12', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='childType12')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='childType12', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='childType12'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='childType12', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.placeholder is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%splaceholder>%s</%splaceholder>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.placeholder), input_name='placeholder')), namespace_, eol_))
if self.object is not None:
self.object.export(outfile, level, namespace_, name_='object', pretty_print=pretty_print)
if self.packing is not None:
self.packing.export(outfile, level, namespace_, name_='packing', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'placeholder':
placeholder_ = child_.text
placeholder_ = self.gds_validate_string(placeholder_, node, 'placeholder')
self.placeholder = placeholder_
elif nodeName_ == 'object':
obj_ = objectType13.factory()
obj_.build(child_)
self.object = obj_
obj_.original_tagname_ = 'object'
elif nodeName_ == 'packing':
obj_ = packingType.factory()
obj_.build(child_)
self.packing = obj_
obj_.original_tagname_ = 'packing'
# end class childType12
class objectType13(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, class_=None, id=None, property=None, signal=None, accelerator=None, child=None):
self.original_tagname_ = None
self.class_ = _cast(None, class_)
self.id = _cast(None, id)
if property is None:
self.property = []
else:
self.property = property
self.signal = signal
self.accelerator = accelerator
self.child = child
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, objectType13)
if subclass is not None:
return subclass(*args_, **kwargs_)
if objectType13.subclass:
return objectType13.subclass(*args_, **kwargs_)
else:
return objectType13(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def get_signal(self): return self.signal
def set_signal(self, signal): self.signal = signal
def get_accelerator(self): return self.accelerator
def set_accelerator(self, accelerator): self.accelerator = accelerator
def get_child(self): return self.child
def set_child(self, child): self.child = child
def get_class(self): return self.class_
def set_class(self, class_): self.class_ = class_
def get_id(self): return self.id
def set_id(self, id): self.id = id
def hasContent_(self):
if (
self.property or
self.signal is not None or
self.accelerator is not None or
self.child is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='objectType13', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='objectType13')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='objectType13', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='objectType13'):
if self.class_ is not None and 'class_' not in already_processed:
already_processed.add('class_')
outfile.write(' class=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.class_), input_name='class')), ))
if self.id is not None and 'id' not in already_processed:
already_processed.add('id')
outfile.write(' id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.id), input_name='id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='objectType13', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
if self.signal is not None:
self.signal.export(outfile, level, namespace_, name_='signal', pretty_print=pretty_print)
if self.accelerator is not None:
self.accelerator.export(outfile, level, namespace_, name_='accelerator', pretty_print=pretty_print)
if self.child is not None:
self.child.export(outfile, level, namespace_, name_='child', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('class', node)
if value is not None and 'class' not in already_processed:
already_processed.add('class')
self.class_ = value
value = find_attr_value_('id', node)
if value is not None and 'id' not in already_processed:
already_processed.add('id')
self.id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType14.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
elif nodeName_ == 'signal':
obj_ = signalType15.factory()
obj_.build(child_)
self.signal = obj_
obj_.original_tagname_ = 'signal'
elif nodeName_ == 'accelerator':
obj_ = acceleratorType.factory()
obj_.build(child_)
self.accelerator = obj_
obj_.original_tagname_ = 'accelerator'
elif nodeName_ == 'child':
obj_ = childType16.factory()
obj_.build(child_)
self.child = obj_
obj_.original_tagname_ = 'child'
# end class objectType13
class propertyType14(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, translatable=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.translatable = _cast(None, translatable)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType14)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType14.subclass:
return propertyType14.subclass(*args_, **kwargs_)
else:
return propertyType14(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_translatable(self): return self.translatable
def set_translatable(self, translatable): self.translatable = translatable
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType14', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType14')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType14', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType14'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
if self.translatable is not None and 'translatable' not in already_processed:
already_processed.add('translatable')
outfile.write(' translatable=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.translatable), input_name='translatable')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType14', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
value = find_attr_value_('translatable', node)
if value is not None and 'translatable' not in already_processed:
already_processed.add('translatable')
self.translatable = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType14
class signalType15(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, handler=None, swapped=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.handler = _cast(None, handler)
self.swapped = _cast(None, swapped)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, signalType15)
if subclass is not None:
return subclass(*args_, **kwargs_)
if signalType15.subclass:
return signalType15.subclass(*args_, **kwargs_)
else:
return signalType15(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_handler(self): return self.handler
def set_handler(self, handler): self.handler = handler
def get_swapped(self): return self.swapped
def set_swapped(self, swapped): self.swapped = swapped
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='signalType15', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='signalType15')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='signalType15', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='signalType15'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
if self.handler is not None and 'handler' not in already_processed:
already_processed.add('handler')
outfile.write(' handler=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.handler), input_name='handler')), ))
if self.swapped is not None and 'swapped' not in already_processed:
already_processed.add('swapped')
outfile.write(' swapped=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.swapped), input_name='swapped')), ))
def exportChildren(self, outfile, level, namespace_='', name_='signalType15', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
value = find_attr_value_('handler', node)
if value is not None and 'handler' not in already_processed:
already_processed.add('handler')
self.handler = value
value = find_attr_value_('swapped', node)
if value is not None and 'swapped' not in already_processed:
already_processed.add('swapped')
self.swapped = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class signalType15
class acceleratorType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, key=None, signal=None, modifiers=None, valueOf_=None):
self.original_tagname_ = None
self.key = _cast(None, key)
self.signal = _cast(None, signal)
self.modifiers = _cast(None, modifiers)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, acceleratorType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if acceleratorType.subclass:
return acceleratorType.subclass(*args_, **kwargs_)
else:
return acceleratorType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_key(self): return self.key
def set_key(self, key): self.key = key
def get_signal(self): return self.signal
def set_signal(self, signal): self.signal = signal
def get_modifiers(self): return self.modifiers
def set_modifiers(self, modifiers): self.modifiers = modifiers
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='acceleratorType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='acceleratorType')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='acceleratorType', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='acceleratorType'):
if self.key is not None and 'key' not in already_processed:
already_processed.add('key')
outfile.write(' key=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.key), input_name='key')), ))
if self.signal is not None and 'signal' not in already_processed:
already_processed.add('signal')
outfile.write(' signal=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.signal), input_name='signal')), ))
if self.modifiers is not None and 'modifiers' not in already_processed:
already_processed.add('modifiers')
outfile.write(' modifiers=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.modifiers), input_name='modifiers')), ))
def exportChildren(self, outfile, level, namespace_='', name_='acceleratorType', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('key', node)
if value is not None and 'key' not in already_processed:
already_processed.add('key')
self.key = value
value = find_attr_value_('signal', node)
if value is not None and 'signal' not in already_processed:
already_processed.add('signal')
self.signal = value
value = find_attr_value_('modifiers', node)
if value is not None and 'modifiers' not in already_processed:
already_processed.add('modifiers')
self.modifiers = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class acceleratorType
class childType16(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, placeholder=None):
self.original_tagname_ = None
self.placeholder = placeholder
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, childType16)
if subclass is not None:
return subclass(*args_, **kwargs_)
if childType16.subclass:
return childType16.subclass(*args_, **kwargs_)
else:
return childType16(*args_, **kwargs_)
factory = staticmethod(factory)
def get_placeholder(self): return self.placeholder
def set_placeholder(self, placeholder): self.placeholder = placeholder
def hasContent_(self):
if (
self.placeholder is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='childType16', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='childType16')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='childType16', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='childType16'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='childType16', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.placeholder is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%splaceholder>%s</%splaceholder>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.placeholder), input_name='placeholder')), namespace_, eol_))
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'placeholder':
placeholder_ = child_.text
placeholder_ = self.gds_validate_string(placeholder_, node, 'placeholder')
self.placeholder = placeholder_
# end class childType16
class packingType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, property=None):
self.original_tagname_ = None
if property is None:
self.property = []
else:
self.property = property
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, packingType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if packingType.subclass:
return packingType.subclass(*args_, **kwargs_)
else:
return packingType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def hasContent_(self):
if (
self.property
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='packingType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='packingType')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='packingType', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='packingType'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='packingType', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType17.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
# end class packingType
class propertyType17(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType17)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType17.subclass:
return propertyType17.subclass(*args_, **kwargs_)
else:
return propertyType17(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType17', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType17')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType17', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType17'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType17', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType17
class signalType18(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, handler=None, swapped=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.handler = _cast(None, handler)
self.swapped = _cast(None, swapped)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, signalType18)
if subclass is not None:
return subclass(*args_, **kwargs_)
if signalType18.subclass:
return signalType18.subclass(*args_, **kwargs_)
else:
return signalType18(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_handler(self): return self.handler
def set_handler(self, handler): self.handler = handler
def get_swapped(self): return self.swapped
def set_swapped(self, swapped): self.swapped = swapped
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='signalType18', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='signalType18')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='signalType18', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='signalType18'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
if self.handler is not None and 'handler' not in already_processed:
already_processed.add('handler')
outfile.write(' handler=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.handler), input_name='handler')), ))
if self.swapped is not None and 'swapped' not in already_processed:
already_processed.add('swapped')
outfile.write(' swapped=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.swapped), input_name='swapped')), ))
def exportChildren(self, outfile, level, namespace_='', name_='signalType18', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
value = find_attr_value_('handler', node)
if value is not None and 'handler' not in already_processed:
already_processed.add('handler')
self.handler = value
value = find_attr_value_('swapped', node)
if value is not None and 'swapped' not in already_processed:
already_processed.add('swapped')
self.swapped = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class signalType18
class packingType19(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, property=None):
self.original_tagname_ = None
if property is None:
self.property = []
else:
self.property = property
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, packingType19)
if subclass is not None:
return subclass(*args_, **kwargs_)
if packingType19.subclass:
return packingType19.subclass(*args_, **kwargs_)
else:
return packingType19(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def hasContent_(self):
if (
self.property
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='packingType19', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='packingType19')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='packingType19', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='packingType19'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='packingType19', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType20.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
# end class packingType19
class propertyType20(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType20)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType20.subclass:
return propertyType20.subclass(*args_, **kwargs_)
else:
return propertyType20(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType20', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType20')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType20', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType20'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType20', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType20
class packingType21(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, property=None):
self.original_tagname_ = None
if property is None:
self.property = []
else:
self.property = property
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, packingType21)
if subclass is not None:
return subclass(*args_, **kwargs_)
if packingType21.subclass:
return packingType21.subclass(*args_, **kwargs_)
else:
return packingType21(*args_, **kwargs_)
factory = staticmethod(factory)
def get_property(self): return self.property
def set_property(self, property): self.property = property
def add_property(self, value): self.property.append(value)
def insert_property_at(self, index, value): self.property.insert(index, value)
def replace_property_at(self, index, value): self.property[index] = value
def hasContent_(self):
if (
self.property
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='packingType21', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='packingType21')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='packingType21', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='packingType21'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='packingType21', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
for property_ in self.property:
property_.export(outfile, level, namespace_, name_='property', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'property':
obj_ = propertyType22.factory()
obj_.build(child_)
self.property.append(obj_)
obj_.original_tagname_ = 'property'
# end class packingType21
class propertyType22(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, name=None, valueOf_=None):
self.original_tagname_ = None
self.name = _cast(None, name)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, propertyType22)
if subclass is not None:
return subclass(*args_, **kwargs_)
if propertyType22.subclass:
return propertyType22.subclass(*args_, **kwargs_)
else:
return propertyType22(*args_, **kwargs_)
factory = staticmethod(factory)
def get_name(self): return self.name
def set_name(self, name): self.name = name
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='propertyType22', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='propertyType22')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='propertyType22', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='propertyType22'):
if self.name is not None and 'name' not in already_processed:
already_processed.add('name')
outfile.write(' name=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.name), input_name='name')), ))
def exportChildren(self, outfile, level, namespace_='', name_='propertyType22', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('name', node)
if value is not None and 'name' not in already_processed:
already_processed.add('name')
self.name = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class propertyType22
class action_widgetsType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, action_widget=None):
self.original_tagname_ = None
self.action_widget = action_widget
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, action_widgetsType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if action_widgetsType.subclass:
return action_widgetsType.subclass(*args_, **kwargs_)
else:
return action_widgetsType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_action_widget(self): return self.action_widget
def set_action_widget(self, action_widget): self.action_widget = action_widget
def hasContent_(self):
if (
self.action_widget is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='action-widgetsType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='action-widgetsType')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='action-widgetsType', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='action-widgetsType'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='action-widgetsType', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.action_widget is not None:
self.action_widget.export(outfile, level, namespace_, name_='action-widget', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'action-widget':
obj_ = action_widgetType.factory()
obj_.build(child_)
self.action_widget = obj_
obj_.original_tagname_ = 'action-widget'
# end class action_widgetsType
class action_widgetType(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, response=None, valueOf_=None):
self.original_tagname_ = None
self.response = _cast(int, response)
self.valueOf_ = valueOf_
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, action_widgetType)
if subclass is not None:
return subclass(*args_, **kwargs_)
if action_widgetType.subclass:
return action_widgetType.subclass(*args_, **kwargs_)
else:
return action_widgetType(*args_, **kwargs_)
factory = staticmethod(factory)
def get_response(self): return self.response
def set_response(self, response): self.response = response
def get_valueOf_(self): return self.valueOf_
def set_valueOf_(self, valueOf_): self.valueOf_ = valueOf_
def hasContent_(self):
if (
1 if type(self.valueOf_) in [int,float] else self.valueOf_
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='action-widgetType', namespacedef_='', pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='action-widgetType')
if self.hasContent_():
outfile.write('>')
outfile.write((quote_xml(self.valueOf_) if type(self.valueOf_) is str else self.gds_encode(str(self.valueOf_))))
self.exportChildren(outfile, level + 1, namespace_='', name_='action-widgetType', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='action-widgetType'):
if self.response is not None and 'response' not in already_processed:
already_processed.add('response')
outfile.write(' response="%s"' % self.gds_format_integer(self.response, input_name='response'))
def exportChildren(self, outfile, level, namespace_='', name_='action-widgetType', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
self.valueOf_ = get_all_text_(node)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('response', node)
if value is not None and 'response' not in already_processed:
already_processed.add('response')
try:
self.response = int(value)
except ValueError as exp:
raise_parse_error(node, 'Bad integer attribute: %s' % exp)
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class action_widgetType
GDSClassesMapping = {
'accel-groups': accel_groupsType,
'accelerator': acceleratorType,
'action-widget': action_widgetType,
'action-widgets': action_widgetsType,
'child': childType16,
'group': groupType,
'object': objectType13,
'packing': packingType,
'property': propertyType22,
'requires': requiresType,
'signal': signalType15,
}
USAGE_TEXT = """
Usage:
python forgui.py <fully qualified path of the base .glade file> -c <fully qualified path of the base .dictionary file>
python forgui.py <fully qualified path of the base .glade file> -d <fully qualified path of the translated language .dictionary file>
Example:
Step1: Create a base dictionary file from the interface described by generateds_gui.glade:
>>>python forgui.py d:\Python33\Scripts\generateds_gui.glade -c d:\Python33\Scripts\en.dictionary
Step2: Open the resulting file en.dictionary, translate the entries into Russian, and save the result as rus.dictionary.
<English key> [<-|->] <Language value>
... User methods module:<-|->Пользовательский модуль методов:
... Validator bodies path:<-|->Путь корпусов контрольного устройства:
... _Capture CL<-|->_Capture CL
... _File<-|->_Файл
... _Generate<-|->_Произвести
... _Help<-|->_Помощь
... _Tools<-|->_Инструменты
............................
Step3: Create a new .glade file from generateds_gui.glade using the dictionary rus.dictionary; the output file is generateds_gui_rus.glade:
>>>python forgui.py d:\Python33\Scripts\generateds_gui.glade -d d:\Python33\Scripts\rus.dictionary
"""
def usage():
print(USAGE_TEXT)
sys.exit(1)
def get_root_tag(node):
tag = Tag_pattern_.match(node.tag).groups()[-1]
rootClass = GDSClassesMapping.get(tag)
if rootClass is None:
rootClass = globals().get(tag)
return tag, rootClass
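The `Tag_pattern_.match(child.tag).groups()[-1]` idiom used throughout `build` and `get_root_tag` strips an optional `{namespace}` qualifier from an lxml tag. A minimal, self-contained sketch of that step (the `local_name` helper is an illustrative assumption; the pattern literal mirrors the shape typically emitted by generateDS):

```python
import re

# Optional '{...}' namespace qualifier followed by the local tag name,
# the same shape as the Tag_pattern_ referenced above.
tag_pattern = re.compile(r'({.*})?(.*)')

def local_name(tag):
    # groups()[-1] is the tag name with any namespace qualifier removed.
    return tag_pattern.match(tag).groups()[-1]
```

For example, `local_name('{http://www.gtk.org}interface')` yields `'interface'`, while an unqualified tag passes through unchanged.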
def parse(inFileName, silence=False):
parser = None
doc = parsexml_(inFileName, parser)
rootNode = doc.getroot()
rootTag, rootClass = get_root_tag(rootNode)
if rootClass is None:
rootTag = 'interface'
rootClass = interface
rootObj = rootClass.factory()
rootObj.build(rootNode)
# Enable Python to collect the space used by the DOM.
doc = None
if not silence:
sys.stdout.write('<?xml version="1.0" ?>\n')
rootObj.export(
sys.stdout, 0, name_=rootTag,
namespacedef_='',
pretty_print=True)
return rootObj
def parseEtree(inFileName, silence=False):
parser = None
doc = parsexml_(inFileName, parser)
rootNode = doc.getroot()
rootTag, rootClass = get_root_tag(rootNode)
if rootClass is None:
rootTag = 'interface'
rootClass = interface
rootObj = rootClass.factory()
rootObj.build(rootNode)
# Enable Python to collect the space used by the DOM.
doc = None
mapping = {}
rootElement = rootObj.to_etree(None, name_=rootTag, mapping_=mapping)
reverse_mapping = rootObj.gds_reverse_node_mapping(mapping)
if not silence:
content = etree_.tostring(
rootElement, pretty_print=True,
xml_declaration=True, encoding="utf-8")
sys.stdout.write(content.decode('utf-8'))
sys.stdout.write('\n')
return rootObj, rootElement, mapping, reverse_mapping
def parseString(inString, silence=False):
from io import StringIO
parser = None
doc = parsexml_(StringIO(inString), parser)
rootNode = doc.getroot()
rootTag, rootClass = get_root_tag(rootNode)
if rootClass is None:
rootTag = 'interface'
rootClass = interface
rootObj = rootClass.factory()
rootObj.build(rootNode)
# Enable Python to collect the space used by the DOM.
doc = None
if not silence:
sys.stdout.write('<?xml version="1.0" ?>\n')
rootObj.export(
sys.stdout, 0, name_=rootTag,
namespacedef_='')
return rootObj
def parseLiteral(inFileName, silence=False):
parser = None
doc = parsexml_(inFileName, parser)
rootNode = doc.getroot()
rootTag, rootClass = get_root_tag(rootNode)
if rootClass is None:
rootTag = 'interface'
rootClass = interface
rootObj = rootClass.factory()
rootObj.build(rootNode)
# Enable Python to collect the space used by the DOM.
doc = None
if not silence:
sys.stdout.write('#from forGUI import *\n\n')
sys.stdout.write('import forGUI as model_\n\n')
sys.stdout.write('rootObj = model_.rootClass(\n')
rootObj.exportLiteral(sys.stdout, 0, name_=rootTag)
sys.stdout.write(')\n')
return rootObj
def main():
args = sys.argv[1:]
if len(args) >= 3:
# python forgui.py d:\Python33\Scripts\generateds_gui.glade -d d:\Python33\Scripts\rus.dictionary
if args[1] == '-d':
global LIST_HELPNAMES
LIST_HELPNAMES = read_dictfiles(args)
root = parse(args[0], silence=True)
path_glade, filename_glade = os.path.split(args[0])
basename_glade, extension_glade = os.path.splitext(filename_glade)
path_dictionary, filename_dictionary = os.path.split(args[2])
export_file_name = os.path.join(path_glade, basename_glade+'_'+os.path.splitext(filename_dictionary)[0]+extension_glade)
export_file = open(export_file_name, 'w', encoding='utf-8')
#export_file = sys.stdout
export_file.write('<?xml version="1.0" encoding="UTF-8"?>\n')
root.export(export_file, 0)
export_file.close()
# python forgui.py d:\Python33\Scripts\generateds_gui.glade -c d:\Python33\Scripts\en.dictionary
elif args[1] == '-c':
parse(args[0], silence=False)
create_dictfiles(args)
else:
usage()
if __name__ == '__main__':
#import pdb; pdb.set_trace()
main()
__all__ = [
"accel_groupsType",
"acceleratorType",
"action_widgetType",
"action_widgetsType",
"childType",
"childType12",
"childType16",
"childType3",
"childType6",
"childType9",
"groupType",
"interface",
"objectType",
"objectType1",
"objectType10",
"objectType13",
"objectType4",
"objectType7",
"packingType",
"packingType19",
"packingType21",
"propertyType",
"propertyType11",
"propertyType14",
"propertyType17",
"propertyType2",
"propertyType20",
"propertyType22",
"propertyType5",
"propertyType8",
"requiresType",
"signalType",
"signalType15",
"signalType18"
]
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['JobTemplateArgs', 'JobTemplate']
@pulumi.input_type
class JobTemplateArgs:
def __init__(__self__, *,
command_line: pulumi.Input[str],
job_template_name: pulumi.Input[str],
array_request: Optional[pulumi.Input[str]] = None,
clock_time: Optional[pulumi.Input[str]] = None,
gpu: Optional[pulumi.Input[int]] = None,
mem: Optional[pulumi.Input[str]] = None,
node: Optional[pulumi.Input[int]] = None,
package_path: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[int]] = None,
queue: Optional[pulumi.Input[str]] = None,
re_runable: Optional[pulumi.Input[bool]] = None,
runas_user: Optional[pulumi.Input[str]] = None,
stderr_redirect_path: Optional[pulumi.Input[str]] = None,
stdout_redirect_path: Optional[pulumi.Input[str]] = None,
task: Optional[pulumi.Input[int]] = None,
thread: Optional[pulumi.Input[int]] = None,
variables: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a JobTemplate resource.
:param pulumi.Input[str] command_line: The job command.
:param pulumi.Input[str] job_template_name: The name of the job template.
:param pulumi.Input[str] array_request: The job array specification, in the form: 1-10:2.
:param pulumi.Input[str] clock_time: The maximum run time of the job.
:param pulumi.Input[int] gpu: The number of GPUs required on a single compute node. Possible values: 1~20000.
:param pulumi.Input[str] mem: The maximum memory required on a single compute node.
:param pulumi.Input[int] node: The number of compute nodes required to run the job. Possible values: 1~5000.
:param pulumi.Input[str] package_path: The directory that contains the job command.
:param pulumi.Input[int] priority: The job priority.
:param pulumi.Input[str] queue: The job queue.
:param pulumi.Input[bool] re_runable: Whether the job can be rerun.
:param pulumi.Input[str] runas_user: The name of the user who performed the job.
:param pulumi.Input[str] stderr_redirect_path: The standard error output path.
:param pulumi.Input[str] stdout_redirect_path: The standard output path.
:param pulumi.Input[int] task: The number of tasks required on a single compute node. Possible values: 1~20000.
:param pulumi.Input[int] thread: The number of threads required for each task.
:param pulumi.Input[str] variables: The environment variables of the job.
"""
pulumi.set(__self__, "command_line", command_line)
pulumi.set(__self__, "job_template_name", job_template_name)
if array_request is not None:
pulumi.set(__self__, "array_request", array_request)
if clock_time is not None:
pulumi.set(__self__, "clock_time", clock_time)
if gpu is not None:
pulumi.set(__self__, "gpu", gpu)
if mem is not None:
pulumi.set(__self__, "mem", mem)
if node is not None:
pulumi.set(__self__, "node", node)
if package_path is not None:
pulumi.set(__self__, "package_path", package_path)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if queue is not None:
pulumi.set(__self__, "queue", queue)
if re_runable is not None:
pulumi.set(__self__, "re_runable", re_runable)
if runas_user is not None:
pulumi.set(__self__, "runas_user", runas_user)
if stderr_redirect_path is not None:
pulumi.set(__self__, "stderr_redirect_path", stderr_redirect_path)
if stdout_redirect_path is not None:
pulumi.set(__self__, "stdout_redirect_path", stdout_redirect_path)
if task is not None:
pulumi.set(__self__, "task", task)
if thread is not None:
pulumi.set(__self__, "thread", thread)
if variables is not None:
pulumi.set(__self__, "variables", variables)
@property
@pulumi.getter(name="commandLine")
def command_line(self) -> pulumi.Input[str]:
"""
The job command.
"""
return pulumi.get(self, "command_line")
@command_line.setter
def command_line(self, value: pulumi.Input[str]):
pulumi.set(self, "command_line", value)
@property
@pulumi.getter(name="jobTemplateName")
def job_template_name(self) -> pulumi.Input[str]:
"""
The name of the job template.
"""
return pulumi.get(self, "job_template_name")
@job_template_name.setter
def job_template_name(self, value: pulumi.Input[str]):
pulumi.set(self, "job_template_name", value)
@property
@pulumi.getter(name="arrayRequest")
def array_request(self) -> Optional[pulumi.Input[str]]:
"""
The job array specification, in the form: 1-10:2.
"""
return pulumi.get(self, "array_request")
@array_request.setter
def array_request(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "array_request", value)
@property
@pulumi.getter(name="clockTime")
def clock_time(self) -> Optional[pulumi.Input[str]]:
"""
The maximum run time of the job.
"""
return pulumi.get(self, "clock_time")
@clock_time.setter
def clock_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "clock_time", value)
@property
@pulumi.getter
def gpu(self) -> Optional[pulumi.Input[int]]:
"""
The number of GPUs required on a single compute node. Possible values: 1~20000.
"""
return pulumi.get(self, "gpu")
@gpu.setter
def gpu(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "gpu", value)
@property
@pulumi.getter
def mem(self) -> Optional[pulumi.Input[str]]:
"""
The maximum memory required on a single compute node.
"""
return pulumi.get(self, "mem")
@mem.setter
def mem(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mem", value)
@property
@pulumi.getter
def node(self) -> Optional[pulumi.Input[int]]:
"""
The number of compute nodes required to run the job. Possible values: 1~5000.
"""
return pulumi.get(self, "node")
@node.setter
def node(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "node", value)
@property
@pulumi.getter(name="packagePath")
def package_path(self) -> Optional[pulumi.Input[str]]:
"""
The directory that contains the job command.
"""
return pulumi.get(self, "package_path")
@package_path.setter
def package_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "package_path", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[int]]:
"""
The job priority.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter
def queue(self) -> Optional[pulumi.Input[str]]:
"""
The job queue.
"""
return pulumi.get(self, "queue")
@queue.setter
def queue(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "queue", value)
@property
@pulumi.getter(name="reRunable")
def re_runable(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the job can be rerun.
"""
return pulumi.get(self, "re_runable")
@re_runable.setter
def re_runable(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "re_runable", value)
@property
@pulumi.getter(name="runasUser")
def runas_user(self) -> Optional[pulumi.Input[str]]:
"""
The name of the user who performed the job.
"""
return pulumi.get(self, "runas_user")
@runas_user.setter
def runas_user(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "runas_user", value)
@property
@pulumi.getter(name="stderrRedirectPath")
def stderr_redirect_path(self) -> Optional[pulumi.Input[str]]:
"""
The standard error output path.
"""
return pulumi.get(self, "stderr_redirect_path")
@stderr_redirect_path.setter
def stderr_redirect_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stderr_redirect_path", value)
@property
@pulumi.getter(name="stdoutRedirectPath")
def stdout_redirect_path(self) -> Optional[pulumi.Input[str]]:
"""
The standard output path.
"""
return pulumi.get(self, "stdout_redirect_path")
@stdout_redirect_path.setter
def stdout_redirect_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stdout_redirect_path", value)
@property
@pulumi.getter
def task(self) -> Optional[pulumi.Input[int]]:
"""
The number of tasks required on a single compute node. Possible values: 1~20000.
"""
return pulumi.get(self, "task")
@task.setter
def task(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "task", value)
@property
@pulumi.getter
def thread(self) -> Optional[pulumi.Input[int]]:
"""
The number of threads required for each task.
"""
return pulumi.get(self, "thread")
@thread.setter
def thread(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "thread", value)
@property
@pulumi.getter
def variables(self) -> Optional[pulumi.Input[str]]:
"""
The environment variables of the job.
"""
return pulumi.get(self, "variables")
@variables.setter
def variables(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "variables", value)
@pulumi.input_type
class _JobTemplateState:
def __init__(__self__, *,
array_request: Optional[pulumi.Input[str]] = None,
clock_time: Optional[pulumi.Input[str]] = None,
command_line: Optional[pulumi.Input[str]] = None,
gpu: Optional[pulumi.Input[int]] = None,
job_template_name: Optional[pulumi.Input[str]] = None,
mem: Optional[pulumi.Input[str]] = None,
node: Optional[pulumi.Input[int]] = None,
package_path: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[int]] = None,
queue: Optional[pulumi.Input[str]] = None,
re_runable: Optional[pulumi.Input[bool]] = None,
runas_user: Optional[pulumi.Input[str]] = None,
stderr_redirect_path: Optional[pulumi.Input[str]] = None,
stdout_redirect_path: Optional[pulumi.Input[str]] = None,
task: Optional[pulumi.Input[int]] = None,
thread: Optional[pulumi.Input[int]] = None,
variables: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering JobTemplate resources.
:param pulumi.Input[str] array_request: The job array specification, in the form: 1-10:2.
:param pulumi.Input[str] clock_time: The maximum run time of the job.
:param pulumi.Input[str] command_line: The job command.
:param pulumi.Input[int] gpu: The number of GPUs required on a single compute node. Possible values: 1~20000.
:param pulumi.Input[str] job_template_name: The name of the job template.
:param pulumi.Input[str] mem: The maximum memory required on a single compute node.
:param pulumi.Input[int] node: The number of compute nodes required to run the job. Possible values: 1~5000.
:param pulumi.Input[str] package_path: The directory that contains the job command.
:param pulumi.Input[int] priority: The job priority.
:param pulumi.Input[str] queue: The job queue.
:param pulumi.Input[bool] re_runable: Whether the job can be rerun.
:param pulumi.Input[str] runas_user: The name of the user who performed the job.
:param pulumi.Input[str] stderr_redirect_path: The standard error output path.
:param pulumi.Input[str] stdout_redirect_path: The standard output path.
:param pulumi.Input[int] task: The number of tasks required on a single compute node. Possible values: 1~20000.
:param pulumi.Input[int] thread: The number of threads required for each task.
:param pulumi.Input[str] variables: The environment variables of the job.
"""
if array_request is not None:
pulumi.set(__self__, "array_request", array_request)
if clock_time is not None:
pulumi.set(__self__, "clock_time", clock_time)
if command_line is not None:
pulumi.set(__self__, "command_line", command_line)
if gpu is not None:
pulumi.set(__self__, "gpu", gpu)
if job_template_name is not None:
pulumi.set(__self__, "job_template_name", job_template_name)
if mem is not None:
pulumi.set(__self__, "mem", mem)
if node is not None:
pulumi.set(__self__, "node", node)
if package_path is not None:
pulumi.set(__self__, "package_path", package_path)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if queue is not None:
pulumi.set(__self__, "queue", queue)
if re_runable is not None:
pulumi.set(__self__, "re_runable", re_runable)
if runas_user is not None:
pulumi.set(__self__, "runas_user", runas_user)
if stderr_redirect_path is not None:
pulumi.set(__self__, "stderr_redirect_path", stderr_redirect_path)
if stdout_redirect_path is not None:
pulumi.set(__self__, "stdout_redirect_path", stdout_redirect_path)
if task is not None:
pulumi.set(__self__, "task", task)
if thread is not None:
pulumi.set(__self__, "thread", thread)
if variables is not None:
pulumi.set(__self__, "variables", variables)
@property
@pulumi.getter(name="arrayRequest")
def array_request(self) -> Optional[pulumi.Input[str]]:
"""
The array request of queued jobs, in the form `1-10:2`.
"""
return pulumi.get(self, "array_request")
@array_request.setter
def array_request(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "array_request", value)
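The `1-10:2` form documented for `array_request` above matches the PBS-style job-array convention `start-end[:step]`. Assuming that convention holds here, a minimal stdlib-only sketch of how such a spec expands into job indices (the helper name is hypothetical, not part of this provider):

```python
def expand_array_request(spec: str) -> list:
    """Expand a PBS-style array request such as '1-10:2' into job indices.

    Assumes the 'start-end[:step]' convention, so '1-10:2' yields
    [1, 3, 5, 7, 9] (inclusive range, step 2).
    """
    rng, _, step = spec.partition(":")       # '1-10:2' -> ('1-10', ':', '2')
    start, _, end = rng.partition("-")       # '1-10'   -> ('1', '-', '10')
    step_n = int(step) if step else 1        # missing ':step' means step 1
    return list(range(int(start), int(end) + 1, step_n))

print(expand_array_request("1-10:2"))  # [1, 3, 5, 7, 9]
```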
@property
@pulumi.getter(name="clockTime")
def clock_time(self) -> Optional[pulumi.Input[str]]:
"""
The maximum run time of the job.
"""
return pulumi.get(self, "clock_time")
@clock_time.setter
def clock_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "clock_time", value)
@property
@pulumi.getter(name="commandLine")
def command_line(self) -> Optional[pulumi.Input[str]]:
"""
The job commands.
"""
return pulumi.get(self, "command_line")
@command_line.setter
def command_line(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "command_line", value)
@property
@pulumi.getter
def gpu(self) -> Optional[pulumi.Input[int]]:
"""
The number of GPUs used by a single compute node. Possible Values: 1~20000.
"""
return pulumi.get(self, "gpu")
@gpu.setter
def gpu(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "gpu", value)
@property
@pulumi.getter(name="jobTemplateName")
def job_template_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the job template.
"""
return pulumi.get(self, "job_template_name")
@job_template_name.setter
def job_template_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "job_template_name", value)
@property
@pulumi.getter
def mem(self) -> Optional[pulumi.Input[str]]:
"""
The maximum memory of a single compute node.
"""
return pulumi.get(self, "mem")
@mem.setter
def mem(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mem", value)
@property
@pulumi.getter
def node(self) -> Optional[pulumi.Input[int]]:
"""
The number of compute nodes required to submit the job. Possible Values: 1~5000.
"""
return pulumi.get(self, "node")
@node.setter
def node(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "node", value)
@property
@pulumi.getter(name="packagePath")
def package_path(self) -> Optional[pulumi.Input[str]]:
"""
The directory of the job commands.
"""
return pulumi.get(self, "package_path")
@package_path.setter
def package_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "package_path", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[int]]:
"""
The priority of the job.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter
def queue(self) -> Optional[pulumi.Input[str]]:
"""
The queue of the job.
"""
return pulumi.get(self, "queue")
@queue.setter
def queue(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "queue", value)
@property
@pulumi.getter(name="reRunable")
def re_runable(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the job supports re-running.
"""
return pulumi.get(self, "re_runable")
@re_runable.setter
def re_runable(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "re_runable", value)
@property
@pulumi.getter(name="runasUser")
def runas_user(self) -> Optional[pulumi.Input[str]]:
"""
The name of the user who performed the job.
"""
return pulumi.get(self, "runas_user")
@runas_user.setter
def runas_user(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "runas_user", value)
@property
@pulumi.getter(name="stderrRedirectPath")
def stderr_redirect_path(self) -> Optional[pulumi.Input[str]]:
"""
The standard error output path.
"""
return pulumi.get(self, "stderr_redirect_path")
@stderr_redirect_path.setter
def stderr_redirect_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stderr_redirect_path", value)
@property
@pulumi.getter(name="stdoutRedirectPath")
def stdout_redirect_path(self) -> Optional[pulumi.Input[str]]:
"""
The standard output path.
"""
return pulumi.get(self, "stdout_redirect_path")
@stdout_redirect_path.setter
def stdout_redirect_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stdout_redirect_path", value)
@property
@pulumi.getter
def task(self) -> Optional[pulumi.Input[int]]:
"""
The number of tasks required by a single compute node. Possible Values: 1~20000.
"""
return pulumi.get(self, "task")
@task.setter
def task(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "task", value)
@property
@pulumi.getter
def thread(self) -> Optional[pulumi.Input[int]]:
"""
The number of threads required by a single task.
"""
return pulumi.get(self, "thread")
@thread.setter
def thread(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "thread", value)
@property
@pulumi.getter
def variables(self) -> Optional[pulumi.Input[str]]:
"""
The environment variables of the job.
"""
return pulumi.get(self, "variables")
@variables.setter
def variables(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "variables", value)
class JobTemplate(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
array_request: Optional[pulumi.Input[str]] = None,
clock_time: Optional[pulumi.Input[str]] = None,
command_line: Optional[pulumi.Input[str]] = None,
gpu: Optional[pulumi.Input[int]] = None,
job_template_name: Optional[pulumi.Input[str]] = None,
mem: Optional[pulumi.Input[str]] = None,
node: Optional[pulumi.Input[int]] = None,
package_path: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[int]] = None,
queue: Optional[pulumi.Input[str]] = None,
re_runable: Optional[pulumi.Input[bool]] = None,
runas_user: Optional[pulumi.Input[str]] = None,
stderr_redirect_path: Optional[pulumi.Input[str]] = None,
stdout_redirect_path: Optional[pulumi.Input[str]] = None,
task: Optional[pulumi.Input[int]] = None,
thread: Optional[pulumi.Input[int]] = None,
variables: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an Ehpc Job Template resource.
For information about Ehpc Job Template and how to use it, see [What is Job Template](https://www.alibabacloud.com/help/product/57664.html).
> **NOTE:** Available in v1.133.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
default = alicloud.ehpc.JobTemplate("default",
command_line="./LammpsTest/lammps.pbs",
job_template_name="example_value")
```
## Import
Ehpc Job Template can be imported using the id, e.g.
```sh
$ pulumi import alicloud:ehpc/jobTemplate:JobTemplate example <id>
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] array_request: The array request of queued jobs, in the form `1-10:2`.
:param pulumi.Input[str] clock_time: The maximum run time of the job.
:param pulumi.Input[str] command_line: The job commands.
:param pulumi.Input[int] gpu: The number of GPUs used by a single compute node. Possible Values: 1~20000.
:param pulumi.Input[str] job_template_name: The name of the job template.
:param pulumi.Input[str] mem: The maximum memory of a single compute node.
:param pulumi.Input[int] node: The number of compute nodes required to submit the job. Possible Values: 1~5000.
:param pulumi.Input[str] package_path: The directory of the job commands.
:param pulumi.Input[int] priority: The priority of the job.
:param pulumi.Input[str] queue: The queue of the job.
:param pulumi.Input[bool] re_runable: Whether the job supports re-running.
:param pulumi.Input[str] runas_user: The name of the user who performed the job.
:param pulumi.Input[str] stderr_redirect_path: The standard error output path.
:param pulumi.Input[str] stdout_redirect_path: The standard output path.
:param pulumi.Input[int] task: The number of tasks required by a single compute node. Possible Values: 1~20000.
:param pulumi.Input[int] thread: The number of threads required by a single task.
:param pulumi.Input[str] variables: The environment variables of the job.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: JobTemplateArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an Ehpc Job Template resource.
For information about Ehpc Job Template and how to use it, see [What is Job Template](https://www.alibabacloud.com/help/product/57664.html).
> **NOTE:** Available in v1.133.0+.
## Example Usage
Basic Usage
```python
import pulumi
import pulumi_alicloud as alicloud
default = alicloud.ehpc.JobTemplate("default",
command_line="./LammpsTest/lammps.pbs",
job_template_name="example_value")
```
## Import
Ehpc Job Template can be imported using the id, e.g.
```sh
$ pulumi import alicloud:ehpc/jobTemplate:JobTemplate example <id>
```
:param str resource_name: The name of the resource.
:param JobTemplateArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(JobTemplateArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
array_request: Optional[pulumi.Input[str]] = None,
clock_time: Optional[pulumi.Input[str]] = None,
command_line: Optional[pulumi.Input[str]] = None,
gpu: Optional[pulumi.Input[int]] = None,
job_template_name: Optional[pulumi.Input[str]] = None,
mem: Optional[pulumi.Input[str]] = None,
node: Optional[pulumi.Input[int]] = None,
package_path: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[int]] = None,
queue: Optional[pulumi.Input[str]] = None,
re_runable: Optional[pulumi.Input[bool]] = None,
runas_user: Optional[pulumi.Input[str]] = None,
stderr_redirect_path: Optional[pulumi.Input[str]] = None,
stdout_redirect_path: Optional[pulumi.Input[str]] = None,
task: Optional[pulumi.Input[int]] = None,
thread: Optional[pulumi.Input[int]] = None,
variables: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = JobTemplateArgs.__new__(JobTemplateArgs)
__props__.__dict__["array_request"] = array_request
__props__.__dict__["clock_time"] = clock_time
if command_line is None and not opts.urn:
raise TypeError("Missing required property 'command_line'")
__props__.__dict__["command_line"] = command_line
__props__.__dict__["gpu"] = gpu
if job_template_name is None and not opts.urn:
raise TypeError("Missing required property 'job_template_name'")
__props__.__dict__["job_template_name"] = job_template_name
__props__.__dict__["mem"] = mem
__props__.__dict__["node"] = node
__props__.__dict__["package_path"] = package_path
__props__.__dict__["priority"] = priority
__props__.__dict__["queue"] = queue
__props__.__dict__["re_runable"] = re_runable
__props__.__dict__["runas_user"] = runas_user
__props__.__dict__["stderr_redirect_path"] = stderr_redirect_path
__props__.__dict__["stdout_redirect_path"] = stdout_redirect_path
__props__.__dict__["task"] = task
__props__.__dict__["thread"] = thread
__props__.__dict__["variables"] = variables
super(JobTemplate, __self__).__init__(
'alicloud:ehpc/jobTemplate:JobTemplate',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
array_request: Optional[pulumi.Input[str]] = None,
clock_time: Optional[pulumi.Input[str]] = None,
command_line: Optional[pulumi.Input[str]] = None,
gpu: Optional[pulumi.Input[int]] = None,
job_template_name: Optional[pulumi.Input[str]] = None,
mem: Optional[pulumi.Input[str]] = None,
node: Optional[pulumi.Input[int]] = None,
package_path: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[int]] = None,
queue: Optional[pulumi.Input[str]] = None,
re_runable: Optional[pulumi.Input[bool]] = None,
runas_user: Optional[pulumi.Input[str]] = None,
stderr_redirect_path: Optional[pulumi.Input[str]] = None,
stdout_redirect_path: Optional[pulumi.Input[str]] = None,
task: Optional[pulumi.Input[int]] = None,
thread: Optional[pulumi.Input[int]] = None,
variables: Optional[pulumi.Input[str]] = None) -> 'JobTemplate':
"""
Get an existing JobTemplate resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] array_request: The array request of queued jobs, in the form `1-10:2`.
:param pulumi.Input[str] clock_time: The maximum run time of the job.
:param pulumi.Input[str] command_line: The job commands.
:param pulumi.Input[int] gpu: The number of GPUs used by a single compute node. Possible Values: 1~20000.
:param pulumi.Input[str] job_template_name: The name of the job template.
:param pulumi.Input[str] mem: The maximum memory of a single compute node.
:param pulumi.Input[int] node: The number of compute nodes required to submit the job. Possible Values: 1~5000.
:param pulumi.Input[str] package_path: The directory of the job commands.
:param pulumi.Input[int] priority: The priority of the job.
:param pulumi.Input[str] queue: The queue of the job.
:param pulumi.Input[bool] re_runable: Whether the job supports re-running.
:param pulumi.Input[str] runas_user: The name of the user who performed the job.
:param pulumi.Input[str] stderr_redirect_path: The standard error output path.
:param pulumi.Input[str] stdout_redirect_path: The standard output path.
:param pulumi.Input[int] task: The number of tasks required by a single compute node. Possible Values: 1~20000.
:param pulumi.Input[int] thread: The number of threads required by a single task.
:param pulumi.Input[str] variables: The environment variables of the job.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _JobTemplateState.__new__(_JobTemplateState)
__props__.__dict__["array_request"] = array_request
__props__.__dict__["clock_time"] = clock_time
__props__.__dict__["command_line"] = command_line
__props__.__dict__["gpu"] = gpu
__props__.__dict__["job_template_name"] = job_template_name
__props__.__dict__["mem"] = mem
__props__.__dict__["node"] = node
__props__.__dict__["package_path"] = package_path
__props__.__dict__["priority"] = priority
__props__.__dict__["queue"] = queue
__props__.__dict__["re_runable"] = re_runable
__props__.__dict__["runas_user"] = runas_user
__props__.__dict__["stderr_redirect_path"] = stderr_redirect_path
__props__.__dict__["stdout_redirect_path"] = stdout_redirect_path
__props__.__dict__["task"] = task
__props__.__dict__["thread"] = thread
__props__.__dict__["variables"] = variables
return JobTemplate(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="arrayRequest")
def array_request(self) -> pulumi.Output[Optional[str]]:
"""
The array request of queued jobs, in the form `1-10:2`.
"""
return pulumi.get(self, "array_request")
@property
@pulumi.getter(name="clockTime")
def clock_time(self) -> pulumi.Output[Optional[str]]:
"""
The maximum run time of the job.
"""
return pulumi.get(self, "clock_time")
@property
@pulumi.getter(name="commandLine")
def command_line(self) -> pulumi.Output[str]:
"""
The job commands.
"""
return pulumi.get(self, "command_line")
@property
@pulumi.getter
def gpu(self) -> pulumi.Output[Optional[int]]:
"""
The number of GPUs used by a single compute node. Possible Values: 1~20000.
"""
return pulumi.get(self, "gpu")
@property
@pulumi.getter(name="jobTemplateName")
def job_template_name(self) -> pulumi.Output[str]:
"""
The name of the job template.
"""
return pulumi.get(self, "job_template_name")
@property
@pulumi.getter
def mem(self) -> pulumi.Output[Optional[str]]:
"""
The maximum memory of a single compute node.
"""
return pulumi.get(self, "mem")
@property
@pulumi.getter
def node(self) -> pulumi.Output[Optional[int]]:
"""
The number of compute nodes required to submit the job. Possible Values: 1~5000.
"""
return pulumi.get(self, "node")
@property
@pulumi.getter(name="packagePath")
def package_path(self) -> pulumi.Output[Optional[str]]:
"""
The directory of the job commands.
"""
return pulumi.get(self, "package_path")
@property
@pulumi.getter
def priority(self) -> pulumi.Output[Optional[int]]:
"""
The priority of the job.
"""
return pulumi.get(self, "priority")
@property
@pulumi.getter
def queue(self) -> pulumi.Output[Optional[str]]:
"""
The queue of the job.
"""
return pulumi.get(self, "queue")
@property
@pulumi.getter(name="reRunable")
def re_runable(self) -> pulumi.Output[bool]:
"""
Whether the job supports re-running.
"""
return pulumi.get(self, "re_runable")
@property
@pulumi.getter(name="runasUser")
def runas_user(self) -> pulumi.Output[Optional[str]]:
"""
The name of the user who performed the job.
"""
return pulumi.get(self, "runas_user")
@property
@pulumi.getter(name="stderrRedirectPath")
def stderr_redirect_path(self) -> pulumi.Output[Optional[str]]:
"""
The standard error output path.
"""
return pulumi.get(self, "stderr_redirect_path")
@property
@pulumi.getter(name="stdoutRedirectPath")
def stdout_redirect_path(self) -> pulumi.Output[Optional[str]]:
"""
The standard output path.
"""
return pulumi.get(self, "stdout_redirect_path")
@property
@pulumi.getter
def task(self) -> pulumi.Output[Optional[int]]:
"""
The number of tasks required by a single compute node. Possible Values: 1~20000.
"""
return pulumi.get(self, "task")
@property
@pulumi.getter
def thread(self) -> pulumi.Output[Optional[int]]:
"""
The number of threads required by a single task.
"""
return pulumi.get(self, "thread")
@property
@pulumi.getter
def variables(self) -> pulumi.Output[Optional[str]]:
"""
The environment variables of the job.
"""
return pulumi.get(self, "variables")
# MITx 6.00.1x Intro CS/intFloatTest.py (repo: LeafDragon/Pyhton-Practice, license: MIT)
# Python test of whether a string represents an int or a float.
>>> h = "1.1"
>>> int(h) if "." not in h else float(h)
1.1
>>> h = "1"
>>> int(h) if "." not in h else float(h)
1
>>> float(h) if float(h) % 1 else int(h)
1
>>> h = "1.1"
>>> float(h) if float(h) % 1 else int(h)
1.1
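The two REPL experiments above can be folded into one reusable helper. This sketch uses the second approach from the transcript (testing the numeric value rather than searching for a `.`), which also maps a string like `"1.0"` to the int `1`; the function name is ours:

```python
def to_number(s: str):
    """Parse s as an int when it has no fractional part, else as a float."""
    value = float(s)
    # value % 1 is 0.0 for whole numbers, so they come back as ints
    return int(value) if value % 1 == 0 else value

print(to_number("1.1"))  # 1.1
print(to_number("1"))    # 1
```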
# eod/tasks/det/plugins/__init__.py (repo: scott-mao/EOD, license: Apache-2.0)
from .yolox import *  # noqa
from .yolov5 import * # noqa
from .efl import * # noqa
# pygsti/modelpacks/smq2Q_XXII.py (repo: colibri-coruscans/pyGSTi, license: Apache-2.0)
"""
] | 41 | 2016-03-15T19:32:07.000Z | 2022-02-16T10:22:05.000Z | """
A standard multi-qubit gate set module.
Variables for working with the 2-qubit model containing the gates
I*I, I*X(pi/2), I*Y(pi/2), X(pi/2)*I, Y(pi/2)*I, and X(pi/2)*X(pi/2)
"""
#***************************************************************************************************
# Copyright 2015, 2019 National Technology & Engineering Solutions of Sandia, LLC (NTESS).
# Under the terms of Contract DE-NA0003525 with NTESS, the U.S. Government retains certain rights
# in this software.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0 or in the LICENSE file in the root pyGSTi directory.
#***************************************************************************************************
from pygsti.modelpacks._modelpack import GSTModelPack
class _Module(GSTModelPack):
description = "I*I, I*X(pi/2), I*Y(pi/2), X(pi/2)*I, Y(pi/2)*I, and X(pi/2)*X(pi/2) gates"
gates = [(), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))]
_sslbls = (0, 1)
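The germ and fiducial lists that follow use nested-tuple circuit notation: an empty tuple `()` is an idle layer, a pair such as `('Gxpi2', 1)` applies X(pi/2) to qubit 1, and a nested tuple such as `(('Gxpi2', 0), ('Gxpi2', 1))` applies both gates in parallel in one layer. A small stdlib-only decoder for reading these tuples (illustrative only, not part of the pyGSTi API):

```python
def layer_to_str(layer):
    """Render one circuit layer from the nested-tuple notation."""
    if layer == ():
        return "[idle]"
    if isinstance(layer[0], str):          # single gate, e.g. ('Gxpi2', 1)
        name, qubit = layer
        return "%s:%d" % (name, qubit)
    # parallel gates in one layer, e.g. (('Gxpi2', 0), ('Gxpi2', 1))
    return "[" + " ".join(layer_to_str(g) for g in layer) + "]"

def germ_to_str(germ):
    """Render a germ (a tuple of layers) as a readable circuit string."""
    return " | ".join(layer_to_str(layer) for layer in germ)

print(germ_to_str(((), ('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)))))
# [idle] | Gxpi2:1 | [Gxpi2:0 Gxpi2:1]
```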
_germs = [((), ), (('Gxpi2', 0), ), (('Gypi2', 0), ), (('Gxpi2', 1), ), (('Gypi2', 1), ), ((('Gxpi2', 0), ('Gxpi2', 1)), ),
(('Gxpi2', 0), ('Gypi2', 0)), (('Gxpi2', 1), ('Gypi2', 1)), (('Gypi2', 1), ('Gypi2', 0)), (('Gxpi2', 1), ('Gxpi2', 0)),
(('Gxpi2', 1), ('Gypi2', 0)), (('Gypi2', 1), ('Gxpi2', 0)), ((), ('Gxpi2', 1)), ((), ('Gypi2', 1)), ((), ('Gypi2', 0)),
(('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))), (('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))), (('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 0)),
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1)), (('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0)), (('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1)),
(('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 0)), (('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0)), (('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 1)),
(('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 0)), (('Gxpi2', 1), ('Gypi2', 0), ('Gypi2', 1)), (('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 0)),
(('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0)), (('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 0)), (('Gxpi2', 0), ('Gypi2', 0), ()),
(('Gxpi2', 0), (), ('Gypi2', 0)), (('Gxpi2', 0), (), ()), (('Gypi2', 0), (), ()), (('Gxpi2', 1), ('Gypi2', 1), ()),
(('Gxpi2', 1), (), ('Gypi2', 1)), (('Gxpi2', 1), (), ()), (('Gypi2', 1), (), ()), ((), ('Gxpi2', 1), ('Gypi2', 0)),
((), ('Gypi2', 1), ('Gypi2', 0)), ((), ('Gypi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))),
(('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1))), (('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1))),
(('Gypi2', 1), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))), (('Gxpi2', 1), ('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))),
(('Gxpi2', 1), ('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))), (('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 0)),
((), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1))), (('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 1)),
(('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 0)), (('Gxpi2', 0), ('Gxpi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))),
(('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 1)), (('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0)),
(('Gxpi2', 1), ('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1)), (('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 0)),
(('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 1), ('Gypi2', 0)), (('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1), ('Gxpi2', 1)),
(('Gxpi2', 0), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0), ()),
(('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1), ()), (('Gypi2', 1), ('Gxpi2', 0), (), ()), (('Gxpi2', 1), (), (), ('Gxpi2', 0)),
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 0), ('Gypi2', 0)), (('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 0), ('Gypi2', 1)),
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 1), ('Gxpi2', 0)),
(('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 1)),
(('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0), ('Gypi2', 1)),
(('Gypi2', 1), ('Gxpi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0)),
(('Gypi2', 0), ('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)),
(('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 0)),
(('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0), ('Gxpi2', 0)),
(('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 1)),
(('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0)),
(('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)),
(('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0)),
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1)),
(('Gypi2', 0), ('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1)),
(('Gypi2', 0), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1)),
(('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0)),
(('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 1), ('Gxpi2', 0), ('Gxpi2', 0)),
(('Gypi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 1), ('Gypi2', 1)),
(('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 1)),
(('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 1)),
(('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 0)),
(('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 1)),
((), ('Gypi2', 0), ('Gxpi2', 0), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))),
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 0), ('Gxpi2', 1), (), ('Gypi2', 1)),
(('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 1)),
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0))]
_germs_lite = [((), ), (('Gxpi2', 0), ), (('Gypi2', 0), ), (('Gxpi2', 1), ), (('Gypi2', 1), ), ((('Gxpi2', 0), ('Gxpi2', 1)), ),
(('Gxpi2', 0), ('Gypi2', 0)), (('Gxpi2', 1), ('Gypi2', 1)), (('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 0)),
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1)),
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 1), ('Gxpi2', 0)),
(('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0)),
(('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)),
(('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 1))]
_fiducials = [(), (('Gxpi2', 1), ), (('Gypi2', 1), ), (('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0), ), (('Gxpi2', 0), ('Gxpi2', 1)),
(('Gxpi2', 0), ('Gypi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)), (('Gypi2', 0), ), (('Gypi2', 0), ('Gxpi2', 1)),
(('Gypi2', 0), ('Gypi2', 1)), (('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 0)),
(('Gxpi2', 0), ('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 1)),
(('Gxpi2', 0), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 1))]
_prepfiducials = [(), (('Gxpi2', 1), ), (('Gypi2', 1), ), (('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0), ), (('Gxpi2', 0), ('Gxpi2', 1)),
(('Gxpi2', 0), ('Gypi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)), (('Gypi2', 0), ), (('Gypi2', 0), ('Gxpi2', 1)),
(('Gypi2', 0), ('Gypi2', 1)), (('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 0)),
(('Gxpi2', 0), ('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 1)),
(('Gxpi2', 0), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 1))]
_measfiducials = [(), (('Gxpi2', 1), ), (('Gypi2', 1), ), (('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0), ), (('Gypi2', 0), ),
(('Gxpi2', 0), ('Gxpi2', 0)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gypi2', 1)), (('Gypi2', 0), ('Gxpi2', 1)),
(('Gypi2', 0), ('Gypi2', 1))]
global_fidpairs = [(0, 5), (0, 8), (1, 2), (1, 5), (2, 7), (2, 8), (3, 4), (3, 9), (3, 10), (4, 5), (5, 0), (5, 2),
(5, 7), (6, 5), (6, 7), (6, 8), (6, 10), (7, 7), (8, 2), (8, 9), (8, 10), (9, 4),
(9, 9), (10, 6), (10, 7), (11, 3), (11, 4), (12, 4), (13, 3), (14, 1), (14, 2), (15, 1), (15, 4),
(15, 7)]
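The `(i, j)` entries in `global_fidpairs` above are index pairs into the fiducial lists: under the usual fiducial-pair-reduction convention (an assumption here, not stated in this file), `i` selects a preparation fiducial from `_prepfiducials` and `j` a measurement fiducial from `_measfiducials`. A minimal, self-contained sketch of that lookup, using truncated copies of the two lists:

```python
# Truncated copies of the fiducial lists defined above (illustrative only).
prep_fiducials = [(), (('Gxpi2', 1),), (('Gypi2', 1),),
                  (('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0),),
                  (('Gxpi2', 0), ('Gxpi2', 1))]
meas_fiducials = [(), (('Gxpi2', 1),), (('Gypi2', 1),),
                  (('Gxpi2', 1), ('Gxpi2', 1)), (('Gxpi2', 0),),
                  (('Gypi2', 0),)]

def fidpair_circuits(fidpairs, preps, meass):
    """Map (prep_index, meas_index) pairs to (prep, meas) circuit tuples."""
    return [(preps[i], meass[j]) for i, j in fidpairs]

# A few pairs taken from global_fidpairs whose indices fit the truncated lists.
pairs = [(0, 5), (1, 2), (4, 5), (5, 0)]
circuits = fidpair_circuits(pairs, prep_fiducials, meas_fiducials)
```

This is not pyGSTi API; it only illustrates how the integer pairs reference the circuit tuples defined in this module.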
_pergerm_fidpairsdict = {
(('Gxpi2', 1), ): [(0, 5), (1, 0), (1, 1), (2, 2), (2, 5), (2, 9), (3, 3), (3, 4), (3, 8), (4, 0), (4, 2), (4, 7),
(4, 8), (4, 10), (5, 0), (5, 1), (5, 2), (5, 6), (5, 8), (6, 7), (6, 8), (6, 9), (7, 0), (7, 4),
(8, 5), (8, 9), (9, 5), (10, 8), (10, 10), (12, 2), (12, 4), (12, 7), (13, 2), (13, 3), (13, 9),
(14, 0), (14, 5), (14, 6), (15, 5), (15, 8), (15, 9)],
(('Gypi2', 0), ): [(3, 1), (4, 1), (4, 2), (5, 0), (5, 1), (5, 7), (6, 0), (6, 8), (7, 2), (7, 4), (7, 9), (8, 0),
(8, 7), (9, 2), (9, 3), (10, 9), (10, 10), (14, 7), (14, 9), (15, 10)],
((('Gxpi2', 0), ('Gxpi2', 1)), ): [(0, 0), (1, 5), (2, 4), (3, 3), (3, 5), (5, 2), (6, 1), (6, 8), (6, 10), (8, 6), (10, 2), (10, 8),
(10, 10), (11, 8), (12, 1), (13, 1), (13, 4), (13, 6), (13, 10), (14, 8), (15, 3)],
((), ): [(0, 8), (1, 0), (1, 1), (1, 3), (1, 10), (2, 5), (2, 9), (3, 3), (3, 9), (4, 3), (4, 8), (5, 0),
(5, 5), (5, 7), (6, 4), (6, 6), (6, 8), (6, 10), (7, 0), (7, 2), (7, 3), (7, 4), (7, 6), (7, 10),
(8, 3), (8, 5), (9, 3), (9, 4), (9, 5), (9, 6), (9, 8), (9, 9), (10, 3), (10, 9), (10, 10), (11, 1),
(11, 5), (12, 5), (12, 7), (12, 9), (13, 0), (13, 10), (14, 0), (14, 1), (14, 2), (14, 6), (15, 0),
(15, 5), (15, 6), (15, 7), (15, 8)],
(('Gypi2', 1), ): [(0, 0), (0, 7), (1, 1), (3, 5), (3, 6), (4, 2), (4, 4), (4, 5), (5, 3), (5, 7), (7, 1), (7, 8),
(8, 5), (9, 4), (9, 5), (9, 9), (10, 5), (11, 5), (11, 6), (11, 8), (11, 10), (12, 0), (12, 3),
(13, 10), (14, 0), (14, 5), (14, 6), (14, 7), (15, 0), (15, 6), (15, 9)],
(('Gxpi2', 0), ): [(0, 7), (1, 1), (1, 7), (2, 7), (3, 3), (4, 9), (5, 4), (7, 2), (7, 10), (8, 2), (9, 2), (9, 8),
(9, 9), (10, 1), (10, 10), (11, 2), (11, 5), (11, 6), (13, 2), (14, 7), (15, 2), (15, 3)],
(('Gypi2', 1), ('Gypi2', 0)): [(0, 6), (0, 8), (0, 10), (1, 0), (1, 1), (1, 3), (2, 9), (3, 8), (4, 4), (4, 7), (5, 7),
(6, 1), (7, 0), (7, 8), (9, 10), (10, 5), (11, 5), (12, 5), (12, 6), (14, 0), (15, 0), (15, 6),
(15, 8)],
(('Gypi2', 1), ('Gxpi2', 0)): [(1, 1), (2, 8), (3, 0), (3, 2), (3, 6), (4, 7), (7, 2), (8, 6), (9, 1), (9, 7), (9, 9),
(10, 2), (10, 10), (11, 8), (12, 6), (13, 2), (13, 7), (14, 2), (15, 5)],
(('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))): [(1, 10), (2, 10), (4, 8), (5, 5), (5, 6), (6, 10), (7, 0), (7, 5), (7, 6), (7, 8), (8, 5),
(12, 5), (13, 0), (13, 2), (14, 1)],
((), ('Gypi2', 1)): [(0, 0), (0, 7), (1, 1), (3, 5), (3, 6), (4, 2), (4, 4), (4, 5), (5, 3), (5, 7), (7, 1), (7, 8),
(8, 5), (9, 4), (9, 5), (9, 9), (10, 5), (11, 5), (11, 6), (11, 8), (11, 10), (12, 0), (12, 3),
(13, 10), (14, 0), (14, 5), (14, 6), (14, 7), (15, 0), (15, 6), (15, 9)],
(('Gxpi2', 1), ('Gypi2', 1)): [(1, 0), (1, 10), (4, 0), (4, 4), (4, 7), (4, 8), (5, 5), (7, 6), (8, 9), (9, 9), (10, 2),
(10, 8), (11, 10), (12, 6), (12, 9), (13, 9), (15, 1)],
(('Gxpi2', 1), ('Gxpi2', 0)): [(0, 0), (1, 5), (2, 4), (3, 3), (3, 5), (5, 2), (6, 1), (6, 8), (6, 10), (8, 6), (10, 2),
(10, 8), (10, 10), (11, 8), (12, 1), (13, 1), (13, 4), (13, 6), (13, 10), (14, 8), (15, 3)],
((), ('Gypi2', 0)): [(3, 1), (4, 1), (4, 2), (5, 0), (5, 1), (5, 7), (6, 0), (6, 8), (7, 2), (7, 4), (7, 9), (8, 0),
(8, 7), (9, 2), (9, 3), (10, 9), (10, 10), (14, 7), (14, 9), (15, 10)],
(('Gxpi2', 1), ('Gypi2', 0)): [(0, 5), (0, 9), (1, 6), (3, 1), (3, 2), (5, 0), (5, 4), (6, 0), (6, 8), (9, 7), (10, 9),
(11, 1), (11, 4), (14, 4), (14, 9), (15, 5), (15, 7)],
((), ('Gxpi2', 1)): [(0, 5), (1, 0), (1, 1), (2, 2), (2, 5), (2, 9), (3, 3), (3, 4), (3, 8), (4, 0), (4, 2), (4, 7),
(4, 8), (4, 10), (5, 0), (5, 1), (5, 2), (5, 6), (5, 8), (6, 7), (6, 8), (6, 9), (7, 0),
(7, 4), (8, 5), (8, 9), (9, 5), (10, 8), (10, 10), (12, 2), (12, 4), (12, 7), (13, 2), (13, 3),
(13, 9), (14, 0), (14, 5), (14, 6), (15, 5), (15, 8), (15, 9)],
(('Gxpi2', 0), ('Gypi2', 0)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0), (9, 3),
(9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 6), (3, 0), (5, 0), (6, 7), (7, 1), (8, 3), (9, 9), (10, 4), (10, 9), (12, 9), (13, 2),
(14, 5), (14, 8), (14, 10), (15, 6)],
(('Gypi2', 0), (), ()): [(3, 1), (4, 1), (4, 2), (5, 0), (5, 1), (5, 7), (6, 0), (6, 8), (7, 2), (7, 4), (7, 9),
(8, 0), (8, 7), (9, 2), (9, 3), (10, 9), (10, 10), (14, 7), (14, 9), (15, 10)],
(('Gxpi2', 0), ('Gxpi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 6), (1, 3), (1, 7), (1, 10), (2, 10), (4, 1), (5, 1), (5, 5), (7, 3), (8, 2),
(8, 3), (9, 8), (10, 1), (10, 6), (10, 10), (11, 7), (15, 3)],
(('Gxpi2', 1), ('Gypi2', 1), ()): [(1, 0), (1, 10), (4, 0), (4, 4), (4, 7), (4, 8), (5, 5), (7, 6), (8, 9), (9, 9),
(10, 2), (10, 8), (11, 10), (12, 6), (12, 9), (13, 9), (15, 1)],
(('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 0)): [(1, 10), (2, 10), (4, 8), (5, 5), (5, 6), (6, 10), (7, 0), (7, 5), (7, 6), (7, 8),
(8, 5), (12, 5), (13, 0), (13, 2), (14, 1)],
(('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 1)): [(0, 6), (3, 0), (5, 0), (6, 7), (7, 1), (8, 3), (9, 9), (10, 4), (10, 9), (12, 9),
(13, 2), (14, 5), (14, 8), (14, 10), (15, 6)],
(('Gxpi2', 0), (), ()): [(0, 7), (1, 1), (1, 7), (2, 7), (3, 3), (4, 9), (5, 4), (7, 2), (7, 10), (8, 2), (9, 2),
(9, 8), (9, 9), (10, 1), (10, 10), (11, 2), (11, 5), (11, 6), (13, 2), (14, 7), (15, 2),
(15, 3)],
((), ('Gypi2', 1), ('Gypi2', 0)): [(0, 6), (0, 8), (0, 10), (1, 0), (1, 1), (1, 3), (2, 9), (3, 8), (4, 4), (4, 7), (5, 7),
(6, 1), (7, 0), (7, 8), (9, 10), (10, 5), (11, 5), (12, 5), (12, 6), (14, 0), (15, 0),
(15, 6), (15, 8)],
(('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 0)): [(0, 1), (4, 2), (4, 7), (6, 7), (8, 3), (9, 5), (9, 7), (10, 0), (10, 4), (10, 5),
(11, 2), (11, 9), (14, 6), (14, 8), (15, 3)],
(('Gypi2', 1), (), ()): [(0, 0), (0, 7), (1, 1), (3, 5), (3, 6), (4, 2), (4, 4), (4, 5), (5, 3), (5, 7), (7, 1),
(7, 8), (8, 5), (9, 4), (9, 5), (9, 9), (10, 5), (11, 5), (11, 6), (11, 8), (11, 10),
(12, 0), (12, 3), (13, 10), (14, 0), (14, 5), (14, 6), (14, 7), (15, 0), (15, 6),
(15, 9)],
(('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 0)): [(1, 5), (3, 3), (4, 1), (6, 1), (6, 6), (6, 8), (8, 6), (10, 10), (11, 8), (13, 1),
(13, 4), (13, 6), (13, 10), (14, 8), (15, 3)],
(('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 0)): [(1, 7), (2, 2), (4, 8), (7, 2), (7, 10), (8, 6), (9, 8), (9, 9), (10, 1), (11, 4),
(11, 9), (12, 8), (12, 9), (13, 0), (13, 1), (13, 9)],
((), ('Gxpi2', 1), ('Gypi2', 0)): [(0, 5), (0, 9), (1, 6), (3, 1), (3, 2), (5, 0), (5, 4), (6, 0), (6, 8), (9, 7), (10, 9),
(11, 1), (11, 4), (14, 4), (14, 9), (15, 5), (15, 7)],
(('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gypi2', 0), ('Gypi2', 1)): [(3, 0), (4, 4), (5, 1), (5, 8), (6, 5), (7, 3), (8, 6), (8, 7), (9, 5), (10, 3),
(11, 4), (14, 0), (14, 6), (14, 9), (15, 5)],
(('Gypi2', 1), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0), (9, 3),
(9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 0), (), ('Gypi2', 0)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0), (9, 3),
(9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
((), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1))): [(1, 5), (3, 3), (4, 1), (6, 1), (6, 6), (6, 8), (8, 6), (10, 10), (11, 8), (13, 1),
(13, 4), (13, 6), (13, 10), (14, 8), (15, 3)],
(('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1)): [(0, 4), (0, 5), (0, 7), (1, 1), (1, 6), (2, 3), (4, 10), (5, 4), (6, 8),
(7, 4), (7, 10), (8, 8), (8, 9), (10, 5), (11, 5), (11, 6), (11, 9), (13, 10), (14, 1),
(14, 9)],
(('Gxpi2', 0), ('Gypi2', 0), ()): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0), (9, 3),
(9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1)): [(0, 0), (0, 6), (1, 0), (1, 10), (4, 0), (4, 4), (4, 7), (4, 8), (5, 5), (6, 7), (7, 6),
(8, 9), (9, 9), (10, 2), (10, 8), (11, 10), (12, 6), (12, 9), (13, 1), (13, 9),
(15, 1)],
(('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 1)): [(0, 3), (1, 0), (1, 4), (3, 10), (4, 3), (5, 7), (7, 2), (7, 4), (7, 7), (7, 8), (8, 1),
(8, 5), (8, 7), (8, 9), (9, 2), (9, 6), (10, 3), (14, 10), (15, 4)],
(('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0)): [(0, 9), (1, 1), (1, 9), (2, 7), (3, 4), (4, 4), (4, 10), (6, 0), (6, 3), (7, 0), (9, 4),
(11, 5), (12, 4), (13, 7), (14, 0)],
(('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 0)): [(0, 9), (1, 1), (1, 9), (2, 7), (3, 4), (4, 4), (4, 10), (6, 0), (6, 3), (7, 0), (9, 4),
(11, 5), (12, 4), (13, 7), (14, 0)],
((), ('Gypi2', 0), ('Gxpi2', 1)): [(0, 5), (0, 9), (1, 6), (3, 1), (3, 2), (5, 0), (5, 4), (6, 0), (6, 8), (9, 7), (10, 9),
(11, 1), (11, 4), (14, 4), (14, 9), (15, 5), (15, 7)],
(('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0)): [(0, 6), (3, 0), (5, 0), (6, 7), (7, 1), (8, 3), (9, 9), (10, 4), (10, 9), (12, 9),
(13, 2), (14, 5), (14, 8), (14, 10), (15, 6)],
(('Gxpi2', 1), ('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 0), (1, 5), (2, 4), (3, 3), (3, 5), (5, 2), (6, 1), (6, 8), (6, 10), (8, 6),
(10, 2), (10, 8), (10, 10), (11, 8), (12, 1), (13, 1), (13, 4), (13, 6), (13, 10),
(14, 8), (15, 3)],
(('Gxpi2', 0), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), (), ('Gypi2', 1)): [(1, 0), (1, 10), (4, 0), (4, 4), (4, 7), (4, 8), (5, 5), (7, 6), (8, 9), (9, 9),
(10, 2), (10, 8), (11, 10), (12, 6), (12, 9), (13, 9), (15, 1)],
(('Gxpi2', 1), (), ()): [(0, 5), (1, 0), (1, 1), (2, 2), (2, 5), (2, 9), (3, 3), (3, 4), (3, 8), (4, 0), (4, 2),
(4, 7), (4, 8), (4, 10), (5, 0), (5, 1), (5, 2), (5, 6), (5, 8), (6, 7), (6, 8), (6, 9),
(7, 0), (7, 4), (8, 5), (8, 9), (9, 5), (10, 8), (10, 10), (12, 2), (12, 4), (12, 7),
(13, 2), (13, 3), (13, 9), (14, 0), (14, 5), (14, 6), (15, 5), (15, 8), (15, 9)],
(('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 1)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0),
(9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), (), (), ('Gxpi2', 0)): [(0, 0), (1, 5), (2, 4), (3, 3), (3, 5), (5, 2), (6, 1), (6, 8), (6, 10), (8, 6),
(10, 2), (10, 8), (10, 10), (11, 8), (12, 1), (13, 1), (13, 4), (13, 6),
(13, 10), (14, 8), (15, 3)],
(('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0),
(9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), ('Gxpi2', 0), (), ()): [(1, 1), (2, 8), (3, 0), (3, 2), (3, 6), (4, 7), (7, 2), (8, 6), (9, 1), (9, 7),
(9, 9), (10, 2), (10, 10), (11, 8), (12, 6), (13, 2), (13, 7), (14, 2), (15, 5)],
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 0), ('Gypi2', 0)): [(0, 1), (0, 3), (0, 9), (2, 3), (2, 6), (3, 10), (5, 7), (6, 0), (7, 2), (7, 6),
(7, 7), (8, 1), (8, 5), (9, 4), (14, 10)],
(('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1), ('Gxpi2', 1)): [(0, 5), (0, 9), (1, 6), (3, 1), (3, 2), (5, 0), (5, 4), (6, 0), (6, 8), (9, 7),
(10, 9), (11, 1), (11, 4), (14, 4), (14, 9), (15, 5), (15, 7)],
(('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 1), ('Gypi2', 0)): [(0, 2), (1, 1), (1, 4), (2, 1), (2, 10), (3, 10), (4, 0), (5, 3), (5, 7), (6, 4),
(6, 10), (8, 2), (8, 3), (9, 0), (10, 8), (11, 1), (11, 7), (13, 1), (13, 8)],
(('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1), ()): [(0, 4), (0, 5), (0, 7), (1, 1), (1, 6), (2, 3), (4, 10), (5, 4), (6, 8), (7, 4),
(7, 10), (8, 8), (8, 9), (10, 5), (11, 5), (11, 6), (11, 9), (13, 10), (14, 1),
(14, 9)],
(('Gxpi2', 0), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)): [(1, 10), (2, 10), (4, 8), (5, 5), (5, 6), (6, 10), (7, 0), (7, 5), (7, 6),
(7, 8), (8, 5), (12, 5), (13, 0), (13, 2), (14, 1)],
(('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 0)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8), (5, 5), (7, 0),
(9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 0), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1)): [(1, 0), (1, 10), (4, 0), (4, 4), (4, 7), (4, 8), (5, 5), (7, 6), (8, 9), (9, 9),
(10, 2), (10, 8), (11, 10), (12, 6), (12, 9), (13, 9), (15, 1)],
(('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0), ()): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0), ('Gxpi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 1), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 1), ('Gxpi2', 0)): [(1, 5), (3, 3), (4, 1), (6, 1), (6, 6), (6, 8), (8, 6), (10, 10), (11, 8),
(13, 1), (13, 4), (13, 6), (13, 10), (14, 8), (15, 3)],
(('Gypi2', 1), ('Gxpi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10),
(10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 1), ('Gxpi2', 0), ('Gxpi2', 0)): [(1, 1), (2, 5), (4, 3), (5, 5), (6, 3), (7, 1), (10, 2), (10, 5),
(11, 2), (11, 5), (12, 7), (12, 10), (13, 0), (13, 4), (14, 5)],
(('Gypi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 1), ('Gypi2', 1)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8),
(5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6),
(14, 6), (15, 0), (15, 5)],
(('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8),
(5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6),
(14, 6), (15, 0), (15, 5)],
(('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9),
(9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9),
(9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gypi2', 0)): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10), (3, 8),
(5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6),
(14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9),
(9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9),
(9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 1)): [(0, 4), (0, 6), (1, 1), (2, 2), (4, 1), (4, 3), (5, 1), (5, 3),
(6, 10), (8, 2), (8, 8), (9, 4), (10, 7), (12, 1), (13, 2),
(15, 6), (15, 9)],
(('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 0)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3), (9, 9),
(9, 10), (10, 8), (12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 0), ('Gypi2', 1), ('Gypi2', 0), ('Gxpi2', 1)): [(0, 3), (1, 0), (1, 4), (3, 10), (4, 3), (5, 7), (7, 2), (7, 4),
(7, 7), (7, 8), (8, 1), (8, 5), (8, 7), (8, 9), (9, 2), (9, 6),
(10, 3), (14, 10), (15, 4)],
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 1), ('Gypi2', 1)): [(1, 0), (1, 10), (4, 0), (4, 4), (4, 7), (4, 8), (5, 5), (7, 6),
(8, 9), (9, 9), (10, 2), (10, 8), (11, 10), (12, 6), (12, 9),
(13, 9), (15, 1)],
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 0), ('Gxpi2', 1), (), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3),
(9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6),
(15, 0), (15, 5)],
((), ('Gypi2', 0), ('Gxpi2', 0), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 1), (('Gxpi2', 0), ('Gxpi2', 1))): [(0, 1), (0, 2), (0, 5), (1, 3), (1, 9), (2, 4), (2, 10),
(3, 8), (5, 5), (7, 0), (9, 3), (9, 9), (9, 10), (10, 8),
(12, 2), (12, 6), (14, 6), (15, 0), (15, 5)],
(('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0), (9, 3),
(9, 9), (9, 10), (10, 8), (12, 2), (12, 6), (14, 6),
(15, 0), (15, 5)],
(('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 1)): [(0, 1), (0, 5), (1, 3), (3, 8), (5, 5), (7, 0),
(9, 3), (9, 9), (9, 10), (10, 8), (12, 2), (12, 6),
(14, 6), (15, 0), (15, 5)],
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0)): [(1, 1), (2, 5), (4, 3), (5, 5), (6, 3), (7, 1),
(10, 2), (10, 5), (11, 2), (11, 5), (12, 7),
(12, 10), (13, 0), (13, 4), (14, 5)]
}
global_fidpairs_lite = None
_pergerm_fidpairsdict_lite = {
((),): [
(0, 3), (0, 4), (0, 6), (0, 7), (0, 8), (0, 9), (1, 1),
(1, 4), (1, 5), (1, 9), (2, 1), (2, 3), (2, 4), (2, 6),
(2, 7), (3, 5), (3, 7), (4, 0), (4, 2), (4, 4), (4, 9),
(5, 0), (5, 2), (5, 7), (5, 9), (5, 10), (6, 0), (6, 1),
(6, 2), (6, 3), (6, 4), (6, 8), (6, 9), (7, 6), (7, 7),
(8, 0), (8, 2), (8, 6), (8, 7), (8, 10), (9, 2), (9, 8),
(9, 9), (9, 10), (10, 4), (10, 7), (10, 8), (10, 9),
(10, 10), (11, 3), (11, 5), (11, 8), (11, 10), (12, 8),
(13, 2), (13, 7), (13, 9), (13, 10), (14, 1), (14, 2),
(14, 7), (14, 8), (14, 10), (15, 4), (15, 8)],
(('Gxpi2', 0),): [
(0, 8), (1, 2), (1, 7), (2, 4), (2, 6), (2, 7), (3, 3),
(3, 5), (3, 8), (4, 8), (5, 0), (5, 4), (5, 5), (5, 7),
(6, 8), (7, 9), (8, 3), (8, 5), (9, 0), (9, 2), (9, 9),
(10, 2), (10, 8), (10, 10), (11, 7), (11, 10), (12, 10),
(13, 1), (13, 2), (13, 6), (13, 9), (14, 0), (14, 5),
(14, 8), (15, 10)],
(('Gypi2', 0),): [
(1, 0), (1, 1), (1, 3), (1, 4), (2, 1), (2, 4), (3, 1),
(3, 2), (3, 8), (4, 3), (4, 5), (5, 7), (6, 0), (6, 1),
(6, 2), (6, 4), (6, 8), (7, 4), (7, 7), (8, 1), (9, 4),
(10, 2), (10, 7), (11, 2), (11, 6), (11, 7), (11, 10),
(12, 2), (12, 3), (12, 8), (13, 9), (13, 10), (15, 6),
(15, 9)],
(('Gxpi2', 1),): [
(0, 4), (0, 9), (1, 5), (1, 8), (1, 9), (1, 10), (2, 2),
(2, 6), (2, 7), (3, 5), (4, 1), (4, 7), (5, 0), (5, 2),
(5, 5), (5, 6), (5, 7), (5, 8), (6, 6), (6, 8), (7, 1),
(7, 5), (7, 9), (7, 10), (8, 5), (8, 6), (9, 0), (9, 4),
(9, 5), (9, 7), (10, 2), (10, 10), (12, 1), (12, 3),
(12, 9), (13, 4), (13, 7), (13, 8), (14, 2), (14, 6),
(14, 8), (15, 5)],
(('Gypi2', 1),): [
(0, 3), (2, 7), (3, 2), (3, 3), (3, 6), (4, 6), (4, 8),
(5, 7), (5, 10), (6, 8), (8, 0), (8, 4), (8, 10), (9, 9),
(10, 6), (12, 1), (12, 3), (12, 8), (13, 0), (13, 1),
(13, 6), (13, 10), (14, 0), (14, 1), (15, 0), (15, 5),
(15, 8)],
((('Gxpi2', 0), ('Gxpi2', 1)),): [
(1, 1), (1, 4), (3, 0), (3, 2), (3, 4), (3, 6), (3, 9),
(4, 0), (4, 8), (7, 0), (7, 8), (8, 2), (8, 4), (9, 8),
(10, 10), (11, 6), (11, 10), (12, 4), (12, 8), (13, 6),
(14, 5)],
(('Gxpi2', 0), ('Gypi2', 0)): [
(0, 1), (0, 4), (0, 7), (0, 8), (1, 10), (2, 1), (2, 3),
(2, 7), (5, 5), (8, 0), (8, 2), (10, 2), (13, 6), (15, 0),
(15, 6)],
(('Gxpi2', 1), ('Gypi2', 1)): [
(1, 3), (2, 0), (3, 6), (5, 3), (5, 7), (5, 8), (6, 6),
(6, 7), (7, 10), (8, 9), (8, 10), (11, 5), (12, 0), (12, 5),
(12, 6), (12, 10), (13, 6), (13, 9), (14, 1), (14, 2),
(15, 4)],
(('Gxpi2', 0), ('Gxpi2', 0), ('Gypi2', 0)): [
(0, 1), (0, 4), (0, 7), (0, 8), (1, 10), (2, 1), (2, 3),
(2, 7), (5, 5), (8, 0), (8, 2), (10, 2), (13, 6), (15, 0),
(15, 6)],
(('Gxpi2', 1), ('Gxpi2', 1), ('Gypi2', 1)): [
(0, 0), (0, 2), (0, 3), (1, 2), (2, 4), (3, 8), (4, 5),
(4, 7), (5, 7), (6, 4), (6, 9), (8, 5), (8, 7), (8, 9),
(9, 1), (10, 3), (10, 4), (11, 8), (11, 10), (12, 4),
(12, 10), (14, 9), (15, 5), (15, 8)],
((('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), (('Gxpi2', 0), ('Gxpi2', 1)), ('Gxpi2', 1), ('Gxpi2', 0)): [
(1, 0), (2, 9), (3, 10), (4, 3), (6, 4), (6, 9), (7, 10),
(9, 7), (10, 2), (10, 9), (10, 10), (11, 0), (12, 0),
(13, 6), (13, 9)],
(('Gxpi2', 0), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 1), ('Gypi2', 0)): [
(0, 1), (0, 4), (0, 7), (0, 8), (1, 10), (2, 1), (2, 3),
(2, 7), (5, 5), (8, 0), (8, 2), (10, 2), (13, 6), (15, 0),
(15, 6)],
(('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 1), ('Gypi2', 0), ('Gxpi2', 1), ('Gxpi2', 1)): [
(0, 1), (0, 3), (0, 4), (0, 7), (0, 8), (1, 10), (2, 1),
(2, 3), (2, 7), (3, 5), (4, 2), (5, 5), (8, 0), (8, 2),
(10, 2), (10, 8), (11, 2), (13, 6), (14, 3), (14, 5),
(15, 0), (15, 7), (15, 9)],
(('Gypi2', 0), ('Gxpi2', 0), ('Gypi2', 1), ('Gxpi2', 0), ('Gxpi2', 1), ('Gxpi2', 0), ('Gypi2', 0), ('Gypi2', 1)): [
(0, 1), (0, 4), (0, 7), (0, 8), (1, 10), (2, 1), (2, 3),
(2, 7), (5, 5), (8, 0), (8, 2), (10, 2), (13, 6), (15, 0),
(15, 6)],
}
def _target_model(self, sslbls, **kwargs):
return self._build_explicit_target_model(
sslbls, [(), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0), (('Gxpi2', 0), ('Gxpi2', 1))],
['I({0}):I({1})', 'I({0}):X(pi/2,{1})', 'I({0}):Y(pi/2,{1})', 'X(pi/2,{0}):I({1})',
'Y(pi/2,{0}):I({1})', 'X(pi/2,{0}):X(pi/2,{1})'],
effect_labels=['00', '01', '10', '11'],
effect_expressions=['0', '1', '2', '3'], **kwargs)
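The two list arguments passed to `_build_explicit_target_model` above are parallel: each gate-label tuple lines up with one build-expression string whose `{0}`/`{1}` placeholders are filled from `sslbls`. A small illustrative sketch of that pairing, assuming hypothetical qubit labels `(0, 1)`:

```python
# Parallel lists copied from the _target_model call above (illustrative only).
gate_labels = [(), ('Gxpi2', 1), ('Gypi2', 1), ('Gxpi2', 0), ('Gypi2', 0),
               (('Gxpi2', 0), ('Gxpi2', 1))]
expressions = ['I({0}):I({1})', 'I({0}):X(pi/2,{1})', 'I({0}):Y(pi/2,{1})',
               'X(pi/2,{0}):I({1})', 'Y(pi/2,{0}):I({1})',
               'X(pi/2,{0}):X(pi/2,{1})']
sslbls = (0, 1)  # hypothetical state-space (qubit) labels

# Pair each gate label with its concrete expression on qubits 0 and 1.
layer_defs = {lbl: expr.format(*sslbls)
              for lbl, expr in zip(gate_labels, expressions)}
```

Under this assumed labeling, the empty tuple maps to the two-qubit idle `I(0):I(1)`, and `('Gxpi2', 1)` maps to an X(pi/2) on qubit 1 with idle on qubit 0.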
import sys
sys.modules[__name__] = _Module()
# coding: utf-8
"""
Feed API
<p>The <strong>Feed API</strong> lets sellers upload input files, download reports and files including their status, filter reports using URI parameters, and retrieve customer service metrics task details.</p> # noqa: E501
OpenAPI spec version: v1.3.1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from ...sell_feed.api_client import ApiClient
class TaskApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_task(self, body, **kwargs): # noqa: E501
"""create_task # noqa: E501
This method creates an upload task or a download task without filter criteria. When using this method, specify the feedType and the feed file schemaVersion. The feed type specified sets the task as a download or an upload task. For details about the upload and download flows, see Working with Order Feeds in the Selling Integration Guide. Note: The scope depends on the feed type. An error message results when an unsupported scope or feed type is specified. The following list contains this method's authorization scopes and their corresponding feed types: https://api.ebay.com/oauth/api_scope/sell.inventory: See LMS FeedTypes https://api.ebay.com/oauth/api_scope/sell.fulfillment: LMS_ORDER_ACK (specify for upload tasks). Also see LMS FeedTypes https://api.ebay.com/oauth/api_scope/sell.marketing: None* https://api.ebay.com/oauth/api_scope/commerce.catalog.readonly: None* * Reserved for future release # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_task(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param CreateTaskRequest body: description not needed (required)
:param str x_ebay_c_marketplace_id: The ID of the eBay marketplace where the item is hosted. Note: This value is case sensitive. For example: X-EBAY-C-MARKETPLACE-ID:EBAY_US This identifies the eBay marketplace that applies to this task. See MarketplaceIdEnum.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_task_with_http_info(body, **kwargs) # noqa: E501
else:
(data) = self.create_task_with_http_info(body, **kwargs) # noqa: E501
return data
def create_task_with_http_info(self, body, **kwargs): # noqa: E501
"""create_task # noqa: E501
This method creates an upload task or a download task without filter criteria. When using this method, specify the feedType and the feed file schemaVersion. The feed type specified sets the task as a download or an upload task. For details about the upload and download flows, see Working with Order Feeds in the Selling Integration Guide. Note: The scope depends on the feed type. An error message results when an unsupported scope or feed type is specified. The following list contains this method's authorization scopes and their corresponding feed types: https://api.ebay.com/oauth/api_scope/sell.inventory: See LMS FeedTypes https://api.ebay.com/oauth/api_scope/sell.fulfillment: LMS_ORDER_ACK (specify for upload tasks). Also see LMS FeedTypes https://api.ebay.com/oauth/api_scope/sell.marketing: None* https://api.ebay.com/oauth/api_scope/commerce.catalog.readonly: None* * Reserved for future release # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_task_with_http_info(body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param CreateTaskRequest body: description not needed (required)
:param str x_ebay_c_marketplace_id: The ID of the eBay marketplace where the item is hosted. Note: This value is case sensitive. For example: X-EBAY-C-MARKETPLACE-ID:EBAY_US This identifies the eBay marketplace that applies to this task. See MarketplaceIdEnum.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'x_ebay_c_marketplace_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_task" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_task`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
if 'x_ebay_c_marketplace_id' in params:
header_params['X-EBAY-C-MARKETPLACE-ID'] = params['x_ebay_c_marketplace_id'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['api_auth'] # noqa: E501
return self.api_client.call_api(
'/task', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
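The docstring above describes the sync/async dispatch these generated methods share: a plain call blocks and returns the data, while `async_req=True` returns a thread-like object whose `get()` yields the result. A self-contained sketch of that pattern (names are illustrative, not the real client, and this sketch uses a `Future`'s `result()` in place of the generated client's `get()`):

```python
from concurrent.futures import ThreadPoolExecutor

class _MiniApi:
    """Toy stand-in for a generated API class, showing the dispatch pattern."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1)

    def create_task(self, body, **kwargs):
        if kwargs.get('async_req'):
            # Async path: submit the call and hand back a future immediately.
            return self._pool.submit(self._call, body)
        # Sync path: perform the call and block until the data is ready.
        return self._call(body)

    def _call(self, body):
        # Stub for the real HTTP request made by api_client.call_api.
        return {'status': 'created', 'body': body}

api = _MiniApi()
sync_result = api.create_task({'feedType': 'LMS_ORDER_ACK'})
future = api.create_task({'feedType': 'LMS_ORDER_ACK'}, async_req=True)
async_result = future.result()  # analogous to thread.get() in the docstring
```

Both paths produce the same payload; the only difference is whether the caller blocks at the call site or at retrieval time.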
def get_input_file(self, task_id, **kwargs): # noqa: E501
"""get_input_file # noqa: E501
This method downloads the file previously uploaded using uploadFile. Specify the task_id from the uploadFile call. Note: With respect to LMS, this method applies to all feed types except LMS_ORDER_REPORT and LMS_ACTIVE_INVENTORY_REPORT. See LMS API Feeds in the Selling Integration Guide. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_input_file(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The task ID associated with the file to be downloaded. (required)
:return: StreamingOutput
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_input_file_with_http_info(task_id, **kwargs) # noqa: E501
else:
(data) = self.get_input_file_with_http_info(task_id, **kwargs) # noqa: E501
return data
def get_input_file_with_http_info(self, task_id, **kwargs): # noqa: E501
"""get_input_file # noqa: E501
This method downloads the file previously uploaded using uploadFile. Specify the task_id from the uploadFile call. Note: With respect to LMS, this method applies to all feed types except LMS_ORDER_REPORT and LMS_ACTIVE_INVENTORY_REPORT. See LMS API Feeds in the Selling Integration Guide. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_input_file_with_http_info(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The task ID associated with the file to be downloaded. (required)
:return: StreamingOutput
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_input_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params or
params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `get_input_file`") # noqa: E501
collection_formats = {}
path_params = {}
if 'task_id' in params:
path_params['task_id'] = params['task_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/octet-stream']) # noqa: E501
# Authentication setting
auth_settings = ['api_auth'] # noqa: E501
return self.api_client.call_api(
'/task/{task_id}/download_input_file', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='StreamingOutput', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
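get_input_file returns the raw file body (Accept: application/octet-stream). A small sketch of persisting such a payload to disk; the byte literal below is a stand-in for the value a real `api.get_input_file(task_id)` call would return, and the helper name is illustrative rather than part of the generated client:

```python
import os
import tempfile

def save_payload(payload, dest_path):
    """Write a downloaded byte payload to dest_path; return bytes written."""
    if isinstance(payload, str):          # tolerate decoded text payloads
        payload = payload.encode("utf-8")
    with open(dest_path, "wb") as fh:
        fh.write(payload)
    return os.path.getsize(dest_path)

# In real use the payload would come from api.get_input_file(task_id);
# here a literal stands in for it.
tmp = os.path.join(tempfile.gettempdir(), "task_input.csv")
written = save_payload(b"sku,qty\nA1,3\n", tmp)
```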
def get_result_file(self, task_id, **kwargs): # noqa: E501
"""get_result_file # noqa: E501
This method retrieves the generated file that is associated with the specified task ID. The response of this call is a compressed or uncompressed CSV, XML, or JSON file, with the applicable file extension (for example: csv.gz). For details about how this method is used, see Working with Order Feeds in the Selling Integration Guide. Note: The status of the task to retrieve must be in the COMPLETED or COMPLETED_WITH_ERROR state before this method can retrieve the file. You can use the getTask or getTasks method to retrieve the status of the task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_result_file(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The ID of the task associated with the file you want to download. This ID was generated when the task was created. (required)
:return: StreamingOutput
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_result_file_with_http_info(task_id, **kwargs) # noqa: E501
else:
(data) = self.get_result_file_with_http_info(task_id, **kwargs) # noqa: E501
return data
def get_result_file_with_http_info(self, task_id, **kwargs): # noqa: E501
"""get_result_file # noqa: E501
This method retrieves the generated file that is associated with the specified task ID. The response of this call is a compressed or uncompressed CSV, XML, or JSON file, with the applicable file extension (for example: csv.gz). For details about how this method is used, see Working with Order Feeds in the Selling Integration Guide. Note: The status of the task to retrieve must be in the COMPLETED or COMPLETED_WITH_ERROR state before this method can retrieve the file. You can use the getTask or getTasks method to retrieve the status of the task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_result_file_with_http_info(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The ID of the task associated with the file you want to download. This ID was generated when the task was created. (required)
:return: StreamingOutput
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_result_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params or
params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `get_result_file`") # noqa: E501
collection_formats = {}
path_params = {}
if 'task_id' in params:
path_params['task_id'] = params['task_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/octet-stream']) # noqa: E501
# Authentication setting
auth_settings = ['api_auth'] # noqa: E501
return self.api_client.call_api(
'/task/{task_id}/download_result_file', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='StreamingOutput', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
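The docstrings above note that a result file can only be downloaded once its task reaches COMPLETED or COMPLETED_WITH_ERROR (status flow: QUEUED > IN_PROCESS > COMPLETED). A minimal polling sketch of that wait; `fetch_status` is a hypothetical stand-in for a status lookup built on `get_task`, and all names here are illustrative, not part of the generated client:

```python
import time

TERMINAL_STATES = {"COMPLETED", "COMPLETED_WITH_ERROR"}

def wait_for_task(fetch_status, task_id, poll_seconds=0, max_polls=10):
    """Poll fetch_status(task_id) until it reports a terminal state.

    fetch_status is any callable returning the task's status string
    (e.g. a thin wrapper around api.get_task). Returns the final status,
    or raises TimeoutError once max_polls is exhausted.
    """
    for _ in range(max_polls):
        status = fetch_status(task_id)
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("task %s did not complete" % task_id)

# Stubbed status sequence mimicking the documented flow:
# QUEUED > IN_PROCESS > COMPLETED
_states = iter(["QUEUED", "IN_PROCESS", "COMPLETED"])
result = wait_for_task(lambda tid: next(_states), "task-123")
```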
def get_task(self, task_id, **kwargs): # noqa: E501
"""get_task # noqa: E501
This method retrieves the details and status of the specified task. The input is task_id. For details of how this method is used, see Working with Order Feeds in the Selling Integration Guide. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_task(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The ID of the task. This ID was generated when the task was created. (required)
:return: Task
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_task_with_http_info(task_id, **kwargs) # noqa: E501
else:
(data) = self.get_task_with_http_info(task_id, **kwargs) # noqa: E501
return data
def get_task_with_http_info(self, task_id, **kwargs): # noqa: E501
"""get_task # noqa: E501
This method retrieves the details and status of the specified task. The input is task_id. For details of how this method is used, see Working with Order Feeds in the Selling Integration Guide. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_task_with_http_info(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The ID of the task. This ID was generated when the task was created. (required)
:return: Task
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_task" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params or
params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `get_task`") # noqa: E501
collection_formats = {}
path_params = {}
if 'task_id' in params:
path_params['task_id'] = params['task_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['api_auth'] # noqa: E501
return self.api_client.call_api(
'/task/{task_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Task', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_tasks(self, **kwargs): # noqa: E501
"""get_tasks # noqa: E501
This method returns the details and status for an array of tasks based on a specified feed_type or scheduledId. Specifying both feed_type and scheduledId results in an error. Since schedules are based on feed types, you can specify a schedule (schedule_id) that returns the needed feed_type. If specifying the feed_type, limit which tasks are returned by specifying filters, such as the creation date range or period of time using look_back_days. Also, by specifying the feed_type, both on-demand and scheduled reports are returned. If specifying a scheduledId, the schedule template (that the schedule ID is based on) determines which tasks are returned (see schedule_id for additional information). Each scheduledId applies to one feed_type. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_tasks(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str date_range: Specifies the range of task creation dates used to filter the results; only tasks with a creation date equal to this date or within the specified range are returned. Only tasks created within the last 90 days can be retrieved. Note: the maximum date range window size is 90 days. Valid format (UTC): yyyy-MM-ddThh:mm:ss.SSSZ..yyyy-MM-ddThh:mm:ss.SSSZ. For example, tasks created on September 8, 2019: 2019-09-08T00:00:00.000Z..2019-09-09T00:00:00.000Z
:param str feed_type: The feed type associated with the tasks to be returned. Only use a feedType that is available for your API. Order Feeds: LMS_ORDER_ACK, LMS_ORDER_REPORT. Large Merchant Services (LMS) Feeds: see Available FeedTypes. Do not use with the schedule_id parameter; since schedules are based on feed types, you can specify a schedule (schedule_id) that returns the needed feed_type.
:param str limit: The maximum number of tasks that can be returned on each page of the paginated response. Use this parameter in conjunction with the offset parameter to control the pagination of the output. Note: this feature employs a zero-based list, where the first item in the list has an offset of 0. For example, if offset is set to 10 and limit is set to 10, the call retrieves tasks 11 through 20 from the result set. If this parameter is omitted, the default value is used. Default: 10. Maximum: 500
:param str look_back_days: The number of previous days in which to search for tasks. Do not use with the date_range parameter. If both date_range and look_back_days are omitted, this parameter's default value is used. Default: 7. Range: 1-90 (inclusive)
:param str offset: The number of tasks to skip in the result set before returning the first task in the paginated response. Combine offset with the limit query parameter to control the items returned in the response. For example, if you supply an offset of 0 and a limit of 10, the first page of the response contains the first 10 items from the complete list of items retrieved by the call. If offset is 10 and limit is 20, the first page of the response contains items 11-30 from the complete result set. If this query parameter is not set, the default value is used and the first page of records is returned. Default: 0
:param str schedule_id: The schedule ID associated with the task. A schedule periodically generates a report for the feed type specified by the schedule template (see scheduleTemplateId in createSchedule). Do not use with the feed_type parameter. Since schedules are based on feed types, you can specify a schedule (schedule_id) that returns the needed feed_type.
:return: TaskCollection
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_tasks_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_tasks_with_http_info(**kwargs) # noqa: E501
return data
def get_tasks_with_http_info(self, **kwargs): # noqa: E501
"""get_tasks # noqa: E501
This method returns the details and status for an array of tasks based on a specified feed_type or scheduledId. Specifying both feed_type and scheduledId results in an error. Since schedules are based on feed types, you can specify a schedule (schedule_id) that returns the needed feed_type. If specifying the feed_type, limit which tasks are returned by specifying filters, such as the creation date range or period of time using look_back_days. Also, by specifying the feed_type, both on-demand and scheduled reports are returned. If specifying a scheduledId, the schedule template (that the schedule ID is based on) determines which tasks are returned (see schedule_id for additional information). Each scheduledId applies to one feed_type. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_tasks_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str date_range: Specifies the range of task creation dates used to filter the results; only tasks with a creation date equal to this date or within the specified range are returned. Only tasks created within the last 90 days can be retrieved. Note: the maximum date range window size is 90 days. Valid format (UTC): yyyy-MM-ddThh:mm:ss.SSSZ..yyyy-MM-ddThh:mm:ss.SSSZ. For example, tasks created on September 8, 2019: 2019-09-08T00:00:00.000Z..2019-09-09T00:00:00.000Z
:param str feed_type: The feed type associated with the tasks to be returned. Only use a feedType that is available for your API. Order Feeds: LMS_ORDER_ACK, LMS_ORDER_REPORT. Large Merchant Services (LMS) Feeds: see Available FeedTypes. Do not use with the schedule_id parameter; since schedules are based on feed types, you can specify a schedule (schedule_id) that returns the needed feed_type.
:param str limit: The maximum number of tasks that can be returned on each page of the paginated response. Use this parameter in conjunction with the offset parameter to control the pagination of the output. Note: this feature employs a zero-based list, where the first item in the list has an offset of 0. For example, if offset is set to 10 and limit is set to 10, the call retrieves tasks 11 through 20 from the result set. If this parameter is omitted, the default value is used. Default: 10. Maximum: 500
:param str look_back_days: The number of previous days in which to search for tasks. Do not use with the date_range parameter. If both date_range and look_back_days are omitted, this parameter's default value is used. Default: 7. Range: 1-90 (inclusive)
:param str offset: The number of tasks to skip in the result set before returning the first task in the paginated response. Combine offset with the limit query parameter to control the items returned in the response. For example, if you supply an offset of 0 and a limit of 10, the first page of the response contains the first 10 items from the complete list of items retrieved by the call. If offset is 10 and limit is 20, the first page of the response contains items 11-30 from the complete result set. If this query parameter is not set, the default value is used and the first page of records is returned. Default: 0
:param str schedule_id: The schedule ID associated with the task. A schedule periodically generates a report for the feed type specified by the schedule template (see scheduleTemplateId in createSchedule). Do not use with the feed_type parameter. Since schedules are based on feed types, you can specify a schedule (schedule_id) that returns the needed feed_type.
:return: TaskCollection
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['date_range', 'feed_type', 'limit', 'look_back_days', 'offset', 'schedule_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_tasks" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'date_range' in params:
query_params.append(('date_range', params['date_range'])) # noqa: E501
if 'feed_type' in params:
query_params.append(('feed_type', params['feed_type'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'look_back_days' in params:
query_params.append(('look_back_days', params['look_back_days'])) # noqa: E501
if 'offset' in params:
query_params.append(('offset', params['offset'])) # noqa: E501
if 'schedule_id' in params:
query_params.append(('schedule_id', params['schedule_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['api_auth'] # noqa: E501
return self.api_client.call_api(
'/task', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskCollection', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
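The parameter docs above describe a '..'-delimited UTC date_range (maximum 90-day window) and zero-based limit/offset paging for get_tasks. A hedged helper sketch that formats those query values; the function names are illustrative and not part of the generated client:

```python
from datetime import datetime, timedelta

ISO_MS = "%Y-%m-%dT%H:%M:%S.000Z"

def date_range_param(start, end):
    """Format two datetimes as the '..'-separated UTC range get_tasks expects."""
    if end - start > timedelta(days=90):
        raise ValueError("maximum date range window is 90 days")
    return start.strftime(ISO_MS) + ".." + end.strftime(ISO_MS)

def page_offsets(total, limit):
    """Zero-based (offset, limit) pairs covering `total` items."""
    return [(off, limit) for off in range(0, total, limit)]

# Tasks created on September 8, 2019, as in the docstring example:
rng = date_range_param(datetime(2019, 9, 8), datetime(2019, 9, 9))
```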
def upload_file(self, task_id, **kwargs): # noqa: E501
"""upload_file # noqa: E501
This method associates the specified file with the specified task ID and uploads the input file. After the file has been uploaded, the processing of the file begins. Reports often take time to generate and it's common for this method to return an HTTP status of 202, which indicates the report is being generated. Use the getTask with the task ID or getTasks to determine the status of a report. The status flow is QUEUED > IN_PROCESS > COMPLETED or COMPLETED_WITH_ERROR. When the status is COMPLETED or COMPLETED_WITH_ERROR, this indicates the file has been processed and the order report can be downloaded. If there are errors, they will be indicated in the report file. For details of how this method is used in the upload flow, see Working with Order Feeds in the Selling Integration Guide. Note: This method applies to all File Exchange feed types and LMS feed types except LMS_ORDER_REPORT and LMS_ACTIVE_INVENTORY_REPORT. See LMS API Feeds in the Selling Integration Guide and File Exchange FeedTypes in the File Exchange Migration Guide. Note: You must use a Content-Type header with its value set to "multipart/form-data". See Samples for information. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.upload_file(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The task_id associated with the file that will be uploaded. This ID was generated when the specified task was created. (required)
:param str creation_date:
:param str file_name:
:param str modification_date:
:param str name:
:param dict(str, str) parameters:
:param str read_date:
:param int size:
:param str type:
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.upload_file_with_http_info(task_id, **kwargs) # noqa: E501
else:
(data) = self.upload_file_with_http_info(task_id, **kwargs) # noqa: E501
return data
def upload_file_with_http_info(self, task_id, **kwargs): # noqa: E501
"""upload_file # noqa: E501
This method associates the specified file with the specified task ID and uploads the input file. After the file has been uploaded, the processing of the file begins. Reports often take time to generate and it's common for this method to return an HTTP status of 202, which indicates the report is being generated. Use the getTask with the task ID or getTasks to determine the status of a report. The status flow is QUEUED > IN_PROCESS > COMPLETED or COMPLETED_WITH_ERROR. When the status is COMPLETED or COMPLETED_WITH_ERROR, this indicates the file has been processed and the order report can be downloaded. If there are errors, they will be indicated in the report file. For details of how this method is used in the upload flow, see Working with Order Feeds in the Selling Integration Guide. Note: This method applies to all File Exchange feed types and LMS feed types except LMS_ORDER_REPORT and LMS_ACTIVE_INVENTORY_REPORT. See LMS API Feeds in the Selling Integration Guide and File Exchange FeedTypes in the File Exchange Migration Guide. Note: You must use a Content-Type header with its value set to "multipart/form-data". See Samples for information. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.upload_file_with_http_info(task_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str task_id: The task_id associated with the file that will be uploaded. This ID was generated when the specified task was created. (required)
:param str creation_date:
:param str file_name:
:param str modification_date:
:param str name:
:param dict(str, str) parameters:
:param str read_date:
:param int size:
:param str type:
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id', 'creation_date', 'file_name', 'modification_date', 'name', 'parameters', 'read_date', 'size', 'type'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method upload_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params or
params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `upload_file`") # noqa: E501
collection_formats = {}
path_params = {}
if 'task_id' in params:
path_params['task_id'] = params['task_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'creation_date' in params:
form_params.append(('creationDate', params['creation_date'])) # noqa: E501
if 'file_name' in params:
form_params.append(('fileName', params['file_name'])) # noqa: E501
if 'modification_date' in params:
form_params.append(('modificationDate', params['modification_date'])) # noqa: E501
if 'name' in params:
form_params.append(('name', params['name'])) # noqa: E501
if 'parameters' in params:
form_params.append(('parameters', params['parameters'])) # noqa: E501
if 'read_date' in params:
form_params.append(('readDate', params['read_date'])) # noqa: E501
if 'size' in params:
form_params.append(('size', params['size'])) # noqa: E501
if 'type' in params:
form_params.append(('type', params['type'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['multipart/form-data']) # noqa: E501
# Authentication setting
auth_settings = ['api_auth'] # noqa: E501
return self.api_client.call_api(
'/task/{task_id}/upload_file', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
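upload_file_with_http_info maps its snake_case keyword arguments onto camelCase multipart field names (creationDate, fileName, and so on) before posting. A standalone sketch of that mapping, handy for checking what the form post will contain; `build_form_params` is an illustrative name, not client API:

```python
# snake_case kwarg -> multipart field name, mirroring the branches above.
FORM_FIELD_NAMES = {
    "creation_date": "creationDate",
    "file_name": "fileName",
    "modification_date": "modificationDate",
    "name": "name",
    "parameters": "parameters",
    "read_date": "readDate",
    "size": "size",
    "type": "type",
}

def build_form_params(params):
    """Return the (field, value) pairs upload_file posts as multipart/form-data."""
    return [(FORM_FIELD_NAMES[k], v) for k, v in sorted(params.items())
            if k in FORM_FIELD_NAMES]

pairs = build_form_params({"file_name": "inventory.csv", "size": 2048})
```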
# ----------------------------------------------------------------------
# File: src/grad_match.py (repo: eric-hhy/pytorch-nested-unet, MIT license)
# ----------------------------------------------------------------------
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from .dataset import Dataset
from .models import UnetModel, UnetModel2
from .metrics import PSNR
from .utils import Progbar, create_dir, stitch_images, imsave, Get_gradient
class GradientMatch():
def __init__(self, config):
self.config = config
self.model_name = "Unet"
self.debug = False
self.unet_model = UnetModel(config).to(config.DEVICE)
self.psnr = PSNR(255.0).to(config.DEVICE)
self.train_dataset = Dataset(list_folder = config.LIST_FOLDER,
mode = "train",
crop_size = config.CROP_SIZE,
scale = config.SCALE
)
self.test_dataset = Dataset(list_folder = config.LIST_FOLDER,
mode = "test",
crop_size = config.CROP_SIZE,
scale = config.SCALE
)
self.val_dataset = Dataset(list_folder = config.LIST_FOLDER,
mode = "eval",
crop_size = config.CROP_SIZE,
scale = config.SCALE
)
self.samples_path = os.path.join(config.PATH, 'samples')
self.results_path = os.path.join(config.PATH, 'results')
if config.RESULTS is not None:
self.results_path = os.path.join(config.RESULTS)
if config.DEBUG is not None and config.DEBUG != 0:
self.debug = True
self.log_file = os.path.join(config.PATH, 'log_' + self.model_name + '.dat')
def load(self):
self.unet_model.load()
def save(self):
self.unet_model.save()
def train(self):
train_loader = DataLoader(
dataset=self.train_dataset,
batch_size=self.config.BATCH_SIZE,
num_workers=4,
drop_last=True,
shuffle=True
)
epoch = 0
keep_training = True
model = self.config.MODEL
max_iteration = int(float(self.config.MAX_ITERS))
total = len(self.train_dataset)
if total == 0:
print('No training data was provided! Check relevant path in the configuration file.')
return
while keep_training:
epoch += 1
print('\n\nTraining epoch: %d' % epoch)
progbar = Progbar(total, width=20, stateful_metrics=['epoch', 'iter'])
for items in train_loader:
self.unet_model.train()
lr_images, hr_images = self.cuda(*items)
# train
hr_images_pred, hr_grads, mix_loss, logs = self.unet_model.process(lr_images, hr_images)
# metrics
psnr = self.psnr(self.postprocess(hr_images), self.postprocess(hr_images_pred))
mae = (torch.sum(torch.abs(hr_images - hr_images_pred)) / torch.sum(hr_images)).float()
logs.append(('psnr', psnr.item()))
logs.append(('mae', mae.item()))
# backward
self.unet_model.backward(mix_loss)
iteration = self.unet_model.iteration
if iteration > max_iteration:
keep_training = False
print('Maximum number of iterations reached!')
break
logs = [
("epoch", epoch),
("iter", iteration),
] + logs
progbar.add(len(hr_images), values=logs if self.config.VERBOSE else [x for x in logs if not x[0].startswith('l_')])
# log model at checkpoints
if self.config.LOG_INTERVAL and iteration % self.config.LOG_INTERVAL == 0:
self.log(logs)
# sample model at checkpoints
if self.config.SAMPLE_INTERVAL and iteration % self.config.SAMPLE_INTERVAL == 0:
self.sample()
# evaluate model at checkpoints
if self.config.EVAL_INTERVAL and iteration % self.config.EVAL_INTERVAL == 0:
print('\nstart eval...\n')
self.eval()
# save model at checkpoints
if self.config.SAVE_INTERVAL and iteration % self.config.SAVE_INTERVAL == 0:
self.save()
print('\nEnd training....')
def eval(self):
val_loader = DataLoader(
dataset=self.val_dataset,
batch_size=self.config.BATCH_SIZE,
drop_last=True,
shuffle=True
)
total = len(self.val_dataset)
self.unet_model.eval()
progbar = Progbar(total, width=20, stateful_metrics=['iter'])
iteration = 0
for items in val_loader:
iteration += 1
lr_images, hr_images = self.cuda(*items)
hr_images_pred, hr_grads, mix_loss, logs = self.unet_model.process(lr_images, hr_images)
# metrics
psnr = self.psnr(self.postprocess(hr_images), self.postprocess(hr_images_pred))
mae = (torch.sum(torch.abs(hr_images - hr_images_pred)) / torch.sum(hr_images)).float()
logs.append(('psnr', psnr.item()))
logs.append(('mae', mae.item()))
logs = [("iter", iteration), ] + logs
progbar.add(len(hr_images), values=logs)
def test(self):
self.unet_model.eval()
create_dir(self.results_path)
test_loader = DataLoader(
dataset=self.test_dataset,
batch_size=1,
)
index = 0
with torch.no_grad():
for items in test_loader:
name = self.test_dataset.load_name(index)
lr_images, hr_images = self.cuda(*items)
index += 1
outputs = self.unet_model.forward(lr_images)
output = self.postprocess(outputs)[0]
path = os.path.join(self.results_path, name)
print(index, name)
imsave(output, path)
print('\nEnd test....')
def sample(self):
if len(self.val_dataset) == 0:
return
self.unet_model.eval()
items = next(self.val_dataset.create_iterator(self.config.SAMPLE_SIZE))
lr_images, hr_images = self.cuda(*items)
iteration = self.unet_model.iteration
outputs = self.unet_model.forward(lr_images)
get_grad = Get_gradient()
hr_grads = get_grad.forward(hr_images)
image_per_row = 2
if self.config.SAMPLE_SIZE <= 6:
image_per_row = 1
images = stitch_images(
self.postprocess(self.scale(lr_images)),
self.postprocess(hr_images),
self.postprocess(hr_grads),
self.postprocess(outputs),
img_per_row=image_per_row
)
path = os.path.join(self.samples_path, self.model_name)
name = os.path.join(path, str(iteration).zfill(5) + ".png")
create_dir(path)
print('\nsaving sample ' + name)
images.save(name)
def scale(self, tensor):
return F.interpolate(tensor, scale_factor=self.config.SCALE)
def log(self, logs):
with open(self.log_file, 'a') as f:
f.write('%s\n' % ' '.join([str(item[1]) for item in logs]))
def cuda(self, *args):
return (item.to(self.config.DEVICE) for item in args)
def postprocess(self, img):
# [0, 1] => [0, 255]
img = img * 255.0
img = img.permute(0, 2, 3, 1)
return img.int()
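A note on the quantity logged as `mae` above: `torch.sum(torch.abs(hr_images - hr_images_pred)) / torch.sum(hr_images)` is a relative L1 error (total absolute error normalised by the sum of the target), not a mean absolute error. A torch-free sketch of the same quantity, with an illustrative helper name:

```python
def relative_l1(pred, target):
    # sum of absolute errors, normalised by the sum of the target values --
    # the quantity the trainer logs under the name "mae"
    num = sum(abs(p - t) for p, t in zip(pred, target))
    den = sum(target)
    return num / den

print(relative_l1([0.9, 1.1], [1.0, 1.0]))  # approximately 0.1
```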
class GradientMatch2():
def __init__(self, config):
self.config = config
self.model_name = "Unet2"
self.debug = False
self.unet_model = UnetModel2(config).to(config.DEVICE)
self.psnr = PSNR(255.0).to(config.DEVICE)
self.train_dataset = Dataset(list_folder=config.LIST_FOLDER,
mode="train",
crop_size=config.CROP_SIZE,
scale=config.SCALE
)
self.test_dataset = Dataset(list_folder=config.LIST_FOLDER,
mode="test",
crop_size=config.CROP_SIZE,
scale=config.SCALE
)
self.val_dataset = Dataset(list_folder=config.LIST_FOLDER,
mode="eval",
crop_size=config.CROP_SIZE,
scale=config.SCALE
)
self.samples_path = os.path.join(config.PATH, 'samples')
self.results_path = os.path.join(config.PATH, 'results')
if config.RESULTS is not None:
self.results_path = config.RESULTS
if config.DEBUG is not None and config.DEBUG != 0:
self.debug = True
self.log_file = os.path.join(config.PATH, 'log_' + self.model_name + '.dat')
def load(self):
self.unet_model.load()
def save(self):
self.unet_model.save()
def train(self):
train_loader = DataLoader(
dataset=self.train_dataset,
batch_size=self.config.BATCH_SIZE,
num_workers=4,
drop_last=True,
shuffle=True
)
epoch = 0
keep_training = True
model = self.config.MODEL
max_iteration = int(float(self.config.MAX_ITERS))
total = len(self.train_dataset)
if total == 0:
print('No training data was provided! Check the relevant path in the configuration file.')
return
while keep_training:
epoch += 1
print('\n\nTraining epoch: %d' % epoch)
progbar = Progbar(total, width=20, stateful_metrics=['epoch', 'iter'])
for items in train_loader:
self.unet_model.train()
lr_images, hr_images = self.cuda(*items)
# train
hr_images_pred, hr_grads, mix_loss, logs = self.unet_model.process(lr_images, hr_images)
# metrics
psnr = self.psnr(self.postprocess(hr_images), self.postprocess(hr_images_pred))
mae = (torch.sum(torch.abs(hr_images - hr_images_pred)) / torch.sum(hr_images)).float()
logs.append(('psnr', psnr.item()))
logs.append(('mae', mae.item()))
# backward
self.unet_model.backward(mix_loss)
iteration = self.unet_model.iteration
if iteration > max_iteration:
keep_training = False
print('Maximum number of iterations reached!')
break
logs = [
("epoch", epoch),
("iter", iteration),
] + logs
progbar.add(len(hr_images), values=logs if self.config.VERBOSE else [x for x in logs if not x[0].startswith('l_')])
# log model at checkpoints
if self.config.LOG_INTERVAL and iteration % self.config.LOG_INTERVAL == 0:
self.log(logs)
# sample model at checkpoints
if self.config.SAMPLE_INTERVAL and iteration % self.config.SAMPLE_INTERVAL == 0:
self.sample()
# evaluate model at checkpoints
if self.config.EVAL_INTERVAL and iteration % self.config.EVAL_INTERVAL == 0:
print('\nstart eval...\n')
self.eval()
# save model at checkpoints
if self.config.SAVE_INTERVAL and iteration % self.config.SAVE_INTERVAL == 0:
self.save()
print('\nEnd training....')
def eval(self):
val_loader = DataLoader(
dataset=self.val_dataset,
batch_size=self.config.BATCH_SIZE,
drop_last=True,
shuffle=True
)
total = len(self.val_dataset)
self.unet_model.eval()
progbar = Progbar(total, width=20, stateful_metrics=['iter'])
iteration = 0
for items in val_loader:
iteration += 1
lr_images, hr_images = self.cuda(*items)
hr_images_pred, hr_grads, mix_loss, logs = self.unet_model.process(lr_images, hr_images)
# metrics
psnr = self.psnr(self.postprocess(hr_images), self.postprocess(hr_images_pred))
mae = (torch.sum(torch.abs(hr_images - hr_images_pred)) / torch.sum(hr_images)).float()
logs.append(('psnr', psnr.item()))
logs.append(('mae', mae.item()))
logs = [("iter", iteration), ] + logs
progbar.add(len(hr_images), values=logs)
def test(self):
self.unet_model.eval()
create_dir(self.results_path)
test_loader = DataLoader(
dataset=self.test_dataset,
batch_size=1,
)
index = 0
with torch.no_grad():
for items in test_loader:
name = self.test_dataset.load_name(index)
lr_images, hr_images = self.cuda(*items)
index += 1
outputs = self.unet_model.forward(lr_images)
output = self.postprocess(outputs)[0]
path = os.path.join(self.results_path, name)
print(index, name)
imsave(output, path)
print('\nEnd test....')
def sample(self):
if len(self.val_dataset) == 0:
return
self.unet_model.eval()
items = next(self.val_dataset.create_iterator(self.config.SAMPLE_SIZE))
lr_images, hr_images = self.cuda(*items)
iteration = self.unet_model.iteration
outputs = self.unet_model.forward(lr_images)
get_grad = Get_gradient()
hr_grads = get_grad.forward(hr_images)
image_per_row = 2
if self.config.SAMPLE_SIZE <= 6:
image_per_row = 1
images = stitch_images(
self.postprocess(self.scale(lr_images)),
self.postprocess(hr_images),
self.postprocess(hr_grads),
self.postprocess(outputs),
img_per_row=image_per_row
)
path = os.path.join(self.samples_path, self.model_name)
name = os.path.join(path, str(iteration).zfill(5) + ".png")
create_dir(path)
print('\nsaving sample ' + name)
images.save(name)
def scale(self, tensor):
return F.interpolate(tensor, scale_factor=self.config.SCALE)
def log(self, logs):
with open(self.log_file, 'a') as f:
f.write('%s\n' % ' '.join([str(item[1]) for item in logs]))
def cuda(self, *args):
return (item.to(self.config.DEVICE) for item in args)
def postprocess(self, img):
# [0, 1] => [0, 255]
img = img * 255.0
img = img.permute(0, 2, 3, 1)
return img.int()
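The `PSNR(255.0)` metric used throughout both trainers presumably implements the standard peak signal-to-noise ratio over 8-bit pixel values. For reference, a minimal pure-Python version of that formula (the helper name is illustrative):

```python
import math

def psnr_db(mse, max_val=255.0):
    # peak signal-to-noise ratio in decibels for a given mean squared error;
    # identical images (mse == 0) give an infinite PSNR
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10((max_val ** 2) / mse)

print(psnr_db(100.0))  # roughly 28.13 dB for an MSE of 100 on 8-bit images
```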
# === File: dicom2hdf/data_readers/patient_data/factories/patient_data_factories.py ===
# Repo: MaxenceLarose/dicom2hdf (Apache-2.0)
"""
@file: patient_data_factories.py
@Author: Maxence Larose
@Creation Date: 01/2022
@Last modification: 03/2022
@Description: This file contains all factories that inherit from the BasePatientDataFactory class.
"""
from typing import Dict, List, Optional
from dicom2hdf.data_readers.patient_data.factories.base_patient_data_factory import BasePatientDataFactory
from dicom2hdf.data_model import ImageAndSegmentationDataModel, PatientDataModel
from dicom2hdf.data_readers.image.dicom_reader import DicomReader
from dicom2hdf.data_readers.segmentation.segmentation_reader import SegmentationReader
class DefaultPatientDataFactory(BasePatientDataFactory):
"""
Class that defines the methods that are used to get the patient data. The default factory consists in obtaining all
the images without any segmentation.
"""
def __init__(
self,
path_to_patient_folder: str,
paths_to_segmentations: Optional[List[str]],
series_descriptions: Optional[Dict[str, List[str]]],
erase_unused_dicom_files: bool = False
):
"""
Constructor of the class DefaultPatientDataFactory.
Parameters
----------
path_to_patient_folder : str
Path to the folder containing the patient's image files.
paths_to_segmentations : Optional[List[str]]
List of paths to the patient's segmentation files.
series_descriptions : Optional[Dict[str, List[str]]]
A dictionary that contains the series descriptions of the images that absolutely need to be extracted from
the patient's file. Keys are arbitrary names given to the images we want to add and values are lists of
series descriptions.
erase_unused_dicom_files: bool = False
Whether to delete unused DICOM files or not. Use with caution.
"""
super().__init__(
path_to_patient_folder=path_to_patient_folder,
paths_to_segmentations=paths_to_segmentations,
series_descriptions=series_descriptions,
erase_unused_dicom_files=erase_unused_dicom_files
)
def create_patient_data(self) -> PatientDataModel:
"""
Creates a tuple containing all the patient's data.
Returns
-------
patient_data: PatientDataModel
Patient data.
"""
patient_data = PatientDataModel(
patient_id=self.patient_id,
data=[ImageAndSegmentationDataModel(image=image) for image in self._images_data]
)
return patient_data
class SegmentationPatientDataFactory(BasePatientDataFactory):
"""
Class that defines the methods that are used to get the patient data. The segmentation patient data factory consists
in obtaining the images that have the same serial uids as those contained in the file names of the given
segmentations. The final dataset therefore contains both the segmentations and their corresponding images.
"""
def __init__(
self,
path_to_patient_folder: str,
paths_to_segmentations: List[str],
series_descriptions: Optional[Dict[str, List[str]]],
erase_unused_dicom_files: bool = False
):
"""
Constructor of the class SegmentationPatientDataFactory.
Parameters
----------
path_to_patient_folder : str
Path to the folder containing the patient's image files.
paths_to_segmentations : Optional[List[str]]
List of paths to the patient's segmentation files.
series_descriptions : Optional[Dict[str, List[str]]]
A dictionary that contains the series descriptions of the images that absolutely need to be extracted from
the patient's file. Keys are arbitrary names given to the images we want to add and values are lists of
series descriptions.
erase_unused_dicom_files: bool = False
Whether to delete unused DICOM files or not. Use with caution.
"""
super().__init__(
path_to_patient_folder=path_to_patient_folder,
paths_to_segmentations=paths_to_segmentations,
series_descriptions=series_descriptions,
erase_unused_dicom_files=erase_unused_dicom_files
)
def create_patient_data(self) -> PatientDataModel:
"""
Creates a tuple containing all the patient's data.
Returns
-------
patient_data: PatientDataModel
Patient data.
"""
data = []
for image in self._images_data:
image_added = False
segmentations = []
for path_to_segmentation in self._paths_to_segmentations:
dicom_header = DicomReader.get_dicom_header(path_to_dicom=path_to_segmentation)
if image.dicom_header.SeriesInstanceUID == dicom_header.ReferencedSeriesSequence[0].SeriesInstanceUID:
segmentation_reader = SegmentationReader(
image=image,
path_to_segmentation=path_to_segmentation
)
segmentations.append(segmentation_reader.get_segmentation_data())
if segmentations:
image_and_segmentation_data = ImageAndSegmentationDataModel(
image=image,
segmentations=segmentations
)
data.append(image_and_segmentation_data)
image_added = True
if image_added is False and self._erase_unused_dicom_files:
self.erase_dicom_files(image)
patient_data = PatientDataModel(
patient_id=self.patient_id,
data=data
)
return patient_data
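The loop above pairs each image with any segmentation whose referenced SeriesInstanceUID matches the image's own UID, and keeps only images that found at least one match. With plain dicts standing in for the DICOM headers (the field names here are illustrative, not pydicom attributes), the matching logic reduces to:

```python
images = [{"uid": "1.2.3"}, {"uid": "9.9.9"}]
segmentations = [{"ref_uid": "1.2.3", "name": "GTV"}]

paired = []
for img in images:
    # collect segmentations whose reference UID matches this image's UID
    segs = [s["name"] for s in segmentations if s["ref_uid"] == img["uid"]]
    if segs:  # images without a matching segmentation are skipped (or erased)
        paired.append((img["uid"], segs))

print(paired)  # [('1.2.3', ['GTV'])]
```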
class SeriesDescriptionPatientDataFactory(BasePatientDataFactory):
"""
Class that defines the methods that are used to get the patient data. The series description patient data factory
consists in obtaining only the images that have the given series descriptions. The final dataset therefore contains
both the segmentations and their corresponding images.
"""
def __init__(
self,
path_to_patient_folder: str,
paths_to_segmentations: Optional[List[str]],
series_descriptions: Optional[Dict[str, List[str]]],
erase_unused_dicom_files: bool = False
):
"""
Constructor of the class SeriesDescriptionPatientDataFactory.
Parameters
----------
path_to_patient_folder : str
Path to the folder containing the patient's image files.
paths_to_segmentations : Optional[List[str]]
List of paths to the patient's segmentation files.
series_descriptions : Optional[Dict[str, List[str]]]
A dictionary that contains the series descriptions of the images that absolutely need to be extracted from
the patient's file. Keys are arbitrary names given to the images we want to add and values are lists of
series descriptions.
erase_unused_dicom_files: bool = False
Whether to delete unused DICOM files or not. Use with caution.
"""
super().__init__(
path_to_patient_folder=path_to_patient_folder,
paths_to_segmentations=paths_to_segmentations,
series_descriptions=series_descriptions,
erase_unused_dicom_files=erase_unused_dicom_files
)
@property
def flatten_series_descriptions(self) -> List[str]:
"""
Flatten series descriptions.
Returns
-------
flatten_series_description : List[str]
Series descriptions as a list instead of a dictionary.
"""
return [val for lst in self._series_descriptions.values() for val in lst]
def create_patient_data(self) -> PatientDataModel:
"""
Creates a tuple containing all the patient's data.
Returns
-------
patient_data: PatientDataModel
Patient data.
"""
data = []
for image_idx, image in enumerate(self._images_data):
image_added = False
if image.dicom_header.SeriesDescription in self.flatten_series_descriptions:
image_data = ImageAndSegmentationDataModel(image=image)
data.append(image_data)
image_added = True
if image_added is False and self._erase_unused_dicom_files:
self.erase_dicom_files(image)
patient_data = PatientDataModel(
patient_id=self.patient_id,
data=data
)
return patient_data
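The `flatten_series_descriptions` property above collapses the dict-of-lists configuration into one flat list via a nested comprehension. A standalone check of that comprehension with sample values (the series names are made up):

```python
series_descriptions = {
    "CT": ["CT soft tissue", "CT bone"],
    "PET": ["PET AC"],
}

# same comprehension as the property: outer loop over the lists, inner over values
flattened = [val for lst in series_descriptions.values() for val in lst]
print(flattened)  # ['CT soft tissue', 'CT bone', 'PET AC']
```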
class SegAndSeriesPatientDataFactory(BasePatientDataFactory):
"""
Class that defines the methods that are used to get the patient data. The segmentation and series description
factory consists in obtaining the images that have the same serial uids as those contained in the file names of the
given segmentations and the images that have the given series descriptions.
"""
def __init__(
self,
path_to_patient_folder: str,
paths_to_segmentations: Optional[List[str]],
series_descriptions: Optional[Dict[str, List[str]]],
erase_unused_dicom_files: bool = False
):
"""
Constructor of the class SegAndSeriesPatientDataFactory.
Parameters
----------
path_to_patient_folder : str
Path to the folder containing the patient's image files.
paths_to_segmentations : Optional[List[str]]
List of paths to the patient's segmentation files.
series_descriptions : Optional[Dict[str, List[str]]]
A dictionary that contains the series descriptions of the images that absolutely need to be extracted from
the patient's file. Keys are arbitrary names given to the images we want to add and values are lists of
series descriptions.
erase_unused_dicom_files: bool = False
Whether to delete unused DICOM files or not. Use with caution.
"""
super().__init__(
path_to_patient_folder=path_to_patient_folder,
paths_to_segmentations=paths_to_segmentations,
series_descriptions=series_descriptions,
erase_unused_dicom_files=erase_unused_dicom_files
)
@property
def flatten_series_descriptions(self) -> List[str]:
"""
Flatten series descriptions.
Returns
-------
flatten_series_description : List[str]
Series descriptions as a list instead of a dictionary.
"""
return [val for lst in self._series_descriptions.values() for val in lst]
def create_patient_data(self) -> PatientDataModel:
"""
Creates a tuple containing all the patient's data.
Returns
-------
patient_data: PatientDataModel
Patient data.
"""
data = []
for image_idx, image in enumerate(self._images_data):
image_added = False
series_description = image.dicom_header.SeriesDescription
segmentations = []
for path_to_segmentation in self._paths_to_segmentations:
seg_header = DicomReader.get_dicom_header(path_to_dicom=path_to_segmentation)
if hasattr(seg_header, "ReferencedSeriesSequence"):
reference_uid = seg_header.ReferencedSeriesSequence[0].SeriesInstanceUID
else:
referenced_frame_of_reference_sequence = seg_header.ReferencedFrameOfReferenceSequence
rt_referenced_study_sequence = referenced_frame_of_reference_sequence[0].RTReferencedStudySequence
rt_referenced_series_sequence = rt_referenced_study_sequence[0].RTReferencedSeriesSequence
reference_uid = rt_referenced_series_sequence[0].SeriesInstanceUID
if image.dicom_header.SeriesInstanceUID == reference_uid:
segmentation_reader = SegmentationReader(
image=image,
path_to_segmentation=path_to_segmentation
)
segmentations.append(segmentation_reader.get_segmentation_data())
if segmentations:
image_and_segmentation_data = ImageAndSegmentationDataModel(
image=image,
segmentations=segmentations
)
data.append(image_and_segmentation_data)
image_added = True
if image_added is False and series_description in self.flatten_series_descriptions:
image_data = ImageAndSegmentationDataModel(image=image)
data.append(image_data)
image_added = True
if image_added is False and self._erase_unused_dicom_files:
self.erase_dicom_files(image)
patient_data = PatientDataModel(
patient_id=self.patient_id,
data=data
)
return patient_data
# === File: adamsTrigs.py ===
# Repo: dudaspm/JupyterLab-Python (BSD-3-Clause)
import math as mt
def printTrigVals(angle):
    print(angle, mt.sin(angle), mt.cos(angle), mt.tan(angle))
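A quick sanity check of the helper above at a 45-degree angle, where sine and cosine coincide and their ratio (the tangent) is 1:

```python
import math as mt

angle = mt.pi / 4  # 45 degrees
print(angle, mt.sin(angle), mt.cos(angle), mt.tan(angle))
```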
# === File: hubspot/crm/products/api/basic_api.py ===
# Repo: cclauss/hubspot-api-python (Apache-2.0)
# coding: utf-8
"""
Products
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: v3
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from hubspot.crm.products.api_client import ApiClient
from hubspot.crm.products.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class BasicApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def archive(self, product_id, **kwargs): # noqa: E501
"""Archive # noqa: E501
Move an Object identified by `{productId}` to the recycling bin. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.archive(product_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str product_id: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.archive_with_http_info(product_id, **kwargs) # noqa: E501
def archive_with_http_info(self, product_id, **kwargs): # noqa: E501
"""Archive # noqa: E501
Move an Object identified by `{productId}` to the recycling bin. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.archive_with_http_info(product_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str product_id: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'product_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method archive" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'product_id' is set
if self.api_client.client_side_validation and ('product_id' not in local_var_params or # noqa: E501
local_var_params['product_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `product_id` when calling `archive`") # noqa: E501
collection_formats = {}
path_params = {}
if 'product_id' in local_var_params:
path_params['productId'] = local_var_params['product_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = ['hapikey', 'oauth2'] # noqa: E501
return self.api_client.call_api(
'/crm/v3/objects/products/{productId}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
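The docstrings above describe the generated client's calling convention: a blocking call by default, or `async_req=True` to get back a thread whose `get()` yields the result. The same pattern can be sketched with a standard thread pool; the `archive` function here is a stand-in for the real HTTP call, not part of the hubspot client:

```python
from concurrent.futures import ThreadPoolExecutor

def archive(product_id):
    # stand-in for the blocking DELETE request the real client performs
    return "archived %s" % product_id

pool = ThreadPoolExecutor(max_workers=1)

result_sync = archive("1234")          # default: blocks until the call finishes
future = pool.submit(archive, "1234")  # async_req=True style: returns at once
result_async = future.result()         # mirrors thread.get() in the docstrings
pool.shutdown()

print(result_sync, result_async)  # archived 1234 archived 1234
```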
def create(self, simple_public_object_input, **kwargs): # noqa: E501
"""Create # noqa: E501
Create a product with the given properties and return a copy of the object, including the ID. Documentation and examples for creating standard products is provided. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create(simple_public_object_input, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param SimplePublicObjectInput simple_public_object_input: (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: SimplePublicObject
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_with_http_info(simple_public_object_input, **kwargs) # noqa: E501
def create_with_http_info(self, simple_public_object_input, **kwargs): # noqa: E501
"""Create # noqa: E501
Create a product with the given properties and return a copy of the object, including the ID. Documentation and examples for creating standard products is provided. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_with_http_info(simple_public_object_input, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param SimplePublicObjectInput simple_public_object_input: (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(SimplePublicObject, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'simple_public_object_input'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'simple_public_object_input' is set
if self.api_client.client_side_validation and ('simple_public_object_input' not in local_var_params or # noqa: E501
local_var_params['simple_public_object_input'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `simple_public_object_input` when calling `create`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'simple_public_object_input' in local_var_params:
body_params = local_var_params['simple_public_object_input']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', '*/*']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['hapikey', 'oauth2'] # noqa: E501
return self.api_client.call_api(
'/crm/v3/objects/products', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SimplePublicObject', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_by_id(self, product_id, **kwargs): # noqa: E501
"""Read # noqa: E501
Read an Object identified by `{productId}`. `{productId}` refers to the internal object ID by default, or optionally any unique property value as specified by the `idProperty` query param. Control what is returned via the `properties` query param. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_by_id(product_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str product_id: (required)
:param list[str] properties: A comma separated list of the properties to be returned in the response. If any of the specified properties are not present on the requested object(s), they will be ignored.
:param list[str] associations: A comma separated list of object types to retrieve associated IDs for. If any of the specified associations do not exist, they will be ignored.
:param bool archived: Whether to return only results that have been archived.
:param str id_property: The name of a property whose values are unique for this object type
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: SimplePublicObjectWithAssociations
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_by_id_with_http_info(product_id, **kwargs) # noqa: E501
def get_by_id_with_http_info(self, product_id, **kwargs): # noqa: E501
"""Read # noqa: E501
Read an Object identified by `{productId}`. `{productId}` refers to the internal object ID by default, or optionally any unique property value as specified by the `idProperty` query param. Control what is returned via the `properties` query param. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_by_id_with_http_info(product_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str product_id: (required)
:param list[str] properties: A comma separated list of the properties to be returned in the response. If any of the specified properties are not present on the requested object(s), they will be ignored.
:param list[str] associations: A comma separated list of object types to retrieve associated IDs for. If any of the specified associations do not exist, they will be ignored.
:param bool archived: Whether to return only results that have been archived.
:param str id_property: The name of a property whose values are unique for this object type
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(SimplePublicObjectWithAssociations, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
        local_var_params = locals()

        all_params = [
            'product_id',
            'properties',
            'associations',
            'archived',
            'id_property'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_by_id" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'product_id' is set
        if self.api_client.client_side_validation and ('product_id' not in local_var_params or  # noqa: E501
                                                       local_var_params['product_id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `product_id` when calling `get_by_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'product_id' in local_var_params:
            path_params['productId'] = local_var_params['product_id']  # noqa: E501

        query_params = []
        if 'properties' in local_var_params and local_var_params['properties'] is not None:  # noqa: E501
            query_params.append(('properties', local_var_params['properties']))  # noqa: E501
            collection_formats['properties'] = 'multi'  # noqa: E501
        if 'associations' in local_var_params and local_var_params['associations'] is not None:  # noqa: E501
            query_params.append(('associations', local_var_params['associations']))  # noqa: E501
            collection_formats['associations'] = 'multi'  # noqa: E501
        if 'archived' in local_var_params and local_var_params['archived'] is not None:  # noqa: E501
            query_params.append(('archived', local_var_params['archived']))  # noqa: E501
        if 'id_property' in local_var_params and local_var_params['id_property'] is not None:  # noqa: E501
            query_params.append(('idProperty', local_var_params['id_property']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', '*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = ['hapikey', 'oauth2']  # noqa: E501

        return self.api_client.call_api(
            '/crm/v3/objects/products/{productId}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='SimplePublicObjectWithAssociations',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_page(self, **kwargs):  # noqa: E501
        """List  # noqa: E501

        Read a page of products. Control what is returned via the `properties` query param.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.get_page(async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param int limit: The maximum number of results to display per page.
        :param str after: The paging cursor token of the last successfully read resource will be returned as the `paging.next.after` JSON property of a paged response containing more results.
        :param list[str] properties: A comma separated list of the properties to be returned in the response. If any of the specified properties are not present on the requested object(s), they will be ignored.
        :param list[str] associations: A comma separated list of object types to retrieve associated IDs for. If any of the specified associations do not exist, they will be ignored.
        :param bool archived: Whether to return only results that have been archived.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: CollectionResponseSimplePublicObjectWithAssociationsForwardPaging
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.get_page_with_http_info(**kwargs)  # noqa: E501

    def get_page_with_http_info(self, **kwargs):  # noqa: E501
        """List  # noqa: E501

        Read a page of products. Control what is returned via the `properties` query param.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.get_page_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param int limit: The maximum number of results to display per page.
        :param str after: The paging cursor token of the last successfully read resource will be returned as the `paging.next.after` JSON property of a paged response containing more results.
        :param list[str] properties: A comma separated list of the properties to be returned in the response. If any of the specified properties are not present on the requested object(s), they will be ignored.
        :param list[str] associations: A comma separated list of object types to retrieve associated IDs for. If any of the specified associations do not exist, they will be ignored.
        :param bool archived: Whether to return only results that have been archived.
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(CollectionResponseSimplePublicObjectWithAssociationsForwardPaging, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = [
            'limit',
            'after',
            'properties',
            'associations',
            'archived'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_page" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'limit' in local_var_params and local_var_params['limit'] is not None:  # noqa: E501
            query_params.append(('limit', local_var_params['limit']))  # noqa: E501
        if 'after' in local_var_params and local_var_params['after'] is not None:  # noqa: E501
            query_params.append(('after', local_var_params['after']))  # noqa: E501
        if 'properties' in local_var_params and local_var_params['properties'] is not None:  # noqa: E501
            query_params.append(('properties', local_var_params['properties']))  # noqa: E501
            collection_formats['properties'] = 'multi'  # noqa: E501
        if 'associations' in local_var_params and local_var_params['associations'] is not None:  # noqa: E501
            query_params.append(('associations', local_var_params['associations']))  # noqa: E501
            collection_formats['associations'] = 'multi'  # noqa: E501
        if 'archived' in local_var_params and local_var_params['archived'] is not None:  # noqa: E501
            query_params.append(('archived', local_var_params['archived']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', '*/*'])  # noqa: E501

        # Authentication setting
        auth_settings = ['hapikey', 'oauth2']  # noqa: E501

        return self.api_client.call_api(
            '/crm/v3/objects/products', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='CollectionResponseSimplePublicObjectWithAssociationsForwardPaging',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

    def update(self, product_id, simple_public_object_input, **kwargs):  # noqa: E501
        """Update  # noqa: E501

        Perform a partial update of an Object identified by `{productId}`. `{productId}` refers to the internal object ID by default, or optionally any unique property value as specified by the `idProperty` query param. Provided property values will be overwritten. Read-only and non-existent properties will be ignored. Properties values can be cleared by passing an empty string.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.update(product_id, simple_public_object_input, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str product_id: (required)
        :param SimplePublicObjectInput simple_public_object_input: (required)
        :param str id_property: The name of a property whose values are unique for this object type
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: SimplePublicObject
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.update_with_http_info(product_id, simple_public_object_input, **kwargs)  # noqa: E501

    def update_with_http_info(self, product_id, simple_public_object_input, **kwargs):  # noqa: E501
        """Update  # noqa: E501

        Perform a partial update of an Object identified by `{productId}`. `{productId}` refers to the internal object ID by default, or optionally any unique property value as specified by the `idProperty` query param. Provided property values will be overwritten. Read-only and non-existent properties will be ignored. Properties values can be cleared by passing an empty string.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = api.update_with_http_info(product_id, simple_public_object_input, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str product_id: (required)
        :param SimplePublicObjectInput simple_public_object_input: (required)
        :param str id_property: The name of a property whose values are unique for this object type
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(SimplePublicObject, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = [
            'product_id',
            'simple_public_object_input',
            'id_property'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'product_id' is set
        if self.api_client.client_side_validation and ('product_id' not in local_var_params or  # noqa: E501
                                                       local_var_params['product_id'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `product_id` when calling `update`")  # noqa: E501
        # verify the required parameter 'simple_public_object_input' is set
        if self.api_client.client_side_validation and ('simple_public_object_input' not in local_var_params or  # noqa: E501
                                                       local_var_params['simple_public_object_input'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `simple_public_object_input` when calling `update`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'product_id' in local_var_params:
            path_params['productId'] = local_var_params['product_id']  # noqa: E501

        query_params = []
        if 'id_property' in local_var_params and local_var_params['id_property'] is not None:  # noqa: E501
            query_params.append(('idProperty', local_var_params['id_property']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'simple_public_object_input' in local_var_params:
            body_params = local_var_params['simple_public_object_input']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', '*/*'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['hapikey', 'oauth2']  # noqa: E501

        return self.api_client.call_api(
            '/crm/v3/objects/products/{productId}', 'PATCH',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='SimplePublicObject',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
| 50.020896 | 395 | 0.618756 | 3,863 | 33,514 | 5.149107 | 0.068341 | 0.041024 | 0.063345 | 0.032376 | 0.940978 | 0.935599 | 0.929466 | 0.929466 | 0.918858 | 0.913127 | 0 | 0.01451 | 0.309035 | 33,514 | 669 | 396 | 50.095665 | 0.844453 | 0.490332 | 0 | 0.741433 | 1 | 0 | 0.193973 | 0.059284 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034268 | false | 0 | 0.015576 | 0 | 0.084112 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
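The generated client methods above all follow one pattern: validate `**kwargs` against a whitelist of accepted names, then assemble `query_params` plus a `collection_formats` map (where `'multi'` means a list is serialized as repeated query keys) before delegating to `call_api`. A minimal, self-contained sketch of that pattern; `demo_get_page` and `ALL_PARAMS` are illustrative stand-ins and do not call the real HubSpot client:

```python
# Standalone sketch of the kwargs-validation / query-parameter pattern used
# by the generated methods above. Not the real HubSpot API client.

class ApiTypeError(TypeError):
    """Raised when an unexpected keyword argument is passed (as in the client)."""

ALL_PARAMS = ['limit', 'after', 'properties', 'associations', 'archived']

def demo_get_page(**kwargs):
    # Reject any keyword the method does not declare, mirroring the
    # `if key not in all_params: raise ApiTypeError(...)` loop above.
    for key in kwargs:
        if key not in ALL_PARAMS:
            raise ApiTypeError(
                "Got an unexpected keyword argument '%s'"
                " to method get_page" % key
            )
    query_params = []
    collection_formats = {}
    if kwargs.get('properties') is not None:
        query_params.append(('properties', kwargs['properties']))
        # 'multi' => ?properties=a&properties=b when the request is built
        collection_formats['properties'] = 'multi'
    if kwargs.get('limit') is not None:
        query_params.append(('limit', kwargs['limit']))
    return query_params, collection_formats

params, formats = demo_get_page(limit=10, properties=['name', 'price'])
```

The whitelist check is why a typo like `limt=10` fails loudly with `ApiTypeError` instead of being silently dropped.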
5d33b1e925d476645b3d8783d519e30e558e6db1 | 11 | py | Python | test/functions/python-init-error/handler.py | akadop/fun | 6329dcbd3f7d99a519cbbd6ac615fec4b46fc28e | [
"Apache-2.0"
] | 101 | 2020-06-15T22:25:10.000Z | 2022-03-31T23:28:26.000Z | test/functions/python-init-error/handler.py | akadop/fun | 6329dcbd3f7d99a519cbbd6ac615fec4b46fc28e | [
"Apache-2.0"
] | 12 | 2020-08-30T05:41:00.000Z | 2022-03-05T15:56:34.000Z | test/functions/python-init-error/handler.py | akadop/fun | 6329dcbd3f7d99a519cbbd6ac615fec4b46fc28e | [
"Apache-2.0"
] | 8 | 2020-06-27T17:27:22.000Z | 2022-03-30T00:50:12.000Z | 10 * (1/0)
| 5.5 | 10 | 0.363636 | 3 | 11 | 1.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.272727 | 11 | 1 | 11 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5d3b99c1dfca992f842be43a64e598b3d49778b9 | 480 | py | Python | PyDSRL/cross_circle_gym/envs/__init__.py | zishanqin/Symbolic-transfer | b553f188ad3f6c6492fcff556ac6f597e56cf43e | [
"MIT"
] | 3 | 2021-07-28T11:28:25.000Z | 2021-07-28T11:56:58.000Z | PyDSRL/cross_circle_gym/envs/__init__.py | zishanqin/Symbolic-transfer | b553f188ad3f6c6492fcff556ac6f597e56cf43e | [
"MIT"
] | null | null | null | PyDSRL/cross_circle_gym/envs/__init__.py | zishanqin/Symbolic-transfer | b553f188ad3f6c6492fcff556ac6f597e56cf43e | [
"MIT"
] | 1 | 2021-07-28T11:40:45.000Z | 2021-07-28T11:40:45.000Z | from cross_circle_gym.envs.cross_circle_neg_grid import CrossCircleNegGrid
from cross_circle_gym.envs.cross_circle_mixed_grid import CrossCircleMixedGrid
from cross_circle_gym.envs.cross_circle_neg_rand import CrossCircleNegRand
from cross_circle_gym.envs.cross_circle_mixed_rand import CrossCircleMixedRand
from cross_circle_gym.envs.test_mixed_grid import TestMixedGrid # symbol on grid
from cross_circle_gym.envs.test_mixed_rand import TestMixedRand # symbol on mixed grid
| 48 | 87 | 0.895833 | 71 | 480 | 5.661972 | 0.28169 | 0.273632 | 0.223881 | 0.268657 | 0.522388 | 0.522388 | 0.522388 | 0.368159 | 0 | 0 | 0 | 0 | 0.075 | 480 | 9 | 88 | 53.333333 | 0.905405 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5d40110a6e7f52f2605956548a1a57bda6ca6a6d | 4,212 | py | Python | biostats/model/ancova.py | hikarimusic2002/BIOSTATS | ffd108c60fcf06073253380cc1d8b9fc448e8812 | [
"MIT"
] | null | null | null | biostats/model/ancova.py | hikarimusic2002/BIOSTATS | ffd108c60fcf06073253380cc1d8b9fc448e8812 | [
"MIT"
] | null | null | null | biostats/model/ancova.py | hikarimusic2002/BIOSTATS | ffd108c60fcf06073253380cc1d8b9fc448e8812 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
def one_way_ancova(data, target, between, covariate, summary=None):
formula = "Q('%s') ~ " % target
formula += "C(Q('%s'), Sum) + " % between
formula += "Q('%s')" % covariate
model = ols(formula, data=data).fit()
result = anova_lm(model, typ=2)
result = result.rename(columns={
'sum_sq' : 'Sum Square',
'F' : 'F Statistic',
'PR(>F)' : 'p-value'
})
index_change = {}
for index in result.index:
changed = index
changed = changed.replace("C(Q('%s'), Sum)" % between, between)
changed = changed.replace("Q('%s')" % covariate, covariate)
index_change[index] = changed
result = result.rename(index_change)
result2 = pd.DataFrame(
{
"Count": data.groupby(between)[target].count(),
"Mean": data.groupby(between)[target].mean(),
"Std.": data.groupby(between)[target].std(),
}
)
result2["Mean({})".format(covariate)] = data.groupby(between)[covariate].mean()
result2.index.name = None
index_change = {}
for index in result2.index:
changed = "{}({})".format(between, index)
index_change[index] = changed
result2 = result2.rename(index_change)
if summary:
return result2
else:
return result
def two_way_ancova(data, target, between, covariate, summary=None):
formula = "Q('%s') ~ " % target
formula += "C(Q('%s'), Sum) + " % between
for var in covariate:
formula += "Q('%s') + " % var
formula = formula[:-3]
model = ols(formula, data=data).fit()
result = anova_lm(model, typ=2)
result = result.rename(columns={
'sum_sq' : 'Sum Square',
'F' : 'F Statistic',
'PR(>F)' : 'p-value'
})
index_change = {}
for index in result.index:
changed = index
changed = changed.replace("C(Q('%s'), Sum)" % between, between)
for var in covariate:
changed = changed.replace("Q('%s')" % var, var)
index_change[index] = changed
result = result.rename(index_change)
result2 = pd.DataFrame(
{
"Count": data.groupby(between)[target].count(),
"Mean": data.groupby(between)[target].mean(),
"Std.": data.groupby(between)[target].std(),
}
)
for var in covariate:
result2["Mean({})".format(var)] = data.groupby(between)[var].mean()
result2.index.name = None
index_change = {}
for index in result2.index:
changed = "{}({})".format(between, index)
index_change[index] = changed
result2 = result2.rename(index_change)
if summary:
return result2
else:
return result
def n_way_ancova(data, target, between, covariate, summary=None):
formula = "Q('%s') ~ " % target
formula += "C(Q('%s'), Sum) + " % between
for var in covariate:
formula += "Q('%s') + " % var
formula = formula[:-3]
model = ols(formula, data=data).fit()
result = anova_lm(model, typ=2)
result = result.rename(columns={
'sum_sq' : 'Sum Square',
'F' : 'F Statistic',
'PR(>F)' : 'p-value'
})
index_change = {}
for index in result.index:
changed = index
changed = changed.replace("C(Q('%s'), Sum)" % between, between)
for var in covariate:
changed = changed.replace("Q('%s')" % var, var)
index_change[index] = changed
result = result.rename(index_change)
result2 = pd.DataFrame(
{
"Count": data.groupby(between)[target].count(),
"Mean": data.groupby(between)[target].mean(),
"Std.": data.groupby(between)[target].std(),
}
)
for var in covariate:
result2["Mean({})".format(var)] = data.groupby(between)[var].mean()
result2.index.name = None
index_change = {}
for index in result2.index:
changed = "{}({})".format(between, index)
index_change[index] = changed
result2 = result2.rename(index_change)
if summary:
return result2
else:
return result
| 30.521739 | 83 | 0.569088 | 489 | 4,212 | 4.838446 | 0.128834 | 0.083686 | 0.091293 | 0.091293 | 0.914624 | 0.904903 | 0.904903 | 0.904903 | 0.904903 | 0.904903 | 0 | 0.008461 | 0.270418 | 4,212 | 137 | 84 | 30.744526 | 0.761471 | 0 | 0 | 0.841667 | 0 | 0 | 0.090477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.033333 | 0 | 0.108333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
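Each ANCOVA helper above starts by assembling a Patsy formula string for statsmodels' `ols()`: the target and covariates are wrapped in `Q()` (so arbitrary column names survive), and the grouping factor is sum-coded via `C(..., Sum)`. A standalone sketch of that string-building step so the result can be inspected without fitting a model; `build_ancova_formula` and the column names are hypothetical:

```python
# Sketch of the formula construction used by two_way_ancova / n_way_ancova.
# Q() quotes column names for Patsy; C(..., Sum) requests sum-to-zero coding.

def build_ancova_formula(target, between, covariates):
    formula = "Q('%s') ~ " % target
    formula += "C(Q('%s'), Sum) + " % between   # sum-coded between-factor
    for var in covariates:
        formula += "Q('%s') + " % var           # numeric covariates
    return formula[:-3]                         # drop the trailing " + "

formula = build_ancova_formula("yield", "species", ["height", "weight"])
# → "Q('yield') ~ C(Q('species'), Sum) + Q('height') + Q('weight')"
```

Keeping this step as a pure string function makes the quoting and the trailing-separator trim easy to unit-test separately from statsmodels.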
537811b59626c5f754f2d80c736964845a501101 | 6,049 | py | Python | thirdparty/blender_autocomplete-master/2.79/bpy/ops/surface.py | Ray1184/HPMSBatch | 3852710e7366361cb9e90f471ddccbbce5ffe8ee | [
"MIT"
] | null | null | null | thirdparty/blender_autocomplete-master/2.79/bpy/ops/surface.py | Ray1184/HPMSBatch | 3852710e7366361cb9e90f471ddccbbce5ffe8ee | [
"MIT"
] | null | null | null | thirdparty/blender_autocomplete-master/2.79/bpy/ops/surface.py | Ray1184/HPMSBatch | 3852710e7366361cb9e90f471ddccbbce5ffe8ee | [
"MIT"
] | null | null | null | import sys
import typing
def primitive_nurbs_surface_circle_add(
radius: float = 1.0,
view_align: bool = False,
enter_editmode: bool = False,
location: float = (0.0, 0.0, 0.0),
rotation: float = (0.0, 0.0, 0.0),
layers: bool = (False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False,
False, False, False, False)):
'''Construct a Nurbs surface Circle
:param radius: Radius
:type radius: float
:param view_align: Align to View, Align the new object to the view
:type view_align: bool
:param enter_editmode: Enter Editmode, Enter editmode when adding this object
:type enter_editmode: bool
:param location: Location, Location for the newly added object
:type location: float
:param rotation: Rotation, Rotation for the newly added object
:type rotation: float
:param layers: Layer
:type layers: bool
'''
pass
def primitive_nurbs_surface_curve_add(
radius: float = 1.0,
view_align: bool = False,
enter_editmode: bool = False,
location: float = (0.0, 0.0, 0.0),
rotation: float = (0.0, 0.0, 0.0),
layers: bool = (False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False,
False, False, False, False)):
'''Construct a Nurbs surface Curve
:param radius: Radius
:type radius: float
:param view_align: Align to View, Align the new object to the view
:type view_align: bool
:param enter_editmode: Enter Editmode, Enter editmode when adding this object
:type enter_editmode: bool
:param location: Location, Location for the newly added object
:type location: float
:param rotation: Rotation, Rotation for the newly added object
:type rotation: float
:param layers: Layer
:type layers: bool
'''
pass
def primitive_nurbs_surface_cylinder_add(
radius: float = 1.0,
view_align: bool = False,
enter_editmode: bool = False,
location: float = (0.0, 0.0, 0.0),
rotation: float = (0.0, 0.0, 0.0),
layers: bool = (False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False,
False, False, False, False)):
'''Construct a Nurbs surface Cylinder
:param radius: Radius
:type radius: float
:param view_align: Align to View, Align the new object to the view
:type view_align: bool
:param enter_editmode: Enter Editmode, Enter editmode when adding this object
:type enter_editmode: bool
:param location: Location, Location for the newly added object
:type location: float
:param rotation: Rotation, Rotation for the newly added object
:type rotation: float
:param layers: Layer
:type layers: bool
'''
pass
def primitive_nurbs_surface_sphere_add(
radius: float = 1.0,
view_align: bool = False,
enter_editmode: bool = False,
location: float = (0.0, 0.0, 0.0),
rotation: float = (0.0, 0.0, 0.0),
layers: bool = (False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False,
False, False, False, False)):
'''Construct a Nurbs surface Sphere
:param radius: Radius
:type radius: float
:param view_align: Align to View, Align the new object to the view
:type view_align: bool
:param enter_editmode: Enter Editmode, Enter editmode when adding this object
:type enter_editmode: bool
:param location: Location, Location for the newly added object
:type location: float
:param rotation: Rotation, Rotation for the newly added object
:type rotation: float
:param layers: Layer
:type layers: bool
'''
pass
def primitive_nurbs_surface_surface_add(
radius: float = 1.0,
view_align: bool = False,
enter_editmode: bool = False,
location: float = (0.0, 0.0, 0.0),
rotation: float = (0.0, 0.0, 0.0),
layers: bool = (False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False,
False, False, False, False)):
'''Construct a Nurbs surface Patch
:param radius: Radius
:type radius: float
:param view_align: Align to View, Align the new object to the view
:type view_align: bool
:param enter_editmode: Enter Editmode, Enter editmode when adding this object
:type enter_editmode: bool
:param location: Location, Location for the newly added object
:type location: float
:param rotation: Rotation, Rotation for the newly added object
:type rotation: float
:param layers: Layer
:type layers: bool
'''
pass
def primitive_nurbs_surface_torus_add(
radius: float = 1.0,
view_align: bool = False,
enter_editmode: bool = False,
location: float = (0.0, 0.0, 0.0),
rotation: float = (0.0, 0.0, 0.0),
layers: bool = (False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False,
False, False, False, False)):
'''Construct a Nurbs surface Torus
:param radius: Radius
:type radius: float
:param view_align: Align to View, Align the new object to the view
:type view_align: bool
:param enter_editmode: Enter Editmode, Enter editmode when adding this object
:type enter_editmode: bool
:param location: Location, Location for the newly added object
:type location: float
:param rotation: Rotation, Rotation for the newly added object
:type rotation: float
:param layers: Layer
:type layers: bool
'''
pass
| 35.374269 | 82 | 0.625393 | 784 | 6,049 | 4.748724 | 0.054847 | 0.306205 | 0.435133 | 0.547945 | 0.968574 | 0.968574 | 0.968574 | 0.968574 | 0.968574 | 0.968574 | 0 | 0.019431 | 0.285336 | 6,049 | 170 | 83 | 35.582353 | 0.841777 | 0.483716 | 0 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0.096774 | 0.032258 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 11 |
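The `surface.py` stubs above exist only to feed IDE autocompletion: every body is `pass`, but the signatures still carry the operators' default arguments. A hedged sketch of recovering those defaults with `inspect.signature`, using a local stand-in for one stub rather than importing the real `bpy.ops` module:

```python
# Reading operator defaults from an autocomplete stub. The stub below mirrors
# the shape of the generated functions above; it is a local stand-in, not the
# real Blender API.
import inspect

def primitive_nurbs_surface_circle_add(radius: float = 1.0,
                                       view_align: bool = False,
                                       enter_editmode: bool = False,
                                       location=(0.0, 0.0, 0.0)):
    pass

sig = inspect.signature(primitive_nurbs_surface_circle_add)
defaults = {name: p.default for name, p in sig.parameters.items()}
# defaults["radius"] is 1.0, defaults["view_align"] is False, etc.
```

This is the same mechanism editors use: the stub's signature, not its body, is what documents the operator.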
53cad3dcd65d900cb32c715bb31bf2c34599fd83 | 8,333 | py | Python | port/modules/font/dvsmb_12.py | diskman88/mpython-desktop-robot | 01cd15fbeeba521ab874cf66f94d3909c4f8c39a | [
"MIT"
] | 53 | 2018-10-15T12:01:24.000Z | 2019-11-22T09:31:02.000Z | port/modules/font/dvsmb_12.py | diskman88/mpython-desktop-robot | 01cd15fbeeba521ab874cf66f94d3909c4f8c39a | [
"MIT"
] | 10 | 2018-10-17T13:42:19.000Z | 2019-11-25T06:42:40.000Z | port/modules/font/dvsmb_12.py | diskman88/mpython-desktop-robot | 01cd15fbeeba521ab874cf66f94d3909c4f8c39a | [
"MIT"
] | 26 | 2018-12-04T03:53:39.000Z | 2019-11-22T03:40:05.000Z | # Code generated by font-to-py.py.
# Font: dsmb.ttf
version = '0.26'
def height():
    return 12

def max_width():
    return 7

def hmap():
    return True

def reverse():
    return False

def monospaced():
    return False

def min_ch():
    return 32

def max_ch():
    return 126
_font =\
b'\x07\x00\x00\x70\xb0\x30\x40\x60\x00\x60\x60\x00\x00\x00\x07\x00'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\xc0'\
b'\xc0\xc0\xc0\xc0\x00\xc0\xc0\x00\x00\x00\x07\x00\x00\xa0\xa0\xa0'\
b'\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x14\x34\x7e\x28\x28'\
b'\xfc\x58\x50\x00\x00\x00\x07\x00\x00\x20\x70\xa0\xf0\x78\x28\xa8'\
b'\x70\x20\x20\x00\x07\x00\x00\xe0\xa0\xe4\x18\x20\xdc\x14\x1c\x00'\
b'\x00\x00\x07\x00\x00\x70\x60\x60\x20\xf4\xd4\xdc\x7c\x00\x00\x00'\
b'\x07\x00\x00\x80\x80\x80\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00'\
b'\x20\x60\x40\xc0\xc0\xc0\xc0\x40\x60\x20\x00\x00\x07\x00\x80\xc0'\
b'\x40\x60\x60\x60\x60\x40\xc0\x80\x00\x00\x07\x00\x00\x20\xa8\x70'\
b'\x70\xa8\x20\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x20\x20\xf8'\
b'\x20\x20\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x00\x60'\
b'\x60\x40\x80\x00\x07\x00\x00\x00\x00\x00\x00\xe0\xe0\x00\x00\x00'\
b'\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x00\xc0\xc0\x00\x00\x00'\
b'\x07\x00\x00\x08\x10\x10\x20\x20\x20\x40\x40\x80\x00\x00\x07\x00'\
b'\x00\x78\xcc\xcc\xdc\xcc\xcc\xcc\x78\x00\x00\x00\x07\x00\x00\xf0'\
b'\x30\x30\x30\x30\x30\x30\xfc\x00\x00\x00\x07\x00\x00\xf8\x0c\x0c'\
b'\x18\x10\x20\x40\xfc\x00\x00\x00\x07\x00\x00\xf8\x0c\x0c\x70\x0c'\
b'\x0c\x0c\xf8\x00\x00\x00\x07\x00\x00\x18\x38\x78\x58\x98\xfc\x18'\
b'\x18\x00\x00\x00\x07\x00\x00\xfc\xc0\xc0\xf8\x0c\x0c\x0c\xf8\x00'\
b'\x00\x00\x07\x00\x00\x78\x60\xc0\xf8\xcc\xcc\xcc\x78\x00\x00\x00'\
b'\x07\x00\x00\xfc\x0c\x18\x18\x38\x30\x30\x60\x00\x00\x00\x07\x00'\
b'\x00\x78\xcc\xcc\x30\xcc\xcc\xcc\x78\x00\x00\x00\x07\x00\x00\x78'\
b'\xcc\xcc\xcc\x7c\x0c\x18\x78\x00\x00\x00\x07\x00\x00\x00\x00\xc0'\
b'\xc0\x00\x00\xc0\xc0\x00\x00\x00\x07\x00\x00\x00\x00\x60\x60\x00'\
b'\x00\x60\x60\x40\x80\x00\x07\x00\x00\x00\x00\x04\x3c\xe0\xe0\x3c'\
b'\x04\x00\x00\x00\x07\x00\x00\x00\x00\x00\xfc\x00\xfc\x00\x00\x00'\
b'\x00\x00\x07\x00\x00\x00\x00\x80\xf0\x1c\x1c\xf0\x80\x00\x00\x00'\
b'\x07\x00\x00\x70\xb0\x30\x40\x60\x00\x60\x60\x00\x00\x00\x07\x00'\
b'\x00\x38\x44\x9c\xa4\xa4\xa4\x9c\x44\x3c\x00\x00\x07\x00\x00\x30'\
b'\x30\x78\x78\x48\x78\xcc\xcc\x00\x00\x00\x07\x00\x00\xf8\xcc\xcc'\
b'\xf0\xcc\xcc\xcc\xf8\x00\x00\x00\x07\x00\x00\x38\x64\xc0\xc0\xc0'\
b'\xc0\x64\x38\x00\x00\x00\x07\x00\x00\xf8\xc8\xcc\xcc\xcc\xcc\xc8'\
b'\xf8\x00\x00\x00\x07\x00\x00\xfc\xc0\xc0\xf8\xc0\xc0\xc0\xfc\x00'\
b'\x00\x00\x07\x00\x00\xfc\xc0\xc0\xf8\xc0\xc0\xc0\xc0\x00\x00\x00'\
b'\x07\x00\x00\x38\x64\xc0\xc0\xdc\xcc\x6c\x3c\x00\x00\x00\x07\x00'\
b'\x00\xcc\xcc\xcc\xfc\xcc\xcc\xcc\xcc\x00\x00\x00\x07\x00\x00\xfc'\
b'\x30\x30\x30\x30\x30\x30\xfc\x00\x00\x00\x07\x00\x00\x3c\x0c\x0c'\
b'\x0c\x0c\x0c\x8c\x78\x00\x00\x00\x07\x00\x00\xcc\xd8\xd0\xf0\xf0'\
b'\xd8\xcc\xcc\x00\x00\x00\x07\x00\x00\xc0\xc0\xc0\xc0\xc0\xc0\xc0'\
b'\xfc\x00\x00\x00\x07\x00\x00\x84\xcc\xfc\xfc\xfc\xcc\xcc\xcc\x00'\
b'\x00\x00\x07\x00\x00\xcc\xec\xec\xec\xdc\xdc\xdc\xcc\x00\x00\x00'\
b'\x07\x00\x00\x78\xcc\xcc\xcc\xcc\xcc\xcc\x78\x00\x00\x00\x07\x00'\
b'\x00\xf8\xcc\xcc\xcc\xf8\xc0\xc0\xc0\x00\x00\x00\x07\x00\x00\x78'\
b'\xcc\xcc\xcc\xcc\xcc\xcc\x78\x08\x00\x00\x07\x00\x00\xf8\xcc\xcc'\
b'\xcc\xf0\xc8\xcc\xc6\x00\x00\x00\x07\x00\x00\x78\xc4\xc0\xf0\x3c'\
b'\x0c\x8c\x78\x00\x00\x00\x07\x00\x00\xfc\x30\x30\x30\x30\x30\x30'\
b'\x30\x00\x00\x00\x07\x00\x00\xcc\xcc\xcc\xcc\xcc\xcc\xcc\x78\x00'\
b'\x00\x00\x07\x00\x00\xcc\xcc\x48\x48\x78\x78\x30\x30\x00\x00\x00'\
b'\x07\x00\x00\xc6\xc6\xd6\xd6\x6c\x6c\x6c\x6c\x00\x00\x00\x07\x00'\
b'\x00\xcc\x48\x78\x30\x30\x78\x48\xcc\x00\x00\x00\x07\x00\x00\xcc'\
b'\x48\x78\x78\x30\x30\x30\x30\x00\x00\x00\x07\x00\x00\xfc\x0c\x18'\
b'\x30\x20\x60\xc0\xfc\x00\x00\x00\x07\x00\xe0\xc0\xc0\xc0\xc0\xc0'\
b'\xc0\xc0\xc0\xe0\x00\x00\x07\x00\x00\x80\x40\x40\x20\x20\x20\x10'\
b'\x10\x08\x00\x00\x07\x00\xe0\x60\x60\x60\x60\x60\x60\x60\x60\xe0'\
b'\x00\x00\x07\x00\x00\x30\x78\xcc\x00\x00\x00\x00\x00\x00\x00\x00'\
b'\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xfe\x00\x00\x07\x00'\
b'\xc0\x60\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00'\
b'\x00\x78\x0c\x7c\xcc\xcc\x7c\x00\x00\x00\x07\x00\xc0\xc0\xc0\xf8'\
b'\xcc\xcc\xcc\xcc\xf8\x00\x00\x00\x07\x00\x00\x00\x00\x7c\xe0\xc0'\
b'\xc0\xe0\x7c\x00\x00\x00\x07\x00\x0c\x0c\x0c\x7c\xcc\xcc\xcc\xcc'\
b'\x7c\x00\x00\x00\x07\x00\x00\x00\x00\x78\xcc\xfc\xc0\xc0\x7c\x00'\
b'\x00\x00\x07\x00\x38\x60\x60\xf8\x60\x60\x60\x60\x60\x00\x00\x00'\
b'\x07\x00\x00\x00\x00\x7c\xcc\xcc\xcc\xcc\x7c\x0c\x78\x00\x07\x00'\
b'\xc0\xc0\xc0\xf8\xcc\xcc\xcc\xcc\xcc\x00\x00\x00\x07\x00\x30\x30'\
b'\x00\xf0\x30\x30\x30\x30\xfc\x00\x00\x00\x07\x00\x30\x30\x00\xf0'\
b'\x30\x30\x30\x30\x30\x30\xe0\x00\x07\x00\xc0\xc0\xc0\xd8\xd0\xf0'\
b'\xd0\xd8\xcc\x00\x00\x00\x07\x00\xf0\x30\x30\x30\x30\x30\x30\x30'\
b'\x1c\x00\x00\x00\x07\x00\x00\x00\x00\xfc\xd4\xd4\xd4\xd4\xd4\x00'\
b'\x00\x00\x07\x00\x00\x00\x00\xf8\xcc\xcc\xcc\xcc\xcc\x00\x00\x00'\
b'\x07\x00\x00\x00\x00\x78\xcc\xcc\xcc\xcc\x78\x00\x00\x00\x07\x00'\
b'\x00\x00\x00\xf8\xcc\xcc\xcc\xcc\xf8\xc0\xc0\x00\x07\x00\x00\x00'\
b'\x00\x7c\xcc\xcc\xcc\xcc\x7c\x0c\x0c\x00\x07\x00\x00\x00\x00\xf8'\
b'\xc0\xc0\xc0\xc0\xc0\x00\x00\x00\x07\x00\x00\x00\x00\x78\xc4\xf0'\
b'\x3c\x8c\x78\x00\x00\x00\x07\x00\x00\x30\x30\xfc\x30\x30\x30\x30'\
b'\x3c\x00\x00\x00\x07\x00\x00\x00\x00\xcc\xcc\xcc\xcc\xcc\x7c\x00'\
b'\x00\x00\x07\x00\x00\x00\x00\xcc\xcc\x48\x78\x78\x30\x00\x00\x00'\
b'\x07\x00\x00\x00\x00\xc6\xc6\x54\x6c\x6c\x6c\x00\x00\x00\x07\x00'\
b'\x00\x00\x00\xcc\x78\x30\x30\x78\xcc\x00\x00\x00\x07\x00\x00\x00'\
b'\x00\xcc\x48\x58\x78\x30\x30\x20\xe0\x00\x07\x00\x00\x00\x00\xfc'\
b'\x0c\x18\x60\xc0\xfc\x00\x00\x00\x07\x00\x3c\x30\x30\x30\x30\xc0'\
b'\x30\x30\x30\x3c\x00\x00\x07\x00\x80\x80\x80\x80\x80\x80\x80\x80'\
b'\x80\x80\x80\x00\x07\x00\xf0\x30\x30\x30\x30\x0c\x30\x30\x30\xf0'\
b'\x00\x00\x07\x00\x00\x00\x00\x00\x00\xe0\x1c\x00\x00\x00\x00\x00'
_index =\
b'\x00\x00\x0e\x00\x0e\x00\x1c\x00\x1c\x00\x2a\x00\x2a\x00\x38\x00'\
b'\x38\x00\x46\x00\x46\x00\x54\x00\x54\x00\x62\x00\x62\x00\x70\x00'\
b'\x70\x00\x7e\x00\x7e\x00\x8c\x00\x8c\x00\x9a\x00\x9a\x00\xa8\x00'\
b'\xa8\x00\xb6\x00\xb6\x00\xc4\x00\xc4\x00\xd2\x00\xd2\x00\xe0\x00'\
b'\xe0\x00\xee\x00\xee\x00\xfc\x00\xfc\x00\x0a\x01\x0a\x01\x18\x01'\
b'\x18\x01\x26\x01\x26\x01\x34\x01\x34\x01\x42\x01\x42\x01\x50\x01'\
b'\x50\x01\x5e\x01\x5e\x01\x6c\x01\x6c\x01\x7a\x01\x7a\x01\x88\x01'\
b'\x88\x01\x96\x01\x96\x01\xa4\x01\xa4\x01\xb2\x01\xb2\x01\xc0\x01'\
b'\xc0\x01\xce\x01\xce\x01\xdc\x01\xdc\x01\xea\x01\xea\x01\xf8\x01'\
b'\xf8\x01\x06\x02\x06\x02\x14\x02\x14\x02\x22\x02\x22\x02\x30\x02'\
b'\x30\x02\x3e\x02\x3e\x02\x4c\x02\x4c\x02\x5a\x02\x5a\x02\x68\x02'\
b'\x68\x02\x76\x02\x76\x02\x84\x02\x84\x02\x92\x02\x92\x02\xa0\x02'\
b'\xa0\x02\xae\x02\xae\x02\xbc\x02\xbc\x02\xca\x02\xca\x02\xd8\x02'\
b'\xd8\x02\xe6\x02\xe6\x02\xf4\x02\xf4\x02\x02\x03\x02\x03\x10\x03'\
b'\x10\x03\x1e\x03\x1e\x03\x2c\x03\x2c\x03\x3a\x03\x3a\x03\x48\x03'\
b'\x48\x03\x56\x03\x56\x03\x64\x03\x64\x03\x72\x03\x72\x03\x80\x03'\
b'\x80\x03\x8e\x03\x8e\x03\x9c\x03\x9c\x03\xaa\x03\xaa\x03\xb8\x03'\
b'\xb8\x03\xc6\x03\xc6\x03\xd4\x03\xd4\x03\xe2\x03\xe2\x03\xf0\x03'\
b'\xf0\x03\xfe\x03\xfe\x03\x0c\x04\x0c\x04\x1a\x04\x1a\x04\x28\x04'\
b'\x28\x04\x36\x04\x36\x04\x44\x04\x44\x04\x52\x04\x52\x04\x60\x04'\
b'\x60\x04\x6e\x04\x6e\x04\x7c\x04\x7c\x04\x8a\x04\x8a\x04\x98\x04'\
b'\x98\x04\xa6\x04\xa6\x04\xb4\x04\xb4\x04\xc2\x04\xc2\x04\xd0\x04'\
b'\xd0\x04\xde\x04\xde\x04\xec\x04\xec\x04\xfa\x04\xfa\x04\x08\x05'\
b'\x08\x05\x16\x05\x16\x05\x24\x05\x24\x05\x32\x05\x32\x05\x40\x05'
_mvfont = memoryview(_font)
def get_ch(ch):
    ordch = ord(ch)
    ordch = ordch + 1 if ordch >= 32 and ordch <= 126 else 63
    idx_offs = 4 * (ordch - 32)
    offset = int.from_bytes(_index[idx_offs : idx_offs + 2], 'little')
    next_offs = int.from_bytes(_index[idx_offs + 2 : idx_offs + 4], 'little')
    width = int.from_bytes(_font[offset:offset + 2], 'little')
    return _mvfont[offset + 2:next_offs], 12, width
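The `get_ch` lookup above returns a `memoryview` over the glyph bytes together with the fixed height (12) and the per-character width stored in the first two bytes of each record. How a display driver unpacks those bytes depends on the generator's bit mapping; below is a minimal sketch assuming horizontally mapped rows, MSB-first, with `ceil(width / 8)` bytes per row. The `render_glyph` helper and the 4x4 demo bitmap are illustrative, not part of the generated font:

```python
def render_glyph(glyph, height, width):
    # One row occupies ceil(width / 8) bytes; bit 0x80 is the leftmost pixel.
    bytes_per_row = (width + 7) // 8
    lines = []
    for row in range(height):
        line = ""
        for col in range(width):
            byte = glyph[row * bytes_per_row + col // 8]
            line += "#" if byte & (0x80 >> (col % 8)) else "."
        lines.append(line)
    return lines

# Hypothetical 4x4 glyph: a diagonal stroke, one byte per row.
demo = bytes([0x80, 0x40, 0x20, 0x10])
for line in render_glyph(demo, 4, 4):
    print(line)
```

With the real font the call would be `glyph, height, width = get_ch('A')`, passing the three values straight through.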
| 55.926174 | 78 | 0.690748 | 1,949 | 8,333 | 2.942022 | 0.082093 | 0.35368 | 0.273108 | 0.163237 | 0.577084 | 0.522846 | 0.469829 | 0.37984 | 0.303627 | 0.177363 | 0 | 0.350664 | 0.051362 | 8,333 | 148 | 79 | 56.304054 | 0.3747 | 0.00564 | 0 | 0.029851 | 1 | 0.80597 | 0.852366 | 0.849662 | 0 | 1 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0 | 0.052239 | 0.119403 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
53fd86fe16d273e24bd37c293600298d3ecc3813 | 17,802 | py | Python | tests/template/test_schema_parser.py | HumanCellAtlas/ingest-common | 6a230f9606f64cd787b67c143854db36e012a2b7 | [
"Apache-2.0"
] | null | null | null | tests/template/test_schema_parser.py | HumanCellAtlas/ingest-common | 6a230f9606f64cd787b67c143854db36e012a2b7 | [
"Apache-2.0"
] | 15 | 2020-10-06T08:15:34.000Z | 2022-03-24T18:31:04.000Z | tests/unit/template/test_schema_parser.py | ebi-ait/ingest-client | d5654ab50c5f7357d5b08bcc6eae22633176b11e | [
"Apache-2.0"
] | null | null | null | import unittest
from ingest.template.descriptor import ComplexPropertyDescriptor
from ingest.template.schema_parser import SchemaParser
class TestSchemaParser(unittest.TestCase):
""" Testing class for the SchemaParser class. """
def test__removed_ignored_properties_from_descriptor__success(self):
sample_complex_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
"user_friendly": "Timecourse value",
"guidelines": "Enter either a single value or a range of values. Indicate a range using a hyphen."
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = ["description"]
schema_parser = SchemaParser(sample_complex_metadata_schema_json, sample_ignored_properties)
expected_descriptor = ComplexPropertyDescriptor(sample_complex_metadata_schema_json)
self.assertEqual(expected_descriptor.get_dictionary_representation_of_descriptor(),
schema_parser.schema_descriptor.get_dictionary_representation_of_descriptor())
# First, check that the ignored property existed when parsing the schema.
[self.assertIn(ignored_property,
schema_parser.schema_descriptor.get_dictionary_representation_of_descriptor().keys())
for ignored_property in sample_ignored_properties]
# Second, check that the ignored properties are removed once the post processing step has completed.
[self.assertNotIn(ignored_property, schema_parser.schema_dictionary.keys())
for ignored_property in sample_ignored_properties]
def test__descriptor_with_no_ignored_properties__unchanged(self):
sample_complex_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
"user_friendly": "Timecourse value",
"guidelines": "Enter either a single value or a range of values. Indicate a range using a hyphen."
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = []
schema_parser = SchemaParser(sample_complex_metadata_schema_json, sample_ignored_properties)
expected_descriptor = ComplexPropertyDescriptor(sample_complex_metadata_schema_json)
self.assertEqual(expected_descriptor.get_dictionary_representation_of_descriptor(),
schema_parser.schema_descriptor.get_dictionary_representation_of_descriptor())
# Check to make sure that the initially created Descriptor and the post-processed descriptor are exactly the
# same.
self.assertEqual(schema_parser.schema_descriptor.get_dictionary_representation_of_descriptor(),
schema_parser.schema_dictionary)
def test__get_maps_of_simple_property_paths__success(self):
sample_simple_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
"user_friendly": "Timecourse value",
"guidelines": "Enter either a single value or a range of values. Indicate a range using a hyphen."
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = []
schema_parser = SchemaParser(sample_simple_metadata_schema_json, sample_ignored_properties)
actual_label_map = schema_parser.get_map_of_paths_by_property_label(
{"timecourse": schema_parser.schema_dictionary})
expected_label_map = {
"timecourse value": ["timecourse.value"],
"timecourse unit": ["timecourse.unit"],
"timecourse.value": ["timecourse.value"],
"timecourse.unit": ["timecourse.unit"]
}
self.assertEqual(expected_label_map, actual_label_map)
def test__get_maps_of_deep_property_paths__success(self):
sample_complex_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/ontology/5.3.5/time_unit_ontology",
"properties": {
"ontology": {
"description": "An ontology term identifier in the form prefix:accession.",
"type": "string",
"graph_restriction": {
"ontologies": [
"obo:efo",
"obo:uo"
],
"classes": [
"UO:0000003",
"UO:0000149"
],
"relations": [
"rdfs:subClassOf"
],
"direct": False,
"include_self": False
},
"example": "UO:0000010; UO:0000034",
"user_friendly": "Time unit ontology ID"
},
"ontology_label": {
"description": "The preferred label for the ontology term",
"type": "string",
"example": "second; week",
"user_friendly": "Time unit ontology label"
}
},
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = []
schema_parser = SchemaParser(sample_complex_metadata_schema_json, sample_ignored_properties)
actual_label_map = schema_parser.get_map_of_paths_by_property_label(
{"timecourse": schema_parser.schema_dictionary})
expected_label_map = {
"timecourse unit": ["timecourse.unit"],
"timecourse.value": ["timecourse.value"],
"timecourse.unit": ["timecourse.unit"],
"timecourse.unit.ontology": ["timecourse.unit.ontology"],
"time unit ontology id": ["timecourse.unit.ontology"],
"timecourse.unit.ontology_label": ["timecourse.unit.ontology_label"],
"time unit ontology label": ["timecourse.unit.ontology_label"]
}
self.assertEqual(expected_label_map, actual_label_map)
def test__get_maps_of_duplicated_property_paths__success(self):
sample_complex_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/ontology/5.3.5/time_unit_ontology",
"properties": {
"ontology": {
"description": "An ontology term identifier in the form prefix:accession.",
"type": "string",
"graph_restriction": {
"ontologies": [
"obo:efo",
"obo:uo"
],
"classes": [
"UO:0000003",
"UO:0000149"
],
"relations": [
"rdfs:subClassOf"
],
"direct": False,
"include_self": False
},
"example": "UO:0000010; UO:0000034",
"user_friendly": "Time unit ontology ID"
},
"ontology_label": {
"description": "The preferred label for the ontology term",
"type": "string",
"example": "second; week",
"user_friendly": "Timecourse unit"
}
},
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = []
schema_parser = SchemaParser(sample_complex_metadata_schema_json, sample_ignored_properties)
actual_label_map = schema_parser.get_map_of_paths_by_property_label(
{"timecourse": schema_parser.schema_dictionary})
expected_label_map = {
"timecourse unit": ["timecourse.unit", "timecourse.unit.ontology_label"],
"timecourse.value": ["timecourse.value"],
"timecourse.unit": ["timecourse.unit"],
"timecourse.unit.ontology": ["timecourse.unit.ontology"],
"time unit ontology id": ["timecourse.unit.ontology"],
"timecourse.unit.ontology_label": ["timecourse.unit.ontology_label"],
}
self.assertEqual(expected_label_map, actual_label_map)
def test__get_tab_presentation_of_simple_metadata_schema__success(self):
sample_simple_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
"user_friendly": "Timecourse value",
"guidelines": "Enter either a single value or a range of values. Indicate a range using a hyphen."
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = []
schema_parser = SchemaParser(sample_simple_metadata_schema_json, sample_ignored_properties)
actual_tab_representation = schema_parser.get_tab_representation_of_schema()
expected_tab_representation = {"timecourse": {"display_name": "Timecourse",
"columns": ["timecourse.value", "timecourse.unit"]}}
self.assertEqual(expected_tab_representation, actual_tab_representation)
def test__get_tab_presentation_of_complex_metadata_schema__success(self):
sample_complex_metadata_schema_json = {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/biomaterial/2.0.2/timecourse",
"description": "Information relating to a timecourse.",
"required": [
"unit"
],
"type": "object",
"properties": {
"value": {
"description": "The numerical value in Timecourse unit",
"pattern": "^[0-9]+\\.?[0-9]*-?[0-9]*\\.?[0-9]*$",
"type": "string",
"example": "2; 5.5-10.5",
},
"unit": {
"description": "The unit in which the Timecourse value is expressed.",
"type": "object",
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://schema.humancellatlas.org/module/ontology/5.3.5/time_unit_ontology",
"properties": {
"ontology": {
"description": "An ontology term identifier in the form prefix:accession.",
"type": "string",
"graph_restriction": {
"ontologies": [
"obo:efo",
"obo:uo"
],
"classes": [
"UO:0000003",
"UO:0000149"
],
"relations": [
"rdfs:subClassOf"
],
"direct": False,
"include_self": False
},
"example": "UO:0000010; UO:0000034",
"user_friendly": "Time unit ontology ID"
},
"ontology_label": {
"description": "The preferred label for the ontology term",
"type": "string",
"example": "second; week",
"user_friendly": "Timecourse unit"
}
},
"user_friendly": "Timecourse unit"
},
}
}
sample_ignored_properties = []
schema_parser = SchemaParser(sample_complex_metadata_schema_json, sample_ignored_properties)
actual_tab_representation = schema_parser.get_tab_representation_of_schema()
expected_tab_representation = {"timecourse": {"display_name": "Timecourse",
"columns": ["timecourse.value",
"timecourse.unit.ontology",
"timecourse.unit.ontology_label"]}}
self.assertEqual(expected_tab_representation, actual_tab_representation)
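The expected label maps in the tests above follow one pattern: every dotted property path maps to itself, and each lower-cased `user_friendly` label maps to the list of paths that carry it (so duplicated labels accumulate, as in the duplicated-property test). A minimal sketch of that idea; `map_paths_by_label` is illustrative and much simpler than the real `SchemaParser`, which also resolves ontology modules and ignored properties:

```python
def map_paths_by_label(schema, prefix=""):
    # Walk a JSON-schema-like dict, collecting dotted property paths keyed
    # both by the path itself and by the lower-cased "user_friendly" label.
    paths = {}
    for name, spec in schema.get("properties", {}).items():
        path = prefix + "." + name if prefix else name
        paths.setdefault(path, []).append(path)
        label = spec.get("user_friendly")
        if label:
            paths.setdefault(label.lower(), []).append(path)
        # Recurse into nested object schemas.
        for key, values in map_paths_by_label(spec, path).items():
            paths.setdefault(key, []).extend(values)
    return paths

timecourse = {
    "properties": {
        "value": {"type": "string", "user_friendly": "Timecourse value"},
        "unit": {"type": "object", "user_friendly": "Timecourse unit"},
    }
}
print(map_paths_by_label(timecourse, "timecourse"))
```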
| 48.375 | 118 | 0.487529 | 1,465 | 17,802 | 5.699659 | 0.101706 | 0.072096 | 0.007545 | 0.01006 | 0.923713 | 0.908623 | 0.903353 | 0.897725 | 0.871377 | 0.850778 | 0 | 0.021711 | 0.39973 | 17,802 | 367 | 119 | 48.506812 | 0.759686 | 0.018369 | 0 | 0.79403 | 0 | 0 | 0.340624 | 0.036072 | 0 | 0 | 0 | 0 | 0.029851 | 1 | 0.020896 | false | 0 | 0.008955 | 0 | 0.032836 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
54dfada5dee48d336c28a6409edc65eb6fccb439 | 129 | py | Python | tssearch/search/__init__.py | mluacnunes/tssearch | e8e2cd9f07d66eeec8a839fe72c259232f968da9 | [
"BSD-3-Clause"
] | 11 | 2021-11-11T15:21:14.000Z | 2022-03-31T23:28:34.000Z | tssearch/search/__init__.py | MargaridaAntunes/tssearch | d29c47c6176ebbb25c8785da843946bf3eb27721 | [
"BSD-3-Clause"
] | 1 | 2022-01-23T02:36:02.000Z | 2022-01-24T17:13:32.000Z | tssearch/search/__init__.py | MargaridaAntunes/tssearch | d29c47c6176ebbb25c8785da843946bf3eb27721 | [
"BSD-3-Clause"
] | 5 | 2021-12-13T21:46:36.000Z | 2022-03-31T23:29:36.000Z | from tssearch.search.query_search import *
from tssearch.search.segmentation import *
from tssearch.search.search_utils import *
| 32.25 | 42 | 0.837209 | 17 | 129 | 6.235294 | 0.411765 | 0.339623 | 0.509434 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 129 | 3 | 43 | 43 | 0.905983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
54e630650b9e734e683205b8fc4922b07dafded6 | 291 | py | Python | quine/QuineHelq.py | helq/old_code | a432faf1b340cb379190a2f2b11b997b02d1cd8d | [
"CC0-1.0"
] | null | null | null | quine/QuineHelq.py | helq/old_code | a432faf1b340cb379190a2f2b11b997b02d1cd8d | [
"CC0-1.0"
] | 4 | 2020-03-10T19:20:21.000Z | 2021-06-07T15:39:48.000Z | quine/QuineHelq.py | helq/old_code | a432faf1b340cb379190a2f2b11b997b02d1cd8d | [
"CC0-1.0"
] | null | null | null | #By helq
def p(a,b): print a + chr(10) + b + chr(10) + chr(112) + chr(40) + chr(34) + a + chr(34) + chr(44) + chr(34) + b + chr(34) + chr(41)
p("#By helq","def p(a,b): print a + chr(10) + b + chr(10) + chr(112) + chr(40) + chr(34) + a + chr(34) + chr(44) + chr(34) + b + chr(34) + chr(41)")
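The program above is a Python 2 quine: `p` prints its two arguments followed by a `p(...)` call rebuilt from `chr()` codes (10 is newline, 34 is the double quote), so the source never has to quote itself. A Python 3 sketch of the same self-reproduction idea, using `%r` in place of the `chr()` bookkeeping:

```python
# %r expands to repr(s), quotes included, so printing s % s
# reproduces the two source lines exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Feeding the output back into the interpreter prints the same two lines again.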
| 72.75 | 148 | 0.498282 | 63 | 291 | 2.301587 | 0.222222 | 0.275862 | 0.22069 | 0.137931 | 0.993103 | 0.993103 | 0.993103 | 0.993103 | 0.993103 | 0.993103 | 0 | 0.184211 | 0.216495 | 291 | 3 | 149 | 97 | 0.451754 | 0.024055 | 0 | 0 | 0 | 0.5 | 0.4947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 13 |
070b87a1ce4fa45f49f1ee0bdf224b215be5b3e9 | 998 | py | Python | Codes/python/testevscode/4h.py | Gaazedo/portfolio | 4b09b6fffc6947375a20fee1c5523a12cbcbe970 | [
"MIT"
] | null | null | null | Codes/python/testevscode/4h.py | Gaazedo/portfolio | 4b09b6fffc6947375a20fee1c5523a12cbcbe970 | [
"MIT"
] | null | null | null | Codes/python/testevscode/4h.py | Gaazedo/portfolio | 4b09b6fffc6947375a20fee1c5523a12cbcbe970 | [
"MIT"
] | null | null | null | import math
var = [6.57,6.77,5.60,6.05,6.31,7.98,6.96]
var2 = [45,25,35,50,25,15,40]
total = 0
dpvar = 0
dpvar2 = 0
for m in range(len(var)):
    dpvar += (var[m] - 6.60)**2
for n in range(len(var2)):
    dpvar2 += (var2[n] - 33.57)**2
dpvar2 = math.sqrt(dpvar2/6)
dpvar = math.sqrt(dpvar/6)
print(dpvar,dpvar2)
for i in range(len(var)):
    print("(" + str(var[i]) + " - 6.60)(" + str(var2[i]) + " - 33.57)")
    total += (var[i] - 6.60)*(var2[i] - 33.57)
print(total)
print(total/(6*dpvar*dpvar2))
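The script above hand-computes a Pearson correlation with hardcoded means (6.60 for `var`, 33.57 for `var2`) and sample standard deviations (dividing by n - 1 = 6). A self-contained sketch that derives the means instead; `pearson_r` is illustrative, and since the 1/(n-1) factors cancel between the covariance and the two standard deviations, the normalization choice drops out of r:

```python
import math

def pearson_r(xs, ys):
    # r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

var = [6.57, 6.77, 5.60, 6.05, 6.31, 7.98, 6.96]
var2 = [45, 25, 35, 50, 25, 15, 40]
print(round(pearson_r(var, var2), 4))
```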
| 22.681818 | 72 | 0.578156 | 202 | 998 | 2.856436 | 0.183168 | 0.07279 | 0.103986 | 0.090121 | 0.994801 | 0.994801 | 0.994801 | 0.994801 | 0.994801 | 0.994801 | 0 | 0.181273 | 0.165331 | 998 | 43 | 73 | 23.209302 | 0.511405 | 0 | 0 | 0.833333 | 0 | 0 | 0.042084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
072ceaf3f353b39fc9e3cc02a138297ef62086a4 | 143 | py | Python | etk/unit_tests/tokenizer/__init__.py | linqyd/etk | dcf0cae4076619f5261573d47b4f5f26baaf15b7 | [
"MIT"
] | null | null | null | etk/unit_tests/tokenizer/__init__.py | linqyd/etk | dcf0cae4076619f5261573d47b4f5f26baaf15b7 | [
"MIT"
] | null | null | null | etk/unit_tests/tokenizer/__init__.py | linqyd/etk | dcf0cae4076619f5261573d47b4f5f26baaf15b7 | [
"MIT"
] | null | null | null | import os, sys
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
sys.path.append(os.path.join(os.path.dirname(__file__), '../..')) | 47.666667 | 65 | 0.692308 | 23 | 143 | 3.956522 | 0.347826 | 0.263736 | 0.285714 | 0.32967 | 0.879121 | 0.879121 | 0.879121 | 0.879121 | 0.879121 | 0.879121 | 0 | 0 | 0.041958 | 143 | 3 | 65 | 47.666667 | 0.664234 | 0 | 0 | 0 | 0 | 0 | 0.048611 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 13 |
073b3b667def41da412adf453c715dc70036e3d5 | 3,324 | py | Python | tests/sequences/test_sorting.py | ref-humbold/AlgoLib_Python | 05f725504656ec93b879374a8cd87464d88fff77 | [
"Apache-2.0"
] | null | null | null | tests/sequences/test_sorting.py | ref-humbold/AlgoLib_Python | 05f725504656ec93b879374a8cd87464d88fff77 | [
"Apache-2.0"
] | null | null | null | tests/sequences/test_sorting.py | ref-humbold/AlgoLib_Python | 05f725504656ec93b879374a8cd87464d88fff77 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""Tests: Algorithms for sequence sorting"""
import unittest
from assertpy import assert_that
from algolib.sequences import bottom_up_merge_sorted, heap_sorted, quick_sorted, \
top_down_merge_sorted
class SortingTest(unittest.TestCase):
@staticmethod
def test__heap_sorted():
# given
sequence = [3, 17, -6, 0, 9, -12, 7, 4, 2]
# when
result = heap_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_not_same_as(sequence)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__heap_sorted__when_argument_is_not_list():
# given
sequence = {3, 17, -6, 0, 9, -12, 7, 4, 2}
# when
result = heap_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__top_down_merge_sorted():
# given
sequence = [3, 17, -6, 0, 9, -12, 7, 4, 2]
# when
result = top_down_merge_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_not_same_as(sequence)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__top_down_merge_sorted__when_argument_is_not_list():
# given
sequence = {3, 17, -6, 0, 9, -12, 7, 4, 2}
# when
result = top_down_merge_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__bottom_up_merge_sorted():
# given
sequence = [3, 17, -6, 0, 9, -12, 7, 4, 2]
# when
result = bottom_up_merge_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_not_same_as(sequence)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__bottom_up_merge_sorted__when_argument_is_not_list():
# given
sequence = {3, 17, -6, 0, 9, -12, 7, 4, 2}
# when
result = bottom_up_merge_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__quick_sorted():
# given
sequence = [3, 17, -6, 0, 9, -12, 7, 4, 2]
# when
result = quick_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_not_same_as(sequence)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
@staticmethod
def test__quick_sorted__when_argument_is_not_list():
# given
sequence = {3, 17, -6, 0, 9, -12, 7, 4, 2}
# when
result = quick_sorted(sequence)
# then
assert_that(result).is_instance_of(list)
assert_that(result).is_sorted()
assert_that(result).is_equal_to(sorted(sequence))
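The four entry points tested above share one contract: accept any iterable, return a new sorted `list`, and leave the input untouched. As an illustration of the bottom-up variant (iteratively merging adjacent runs whose width doubles each pass), here is a minimal sketch; `bottom_up_merge_sort` is illustrative, not the `algolib` implementation:

```python
def bottom_up_merge_sort(sequence):
    items = list(sequence)  # never mutate the caller's sequence
    width = 1
    while width < len(items):
        # Merge adjacent runs of length `width` into runs of length 2*width.
        for lo in range(0, len(items), 2 * width):
            mid = min(lo + width, len(items))
            hi = min(lo + 2 * width, len(items))
            merged, i, j = [], lo, mid
            while i < mid and j < hi:
                if items[i] <= items[j]:
                    merged.append(items[i])
                    i += 1
                else:
                    merged.append(items[j])
                    j += 1
            merged.extend(items[i:mid])
            merged.extend(items[j:hi])
            items[lo:hi] = merged
        width *= 2
    return items

print(bottom_up_merge_sort([3, 17, -6, 0, 9, -12, 7, 4, 2]))
```

Unlike the top-down version it needs no recursion, which keeps the stack depth constant.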
| 32.271845 | 82 | 0.631167 | 439 | 3,324 | 4.412301 | 0.120729 | 0.149716 | 0.231285 | 0.260196 | 0.896231 | 0.882292 | 0.882292 | 0.882292 | 0.882292 | 0.882292 | 0 | 0.036135 | 0.259025 | 3,324 | 102 | 83 | 32.588235 | 0.750305 | 0.056859 | 0 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.446154 | 1 | 0.123077 | false | 0 | 0.046154 | 0 | 0.184615 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
075870b172a26c4a7835fb38d97ef62269ff53cf | 121 | py | Python | src/controllers/__init__.py | ramenbroth/pyShortURL | 6cb85d63633c965428b4c5b94c2d3f9f7cb7e196 | [
"Unlicense"
] | null | null | null | src/controllers/__init__.py | ramenbroth/pyShortURL | 6cb85d63633c965428b4c5b94c2d3f9f7cb7e196 | [
"Unlicense"
] | null | null | null | src/controllers/__init__.py | ramenbroth/pyShortURL | 6cb85d63633c965428b4c5b94c2d3f9f7cb7e196 | [
"Unlicense"
] | null | null | null | from .url import url_blueprint
def register_blueprints(app):
    app.register_blueprint(url_blueprint, url_prefix='/')
| 20.166667 | 57 | 0.785124 | 16 | 121 | 5.625 | 0.5625 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115702 | 121 | 5 | 58 | 24.2 | 0.841122 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
db6ded6286a9d4e48e36ef434eecbb38226ee2fd | 56,929 | py | Python | scripts/policy/test_policy_acl.py | vkolli/5.0_contrail-test | 1793f169a94100400a1b2fafbad21daf5aa4d48a | [
"Apache-2.0"
] | null | null | null | scripts/policy/test_policy_acl.py | vkolli/5.0_contrail-test | 1793f169a94100400a1b2fafbad21daf5aa4d48a | [
"Apache-2.0"
] | 1 | 2021-06-01T22:18:29.000Z | 2021-06-01T22:18:29.000Z | scripts/policy/test_policy_acl.py | lmadhusudhanan/contrail-test | bd39ff19da06a20bd79af8c25e3cde07375577cf | [
"Apache-2.0"
] | null | null | null | #
# To run tests, you can do 'python -m testtools.run tests'. To run specific tests,
# You can do 'python -m testtools.run -l tests'
# Set the env variable PARAMS_FILE to point to your ini file. Else it will try to pick params.ini in PWD
#
import os
import fixtures
import tcutils.wrappers
from base import BasePolicyTest
import time
from vn_test import VNFixture
from vm_test import VMFixture
from ipam_test import IPAMFixture
from policy_test import PolicyFixture
from vn_policy_test import VN_Policy_Fixture
from test import attr
from netaddr import IPNetwork
from common.policy import policy_test_utils
af_test = 'dual'
class TestPolicyAcl(BasePolicyTest):
@classmethod
def setUpClass(cls):
super(TestPolicyAcl, cls).setUpClass()
    def cleanUp(self):
        super(TestPolicyAcl, self).cleanUp()
    # end cleanUp
def setup_ipam_vn(self):
# create new IPAM
self.ipam1_obj = self.useFixture(
IPAMFixture(
connections=self.connections,
name='ipam1'))
self.ipam2_obj = self.useFixture(
IPAMFixture(
connections=self.connections,
name='ipam2'))
self.ipam3_obj = self.useFixture(
IPAMFixture(
connections=self.connections,
name='ipam3'))
for ipam_fixture in [self.ipam1_obj, self.ipam2_obj, self.ipam3_obj]:
assert ipam_fixture.verify_on_setup()
# create new VN
self.VN1_fixture = self.useFixture(
VNFixture(
project_name=self.project.project_name,
connections=self.connections,
vn_name='VN1',
inputs=self.inputs,
subnets=['10.1.1.0/24'],
ipam_fq_name=self.ipam1_obj.fq_name,orch=self.orchestrator))
self.VN1_fixture.read()
self.VN2_fixture = self.useFixture(
VNFixture(
project_name=self.project.project_name,
connections=self.connections,
vn_name='VN2',
inputs=self.inputs,
subnets=['10.2.1.0/24'],
ipam_fq_name=self.ipam2_obj.fq_name))
self.VN3_fixture = self.useFixture(
VNFixture(
project_name=self.project.project_name,
connections=self.connections,
vn_name='VN3',
inputs=self.inputs,
subnets=['10.3.1.0/24'],
ipam_fq_name=self.ipam3_obj.fq_name))
for vn_fixture in [self.VN1_fixture, self.VN2_fixture, self.VN3_fixture]:
assert vn_fixture.verify_on_setup()
# end setup_ipam_vn
def setup_vm(self):
# add VMs to VNs
self.VM11_fixture = self.useFixture(
VMFixture(
connections=self.connections,
vn_obj=self.VN1_fixture.obj,
vm_name='VM11',
project_name=self.project.project_name,orch=self.orchestrator))
self.VM21_fixture = self.useFixture(
VMFixture(
connections=self.connections,
vn_obj=self.VN2_fixture.obj,
vm_name='VM21',
project_name=self.project.project_name))
self.VM31_fixture = self.useFixture(
VMFixture(
connections=self.connections,
vn_obj=self.VN3_fixture.obj,
vm_name='VM31',
project_name=self.project.project_name))
for vm_fixture in [self.VM11_fixture, self.VM21_fixture, self.VM31_fixture]:
assert vm_fixture.verify_on_setup()
assert self.VM11_fixture.wait_till_vm_is_up()
assert self.VM21_fixture.wait_till_vm_is_up()
assert self.VM31_fixture.wait_till_vm_is_up()
# end setup_vm
@attr(type=['cb_sanity', 'sanity', 'vcenter', 'vrouter_gw', 'vcenter_compute'])
@tcutils.wrappers.preposttest_wrapper
def test_policy_inheritance_src_vn_dst_pol(self):
        """Test policy inheritance: rule with source = VN, destination = policy."""
result = True
# create Ipam and VN
self.setup_ipam_vn()
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections, api=True))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_policy': 'policy13',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections, api=True))
policy_name = 'policy13'
rules = []
rules = [{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN3',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'}]
policy13_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections, api=True))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : \
[policy12_fixture.policy_obj, \
policy13_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12','policy13'],
project_name=self.project.project_name, options='contrail'))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : [policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=True,
dst_vm_fixture=self.VM21_fixture)
if ret:
self.logger.info("Test with src as VN and dst as policy PASSED")
else:
result = False
self.logger.error("Test with src as VN and dst as policy FAILED")
return result
# end test_policy_inheritance_src_vn_dst_pol
@tcutils.wrappers.preposttest_wrapper
def test_policy_inheritance_src_pol_dst_vn(self):
"""Test cases to test policy inheritance"""
"""Policy Rule :- source = policy, destination = VN."""
result = True
# create Ipam and VN
self.setup_ipam_vn()
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'VN2',
'source_policy': 'policy13',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy13'
rules = []
rules = [{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN3',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'}]
policy13_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : \
[policy12_fixture.policy_obj, \
policy13_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12','policy13'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : \
[policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=True,
dst_vm_fixture=self.VM21_fixture)
if ret:
self.logger.info("Test with src as policy and dst as VN PASSED")
else:
result = False
self.logger.error("Test with src as policy and dst as VN FAILED")
return result
# end test_policy_inheritance_src_pol_dst_vn
@tcutils.wrappers.preposttest_wrapper
def test_policy_inheritance_src_any_dst_pol(self):
"""Test cases to test policy inheritance"""
"""Policy Rule :- source = Any, destination = policy."""
result = True
# create Ipam and VN
self.setup_ipam_vn()
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_policy': 'policy13',
'source_network': 'any',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy13'
rules = []
rules = [{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN3',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'}]
policy13_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : \
[policy12_fixture.policy_obj, \
policy13_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12','policy13'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : \
[policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=True,
dst_vm_fixture=self.VM21_fixture)
if ret:
self.logger.info("Test with src as any and dst as policy PASSED")
else:
result = False
self.logger.error("Test with src as any and dst as policy FAILED")
return result
# end test_policy_inheritance_src_any_dst_pol
@attr(type=['cb_sanity', 'vcenter'])
@tcutils.wrappers.preposttest_wrapper
def test_policy_inheritance_src_pol_dst_any(self):
"""Test cases to test policy inheritance"""
"""Policy Rule :- source = policy, destination = Any."""
result = True
# create Ipam and VN
self.setup_ipam_vn()
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'any',
'source_policy': 'policy13',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy13'
rules = []
rules = [{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN3',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'}]
policy13_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : \
[policy12_fixture.policy_obj, \
policy13_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12','policy13'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : \
[policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=True,
dst_vm_fixture=self.VM21_fixture)
if ret:
self.logger.info("Test with src as policy and dst as any PASSED")
else:
result = False
self.logger.error("Test with src as policy and dst as any FAILED")
return result
# end test_policy_inheritance_src_pol_dst_any
@tcutils.wrappers.preposttest_wrapper
def test_policy_cidr_src_policy_dst_cidr(self):
"""Test cases to test policy CIDR"""
"""Policy Rule :- source = Policy, destination = CIDR."""
result = True
af = self.inputs.get_af()
# create Ipam and VN
self.setup_ipam_vn()
VN2_subnet_v4 = self.VN2_fixture.get_cidrs(af='v4')[0]
if af in ('v6', 'dual'):
VN2_subnet_v6 = self.VN2_fixture.get_cidrs(af='v6')[0]
else:
VN2_subnet_v6 = None
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN2_subnet_v4,
'source_policy': 'policy13',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN2_subnet_v4:VN2_subnet_v6})
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_policy': 'policy13',
'source_subnet': VN2_subnet_v4,
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN2_subnet_v4:VN2_subnet_v6})
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy13'
rules = []
rules = [{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN3',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'}]
policy13_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : \
[policy12_fixture.policy_obj, \
policy13_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12','policy13'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : [policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM21_fixture)
if ret:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM21_fixture.vm_ip)
cmd = cmd + "| grep 'Action:D(Policy)\|Action:D(OutPolicy)'"
cmd = cmd + " | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info("Found %s matching flows" % flow_record)
self.logger.info("Test with src as policy and dst as cidr PASSED")
else:
result = False
self.logger.error("Test with src as policy and dst as cidr FAILED")
else:
result = False
self.logger.error("Test with src as policy and dst as cidr FAILED")
return result
# end test_policy_cidr_src_policy_dst_cidr
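The flow-record checks above and below shell out to `flow -l ... | wc -l` via `run_cmd_on_server`, which returns the command output as a string; comparing that string against an integer raises a TypeError on Python 3. A defensive conversion is sketched here (`parse_flow_count` is a hypothetical helper, not part of the framework):

```python
def parse_flow_count(output):
    """Convert raw `wc -l` output into an integer flow count.

    Treats empty output, None, and unparsable text as zero matching
    flows rather than raising.
    """
    try:
        return int(output.strip() or 0)
    except (ValueError, AttributeError):
        return 0
```

With this, the checks read `if parse_flow_count(flow_record) > 0:` regardless of whatever whitespace the agent shell appends.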
@attr(type=['cb_sanity', 'vcenter'])
@tcutils.wrappers.preposttest_wrapper
def test_policy_cidr_src_vn_dst_cidr(self):
"""Test cases to test policy CIDR"""
"""Policy Rule :- source = VN, destination = CIDR."""
result = True
af = self.inputs.get_af()
# create Ipam and VN
self.setup_ipam_vn()
VN1_subnet_v4 = self.VN1_fixture.get_cidrs(af='v4')[0]
VN2_subnet_v4 = self.VN2_fixture.get_cidrs(af='v4')[0]
if af in ('v6', 'dual'):
VN1_subnet_v6 = self.VN1_fixture.get_cidrs(af='v6')[0]
VN2_subnet_v6 = self.VN2_fixture.get_cidrs(af='v6')[0]
else:
VN1_subnet_v6 = None
VN2_subnet_v6 = None
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN2_subnet_v4,
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN2_subnet_v4:VN2_subnet_v6})
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN1_subnet_v4,
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN1_subnet_v4:VN1_subnet_v6})
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : [policy12_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : [policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM21_fixture)
if ret:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM21_fixture.vm_ip)
cmd = cmd + "| grep 'Action:D(Policy)' | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info("Test with src as VN and dst as cidr PASSED")
else:
result = False
self.logger.error("Test with src as VN and dst as cidr FAILED")
else:
result = False
self.logger.error("Test with src as VN and dst as cidr FAILED")
return result
# end test_policy_cidr_src_vn_dst_cidr
@tcutils.wrappers.preposttest_wrapper
def test_policy_cidr_src_duplicate_vn_dst_cidr(self):
"""Test cases to test policy CIDR"""
"""Policy Rule1 :- source = VN-A, destination = CIDR-A."""
"""Policy Rule2 :- source = VN-A, destination = CIDR-B."""
result = True
af = self.inputs.get_af()
# create Ipam and VN
self.setup_ipam_vn()
VN1_subnet_v4 = self.VN1_fixture.get_cidrs(af='v4')[0]
VN2_subnet_v4 = self.VN2_fixture.get_cidrs(af='v4')[0]
VN3_subnet_v4 = self.VN3_fixture.get_cidrs(af='v4')[0]
if af in ('v6', 'dual'):
VN1_subnet_v6 = self.VN1_fixture.get_cidrs(af='v6')[0]
VN2_subnet_v6 = self.VN2_fixture.get_cidrs(af='v6')[0]
VN3_subnet_v6 = self.VN3_fixture.get_cidrs(af='v6')[0]
else:
VN1_subnet_v6 = None
VN2_subnet_v6 = None
VN3_subnet_v6 = None
# create policy
policy_name = 'policy123'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN2_subnet_v4,
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN3_subnet_v4,
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN3',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN2_subnet_v4:VN2_subnet_v6,
VN3_subnet_v4:VN3_subnet_v6})
policy123_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN1_subnet_v4,
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN1_subnet_v4:VN1_subnet_v6})
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy31'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN1_subnet_v4,
'source_network': 'VN3',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN3',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN1_subnet_v4:VN1_subnet_v6})
policy31_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : [policy123_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy123'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : [policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
VN3_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN3_fixture.vn_name,
policy_obj={self.VN3_fixture.vn_name : [policy31_fixture.policy_obj]},
vn_obj={self.VN3_fixture.vn_name : self.VN3_fixture},
vn_policys=['policy31'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM21_fixture)
if ret:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM21_fixture.vm_ip)
cmd = cmd + "| grep 'Action:D(Policy)' | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info("Test with src as VN and dst as cidr PASSED")
else:
result = False
self.logger.error("Test with src as VN and dst as cidr FAILED")
else:
result = False
self.logger.error("Test with src as VN and dst as cidr FAILED")
ret = False
flow_record = 0
ret = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM31_fixture)
if ret:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM31_fixture.vm_ip)
cmd = cmd + "| grep 'Action:D(Policy)' | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info("Test with src as VN and dst as cidr PASSED")
else:
result = False
self.logger.error("Test with src as VN and dst as cidr FAILED")
return result
# end test_policy_cidr_src_duplicate_vn_dst_cidr
@attr(type=['cb_sanity', 'sanity', 'vcenter', 'vcenter_compute'])
@tcutils.wrappers.preposttest_wrapper
def test_policy_cidr_src_cidr_dst_any(self):
"""Test cases to test policy CIDR"""
"""Policy Rule :- source = CIDR, destination = ANY."""
"""Policy Rule :- source = ANY, destination = CIDR."""
result = True
af = self.inputs.get_af()
# create Ipam and VN
self.setup_ipam_vn()
VN1_subnet_v4 = self.VN1_fixture.get_cidrs(af='v4')[0]
VN2_subnet_v4 = self.VN2_fixture.get_cidrs(af='v4')[0]
if af in ('v6', 'dual'):
VN1_subnet_v6 = self.VN1_fixture.get_cidrs(af='v6')[0]
VN2_subnet_v6 = self.VN2_fixture.get_cidrs(af='v6')[0]
else:
VN1_subnet_v6 = None
VN2_subnet_v6 = None
# create policy
policy_name = 'policy12'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_network': 'any',
'source_subnet': VN1_subnet_v4,
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN1_subnet_v4:VN1_subnet_v6})
policy12_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy21'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': VN1_subnet_v4,
'source_network': 'any',
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{VN1_subnet_v4:VN1_subnet_v6})
policy21_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : [policy12_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy12'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : [policy21_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy21'],
project_name=self.project.project_name))
# create VM
self.setup_vm()
ret1 = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM21_fixture)
ret2 = self.VM21_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM11_fixture)
if ret1 and ret2:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM21_fixture.vm_ip)
cmd = cmd + "| grep 'Action:D(Policy)' | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info("Found %s matching flows" % flow_record)
self.logger.info("Test with src as CIDR and dst as ANY PASSED")
else:
result = False
self.logger.error("Test with src as CIDR and dst as ANY FAILED")
else:
result = False
self.logger.error("Test with src as CIDR and dst as ANY FAILED")
return result
# end test_policy_cidr_src_cidr_dst_any
@tcutils.wrappers.preposttest_wrapper
def test_policy_cidr_src_cidr_dst_cidr(self):
"""Test cases to test policy CIDR"""
"""Policy1 Rule :- source = CIDR-VM11, destination = CIDR-VM12."""
"""Policy2 Rule :- source = CIDR-VM11, destination = CIDR-VM21."""
result = True
af = self.inputs.get_af()
# create Ipam and VN
self.setup_ipam_vn()
# create VM
self.setup_vm()
self.VM12_fixture = self.useFixture(
VMFixture(
connections=self.connections,
vn_obj=self.VN1_fixture.obj,
vm_name='VM12',
project_name=self.project.project_name))
assert self.VM12_fixture.verify_on_setup()
assert self.VM12_fixture.wait_till_vm_is_up()
# Check initial connectivity without policies in place.
ret = self.VM11_fixture.ping_with_certainty(expectation=True,
dst_vm_fixture=self.VM12_fixture)
if ret:
self.logger.info("ICMP traffic is allowed between VMs in same VN")
else:
result = False
self.logger.error(
"ICMP traffic is not allowed between VMs in same VN, which is wrong")
ret = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM21_fixture)
if ret:
self.logger.info("ICMP traffic is not allowed between VMs across VNs")
else:
result = False
self.logger.error(
"ICMP traffic is allowed between VMs across VNs, which is wrong")
if result == False:
return result
# Get the VM IP addresses in CIDR format.
vm11_ip = str(IPNetwork(self.VM11_fixture.get_vm_ips(af='v4')[0]))
vm12_ip = str(IPNetwork(self.VM12_fixture.get_vm_ips(af='v4')[0]))
vm21_ip = str(IPNetwork(self.VM21_fixture.get_vm_ips(af='v4')[0]))
if af in ('v6', 'dual'):
vm11_ipv6 = str(IPNetwork(self.VM11_fixture.get_vm_ips(af='v6')[0]))
vm12_ipv6 = str(IPNetwork(self.VM12_fixture.get_vm_ips(af='v6')[0]))
vm21_ipv6 = str(IPNetwork(self.VM21_fixture.get_vm_ips(af='v6')[0]))
else:
vm11_ipv6 = None
vm12_ipv6 = None
vm21_ipv6 = None
# create policy
policy_name = 'policy1112'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': vm12_ip,
'source_subnet': vm11_ip,
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{vm11_ip:vm11_ipv6,
vm12_ip:vm12_ipv6})
policy1112_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy1211'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': vm11_ip,
'source_subnet': vm12_ip,
'dst_ports': 'any',
'simple_action': 'deny',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{vm11_ip:vm11_ipv6,
vm12_ip:vm12_ipv6})
policy1211_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy1121'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': vm21_ip,
'source_subnet': vm11_ip,
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN2',
'source_network': 'VN1',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{vm11_ip:vm11_ipv6,
vm21_ip:vm21_ipv6})
policy1121_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
policy_name = 'policy2111'
rules = []
rules = [{'direction': '<>',
'protocol': 'icmp',
'dest_subnet': vm11_ip,
'source_subnet': vm21_ip,
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'
},
{'direction': '<>',
'protocol': 'any',
'dest_network': 'VN1',
'source_network': 'VN2',
'dst_ports': 'any',
'simple_action': 'pass',
'src_ports': 'any'}]
rules = policy_test_utils.update_cidr_rules_with_ipv6(af, rules,
{vm11_ip:vm11_ipv6,
vm21_ip:vm21_ipv6})
policy2111_fixture = self.useFixture(
PolicyFixture(
policy_name=policy_name,
rules_list=rules,
inputs=self.inputs,
connections=self.connections))
# attach policy to VN
VN1_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN1_fixture.vn_name,
policy_obj={self.VN1_fixture.vn_name : \
[policy1112_fixture.policy_obj, \
policy1211_fixture.policy_obj, \
policy1121_fixture.policy_obj]},
vn_obj={self.VN1_fixture.vn_name : self.VN1_fixture},
vn_policys=['policy1112','policy1211','policy1121'],
project_name=self.project.project_name))
VN2_policy_fixture = self.useFixture(
VN_Policy_Fixture(
connections=self.connections,
vn_name=self.VN2_fixture.vn_name,
policy_obj={self.VN2_fixture.vn_name : \
[policy2111_fixture.policy_obj]},
vn_obj={self.VN2_fixture.vn_name : self.VN2_fixture},
vn_policys=['policy2111'],
project_name=self.project.project_name))
# Test traffic with the policies having CIDR as src and dst
# attached to the respective networks.
ret = self.VM11_fixture.ping_with_certainty(expectation=False,
dst_vm_fixture=self.VM12_fixture)
if ret:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM12_fixture.vm_ip)
cmd = cmd + "| grep 'Action:D(Policy)' | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info(
"ICMP traffic is denied between VM11 and VM12 by policy1112 and policy1211.")
self.logger.info("Above test Passed.")
else:
result = False
self.logger.error(
"No denied flows found between VM11 and VM12; policy1112 and policy1211 did not take effect.")
self.logger.error("Above test Failed.")
else:
result = False
self.logger.error(
"ICMP traffic is allowed between VM11 and VM12 despite the deny rules in policy1112 and policy1211.")
self.logger.error("Above test Failed.")
ret = False
flow_record = 0
ret = self.VM11_fixture.ping_with_certainty(expectation=True,
dst_vm_fixture=self.VM21_fixture)
if ret:
cmd = "flow -l | grep %s -A1 | grep %s -A1 " % (
self.VM11_fixture.vm_ip, self.VM21_fixture.vm_ip)
cmd = cmd + "| grep 'Action:F' | wc -l"
flow_record = self.inputs.run_cmd_on_server(
self.VM11_fixture.vm_node_ip, cmd,
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['username'],
self.inputs.host_data[self.VM11_fixture.vm_node_ip]['password'],
container='agent')
if int(flow_record) > 0:
self.logger.info(
"ICMP traffic is allowed between VM11 and VM21 by policy1121 and policy2111.")
self.logger.info("Above test Passed.")
else:
result = False
self.logger.error(
"No forwarded flows found between VM11 and VM21; policy1121 and policy2111 did not take effect.")
self.logger.error("Above test Failed.")
else:
result = False
self.logger.error(
"ICMP traffic is not allowed between VM11 and VM21 despite the pass rules in policy1121 and policy2111.")
self.logger.error("Above test Failed.")
return result
# end test_policy_cidr_src_cidr_dst_cidr
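test_policy_cidr_src_cidr_dst_cidr turns each VM address into a host-length CIDR with `str(IPNetwork(...))` from netaddr. The same conversion can be illustrated with only the standard library (an equivalent sketch, not the module's actual call):

```python
import ipaddress

def host_cidr(ip):
    """Return the host-length CIDR string for a bare address.

    '10.1.1.5' becomes '10.1.1.5/32'; an IPv6 address gets /128.
    Mirrors what str(IPNetwork(vm_ip)) yields in the test above.
    """
    return str(ipaddress.ip_network(ip))
```

Either form produces a single-host subnet that the `source_subnet`/`dest_subnet` rule keys accept.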

    @tcutils.wrappers.preposttest_wrapper
    def test_route_leaking_pass_protocol_src_cidr_dst_cidr(self):
        """Test case to test route leaking with a specific protocol.

        Policy Rule :- source = CIDR, destination = CIDR.
        """
        result = True
        # create Ipam and VN
        self.setup_ipam_vn()
        VN1_subnet = self.VN1_fixture.get_cidrs()[0]
        VN2_subnet = self.VN2_fixture.get_cidrs()[0]
        # create policy
        policy_name = 'policy12'
        rules = [{'direction': '<>',
                  'protocol': 'icmp',
                  'dest_subnet': VN2_subnet,
                  'source_subnet': VN1_subnet,
                  'dst_ports': 'any',
                  'simple_action': 'deny',
                  'src_ports': 'any'},
                 {'direction': '<>',
                  'protocol': 'tcp',
                  'dest_network': 'VN2',
                  'source_network': 'VN1',
                  'dst_ports': 'any',
                  'simple_action': 'pass',
                  'src_ports': 'any'}]
        policy12_fixture = self.useFixture(
            PolicyFixture(
                policy_name=policy_name,
                rules_list=rules,
                inputs=self.inputs,
                connections=self.connections))
        # attach policy to VN
        VN1_policy_fixture = self.useFixture(
            VN_Policy_Fixture(
                connections=self.connections,
                vn_name=self.VN1_fixture.vn_name,
                policy_obj={self.VN1_fixture.vn_name: [policy12_fixture.policy_obj]},
                vn_obj={self.VN1_fixture.vn_name: self.VN1_fixture},
                vn_policys=['policy12'],
                project_name=self.project.project_name))
        VN2_policy_fixture = self.useFixture(
            VN_Policy_Fixture(
                connections=self.connections,
                vn_name=self.VN2_fixture.vn_name,
                policy_obj={self.VN2_fixture.vn_name: [policy12_fixture.policy_obj]},
                vn_obj={self.VN2_fixture.vn_name: self.VN2_fixture},
                vn_policys=['policy12'],
                project_name=self.project.project_name))
        # create VM
        self.setup_vm()
        agent_inspect_h = self.agent_inspect[self.VM11_fixture.vm_node_ip]
        vrf_id = agent_inspect_h.get_vna_vrf_id(self.VN1_fixture.vn_fq_name)
        route = agent_inspect_h.get_vna_route(vrf_id=vrf_id, ip=self.VM21_fixture.vm_ip)
        self.logger.debug("Route value : %s" % route)
        if route:
            self.logger.info("Route of VN2 found in VN1 database. Route leaking successful")
        else:
            self.logger.error("Route of VN2 not found in VN1 database. Route leaking failed")
            result = False
        assert result, "Route leaking between VN1 and VN2 failed"
        assert self.VM11_fixture.ping_with_certainty(
            self.VM21_fixture.vm_ip,
            expectation=False), "ICMP deny rule unexpectedly allowing ICMP traffic"
        assert self.VM11_fixture.check_file_transfer(
            self.VM21_fixture, mode='scp',
            size='100', expectation=True), "TCP pass rule unexpectedly denying TCP traffic"
    # end test_route_leaking_pass_protocol_src_cidr_dst_cidr
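
The two-rule policy above depends on rules being evaluated in order, with the first matching rule deciding the action (deny ICMP first, then pass TCP, with an implicit deny for anything unmatched). A minimal standalone sketch of that first-match evaluation, assuming a simplified rule shape rather than the real Contrail policy objects:

```python
def evaluate(rules, protocol):
    """Return the action of the first rule whose protocol matches.

    Simplified first-match semantics: unmatched traffic falls through
    to an implicit deny. The rule dicts here are a toy subset of the
    fields used in the tests above.
    """
    for rule in rules:
        if rule['protocol'] in (protocol, 'any'):
            return rule['simple_action']
    return 'deny'


rules = [
    {'protocol': 'icmp', 'simple_action': 'deny'},
    {'protocol': 'tcp', 'simple_action': 'pass'},
]
print(evaluate(rules, 'icmp'))  # deny
print(evaluate(rules, 'tcp'))   # pass
```

Swapping the two rules would not change the outcome here because they match disjoint protocols, but with an `'any'` protocol rule the ordering becomes decisive.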
# end PolicyAclTests

class TestPolicyAclIpv4v6(TestPolicyAcl):

    @classmethod
    def setUpClass(cls):
        super(TestPolicyAclIpv4v6, cls).setUpClass()
        cls.inputs.set_af(af_test)

    def is_test_applicable(self):
        if self.inputs.orchestrator == 'vcenter' and not self.orch.is_feature_supported('ipv6'):
            return (False, 'Skipping IPv6 Test on vcenter setup')
        if not self.connections.orch.is_feature_supported('ipv6'):
            return (False, 'IPv6 tests not supported in this environment')
        return (True, None)

    @attr(type=['vcenter'])
    @tcutils.wrappers.preposttest_wrapper
    def test_policy_cidr_src_cidr_dst_any(self):
        super(TestPolicyAclIpv4v6, self).test_policy_cidr_src_cidr_dst_any()

    @attr(type=['sanity', 'vcenter', 'vcenter_compute'])
    @tcutils.wrappers.preposttest_wrapper
    def test_policy_cidr_src_vn_dst_cidr(self):
        super(TestPolicyAclIpv4v6, self).test_policy_cidr_src_vn_dst_cidr()

    @attr(type=['vcenter'])
    @tcutils.wrappers.preposttest_wrapper
    def test_policy_inheritance_src_vn_dst_pol(self):
        super(TestPolicyAclIpv4v6, self).test_policy_inheritance_src_vn_dst_pol()

    @attr(type=['sanity', 'vcenter', 'vcenter_compute'])
    @tcutils.wrappers.preposttest_wrapper
    def test_policy_inheritance_src_pol_dst_any(self):
        super(TestPolicyAclIpv4v6, self).test_policy_inheritance_src_pol_dst_any()
| 38.806408 | 108 | 0.524232 | 5,854 | 56,929 | 4.815511 | 0.044927 | 0.018517 | 0.029053 | 0.025931 | 0.90415 | 0.88425 | 0.862966 | 0.850692 | 0.828875 | 0.808868 | 0 | 0.029881 | 0.371586 | 56,929 | 1,466 | 109 | 38.832879 | 0.758099 | 0.034201 | 0 | 0.844482 | 0 | 0 | 0.142253 | 0.000721 | 0 | 0 | 0 | 0 | 0.008361 | 1 | 0.016722 | false | 0.036789 | 0.01087 | 0 | 0.039298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
db87c9f8304c6458a4615ca34d6c2497cc8670ca | 486 | py | Python | dashboard/views/_positions/_non_ec/__init__.py | beta-nu-theta-chi/ox-dashboard | 842d86a381f26159b2c5bad39a95169496832023 | [
"MIT"
] | null | null | null | dashboard/views/_positions/_non_ec/__init__.py | beta-nu-theta-chi/ox-dashboard | 842d86a381f26159b2c5bad39a95169496832023 | [
"MIT"
] | 70 | 2016-11-16T18:49:02.000Z | 2021-04-26T00:47:18.000Z | dashboard/views/_positions/_non_ec/__init__.py | beta-nu-theta-chi/ox-dashboard | 842d86a381f26159b2c5bad39a95169496832023 | [
"MIT"
] | null | null | null | from dashboard.views._positions._non_ec._alumni_relations_chair import *
from dashboard.views._positions._non_ec._detail_manager import *
from dashboard.views._positions._non_ec._membership_development_chair import *
from dashboard.views._positions._non_ec._philanthropy_chair import *
from dashboard.views._positions._non_ec._public_relations_chair import *
from dashboard.views._positions._non_ec._service_chair import *
from dashboard.views._positions._non_ec._social_chair import *
| 60.75 | 78 | 0.87037 | 66 | 486 | 5.833333 | 0.272727 | 0.236364 | 0.327273 | 0.490909 | 0.787013 | 0.787013 | 0.703896 | 0.605195 | 0.27013 | 0 | 0 | 0 | 0.057613 | 486 | 7 | 79 | 69.428571 | 0.840611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
dbc1bfab3985cc873d81123e22736402c676a9c6 | 692 | py | Python | fenics_concrete/__init__.py | BAMresearch/FenicsConcrete | 7a086d7767e20bd111cc7b05e5aa742d7e5ff47c | [
"MIT"
] | null | null | null | fenics_concrete/__init__.py | BAMresearch/FenicsConcrete | 7a086d7767e20bd111cc7b05e5aa742d7e5ff47c | [
"MIT"
] | 1 | 2022-03-24T15:24:53.000Z | 2022-03-24T15:24:53.000Z | fenics_concrete/__init__.py | BAMresearch/FenicsConcrete | 7a086d7767e20bd111cc7b05e5aa742d7e5ff47c | [
"MIT"
] | null | null | null | import fenics_concrete.sensors
from fenics_concrete.experimental_setups.concrete_column import ConcreteColumnExperiment
from fenics_concrete.experimental_setups.concrete_cube import ConcreteCubeExperiment
from fenics_concrete.experimental_setups.concrete_beam import ConcreteBeamExperiment
from fenics_concrete.experimental_setups.minimal_cube import MinimalCubeExperiment
from fenics_concrete.experimental_setups.concrete_cylinder import ConcreteCylinderExperiment
from fenics_concrete.material_problems.concrete_thermo_mechanical import ConcreteThermoMechanical
from fenics_concrete.material_problems.linear_elasticity import LinearElasticity
from fenics_concrete.helpers import Parameters
| 69.2 | 97 | 0.927746 | 74 | 692 | 8.351351 | 0.364865 | 0.203884 | 0.23301 | 0.242718 | 0.453074 | 0.28479 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049133 | 692 | 9 | 98 | 76.888889 | 0.93921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
dbcbe74d9cafd92e92ac0204a71309ff5415ebc5 | 5,943 | py | Python | reinforcement_learning/replay_buffer_maddpg.py | SigmaBM/neurips2020-flatland-starter-kit | 5237b74f0e646ddb505a9b44afe4d73d0a33c1f5 | [
"MIT"
] | 2 | 2021-03-03T13:26:23.000Z | 2021-11-02T01:19:16.000Z | reinforcement_learning/replay_buffer_maddpg.py | SigmaBM/neurips2020-flatland-starter-kit | 5237b74f0e646ddb505a9b44afe4d73d0a33c1f5 | [
"MIT"
] | null | null | null | reinforcement_learning/replay_buffer_maddpg.py | SigmaBM/neurips2020-flatland-starter-kit | 5237b74f0e646ddb505a9b44afe4d73d0a33c1f5 | [
"MIT"
] | null | null | null | import torch
import random
import numpy as np
from collections import namedtuple, deque

"""Replay buffer stores all observations, actions, rewards, etc"""

Experience = namedtuple("Experience", field_names=["obs_n", "act_n", "reward", "next_obs_n", "done", "act_mask"])


class ReplayBuffer:
    """Fixed-size buffer to store experience tuples."""

    def __init__(self, action_size, buffer_size, batch_size, device):
        """Initialize a ReplayBuffer object.

        Params
        ======
            action_size (int): dimension of each action
            buffer_size (int): maximum size of buffer
            batch_size (int): size of each training batch
        """
        self.action_size = action_size
        # self.memory = deque(maxlen=buffer_size)
        self.memory = []
        self.buffer_size = buffer_size
        self.batch_size = batch_size
        self.device = device
        self._next_idx = 0

    def add(self, obs_n, act_n, reward, next_obs_n, done, act_mask):
        """Add a new experience to memory.

        Params
        ======
            obs_n: array (n_agent, ob_size)
            act_n: array (n_agent, ac_size)
            reward: float
            next_obs_n: array (n_agent, ob_size)
            done: bool
            act_mask: array (n_agent, )
        """
        e = Experience(obs_n, act_n, reward, next_obs_n, done, act_mask)
        if self._next_idx >= len(self.memory):
            self.memory.append(e)
        else:
            self.memory[self._next_idx] = e
        self._next_idx = (self._next_idx + 1) % self.buffer_size

    def sample_idxes(self):
        """Randomly sample a batch of experiences from memory."""
        idxes = random.sample(range(len(self.memory)), k=self.batch_size)
        return idxes

    def get(self, idxes):
        experiences = []
        for idx in idxes:
            experiences.append(self.memory[idx])
        obs_n = torch.from_numpy(np.array([e.obs_n for e in experiences])).float().to(self.device)  # (batch_size, n_agent, ob_size)
        act_n = torch.from_numpy(np.array([e.act_n for e in experiences])).float().to(self.device)  # (batch_size, n_agent, ac_size)
        rewards = torch.from_numpy(np.array([e.reward for e in experiences])).float().to(self.device)  # (batch_size, )
        next_obs_n = torch.from_numpy(np.array([e.next_obs_n for e in experiences])).float().to(self.device)  # (batch_size, n_agent, ob_size)
        dones = torch.from_numpy(np.array([e.done for e in experiences])).float().to(self.device)  # (batch_size, )
        act_mask = torch.from_numpy(np.array([e.act_mask for e in experiences])).bool().to(self.device)  # (batch_size, n_agent)
        return obs_n, act_n, rewards, next_obs_n, dones, act_mask

    def __len__(self):
        """Return the current size of internal memory."""
        return len(self.memory)


Experience_2 = namedtuple("Experience", field_names=["obs_n", "act_n", "reward", "next_obs_n", "done", "act_mask", "agent_id"])


class ReplayBufferParamSharing:
    """Fixed-size buffer to store experience tuples."""

    def __init__(self, action_size, buffer_size, batch_size, device):
        """Initialize a ReplayBuffer object.

        Params
        ======
            action_size (int): dimension of each action
            buffer_size (int): maximum size of buffer
            batch_size (int): size of each training batch
        """
        self.action_size = action_size
        # self.memory = deque(maxlen=buffer_size)
        self.memory = []
        self.buffer_size = buffer_size
        self.batch_size = batch_size
        self.device = device
        self._next_idx = 0

    def add(self, obs_n, act_n, reward, next_obs_n, done, act_mask, agent_id):
        """Add a new experience to memory.

        Params
        ======
            obs_n: array (n_agent, ob_size)
            act_n: array (n_agent, ac_size)
            reward: float
            next_obs_n: array (n_agent, ob_size)
            done: array (n_agent, )
            act_mask: array (n_agent, )
            agent_id: int
        """
        e = Experience_2(obs_n, act_n, reward, next_obs_n, done, act_mask, agent_id)
        if self._next_idx >= len(self.memory):
            self.memory.append(e)
        else:
            self.memory[self._next_idx] = e
        self._next_idx = (self._next_idx + 1) % self.buffer_size

    def sample_idxes(self):
        """Randomly sample a batch of experiences from memory."""
        idxes = random.sample(range(len(self.memory)), k=self.batch_size)
        return idxes

    def get(self, idxes):
        experiences = []
        for idx in idxes:
            experiences.append(self.memory[idx])
        obs_n = torch.from_numpy(np.array([e.obs_n for e in experiences])).float().to(self.device)  # (batch_size, n_agent, ob_size)
        act_n = torch.from_numpy(np.array([e.act_n for e in experiences])).float().to(self.device)  # (batch_size, n_agent, ac_size)
        rewards = torch.from_numpy(np.array([e.reward for e in experiences])).float().to(self.device)  # (batch_size, )
        next_obs_n = torch.from_numpy(np.array([e.next_obs_n for e in experiences])).float().to(self.device)  # (batch_size, n_agent, ob_size)
        dones = torch.from_numpy(np.array([e.done for e in experiences])).float().to(self.device)  # (batch_size, )
        act_mask = torch.from_numpy(np.array([e.act_mask for e in experiences])).bool().to(self.device)  # (batch_size, n_agent)
        agent_ids = torch.from_numpy(np.array([e.agent_id for e in experiences])).int().to(self.device)  # (batch_size, )
        return obs_n, act_n, rewards, next_obs_n, dones, act_mask, agent_ids

    def __len__(self):
        """Return the current size of internal memory."""
        return len(self.memory) | 42.148936 | 141 | 0.611139 | 829 | 5,943 | 4.135103 | 0.107358 | 0.032672 | 0.032672 | 0.060677 | 0.915403 | 0.898775 | 0.892357 | 0.892357 | 0.892357 | 0.892357 | 0 | 0.001377 | 0.2667 | 5,943 | 141 | 142 | 42.148936 | 0.785223 | 0.266196 | 0 | 0.782609 | 0 | 0 | 0.026249 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144928 | false | 0 | 0.057971 | 0 | 0.318841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
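
Both buffer classes above share the same fixed-size ring-buffer insertion: `_next_idx` wraps modulo `buffer_size`, so once the list is full each new experience overwrites the oldest slot. A standalone miniature of just that mechanism (no torch, toy field values):

```python
import random
from collections import namedtuple

Experience = namedtuple("Experience", ["obs_n", "act_n", "reward", "next_obs_n", "done", "act_mask"])


class TinyBuffer:
    """Miniature of the ring-buffer logic: append until full, then overwrite."""

    def __init__(self, buffer_size, batch_size):
        self.memory = []
        self.buffer_size = buffer_size
        self.batch_size = batch_size
        self._next_idx = 0

    def add(self, *fields):
        e = Experience(*fields)
        if self._next_idx >= len(self.memory):
            self.memory.append(e)
        else:
            self.memory[self._next_idx] = e  # overwrite the oldest slot
        self._next_idx = (self._next_idx + 1) % self.buffer_size

    def sample_idxes(self):
        return random.sample(range(len(self.memory)), k=self.batch_size)


buf = TinyBuffer(buffer_size=4, batch_size=2)
for r in range(6):  # six adds into a size-4 buffer wrap around
    buf.add(None, None, float(r), None, False, None)
print(len(buf.memory))                 # 4
print([e.reward for e in buf.memory])  # [4.0, 5.0, 2.0, 3.0]
```

After the sixth add, slots 0 and 1 hold the two newest rewards (4.0, 5.0) while slots 2 and 3 still hold the older ones, which is exactly the overwrite pattern `get()` samples from in the full classes.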
91b996ec31881d07eb21851b82f73763b4e99038 | 187 | py | Python | tests/parser/rewriting.projection.12.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/rewriting.projection.12.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/rewriting.projection.12.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | input = """
%#maxint=2.
a(1).
b(1).
dummy :- a(A), b(B), A=B+C. % +(A,B,C), #int(A).
"""
output = """
%#maxint=2.
a(1).
b(1).
dummy :- a(A), b(B), A=B+C. % +(A,B,C), #int(A).
"""
| 14.384615 | 49 | 0.379679 | 40 | 187 | 1.775 | 0.25 | 0.169014 | 0.169014 | 0.253521 | 0.84507 | 0.84507 | 0.84507 | 0.84507 | 0.84507 | 0.84507 | 0 | 0.040268 | 0.203209 | 187 | 12 | 50 | 15.583333 | 0.436242 | 0 | 0 | 0.833333 | 0 | 0.166667 | 0.826816 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
534d418ed40ce9f6a9bddfbdcd962b5b9ebe9bdd | 1,883 | py | Python | tests/test_console_say.py | lucatrv/hooks4git | a1cac75d4119d82ce26dfde72ca3404c1064c3de | [
"MIT"
] | 32 | 2018-07-09T19:45:56.000Z | 2022-02-11T19:38:46.000Z | tests/test_console_say.py | lucatrv/hooks4git | a1cac75d4119d82ce26dfde72ca3404c1064c3de | [
"MIT"
] | 63 | 2018-07-06T19:09:24.000Z | 2020-12-14T19:54:00.000Z | tests/test_console_say.py | lucatrv/hooks4git | a1cac75d4119d82ce26dfde72ca3404c1064c3de | [
"MIT"
] | 3 | 2020-03-14T21:28:40.000Z | 2021-11-18T22:00:53.000Z | # -*- coding: utf-8 -*-
from tests import BaseTestCase
from hooks4git.console import Display


class SayTestCase(BaseTestCase):

    def test_divider(self):
        d = Display()
        check = "CHECK"

        response = d.say("PASS", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 43)

        response = d.say("PASS ", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 43)

        response = d.say("FAIL", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 47)

        response = d.say("FAIL ", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 43)

        response = d.say("SOUT", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 32)

        response = d.say("SERR", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 32)

        response = d.say("INFO", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 38)

        response = d.say("WARN", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 38)

        response = d.say("STEP", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 38)

        response = d.say("STEPS", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 38)

        response = d.say("TIME", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 38)

        response = d.say("ERR!", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 38)

        response = d.say("TITLE", check)
        self.assertTrue(check in response)
        self.assertTrue(len(response) == 26)
| 30.370968 | 44 | 0.603824 | 218 | 1,883 | 5.211009 | 0.178899 | 0.320423 | 0.137324 | 0.274648 | 0.845951 | 0.845951 | 0.845951 | 0.845951 | 0.845951 | 0.845951 | 0 | 0.020275 | 0.266596 | 1,883 | 61 | 45 | 30.868852 | 0.802317 | 0.011152 | 0 | 0.622222 | 0 | 0 | 0.032796 | 0 | 0 | 0 | 0 | 0 | 0.577778 | 1 | 0.022222 | false | 0.044444 | 0.044444 | 0 | 0.088889 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5356965d6ce79a5633ae415f68ff6cc3e7267626 | 4,668 | py | Python | robustml_portal/postAveragedModels.py | YupingLin171/PostAvgDefense | 23832afb7bb6127a9571738f20f0e3dfd3935697 | [
"MIT"
] | 1 | 2019-09-27T02:15:32.000Z | 2019-09-27T02:15:32.000Z | robustml_portal/postAveragedModels.py | YupingLin171/PostAvgDefense | 23832afb7bb6127a9571738f20f0e3dfd3935697 | [
"MIT"
] | 1 | 2019-12-06T20:22:15.000Z | 2019-12-06T20:22:15.000Z | robustml_portal/postAveragedModels.py | YupingLin171/PostAvgDefense | 23832afb7bb6127a9571738f20f0e3dfd3935697 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import robustml
import numpy as np
from collections import OrderedDict

import PADefense as padef
import resnetSmall as rnsmall
import torch
import torchvision.models as mdl
import torchvision.transforms as transforms


class PostAveragedResNet152(robustml.model.Model):
    def __init__(self, K, R, eps, device='cpu'):
        self._model = mdl.resnet152(pretrained=True).to(device)
        self._dataset = robustml.dataset.ImageNet((224, 224, 3))
        self._threat_model = robustml.threat_model.Linf(epsilon=eps)
        self._K = K
        self._r = [R/3, 2*R/3, R]
        self._sample_method = 'random'
        self._vote_method = 'avg_softmax'
        self._device = device

    @property
    def model(self):
        return self._model

    @property
    def dataset(self):
        return self._dataset

    @property
    def threat_model(self):
        return self._threat_model

    def classify(self, x):
        # model requires image in (C, H, W), but robustml provides (H, W, C)
        # transpose x to accommodate pytorch's axis arrangement convention
        x = torch.as_tensor(np.transpose(x, (2, 0, 1)))
        # preprocess data
        x = self._preprocess(x).unsqueeze(0)
        # gather neighbor samples
        x_squad = padef.formSquad_resnet(self._sample_method, self._model, x, self._K, self._r, device=self._device)
        # forward with a batch of neighbors
        feat, _ = padef.integratedForward(self._model, x_squad, batchSize=100, nClasses=1000, device=self._device, voteMethod=self._vote_method)
        # get predicted class
        prediction = torch.argmax(feat.squeeze()).item()
        return prediction

    def _preprocess(self, image):
        # normalization used by pre-trained model
        normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        return normalize(image)

    def to(self, device):
        self._model = self._model.to(device)
        self._device = device

    def eval(self):
        self._model = self._model.eval()


def pa_resnet152_config1():
    return PostAveragedResNet152(K=15, R=30, eps=8/255)


class PostAveragedResNet110(robustml.model.Model):
    def __init__(self, K, R, eps, device='cpu'):
        # load model state dict
        checkpoint = torch.load('./trainedModel/resnet110.th')
        paramDict = OrderedDict()
        for k, v in checkpoint['state_dict'].items():
            # remove 'module.' prefix introduced by DataParallel, if any
            if k.startswith('module.'):
                paramDict[k[7:]] = v
            else:
                paramDict[k] = v
        self._model = rnsmall.resnet110()
        self._model.load_state_dict(paramDict)
        self._model = self._model.to(device)
        self._dataset = robustml.dataset.CIFAR10()
        self._threat_model = robustml.threat_model.Linf(epsilon=eps)
        self._K = K
        self._r = [R/3, 2*R/3, R]
        self._sample_method = 'random'
        self._vote_method = 'avg_softmax'
        self._device = device

    @property
    def model(self):
        return self._model

    @property
    def dataset(self):
        return self._dataset

    @property
    def threat_model(self):
        return self._threat_model

    def classify(self, x):
        # model requires image in (C, H, W), but robustml provides (H, W, C)
        # transpose x to accommodate pytorch's axis arrangement convention
        x = torch.as_tensor(np.transpose(x, (2, 0, 1)))
        # preprocess data
        x = self._preprocess(x).unsqueeze(0)
        # gather neighbor samples
        x_squad = padef.formSquad_resnet(self._sample_method, self._model, x, self._K, self._r, device=self._device)
        # forward with a batch of neighbors
        feat, _ = padef.integratedForward(self._model, x_squad, batchSize=1000, nClasses=10, device=self._device, voteMethod=self._vote_method)
        # get predicted class
        prediction = torch.argmax(feat.squeeze()).item()
        return prediction

    def _preprocess(self, image):
        # normalization used by pre-trained model
        normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        return normalize(image)

    def to(self, device):
        self._model = self._model.to(device)
        self._device = device

    def eval(self):
        self._model = self._model.eval()


def pa_resnet110_config1():
    return PostAveragedResNet110(K=15, R=6, eps=8/255)
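
Both classes classify by forwarding a "squad" of K randomly sampled neighbors of the input and averaging their softmax outputs (the `avg_softmax` vote). A self-contained numpy sketch of that voting step, using a hypothetical toy scoring function in place of a real network and a single sampling radius instead of the three-radius schedule above:

```python
import numpy as np

rng = np.random.default_rng(0)


def toy_logits(x):
    # Hypothetical stand-in for a network forward pass: three class scores.
    return np.array([x.sum(), -x.sum(), 0.0])


def post_averaged_predict(x, K=15, r=1.0):
    # Sample K random neighbors of x inside an L2 ball of radius r,
    # then average the softmax outputs over the whole squad and vote.
    squad = [x]
    for _ in range(K):
        d = rng.normal(size=x.shape)
        d = d / np.linalg.norm(d) * rng.uniform(0, r)
        squad.append(x + d)
    logits = np.stack([toy_logits(s) for s in squad])
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return int(probs.mean(axis=0).argmax())


print(post_averaged_predict(np.ones(4)))  # 0
```

Averaging probabilities over the neighborhood, rather than trusting the single (possibly adversarial) input point, is what gives the defense its smoothing effect.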
| 33.106383 | 144 | 0.612896 | 570 | 4,668 | 4.847368 | 0.249123 | 0.061889 | 0.030402 | 0.032573 | 0.74991 | 0.74991 | 0.729642 | 0.718784 | 0.718784 | 0.718784 | 0 | 0.038771 | 0.281705 | 4,668 | 140 | 145 | 33.342857 | 0.785267 | 0.135818 | 0 | 0.701149 | 0 | 0 | 0.020916 | 0.006723 | 0 | 0 | 0 | 0 | 0 | 1 | 0.206897 | false | 0 | 0.091954 | 0.091954 | 0.45977 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
72d3a34b5207945be19d479f038710e035db09f3 | 206,746 | py | Python | boto3_type_annotations_with_docs/boto3_type_annotations/iam/service_resource.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/iam/service_resource.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/iam/service_resource.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from boto3.resources.collection import ResourceCollection
from typing import List
from typing import Optional
from typing import Union
from typing import Dict
from datetime import datetime
from boto3.resources import base
class ServiceResource(base.ServiceResource):
groups: 'groups'
instance_profiles: 'instance_profiles'
policies: 'policies'
roles: 'roles'
saml_providers: 'saml_providers'
server_certificates: 'server_certificates'
users: 'users'
virtual_mfa_devices: 'virtual_mfa_devices'
def AccessKey(self, user_name: str = None, id: str = None) -> 'AccessKey':
"""
Creates a AccessKey resource.::
access_key = iam.AccessKey('user_name','id')
:type user_name: string
:param user_name: The AccessKey\'s user_name identifier. This **must** be set.
:type id: string
:param id: The AccessKey\'s id identifier. This **must** be set.
:rtype: :py:class:`IAM.AccessKey`
:returns: A AccessKey resource
"""
pass
def AccessKeyPair(self, user_name: str = None, id: str = None, secret: str = None) -> 'AccessKeyPair':
"""
Creates a AccessKeyPair resource.::
access_key_pair = iam.AccessKeyPair('user_name','id','secret')
:type user_name: string
:param user_name: The AccessKeyPair\'s user_name identifier. This **must** be set.
:type id: string
:param id: The AccessKeyPair\'s id identifier. This **must** be set.
:type secret: string
:param secret: The AccessKeyPair\'s secret identifier. This **must** be set.
:rtype: :py:class:`IAM.AccessKeyPair`
:returns: A AccessKeyPair resource
"""
pass
def AccountPasswordPolicy(self) -> 'AccountPasswordPolicy':
"""
Creates a AccountPasswordPolicy resource.::
account_password_policy = iam.AccountPasswordPolicy()
:rtype: :py:class:`IAM.AccountPasswordPolicy`
:returns: A AccountPasswordPolicy resource
"""
pass
def AccountSummary(self) -> 'AccountSummary':
"""
Creates a AccountSummary resource.::
account_summary = iam.AccountSummary()
:rtype: :py:class:`IAM.AccountSummary`
:returns: A AccountSummary resource
"""
pass
def AssumeRolePolicy(self, role_name: str = None) -> 'AssumeRolePolicy':
"""
Creates a AssumeRolePolicy resource.::
assume_role_policy = iam.AssumeRolePolicy('role_name')
:type role_name: string
:param role_name: The AssumeRolePolicy\'s role_name identifier. This **must** be set.
:rtype: :py:class:`IAM.AssumeRolePolicy`
:returns: A AssumeRolePolicy resource
"""
pass
def CurrentUser(self) -> 'CurrentUser':
"""
Creates a CurrentUser resource.::
current_user = iam.CurrentUser()
:rtype: :py:class:`IAM.CurrentUser`
:returns: A CurrentUser resource
"""
pass
def Group(self, name: str = None) -> 'Group':
"""
Creates a Group resource.::
group = iam.Group('name')
:type name: string
:param name: The Group\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.Group`
:returns: A Group resource
"""
pass
def GroupPolicy(self, group_name: str = None, name: str = None) -> 'GroupPolicy':
"""
Creates a GroupPolicy resource.::
group_policy = iam.GroupPolicy('group_name','name')
:type group_name: string
:param group_name: The GroupPolicy\'s group_name identifier. This **must** be set.
:type name: string
:param name: The GroupPolicy\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.GroupPolicy`
:returns: A GroupPolicy resource
"""
pass
def InstanceProfile(self, name: str = None) -> 'InstanceProfile':
"""
Creates a InstanceProfile resource.::
instance_profile = iam.InstanceProfile('name')
:type name: string
:param name: The InstanceProfile\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.InstanceProfile`
:returns: A InstanceProfile resource
"""
pass
def LoginProfile(self, user_name: str = None) -> 'LoginProfile':
"""
Creates a LoginProfile resource.::
login_profile = iam.LoginProfile('user_name')
:type user_name: string
:param user_name: The LoginProfile\'s user_name identifier. This **must** be set.
:rtype: :py:class:`IAM.LoginProfile`
:returns: A LoginProfile resource
"""
pass
def MfaDevice(self, user_name: str = None, serial_number: str = None) -> 'MfaDevice':
"""
Creates a MfaDevice resource.::
mfa_device = iam.MfaDevice('user_name','serial_number')
:type user_name: string
:param user_name: The MfaDevice\'s user_name identifier. This **must** be set.
:type serial_number: string
:param serial_number: The MfaDevice\'s serial_number identifier. This **must** be set.
:rtype: :py:class:`IAM.MfaDevice`
:returns: A MfaDevice resource
"""
pass
def Policy(self, policy_arn: str = None) -> 'Policy':
"""
Creates a Policy resource.::
policy = iam.Policy('policy_arn')
:type policy_arn: string
:param policy_arn: The Policy\'s policy_arn identifier. This **must** be set.
:rtype: :py:class:`IAM.Policy`
:returns: A Policy resource
"""
pass
def PolicyVersion(self, arn: str = None, version_id: str = None) -> 'PolicyVersion':
"""
Creates a PolicyVersion resource.::
policy_version = iam.PolicyVersion('arn','version_id')
:type arn: string
:param arn: The PolicyVersion\'s arn identifier. This **must** be set.
:type version_id: string
:param version_id: The PolicyVersion\'s version_id identifier. This **must** be set.
:rtype: :py:class:`IAM.PolicyVersion`
:returns: A PolicyVersion resource
"""
pass
def Role(self, name: str = None) -> 'Role':
"""
Creates a Role resource.::
role = iam.Role('name')
:type name: string
:param name: The Role\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.Role`
:returns: A Role resource
"""
pass
def RolePolicy(self, role_name: str = None, name: str = None) -> 'RolePolicy':
"""
Creates a RolePolicy resource.::
role_policy = iam.RolePolicy('role_name','name')
:type role_name: string
:param role_name: The RolePolicy\'s role_name identifier. This **must** be set.
:type name: string
:param name: The RolePolicy\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.RolePolicy`
:returns: A RolePolicy resource
"""
pass
def SamlProvider(self, arn: str = None) -> 'SamlProvider':
"""
Creates a SamlProvider resource.::
saml_provider = iam.SamlProvider('arn')
:type arn: string
:param arn: The SamlProvider\'s arn identifier. This **must** be set.
:rtype: :py:class:`IAM.SamlProvider`
:returns: A SamlProvider resource
"""
pass
def ServerCertificate(self, name: str = None) -> 'ServerCertificate':
"""
Creates a ServerCertificate resource.::
server_certificate = iam.ServerCertificate('name')
:type name: string
:param name: The ServerCertificate\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.ServerCertificate`
:returns: A ServerCertificate resource
"""
pass
def SigningCertificate(self, user_name: str = None, id: str = None) -> 'SigningCertificate':
"""
Creates a SigningCertificate resource.::
signing_certificate = iam.SigningCertificate('user_name','id')
:type user_name: string
:param user_name: The SigningCertificate\'s user_name identifier. This **must** be set.
:type id: string
:param id: The SigningCertificate\'s id identifier. This **must** be set.
:rtype: :py:class:`IAM.SigningCertificate`
:returns: A SigningCertificate resource
"""
pass
def User(self, name: str = None) -> 'User':
"""
Creates a User resource.::
user = iam.User('name')
:type name: string
:param name: The User\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.User`
:returns: A User resource
"""
pass
def UserPolicy(self, user_name: str = None, name: str = None) -> 'UserPolicy':
"""
Creates a UserPolicy resource.::
user_policy = iam.UserPolicy('user_name','name')
:type user_name: string
:param user_name: The UserPolicy\'s user_name identifier. This **must** be set.
:type name: string
:param name: The UserPolicy\'s name identifier. This **must** be set.
:rtype: :py:class:`IAM.UserPolicy`
:returns: A UserPolicy resource
"""
pass
def VirtualMfaDevice(self, serial_number: str = None) -> 'VirtualMfaDevice':
"""
Creates a VirtualMfaDevice resource.::
virtual_mfa_device = iam.VirtualMfaDevice('serial_number')
:type serial_number: string
:param serial_number: The VirtualMfaDevice\'s serial_number identifier. This **must** be set.
:rtype: :py:class:`IAM.VirtualMfaDevice`
:returns: A VirtualMfaDevice resource
"""
pass
def change_password(self, OldPassword: str, NewPassword: str):
"""
Changes the password of the IAM user who is calling this operation. The AWS account root user password is not affected by this operation.
To change the password for a different user, see UpdateLoginProfile . For more information about modifying passwords, see `Managing Passwords <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ChangePassword>`_
**Request Syntax**
::
response = iam.change_password(
OldPassword='string',
NewPassword='string'
)
:type OldPassword: string
:param OldPassword: **[REQUIRED]**
The IAM user\'s current password.
:type NewPassword: string
:param NewPassword: **[REQUIRED]**
The new password. The new password must conform to the AWS account\'s password policy, if one exists.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ that is used to validate this parameter is a string of characters. That string can include almost any printable ASCII character from the space (\u0020) through the end of the ASCII character range (\u00FF). You can also include the tab (\u0009), line feed (\u000A), and carriage return (\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
:returns: None
"""
pass
def create_account_alias(self, AccountAlias: str):
"""
Creates an alias for your AWS account. For information about using an AWS account alias, see `Using an Alias for Your AWS Account ID <https://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateAccountAlias>`_
**Request Syntax**
::
response = iam.create_account_alias(
AccountAlias='string'
)
:type AccountAlias: string
:param AccountAlias: **[REQUIRED]**
The account alias to create.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of lowercase letters, digits, and dashes. You cannot start or finish with a dash, nor can you have two dashes in a row.
:returns: None
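The alias rules above (lowercase letters, digits, and dashes; no leading or trailing dash; no two dashes in a row) can be checked locally before making the request. A minimal sketch, not part of boto3:

```python
import re

# No leading dash, no double dash anywhere, only [a-z0-9-], and no
# trailing dash, per the documented alias rules.
_ALIAS_RE = re.compile(r"(?!-)(?!.*--)[a-z0-9-]+(?<!-)")

def alias_ok(alias: str) -> bool:
    return _ALIAS_RE.fullmatch(alias) is not None
```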
"""
pass
def create_account_password_policy(self, MinimumPasswordLength: int = None, RequireSymbols: bool = None, RequireNumbers: bool = None, RequireUppercaseCharacters: bool = None, RequireLowercaseCharacters: bool = None, AllowUsersToChangePassword: bool = None, MaxPasswordAge: int = None, PasswordReusePrevention: int = None, HardExpiry: bool = None) -> 'AccountPasswordPolicy':
"""
Updates the password policy settings for the AWS account.
.. note::
* This operation does not support partial updates. No parameters are required, but if you do not specify a parameter, that parameter's value reverts to its default value. See the **Request Parameters** section for each parameter's default value. Also note that some parameters do not allow the default parameter to be explicitly set. Instead, to invoke the default value, do not include that parameter when you invoke the operation.
For more information about using a password policy, see `Managing an IAM Password Policy <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy>`_
**Request Syntax**
::
account_password_policy = iam.create_account_password_policy(
MinimumPasswordLength=123,
RequireSymbols=True|False,
RequireNumbers=True|False,
RequireUppercaseCharacters=True|False,
RequireLowercaseCharacters=True|False,
AllowUsersToChangePassword=True|False,
MaxPasswordAge=123,
PasswordReusePrevention=123,
HardExpiry=True|False
)
:type MinimumPasswordLength: integer
:param MinimumPasswordLength:
The minimum number of characters allowed in an IAM user password.
If you do not specify a value for this parameter, then the operation uses the default value of ``6`` .
:type RequireSymbols: boolean
:param RequireSymbols:
Specifies whether IAM user passwords must contain at least one of the following non-alphanumeric characters:
! @ # $ % ^ & * ( ) _ + - = [ ] { } | '
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one symbol character.
:type RequireNumbers: boolean
:param RequireNumbers:
Specifies whether IAM user passwords must contain at least one numeric character (0 to 9).
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one numeric character.
:type RequireUppercaseCharacters: boolean
:param RequireUppercaseCharacters:
Specifies whether IAM user passwords must contain at least one uppercase character from the ISO basic Latin alphabet (A to Z).
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one uppercase character.
:type RequireLowercaseCharacters: boolean
:param RequireLowercaseCharacters:
Specifies whether IAM user passwords must contain at least one lowercase character from the ISO basic Latin alphabet (a to z).
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one lowercase character.
:type AllowUsersToChangePassword: boolean
:param AllowUsersToChangePassword:
Allows all IAM users in your account to use the AWS Management Console to change their own passwords. For more information, see `Letting IAM Users Change Their Own Passwords <https://docs.aws.amazon.com/IAM/latest/UserGuide/HowToPwdIAMUser.html>`__ in the *IAM User Guide* .
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that IAM users in the account do not automatically have permissions to change their own password.
:type MaxPasswordAge: integer
:param MaxPasswordAge:
The number of days that an IAM user password is valid.
If you do not specify a value for this parameter, then the operation uses the default value of ``0`` . The result is that IAM user passwords never expire.
:type PasswordReusePrevention: integer
:param PasswordReusePrevention:
Specifies the number of previous passwords that IAM users are prevented from reusing.
If you do not specify a value for this parameter, then the operation uses the default value of ``0`` . The result is that IAM users are not prevented from reusing previous passwords.
:type HardExpiry: boolean
:param HardExpiry:
Prevents IAM users from setting a new password after their password has expired. The IAM user cannot be accessed until an administrator resets the password.
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that IAM users can change their passwords after they expire and continue to sign in as the user.
:rtype: :py:class:`iam.AccountPasswordPolicy`
:returns: AccountPasswordPolicy resource
"""
pass
def create_group(self, GroupName: str, Path: str = None) -> 'Group':
"""
Creates a new group.
For information about the number of groups you can create, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateGroup>`_
**Request Syntax**
::
group = iam.create_group(
Path='string',
GroupName='string'
)
:type Path: string
:param Path:
The path to the group. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type GroupName: string
:param GroupName: **[REQUIRED]**
The name of the group to create. Do not include the path in this value.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-. The group name must be unique within the account. Group names are not distinguished by case. For example, you cannot create groups named both "ADMINS" and "admins".
:rtype: :py:class:`iam.Group`
:returns: Group resource
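The documented name pattern (letters, digits, and ``_+=,.@-`` with no spaces) can be verified locally. An illustrative sketch, not part of boto3; the service remains authoritative and also enforces case-insensitive uniqueness:

```python
import re

# Upper/lowercase letters, digits, and _+=,.@- with no spaces, per the
# documented pattern for IAM names.
_NAME_RE = re.compile(r"[A-Za-z0-9_+=,.@-]+")

def iam_name_ok(name: str) -> bool:
    return _NAME_RE.fullmatch(name) is not None
```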
"""
pass
def create_instance_profile(self, InstanceProfileName: str, Path: str = None) -> 'InstanceProfile':
"""
Creates a new instance profile. For information about instance profiles, go to `About Instance Profiles <https://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html>`__ .
For information about the number of instance profiles you can create, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateInstanceProfile>`_
**Request Syntax**
::
instance_profile = iam.create_instance_profile(
InstanceProfileName='string',
Path='string'
)
:type InstanceProfileName: string
:param InstanceProfileName: **[REQUIRED]**
The name of the instance profile to create.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:type Path: string
:param Path:
The path to the instance profile. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:rtype: :py:class:`iam.InstanceProfile`
:returns: InstanceProfile resource
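The documented path rule (a single forward slash, or a string that begins and ends with forward slashes whose interior characters run from ``!`` through DEL) translates directly into a regular expression. A local sketch for illustration only, not part of boto3:

```python
import re

# Either "/" alone, or a string that begins and ends with "/" whose
# interior characters fall between ! (\x21) and DEL (\x7f).
_PATH_RE = re.compile(r"/|/[\x21-\x7f]+/")

def iam_path_ok(path: str) -> bool:
    return _PATH_RE.fullmatch(path) is not None
```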
"""
pass
def create_policy(self, PolicyName: str, PolicyDocument: str, Path: str = None, Description: str = None) -> 'Policy':
"""
Creates a new managed policy for your AWS account.
This operation creates a policy version with a version identifier of ``v1`` and sets v1 as the policy's default version. For more information about policy versions, see `Versioning for Managed Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html>`__ in the *IAM User Guide* .
For more information about managed policies in general, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreatePolicy>`_
**Request Syntax**
::
policy = iam.create_policy(
PolicyName='string',
Path='string',
PolicyDocument='string',
Description='string'
)
:type PolicyName: string
:param PolicyName: **[REQUIRED]**
The friendly name of the policy.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:type Path: string
:param Path:
The path for the policy.
For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The JSON policy document that you want to use as the content for the new policy.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:type Description: string
:param Description:
A friendly description of the policy.
Typically used to store information about the permissions defined in the policy. For example, "Grants access to production DynamoDB tables."
The policy description is immutable. After a value is assigned, it cannot be changed.
:rtype: :py:class:`iam.Policy`
:returns: Policy resource
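Because ``PolicyDocument`` is a JSON string, building it with ``json.dumps`` avoids hand-quoting mistakes. The statement below is purely illustrative (the bucket name and actions are made up for the example):

```python
import json

# A hypothetical read-only S3 policy; the bucket ARN is invented for
# illustration.
policy_document = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
})
```

The resulting string is what you would pass as ``PolicyDocument`` in the request shown above.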
"""
pass
def create_role(self, RoleName: str, AssumeRolePolicyDocument: str, Path: str = None, Description: str = None, MaxSessionDuration: int = None, PermissionsBoundary: str = None, Tags: List = None) -> 'Role':
"""
Creates a new role for your AWS account. For more information about roles, go to `IAM Roles <https://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html>`__ . For information about limitations on role names and the number of roles you can create, go to `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateRole>`_
**Request Syntax**
::
role = iam.create_role(
Path='string',
RoleName='string',
AssumeRolePolicyDocument='string',
Description='string',
MaxSessionDuration=123,
PermissionsBoundary='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
:type Path: string
:param Path:
The path to the role. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type RoleName: string
:param RoleName: **[REQUIRED]**
The name of the role to create.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
Role names are not distinguished by case. For example, you cannot create roles named both "PRODROLE" and "prodrole".
:type AssumeRolePolicyDocument: string
:param AssumeRolePolicyDocument: **[REQUIRED]**
The trust relationship policy document that grants an entity permission to assume the role.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:type Description: string
:param Description:
A description of the role.
:type MaxSessionDuration: integer
:param MaxSessionDuration:
The maximum session duration (in seconds) that you want to set for the specified role. If you do not specify a value for this setting, the default maximum of one hour is applied. This setting can have a value from 1 hour to 12 hours.
Anyone who assumes the role from the AWS CLI or API can use the ``DurationSeconds`` API parameter or the ``duration-seconds`` CLI parameter to request a longer session. The ``MaxSessionDuration`` setting determines the maximum duration that can be requested using the ``DurationSeconds`` parameter. If users don't specify a value for the ``DurationSeconds`` parameter, their security credentials are valid for one hour by default. This applies when you use the ``AssumeRole*`` API operations or the ``assume-role*`` CLI operations but does not apply when you use those operations to create a console URL. For more information, see `Using IAM Roles <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html>`__ in the *IAM User Guide* .
:type PermissionsBoundary: string
:param PermissionsBoundary:
The ARN of the policy that is used to set the permissions boundary for the role.
:type Tags: list
:param Tags:
A list of tags that you want to attach to the newly created role. Each tag consists of a key name and an associated value. For more information about tagging, see `Tagging IAM Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html>`__ in the *IAM User Guide* .
.. note::
If any one of the tags is invalid or if you exceed the allowed number of tags per role, then the entire request fails and the role is not created.
- *(dict) --*
A structure that represents user-provided metadata that can be associated with a resource such as an IAM user or role. For more information about tagging, see `Tagging IAM Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html>`__ in the *IAM User Guide* .
- **Key** *(string) --* **[REQUIRED]**
The key name that can be used to look up or retrieve the associated value. For example, ``Department`` or ``Cost Center`` are common choices.
- **Value** *(string) --* **[REQUIRED]**
The value associated with this tag. For example, tags with a key name of ``Department`` could have values such as ``Human Resources`` , ``Accounting`` , and ``Support`` . Tags with a key name of ``Cost Center`` might have values that consist of the number associated with the different cost centers in your company. Typically, many resources have tags with the same key name but with different values.
.. note::
AWS always interprets the tag ``Value`` as a single string. If you need to store an array, you can store comma-separated values in the string. However, you must interpret the value in your code.
:rtype: :py:class:`iam.Role`
:returns: Role resource
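The trust policy is likewise a JSON string, and ``MaxSessionDuration`` is given in seconds. A sketch assuming the standard EC2 service principal; the duration value and everything else here are illustrative:

```python
import json

# Trust policy letting EC2 instances assume the role.
assume_role_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
})

# MaxSessionDuration is expressed in seconds: 3600 (1 hour) up to
# 43200 (12 hours).
max_session_duration = 4 * 3600  # four hours
```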
"""
pass
def create_saml_provider(self, SAMLMetadataDocument: str, Name: str) -> 'SamlProvider':
"""
Creates an IAM resource that describes an identity provider (IdP) that supports SAML 2.0.
The SAML provider resource that you create with this operation can be used as a principal in an IAM role's trust policy. Such a policy can enable federated users who sign in using the SAML IdP to assume the role. You can create an IAM role that supports Web-based single sign-on (SSO) to the AWS Management Console or one that supports API access to AWS.
When you create the SAML provider resource, you upload a SAML metadata document that you get from your IdP. That document includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that the IdP sends. You must generate the metadata document using the identity management software that is used as your organization's IdP.
.. note::
This operation requires `Signature Version 4 <https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html>`__ .
For more information, see `Enabling SAML 2.0 Federated Users to Access the AWS Management Console <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html>`__ and `About SAML 2.0-based Federation <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateSAMLProvider>`_
**Request Syntax**
::
saml_provider = iam.create_saml_provider(
SAMLMetadataDocument='string',
Name='string'
)
:type SAMLMetadataDocument: string
:param SAMLMetadataDocument: **[REQUIRED]**
An XML document generated by an identity provider (IdP) that supports SAML 2.0. The document includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization's IdP.
For more information, see `About SAML 2.0-based Federation <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html>`__ in the *IAM User Guide* .
:type Name: string
:param Name: **[REQUIRED]**
The name of the provider to create.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:rtype: :py:class:`iam.SamlProvider`
:returns: SamlProvider resource
"""
pass
def create_server_certificate(self, ServerCertificateName: str, CertificateBody: str, PrivateKey: str, Path: str = None, CertificateChain: str = None) -> 'ServerCertificate':
"""
Uploads a server certificate entity for the AWS account. The server certificate entity includes a public key certificate, a private key, and an optional certificate chain, which should all be PEM-encoded.
We recommend that you use `AWS Certificate Manager <https://docs.aws.amazon.com/acm/>`__ to provision, manage, and deploy your server certificates. With ACM you can request a certificate, deploy it to AWS resources, and let ACM handle certificate renewals for you. Certificates provided by ACM are free. For more information about using ACM, see the `AWS Certificate Manager User Guide <https://docs.aws.amazon.com/acm/latest/userguide/>`__ .
For more information about working with server certificates, see `Working with Server Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html>`__ in the *IAM User Guide* . This topic includes a list of AWS services that can use the server certificates that you manage with IAM.
For information about the number of server certificates you can upload, see `Limitations on IAM Entities and Objects <https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html>`__ in the *IAM User Guide* .
.. note::
Because the body of the public key certificate, private key, and the certificate chain can be large, you should use POST rather than GET when calling ``UploadServerCertificate`` . For information about setting up signatures and authorization through the API, go to `Signing AWS API Requests <https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html>`__ in the *AWS General Reference* . For general information about using the Query API with IAM, go to `Calling the API by Making HTTP Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/programming.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate>`_
**Request Syntax**
::
server_certificate = iam.create_server_certificate(
Path='string',
ServerCertificateName='string',
CertificateBody='string',
PrivateKey='string',
CertificateChain='string'
)
:type Path: string
:param Path:
The path for the server certificate. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/). This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
.. note::
If you are uploading a server certificate specifically for use with Amazon CloudFront distributions, you must specify a path using the ``path`` parameter. The path must begin with ``/cloudfront`` and must include a trailing slash (for example, ``/cloudfront/test/`` ).
:type ServerCertificateName: string
:param ServerCertificateName: **[REQUIRED]**
The name for the server certificate. Do not include the path in this value. The name of the certificate cannot contain any spaces.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:type CertificateBody: string
:param CertificateBody: **[REQUIRED]**
The contents of the public key certificate in PEM-encoded format.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:type PrivateKey: string
:param PrivateKey: **[REQUIRED]**
The contents of the private key in PEM-encoded format.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:type CertificateChain: string
:param CertificateChain:
The contents of the certificate chain. This is typically a concatenation of the PEM-encoded public key certificates of the chain.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:rtype: :py:class:`iam.ServerCertificate`
:returns: ServerCertificate resource
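Since ``CertificateBody`` must be PEM-encoded, a cheap structural check can catch common mistakes (such as passing DER bytes or an empty string) before the upload. An illustrative helper, not part of boto3, and not a substitute for real certificate validation:

```python
def looks_like_pem_certificate(body: str) -> bool:
    # Structural check only: the PEM header and footer lines are
    # present with at least one base64 line between them.
    lines = body.strip().splitlines()
    return (
        len(lines) >= 3
        and lines[0] == "-----BEGIN CERTIFICATE-----"
        and lines[-1] == "-----END CERTIFICATE-----"
    )
```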
"""
pass
def create_signing_certificate(self, CertificateBody: str, UserName: str = None) -> 'SigningCertificate':
"""
Uploads an X.509 signing certificate and associates it with the specified IAM user. Some AWS services use X.509 signing certificates to validate requests that are signed with a corresponding private key. When you upload the certificate, its default status is ``Active`` .
If the ``UserName`` is not specified, the IAM user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
.. note::
Because the body of an X.509 certificate can be large, you should use POST rather than GET when calling ``UploadSigningCertificate`` . For information about setting up signatures and authorization through the API, go to `Signing AWS API Requests <https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html>`__ in the *AWS General Reference* . For general information about using the Query API with IAM, go to `Making Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate>`_
**Request Syntax**
::
signing_certificate = iam.create_signing_certificate(
UserName='string',
CertificateBody='string'
)
:type UserName: string
:param UserName:
The name of the user the signing certificate is for.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:type CertificateBody: string
:param CertificateBody: **[REQUIRED]**
The contents of the signing certificate.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:rtype: :py:class:`iam.SigningCertificate`
:returns: SigningCertificate resource
"""
pass
def create_user(self, UserName: str, Path: str = None, PermissionsBoundary: str = None, Tags: List = None) -> 'User':
"""
Creates a new IAM user for your AWS account.
For information about limitations on the number of IAM users you can create, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateUser>`_
**Request Syntax**
::
user = iam.create_user(
Path='string',
UserName='string',
PermissionsBoundary='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
:type Path: string
:param Path:
The path for the user name. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type UserName: string
:param UserName: **[REQUIRED]**
The name of the user to create.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-. User names are not distinguished by case. For example, you cannot create users named both "TESTUSER" and "testuser".
:type PermissionsBoundary: string
:param PermissionsBoundary:
The ARN of the policy that is used to set the permissions boundary for the user.
:type Tags: list
:param Tags:
A list of tags that you want to attach to the newly created user. Each tag consists of a key name and an associated value. For more information about tagging, see `Tagging IAM Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html>`__ in the *IAM User Guide* .
.. note::
If any one of the tags is invalid or if you exceed the allowed number of tags per user, then the entire request fails and the user is not created.
- *(dict) --*
A structure that represents user-provided metadata that can be associated with a resource such as an IAM user or role. For more information about tagging, see `Tagging IAM Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html>`__ in the *IAM User Guide* .
- **Key** *(string) --* **[REQUIRED]**
The key name that can be used to look up or retrieve the associated value. For example, ``Department`` or ``Cost Center`` are common choices.
- **Value** *(string) --* **[REQUIRED]**
The value associated with this tag. For example, tags with a key name of ``Department`` could have values such as ``Human Resources`` , ``Accounting`` , and ``Support`` . Tags with a key name of ``Cost Center`` might have values that consist of the number associated with the different cost centers in your company. Typically, many resources have tags with the same key name but with different values.
.. note::
AWS always interprets the tag ``Value`` as a single string. If you need to store an array, you can store comma-separated values in the string. However, you must interpret the value in your code.
:rtype: :py:class:`iam.User`
:returns: User resource
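As a quick client-side illustration (not part of the service API), the documented user-name character set can be checked before calling the service. The pattern and the 64-character bound below are assumptions inferred from the description above and the published IAM quotas:

```python
import re

# Assumed check mirroring the documented character set:
# alphanumerics plus _ + = , . @ - and no spaces.
USER_NAME_RE = re.compile(r"[\w+=,.@-]+", re.ASCII)

def is_valid_user_name(name: str) -> bool:
    # 64 characters is the IAM user-name length quota (an assumption here).
    return len(name) <= 64 and bool(USER_NAME_RE.fullmatch(name))
```

This only pre-screens obvious mistakes; the service remains the authoritative validator.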
"""
pass
def create_virtual_mfa_device(self, VirtualMFADeviceName: str, Path: str = None) -> 'VirtualMfaDevice':
"""
Creates a new virtual MFA device for the AWS account. After creating the virtual MFA, use EnableMFADevice to attach the MFA device to an IAM user. For more information about creating and working with virtual MFA devices, go to `Using a Virtual MFA Device <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html>`__ in the *IAM User Guide* .
For information about limits on the number of MFA devices you can create, see `Limitations on Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. warning::
The seed information contained in the QR code and the Base32 string should be treated like any other secret access information. In other words, protect the seed information as you would your AWS access keys or your passwords. After you provision your virtual device, you should ensure that the information is destroyed following secure procedures.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateVirtualMFADevice>`_
**Request Syntax**
::
virtual_mfa_device = iam.create_virtual_mfa_device(
Path='string',
VirtualMFADeviceName='string'
)
:type Path: string
:param Path:
The path for the virtual MFA device. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type VirtualMFADeviceName: string
:param VirtualMFADeviceName: **[REQUIRED]**
The name of the virtual MFA device. Use with path to uniquely identify a virtual MFA device.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:rtype: :py:class:`iam.VirtualMfaDevice`
:returns: VirtualMfaDevice resource
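For context on the seed mentioned in the warning above: the Base32 string (and the QR code) encode an RFC 6238 TOTP secret, which authenticator apps turn into one-time codes. A minimal stdlib-only sketch of that derivation (illustrative, not part of this API; any real seed must be kept secret):

```python
import base64
import hmac
import struct
import time

def totp_from_seed(base32_seed, at_time=None, digits=6, period=30):
    # Decode the Base32 seed into the raw HMAC key.
    key = base64.b32decode(base32_seed, casefold=True)
    # Count 30-second intervals since the Unix epoch (RFC 6238).
    counter = int((time.time() if at_time is None else at_time) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226) selects 4 bytes from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

This is why the seed must be destroyed after provisioning: anyone holding it can generate valid codes.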
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
class AccessKey(base.ServiceResource):
access_key_id: str
status: str
create_date: datetime
user_name: str
id: str
def activate(self):
"""
Changes the status of the specified access key to Active, so it can again be used to sign requests. This operation (``UpdateAccessKey`` ) can be used as part of a key rotation workflow.
If the ``UserName`` is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
For information about rotating keys, see `Managing Keys and Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey>`_
**Request Syntax**
::
response = access_key.activate()
:returns: None
"""
pass
def deactivate(self):
"""
Changes the status of the specified access key to Inactive, so it can no longer be used to sign requests. This operation (``UpdateAccessKey`` ) can be used to disable a user's key as part of a key rotation workflow.
If the ``UserName`` is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
For information about rotating keys, see `Managing Keys and Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey>`_
**Request Syntax**
::
response = access_key.deactivate()
:returns: None
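The rotation workflow mentioned above can be outlined against this resource interface. The ``create_access_key_pair`` action lives on the ``User`` resource (not shown in this excerpt), so treat this as an illustrative sketch rather than a verbatim recipe:

```python
def rotate_access_keys(user):
    # `user` is assumed to expose `access_keys` (a collection of AccessKey
    # resources) and `create_access_key_pair()`, as the boto3 IAM User does.
    old_keys = list(user.access_keys.all())
    new_key = user.create_access_key_pair()  # carries .id and .secret
    for key in old_keys:
        key.deactivate()  # inactive keys can no longer sign requests
    return new_key
```

Once the new credentials are confirmed working everywhere, the old keys can be removed permanently with ``key.delete()``.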
"""
pass
def delete(self):
"""
Deletes the access key pair associated with the specified IAM user.
If you do not specify a user name, IAM determines the user name implicitly based on the AWS access key ID signing the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccessKey>`_
**Request Syntax**
::
response = access_key.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
class AccessKeyPair(base.ServiceResource):
access_key_id: str
status: str
secret_access_key: str
create_date: datetime
user_name: str
id: str
secret: str
def activate(self):
"""
Changes the status of the specified access key to Active, so it can again be used to sign requests. This operation (``UpdateAccessKey`` ) can be used as part of a key rotation workflow.
If the ``UserName`` is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
For information about rotating keys, see `Managing Keys and Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey>`_
**Request Syntax**
::
response = access_key_pair.activate()
:returns: None
"""
pass
def deactivate(self):
"""
Changes the status of the specified access key to Inactive, so it can no longer be used to sign requests. This operation (``UpdateAccessKey`` ) can be used to disable a user's key as part of a key rotation workflow.
If the ``UserName`` is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
For information about rotating keys, see `Managing Keys and Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey>`_
**Request Syntax**
::
response = access_key_pair.deactivate()
:returns: None
"""
pass
def delete(self):
"""
Deletes the access key pair associated with the specified IAM user.
If you do not specify a user name, IAM determines the user name implicitly based on the AWS access key ID signing the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccessKey>`_
**Request Syntax**
::
response = access_key_pair.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
class AccountPasswordPolicy(base.ServiceResource):
minimum_password_length: int
require_symbols: bool
require_numbers: bool
require_uppercase_characters: bool
require_lowercase_characters: bool
allow_users_to_change_password: bool
expire_passwords: bool
max_password_age: int
password_reuse_prevention: int
hard_expiry: bool
def delete(self):
"""
Deletes the password policy for the AWS account. There are no parameters.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteAccountPasswordPolicy>`_
**Request Syntax**
::
response = account_password_policy.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_account_password_policy` to update the attributes of the AccountPasswordPolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
account_password_policy.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_account_password_policy` to update the attributes of the AccountPasswordPolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
account_password_policy.reload()
:returns: None
"""
pass
def update(self, MinimumPasswordLength: int = None, RequireSymbols: bool = None, RequireNumbers: bool = None, RequireUppercaseCharacters: bool = None, RequireLowercaseCharacters: bool = None, AllowUsersToChangePassword: bool = None, MaxPasswordAge: int = None, PasswordReusePrevention: int = None, HardExpiry: bool = None):
"""
Updates the password policy settings for the AWS account.
.. note::
* This operation does not support partial updates. No parameters are required, but if you do not specify a parameter, that parameter's value reverts to its default value. See the **Request Parameters** section for each parameter's default value. Also note that some parameters do not allow the default value to be set explicitly. Instead, to invoke the default value, do not include that parameter when you invoke the operation.
For more information about using a password policy, see `Managing an IAM Password Policy <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy>`_
**Request Syntax**
::
response = account_password_policy.update(
MinimumPasswordLength=123,
RequireSymbols=True|False,
RequireNumbers=True|False,
RequireUppercaseCharacters=True|False,
RequireLowercaseCharacters=True|False,
AllowUsersToChangePassword=True|False,
MaxPasswordAge=123,
PasswordReusePrevention=123,
HardExpiry=True|False
)
:type MinimumPasswordLength: integer
:param MinimumPasswordLength:
The minimum number of characters allowed in an IAM user password.
If you do not specify a value for this parameter, then the operation uses the default value of ``6`` .
:type RequireSymbols: boolean
:param RequireSymbols:
Specifies whether IAM user passwords must contain at least one of the following non-alphanumeric characters:
! @ # $ % ^ & * ( ) _ + - = [ ] { } | '
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one symbol character.
:type RequireNumbers: boolean
:param RequireNumbers:
Specifies whether IAM user passwords must contain at least one numeric character (0 to 9).
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one numeric character.
:type RequireUppercaseCharacters: boolean
:param RequireUppercaseCharacters:
Specifies whether IAM user passwords must contain at least one uppercase character from the ISO basic Latin alphabet (A to Z).
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one uppercase character.
:type RequireLowercaseCharacters: boolean
:param RequireLowercaseCharacters:
Specifies whether IAM user passwords must contain at least one lowercase character from the ISO basic Latin alphabet (a to z).
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that passwords do not require at least one lowercase character.
:type AllowUsersToChangePassword: boolean
:param AllowUsersToChangePassword:
Allows all IAM users in your account to use the AWS Management Console to change their own passwords. For more information, see `Letting IAM Users Change Their Own Passwords <https://docs.aws.amazon.com/IAM/latest/UserGuide/HowToPwdIAMUser.html>`__ in the *IAM User Guide* .
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that IAM users in the account do not automatically have permissions to change their own password.
:type MaxPasswordAge: integer
:param MaxPasswordAge:
The number of days that an IAM user password is valid.
If you do not specify a value for this parameter, then the operation uses the default value of ``0`` . The result is that IAM user passwords never expire.
:type PasswordReusePrevention: integer
:param PasswordReusePrevention:
Specifies the number of previous passwords that IAM users are prevented from reusing.
If you do not specify a value for this parameter, then the operation uses the default value of ``0`` . The result is that IAM users are not prevented from reusing previous passwords.
:type HardExpiry: boolean
:param HardExpiry:
Prevents IAM users from setting a new password after their password has expired. The IAM user cannot be accessed until an administrator resets the password.
If you do not specify a value for this parameter, then the operation uses the default value of ``false`` . The result is that IAM users can change their passwords after they expire and continue to sign in as the user.
:returns: None
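Because unspecified parameters revert to their defaults, it is safest to pass the complete desired policy on every call. A hedged sketch (the values are examples only; the 6-128 length and 0-24 reuse bounds are assumptions based on the documented IAM limits):

```python
# Pass the full policy explicitly so no setting silently reverts to default.
PASSWORD_POLICY = dict(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=5,
    HardExpiry=False,
)

def check_policy(policy):
    # Client-side sanity check before calling update(); the numeric bounds
    # here are assumptions taken from the documented IAM limits.
    if not 6 <= policy["MinimumPasswordLength"] <= 128:
        raise ValueError("MinimumPasswordLength must be between 6 and 128")
    if not 0 <= policy.get("PasswordReusePrevention", 0) <= 24:
        raise ValueError("PasswordReusePrevention must be between 0 and 24")
    return policy

# account_password_policy.update(**check_policy(PASSWORD_POLICY))
```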
"""
pass
class AccountSummary(base.ServiceResource):
summary_map: Dict
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_account_summary` to update the attributes of the AccountSummary resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
account_summary.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_account_summary` to update the attributes of the AccountSummary resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
account_summary.reload()
:returns: None
"""
pass
class AssumeRolePolicy(base.ServiceResource):
role_name: str
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def update(self, PolicyDocument: str):
"""
Updates the policy that grants an IAM entity permission to assume a role. This is typically referred to as the "role trust policy". For more information about roles, go to `Using Roles to Delegate Permissions and Federate Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy>`_
**Request Syntax**
::
response = assume_role_policy.update(
PolicyDocument='string'
)
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The policy that grants an entity permission to assume the role.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:returns: None
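A trust policy document is ordinary JSON, so it is usually built as a dict and serialized. The statement below is an example (the EC2 service principal is the standard one, but the document as a whole is illustrative):

```python
import json

# Example trust policy letting EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

policy_document = json.dumps(trust_policy)
# assume_role_policy.update(PolicyDocument=policy_document)
```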
"""
pass
class CurrentUser(base.ServiceResource):
path: str
user_name: str
user_id: str
arn: str
create_date: datetime
password_last_used: datetime
permissions_boundary: Dict
tags: List
access_keys: 'access_keys'
mfa_devices: 'mfa_devices'
signing_certificates: 'signing_certificates'
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_user` to update the attributes of the CurrentUser resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
current_user.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_user` to update the attributes of the CurrentUser resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
current_user.load()
:returns: None
"""
pass
class Group(base.ServiceResource):
path: str
group_name: str
group_id: str
arn: str
create_date: datetime
name: str
attached_policies: 'attached_policies'
policies: 'policies'
users: 'users'
def add_user(self, UserName: str):
"""
Adds the specified user to the specified group.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddUserToGroup>`_
**Request Syntax**
::
response = group.add_user(
UserName='string'
)
:type UserName: string
:param UserName: **[REQUIRED]**
The name of the user to add.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def attach_policy(self, PolicyArn: str):
"""
Attaches the specified managed policy to the specified IAM group.
You use this API to attach a managed policy to a group. To embed an inline policy in a group, use PutGroupPolicy .
For more information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachGroupPolicy>`_
**Request Syntax**
::
response = group.attach_policy(
PolicyArn='string'
)
:type PolicyArn: string
:param PolicyArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the IAM policy you want to attach.
For more information about ARNs, see `Amazon Resource Names (ARNs) and AWS Service Namespaces <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`__ in the *AWS General Reference* .
:returns: None
"""
pass
    def create(self, Path: str = None) -> 'Group':
"""
Creates a new group.
For information about the number of groups you can create, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateGroup>`_
**Request Syntax**
::
group = group.create(
Path='string',
)
:type Path: string
:param Path:
The path to the group. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:rtype: :py:class:`iam.Group`
:returns: Group resource
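The path grammar described above (a lone forward slash, or a string that begins and ends with forward slashes and contains characters \u0021 through \u007F) can be sketched as a regular expression. This pattern is inferred from the description and is not the service's authoritative validator:

```python
import re

# "/" alone, or "/" ... "/" with printable ASCII (0x21-0x7F) in between.
PATH_RE = re.compile(r"/|/[\x21-\x7f]+/")

def is_valid_path(path):
    return bool(PATH_RE.fullmatch(path))
```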
"""
pass
def create_policy(self, PolicyName: str, PolicyDocument: str) -> 'GroupPolicy':
"""
Adds or updates an inline policy document that is embedded in the specified IAM group.
A group can also have managed policies attached to it. To attach a managed policy to a group, use AttachGroupPolicy . To create a new managed policy, use CreatePolicy . For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
For information about limits on the number of inline policies that you can embed in a group, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. note::
Because policy documents can be large, you should use POST rather than GET when calling ``PutGroupPolicy`` . For general information about using the Query API with IAM, go to `Making Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutGroupPolicy>`_
**Request Syntax**
::
group_policy = group.create_policy(
PolicyName='string',
PolicyDocument='string'
)
:type PolicyName: string
:param PolicyName: **[REQUIRED]**
The name of the policy document.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The policy document.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:rtype: :py:class:`iam.GroupPolicy`
:returns: GroupPolicy resource
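An inline policy document is also plain JSON serialized to a string. A hedged example (the policy name and the action/resource pair are illustrative only):

```python
import json

# Illustrative inline policy; the statement content is an example.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"}
    ],
}

document = json.dumps(inline_policy)
# group_policy = group.create_policy(
#     PolicyName="ListBucketsOnly",
#     PolicyDocument=document,
# )
```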
"""
pass
def delete(self):
"""
Deletes the specified IAM group. The group must not contain any users or have any attached policies.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteGroup>`_
**Request Syntax**
::
response = group.delete()
:returns: None
"""
pass
def detach_policy(self, PolicyArn: str):
"""
Removes the specified managed policy from the specified IAM group.
A group can also have inline policies embedded with it. To delete an inline policy, use the DeleteGroupPolicy API. For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachGroupPolicy>`_
**Request Syntax**
::
response = group.detach_policy(
PolicyArn='string'
)
:type PolicyArn: string
:param PolicyArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the IAM policy you want to detach.
For more information about ARNs, see `Amazon Resource Names (ARNs) and AWS Service Namespaces <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`__ in the *AWS General Reference* .
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_group` to update the attributes of the Group resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
group.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_group` to update the attributes of the Group resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
group.reload()
:returns: None
"""
pass
def remove_user(self, UserName: str):
"""
Removes the specified user from the specified group.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveUserFromGroup>`_
**Request Syntax**
::
response = group.remove_user(
UserName='string'
)
:type UserName: string
:param UserName: **[REQUIRED]**
The name of the user to remove.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
    def update(self, NewPath: str = None, NewGroupName: str = None) -> 'Group':
"""
Updates the name and/or the path of the specified IAM group.
.. warning::
You should understand the implications of changing a group's path or name. For more information, see `Renaming Users and Groups <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html>`__ in the *IAM User Guide* .
.. note::
The person making the request (the principal) must have permission to change the group with the old name and the new name. For example, to change the group named ``Managers`` to ``MGRs`` , the principal must have a policy that allows them to update both groups. If the principal has permission to update the ``Managers`` group, but not the ``MGRs`` group, then the update fails. For more information about permissions, see `Access Management <https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup>`_
**Request Syntax**
::
group = group.update(
NewPath='string',
NewGroupName='string'
)
:type NewPath: string
:param NewPath:
New path for the IAM group. Only include this if changing the group's path.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type NewGroupName: string
:param NewGroupName:
New name for the IAM group. Only include this if changing the group's name.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:rtype: :py:class:`iam.Group`
:returns: Group resource
"""
pass
class GroupPolicy(base.ServiceResource):
policy_name: str
policy_document: str
group_name: str
name: str
def delete(self):
"""
Deletes the specified inline policy that is embedded in the specified IAM group.
A group can also have managed policies attached to it. To detach a managed policy from a group, use DetachGroupPolicy . For more information about policies, refer to `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteGroupPolicy>`_
**Request Syntax**
::
response = group_policy.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_group_policy` to update the attributes of the GroupPolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
**Request Syntax**
::
group_policy.load()
:returns: None
"""
pass
def put(self, PolicyDocument: str):
"""
Adds or updates an inline policy document that is embedded in the specified IAM group.
A group can also have managed policies attached to it. To attach a managed policy to a group, use AttachGroupPolicy . To create a new managed policy, use CreatePolicy . For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
For information about limits on the number of inline policies that you can embed in a group, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. note::
Because policy documents can be large, you should use POST rather than GET when calling ``PutGroupPolicy`` . For general information about using the Query API with IAM, go to `Making Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutGroupPolicy>`_
**Request Syntax**
::
response = group_policy.put(
PolicyDocument='string'
)
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The policy document.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_group_policy` to update the attributes of the GroupPolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetGroupPolicy>`_
**Request Syntax**
::
group_policy.reload()
:returns: None
"""
pass
class InstanceProfile(base.ServiceResource):
path: str
instance_profile_name: str
instance_profile_id: str
arn: str
create_date: datetime
roles_attribute: List
name: str
def add_role(self, RoleName: str):
"""
Adds the specified IAM role to the specified instance profile. An instance profile can contain only one role, and this limit cannot be increased. You can remove the existing role and then add a different role to an instance profile. You must then wait for the change to appear across all of AWS because of `eventual consistency <https://en.wikipedia.org/wiki/Eventual_consistency>`__ . To force the change, you must `disassociate the instance profile <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DisassociateIamInstanceProfile.html>`__ and then `associate the instance profile <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateIamInstanceProfile.html>`__ , or you can stop your instance and then restart it.
.. note::
The caller of this API must be granted the ``PassRole`` permission on the IAM role by a permissions policy.
For more information about roles, go to `Working with Roles <https://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html>`__ . For more information about instance profiles, go to `About Instance Profiles <https://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddRoleToInstanceProfile>`_
**Request Syntax**
::
response = instance_profile.add_role(
RoleName='string'
)
:type RoleName: string
:param RoleName: **[REQUIRED]**
The name of the role to add.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
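Because the change propagates with eventual consistency, a caller may want to poll until the role is visible on the profile. A sketch against the low-level client (the profile and role names are placeholders, and the polling helper itself is not part of boto3):

```python
import time


def add_role_and_wait(iam_client, profile_name, role_name, timeout=30):
    """Add a role to an instance profile, then poll until the change is
    visible (IAM is eventually consistent)."""
    iam_client.add_role_to_instance_profile(
        InstanceProfileName=profile_name, RoleName=role_name
    )
    deadline = time.time() + timeout
    while time.time() < deadline:
        profile = iam_client.get_instance_profile(
            InstanceProfileName=profile_name
        )["InstanceProfile"]
        if any(r["RoleName"] == role_name for r in profile["Roles"]):
            return True
        time.sleep(1)
    return False
```

Usage would be ``add_role_and_wait(boto3.client("iam"), "web-profile", "web-role")``.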
def delete(self):
"""
Deletes the specified instance profile. The instance profile must not have an associated role.
.. warning::
Make sure that you do not have any Amazon EC2 instances running with the instance profile you are about to delete. Deleting a role or instance profile that is associated with a running instance will break any applications running on the instance.
For more information about instance profiles, go to `About Instance Profiles <https://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteInstanceProfile>`_
**Request Syntax**
::
response = instance_profile.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_instance_profile` to update the attributes of the InstanceProfile resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetInstanceProfile>`_
**Request Syntax**
::
instance_profile.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_instance_profile` to update the attributes of the InstanceProfile resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetInstanceProfile>`_
**Request Syntax**
::
instance_profile.reload()
:returns: None
"""
pass
def remove_role(self, RoleName: str):
"""
Removes the specified IAM role from the specified EC2 instance profile.
.. warning::
Make sure that you do not have any Amazon EC2 instances running with the role you are about to remove from the instance profile. Removing a role from an instance profile that is associated with a running instance might break any applications running on the instance.
For more information about IAM roles, go to `Working with Roles <https://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html>`__ . For more information about instance profiles, go to `About Instance Profiles <https://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveRoleFromInstanceProfile>`_
**Request Syntax**
::
response = instance_profile.remove_role(
RoleName='string'
)
:type RoleName: string
:param RoleName: **[REQUIRED]**
The name of the role to remove.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
class LoginProfile(base.ServiceResource):
create_date: datetime
password_reset_required: bool
user_name: str
def create(self, Password: str, PasswordResetRequired: bool = None) -> 'LoginProfile':
"""
Creates a password for the specified user, giving the user the ability to access AWS services through the AWS Management Console. For more information about managing passwords, see `Managing Passwords <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateLoginProfile>`_
**Request Syntax**
::
login_profile = login_profile.create(
Password='string',
PasswordResetRequired=True|False
)
:type Password: string
:param Password: **[REQUIRED]**
The new password for the user.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ that is used to validate this parameter is a string of characters. That string can include almost any printable character from the space (\u0020) through \u00FF, the end of the Latin-1 Supplement range. You can also include the tab (\u0009), line feed (\u000A), and carriage return (\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
:type PasswordResetRequired: boolean
:param PasswordResetRequired:
Specifies whether the user is required to set a new password on next sign-in.
:rtype: :py:class:`iam.LoginProfile`
:returns: LoginProfile resource
"""
pass
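The character rules above can be checked locally before making the call. A sketch (this mirrors only the documented character classes; the account's password policy may impose further restrictions such as minimum length or required character types):

```python
import re

# Tab, line feed, carriage return, plus the printable range from the
# space character (\u0020) through \u00FF.
_ALLOWED = re.compile(r"^[\u0009\u000A\u000D\u0020-\u00FF]+$")


def has_valid_password_characters(password: str) -> bool:
    """Return True if every character falls in the documented classes."""
    return bool(_ALLOWED.match(password))
```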
def delete(self):
"""
Deletes the password for the specified IAM user, which terminates the user's ability to access AWS services through the AWS Management Console.
.. warning::
Deleting a user's password does not prevent a user from accessing AWS through the command line interface or the API. To prevent all user access, you must also either make any access keys inactive or delete them. For more information about making keys inactive or deleting them, see UpdateAccessKey and DeleteAccessKey .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteLoginProfile>`_
**Request Syntax**
::
response = login_profile.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_login_profile` to update the attributes of the LoginProfile resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetLoginProfile>`_
**Request Syntax**
::
login_profile.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_login_profile` to update the attributes of the LoginProfile resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetLoginProfile>`_
**Request Syntax**
::
login_profile.reload()
:returns: None
"""
pass
def update(self, Password: str = None, PasswordResetRequired: bool = None):
"""
Changes the password for the specified IAM user.
IAM users can change their own passwords by calling ChangePassword . For more information about modifying passwords, see `Managing Passwords <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile>`_
**Request Syntax**
::
response = login_profile.update(
Password='string',
PasswordResetRequired=True|False
)
:type Password: string
:param Password:
The new password for the specified IAM user.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
However, the format can be further restricted by the account administrator by setting a password policy on the AWS account. For more information, see UpdateAccountPasswordPolicy .
:type PasswordResetRequired: boolean
:param PasswordResetRequired:
Allows this new password to be used only once by requiring the specified IAM user to set a new password on next sign-in.
:returns: None
"""
pass
class MfaDevice(base.ServiceResource):
enable_date: datetime
user_name: str
serial_number: str
def associate(self, AuthenticationCode1: str, AuthenticationCode2: str):
"""
Enables the specified MFA device and associates it with the specified IAM user. When enabled, the MFA device is required for every subsequent login by the IAM user associated with the device.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/EnableMFADevice>`_
**Request Syntax**
::
response = mfa_device.associate(
AuthenticationCode1='string',
AuthenticationCode2='string'
)
:type AuthenticationCode1: string
:param AuthenticationCode1: **[REQUIRED]**
An authentication code emitted by the device.
The format for this parameter is a string of six digits.
.. warning::
Submit your request immediately after generating the authentication codes. If you generate the codes and then wait too long to submit the request, the MFA device successfully associates with the user but the MFA device becomes out of sync. This happens because time-based one-time passwords (TOTP) expire after a short period of time. If this happens, you can `resync the device <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_sync.html>`__ .
:type AuthenticationCode2: string
:param AuthenticationCode2: **[REQUIRED]**
A subsequent authentication code emitted by the device.
The format for this parameter is a string of six digits.
.. warning::
Submit your request immediately after generating the authentication codes. If you generate the codes and then wait too long to submit the request, the MFA device successfully associates with the user but the MFA device becomes out of sync. This happens because time-based one-time passwords (TOTP) expire after a short period of time. If this happens, you can `resync the device <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_sync.html>`__ .
:returns: None
"""
pass
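Both authentication codes must be strings of exactly six digits, which is easy to verify before submitting the request. A sketch (the user name and serial number in the commented call are placeholders; a real call needs AWS credentials):

```python
import re

SIX_DIGITS = re.compile(r"^\d{6}$")


def looks_like_auth_code(code: str) -> bool:
    """Return True if the code matches the documented six-digit format."""
    return bool(SIX_DIGITS.match(code))


# With credentials configured, association looks like this:
#
#   import boto3
#   device = boto3.resource("iam").MfaDevice(
#       "alice", "arn:aws:iam::123456789012:mfa/alice"
#   )
#   device.associate(AuthenticationCode1="123456",
#                    AuthenticationCode2="654321")
```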
def disassociate(self):
"""
Deactivates the specified MFA device and removes it from association with the user name for which it was originally enabled.
For more information about creating and working with virtual MFA devices, go to `Enabling a Virtual Multi-factor Authentication (MFA) Device <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeactivateMFADevice>`_
**Request Syntax**
::
response = mfa_device.disassociate()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def resync(self, AuthenticationCode1: str, AuthenticationCode2: str):
"""
Synchronizes the specified MFA device with its IAM resource object on the AWS servers.
For more information about creating and working with virtual MFA devices, go to `Using a Virtual MFA Device <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_VirtualMFA.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ResyncMFADevice>`_
**Request Syntax**
::
response = mfa_device.resync(
AuthenticationCode1='string',
AuthenticationCode2='string'
)
:type AuthenticationCode1: string
:param AuthenticationCode1: **[REQUIRED]**
An authentication code emitted by the device.
The format for this parameter is a sequence of six digits.
:type AuthenticationCode2: string
:param AuthenticationCode2: **[REQUIRED]**
A subsequent authentication code emitted by the device.
The format for this parameter is a sequence of six digits.
:returns: None
"""
pass
class Policy(base.ServiceResource):
policy_name: str
policy_id: str
path: str
default_version_id: str
attachment_count: int
permissions_boundary_usage_count: int
is_attachable: bool
description: str
create_date: datetime
update_date: datetime
arn: str
attached_groups: 'attached_groups'
attached_roles: 'attached_roles'
attached_users: 'attached_users'
versions: 'versions'
def attach_group(self, GroupName: str):
"""
Attaches the specified managed policy to the specified IAM group.
You use this API to attach a managed policy to a group. To embed an inline policy in a group, use PutGroupPolicy .
For more information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachGroupPolicy>`_
**Request Syntax**
::
response = policy.attach_group(
GroupName='string'
)
:type GroupName: string
:param GroupName: **[REQUIRED]**
The name (friendly name, not ARN) of the group to attach the policy to.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def attach_role(self, RoleName: str):
"""
Attaches the specified managed policy to the specified IAM role. When you attach a managed policy to a role, the managed policy becomes part of the role's permission (access) policy.
.. note::
You cannot use a managed policy as the role's trust policy. The role's trust policy is created at the same time as the role, using CreateRole . You can update a role's trust policy using UpdateAssumeRolePolicy .
Use this API to attach a *managed* policy to a role. To embed an inline policy in a role, use PutRolePolicy . For more information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachRolePolicy>`_
**Request Syntax**
::
response = policy.attach_role(
RoleName='string'
)
:type RoleName: string
:param RoleName: **[REQUIRED]**
The name (friendly name, not ARN) of the role to attach the policy to.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def attach_user(self, UserName: str):
"""
Attaches the specified managed policy to the specified user.
You use this API to attach a *managed* policy to a user. To embed an inline policy in a user, use PutUserPolicy .
For more information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachUserPolicy>`_
**Request Syntax**
::
response = policy.attach_user(
UserName='string'
)
:type UserName: string
:param UserName: **[REQUIRED]**
The name (friendly name, not ARN) of the IAM user to attach the policy to.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def create_version(self, PolicyDocument: str, SetAsDefault: bool = None) -> 'PolicyVersion':
"""
Creates a new version of the specified managed policy. To update a managed policy, you create a new policy version. A managed policy can have up to five versions. If the policy has five versions, you must delete an existing version using DeletePolicyVersion before you create a new version.
Optionally, you can set the new version as the policy's default version. The default version is the version that is in effect for the IAM users, groups, and roles to which the policy is attached.
For more information about managed policy versions, see `Versioning for Managed Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreatePolicyVersion>`_
**Request Syntax**
::
policy_version = policy.create_version(
PolicyDocument='string',
SetAsDefault=True|False
)
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The JSON policy document that you want to use as the content for this new version of the policy.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:type SetAsDefault: boolean
:param SetAsDefault:
Specifies whether to set this version as the policy\'s default version.
When this parameter is ``true`` , the new policy version becomes the operative version. That is, it becomes the version that is in effect for the IAM users, groups, and roles that the policy is attached to.
For more information about managed policy versions, see `Versioning for Managed Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html>`__ in the *IAM User Guide* .
:rtype: :py:class:`iam.PolicyVersion`
:returns: PolicyVersion resource
"""
pass
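Because of the five-version limit, callers often delete the oldest non-default version before creating a new one. A sketch against the low-level client (the helper and its oldest-first rotation policy are assumptions, not part of boto3):

```python
def create_version_with_room(iam_client, policy_arn, document,
                             set_default=True):
    """Create a new policy version, first deleting the oldest
    non-default version if the policy is at the five-version limit."""
    versions = iam_client.list_policy_versions(
        PolicyArn=policy_arn
    )["Versions"]
    if len(versions) >= 5:
        # The default version cannot be deleted, so rotate out the
        # oldest non-default one.
        oldest = min(
            (v for v in versions if not v["IsDefaultVersion"]),
            key=lambda v: v["CreateDate"],
        )
        iam_client.delete_policy_version(
            PolicyArn=policy_arn, VersionId=oldest["VersionId"]
        )
    return iam_client.create_policy_version(
        PolicyArn=policy_arn,
        PolicyDocument=document,
        SetAsDefault=set_default,
    )
```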
def delete(self):
"""
Deletes the specified managed policy.
Before you can delete a managed policy, you must first detach the policy from all users, groups, and roles that it is attached to. In addition, you must delete all the policy's versions. The following steps describe the process for deleting a managed policy:
* Detach the policy from all users, groups, and roles that the policy is attached to, using the DetachUserPolicy , DetachGroupPolicy , or DetachRolePolicy API operations. To list all the users, groups, and roles that a policy is attached to, use ListEntitiesForPolicy .
* Delete all versions of the policy using DeletePolicyVersion . To list the policy's versions, use ListPolicyVersions . You cannot use DeletePolicyVersion to delete the version that is marked as the default version. You delete the policy's default version in the next step of the process.
* Delete the policy (this automatically deletes the policy's default version) using this API.
For information about managed policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeletePolicy>`_
**Request Syntax**
::
response = policy.delete()
:returns: None
"""
pass
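The three steps above can be sketched against the low-level client as one helper (pagination of ListEntitiesForPolicy is omitted for brevity; the helper itself is not part of boto3):

```python
def delete_managed_policy(iam_client, policy_arn):
    """Detach a managed policy from all entities, delete its
    non-default versions, then delete the policy itself."""
    # 1. Detach from every group, user, and role it is attached to.
    entities = iam_client.list_entities_for_policy(PolicyArn=policy_arn)
    for g in entities.get("PolicyGroups", []):
        iam_client.detach_group_policy(
            GroupName=g["GroupName"], PolicyArn=policy_arn
        )
    for u in entities.get("PolicyUsers", []):
        iam_client.detach_user_policy(
            UserName=u["UserName"], PolicyArn=policy_arn
        )
    for r in entities.get("PolicyRoles", []):
        iam_client.detach_role_policy(
            RoleName=r["RoleName"], PolicyArn=policy_arn
        )
    # 2. Delete every non-default version.
    for v in iam_client.list_policy_versions(PolicyArn=policy_arn)["Versions"]:
        if not v["IsDefaultVersion"]:
            iam_client.delete_policy_version(
                PolicyArn=policy_arn, VersionId=v["VersionId"]
            )
    # 3. Delete the policy (this removes the default version too).
    iam_client.delete_policy(PolicyArn=policy_arn)
```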
def detach_group(self, GroupName: str):
"""
Removes the specified managed policy from the specified IAM group.
A group can also have inline policies embedded with it. To delete an inline policy, use the DeleteGroupPolicy API. For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachGroupPolicy>`_
**Request Syntax**
::
response = policy.detach_group(
GroupName='string'
)
:type GroupName: string
:param GroupName: **[REQUIRED]**
The name (friendly name, not ARN) of the IAM group to detach the policy from.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def detach_role(self, RoleName: str):
"""
Removes the specified managed policy from the specified role.
A role can also have inline policies embedded with it. To delete an inline policy, use the DeleteRolePolicy API. For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachRolePolicy>`_
**Request Syntax**
::
response = policy.detach_role(
RoleName='string'
)
:type RoleName: string
:param RoleName: **[REQUIRED]**
The name (friendly name, not ARN) of the IAM role to detach the policy from.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def detach_user(self, UserName: str):
"""
Removes the specified managed policy from the specified user.
A user can also have inline policies embedded with it. To delete an inline policy, use the DeleteUserPolicy API. For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachUserPolicy>`_
**Request Syntax**
::
response = policy.detach_user(
UserName='string'
)
:type UserName: string
:param UserName: **[REQUIRED]**
The name (friendly name, not ARN) of the IAM user to detach the policy from.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_policy` to update the attributes of the Policy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicy>`_
**Request Syntax**
::
policy.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_policy` to update the attributes of the Policy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicy>`_
**Request Syntax**
::
policy.reload()
:returns: None
"""
pass
class PolicyVersion(base.ServiceResource):
document: str
is_default_version: bool
create_date: datetime
arn: str
version_id: str
def delete(self):
"""
Deletes the specified version from the specified managed policy.
You cannot delete the default version from a policy using this API. To delete the default version from a policy, use DeletePolicy . To find out which version of a policy is marked as the default version, use ListPolicyVersions .
For information about versions for managed policies, see `Versioning for Managed Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeletePolicyVersion>`_
**Request Syntax**
::
response = policy_version.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_policy_version` to update the attributes of the PolicyVersion resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicyVersion>`_
**Request Syntax**
::
policy_version.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_policy_version` to update the attributes of the PolicyVersion resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetPolicyVersion>`_
**Request Syntax**
::
policy_version.reload()
:returns: None
"""
pass
def set_as_default(self):
"""
Sets the specified version of the specified policy as the policy's default (operative) version.
This operation affects all users, groups, and roles that the policy is attached to. To list the users, groups, and roles that the policy is attached to, use the ListEntitiesForPolicy API.
For information about managed policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/SetDefaultPolicyVersion>`_
**Request Syntax**
::
response = policy_version.set_as_default()
:returns: None
"""
pass
class Role(base.ServiceResource):
path: str
role_name: str
role_id: str
arn: str
create_date: datetime
assume_role_policy_document: str
description: str
max_session_duration: int
permissions_boundary: Dict
tags: List
name: str
attached_policies: 'attached_policies'
instance_profiles: 'instance_profiles'
policies: 'policies'
def attach_policy(self, PolicyArn: str):
"""
Attaches the specified managed policy to the specified IAM role. When you attach a managed policy to a role, the managed policy becomes part of the role's permission (access) policy.
.. note::
You cannot use a managed policy as the role's trust policy. The role's trust policy is created at the same time as the role, using CreateRole . You can update a role's trust policy using UpdateAssumeRolePolicy .
Use this API to attach a *managed* policy to a role. To embed an inline policy in a role, use PutRolePolicy . For more information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachRolePolicy>`_
**Request Syntax**
::
response = role.attach_policy(
PolicyArn='string'
)
:type PolicyArn: string
:param PolicyArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the IAM policy you want to attach.
For more information about ARNs, see `Amazon Resource Names (ARNs) and AWS Service Namespaces <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`__ in the *AWS General Reference* .
:returns: None
"""
pass
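For illustration, the ARN forms differ between AWS managed and customer managed policies. A sketch (the role name in the commented call is a placeholder; ``ReadOnlyAccess`` is an AWS managed policy, and the ARN-building helper is an assumption for illustration, not part of boto3):

```python
# AWS managed policies live under the special "aws" account namespace;
# customer managed policies use your own account ID.
AWS_MANAGED = "arn:aws:iam::aws:policy/ReadOnlyAccess"


def customer_policy_arn(account_id: str, name: str, path: str = "/") -> str:
    """Build the ARN of a customer managed policy (path defaults to "/")."""
    return f"arn:aws:iam::{account_id}:policy{path}{name}"


# With credentials configured, attaching looks like this:
#
#   import boto3
#   role = boto3.resource("iam").Role("audit-role")
#   role.attach_policy(PolicyArn=AWS_MANAGED)
```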
def delete(self):
"""
Deletes the specified role. The role must not have any policies attached. For more information about roles, go to `Working with Roles <https://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html>`__ .
.. warning::
Make sure that you do not have any Amazon EC2 instances running with the role you are about to delete. Deleting a role or instance profile that is associated with a running instance will break any applications running on the instance.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRole>`_
**Request Syntax**
::
response = role.delete()
:returns: None
"""
pass
def detach_policy(self, PolicyArn: str):
"""
Removes the specified managed policy from the specified role.
A role can also have inline policies embedded with it. To delete an inline policy, use the DeleteRolePolicy API. For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachRolePolicy>`_
**Request Syntax**
::
response = role.detach_policy(
PolicyArn='string'
)
:type PolicyArn: string
:param PolicyArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the IAM policy you want to detach.
For more information about ARNs, see `Amazon Resource Names (ARNs) and AWS Service Namespaces <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`__ in the *AWS General Reference* .
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_role` to update the attributes of the Role resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRole>`_
**Request Syntax**
::
role.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_role` to update the attributes of the Role resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRole>`_
**Request Syntax**
::
role.reload()
:returns: None
"""
pass
class RolePolicy(base.ServiceResource):
policy_name: str
policy_document: str
role_name: str
name: str
def delete(self):
"""
Deletes the specified inline policy that is embedded in the specified IAM role.
A role can also have managed policies attached to it. To detach a managed policy from a role, use DetachRolePolicy . For more information about policies, refer to `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRolePolicy>`_
**Request Syntax**
::
response = role_policy.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_role_policy` to update the attributes of the RolePolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRolePolicy>`_
**Request Syntax**
::
role_policy.load()
:returns: None
"""
pass
def put(self, PolicyDocument: str):
"""
Adds or updates an inline policy document that is embedded in the specified IAM role.
When you embed an inline policy in a role, the inline policy is used as part of the role's access (permissions) policy. The role's trust policy is created at the same time as the role, using CreateRole . You can update a role's trust policy using UpdateAssumeRolePolicy . For more information about IAM roles, go to `Using Roles to Delegate Permissions and Federate Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html>`__ .
A role can also have a managed policy attached to it. To attach a managed policy to a role, use AttachRolePolicy . To create a new managed policy, use CreatePolicy . For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
For information about limits on the number of inline policies that you can embed with a role, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. note::
Because policy documents can be large, you should use POST rather than GET when calling ``PutRolePolicy`` . For general information about using the Query API with IAM, go to `Making Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutRolePolicy>`_
**Request Syntax**
::
response = role_policy.put(
PolicyDocument='string'
)
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The policy document.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:returns: None
"""
pass
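``PolicyDocument`` is a JSON string, not a dict, so a small builder keeps call sites tidy. A sketch (the bucket ARN in the usage comment is purely illustrative):

```python
import json


def make_policy_document(actions, resource, effect="Allow"):
    """Build a minimal one-statement IAM policy document as the JSON
    string that ``PolicyDocument`` expects."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": effect,
            "Action": list(actions),
            "Resource": resource,
        }],
    })

# Hypothetical usage against a RolePolicy resource:
# role_policy.put(PolicyDocument=make_policy_document(
#     ["s3:GetObject"], "arn:aws:s3:::example-bucket/*"))
```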
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_role_policy` to update the attributes of the RolePolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetRolePolicy>`_
**Request Syntax**
::
role_policy.reload()
:returns: None
"""
pass
class SamlProvider(base.ServiceResource):
saml_metadata_document: str
create_date: datetime
valid_until: datetime
arn: str
def delete(self):
"""
Deletes a SAML provider resource in IAM.
Deleting the provider resource from IAM does not update any roles that reference the SAML provider resource's ARN as a principal in their trust policies. Any attempt to assume a role that references a non-existent provider resource ARN fails.
.. note::
This operation requires `Signature Version 4 <https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSAMLProvider>`_
**Request Syntax**
::
response = saml_provider.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_saml_provider` to update the attributes of the SamlProvider resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetSAMLProvider>`_
**Request Syntax**
::
saml_provider.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_saml_provider` to update the attributes of the SamlProvider resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetSAMLProvider>`_
**Request Syntax**
::
saml_provider.reload()
:returns: None
"""
pass
def update(self, SAMLMetadataDocument: str) -> Dict:
"""
Updates the metadata document for an existing SAML provider resource object.
.. note::
This operation requires `Signature Version 4 <https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider>`_
**Request Syntax**
::
response = saml_provider.update(
SAMLMetadataDocument='string',
)
**Response Syntax**
::
{
'SAMLProviderArn': 'string'
}
**Response Structure**
- *(dict) --*
Contains the response to a successful UpdateSAMLProvider request.
- **SAMLProviderArn** *(string) --*
The Amazon Resource Name (ARN) of the SAML provider that was updated.
:type SAMLMetadataDocument: string
:param SAMLMetadataDocument: **[REQUIRED]**
An XML document generated by an identity provider (IdP) that supports SAML 2.0. The document includes the issuer\'s name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization\'s IdP.
:rtype: dict
:returns:
"""
pass
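The metadata document is typically exported from the IdP as an XML file. A sketch that re-uploads it from disk (the file path and helper name are hypothetical):

```python
def update_saml_metadata(saml_provider, metadata_path):
    """Read an IdP-exported SAML metadata XML file and pass its
    contents to SamlProvider.update as ``SAMLMetadataDocument``."""
    with open(metadata_path, encoding="utf-8") as f:
        document = f.read()
    return saml_provider.update(SAMLMetadataDocument=document)
```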
class ServerCertificate(base.ServiceResource):
server_certificate_metadata: Dict
certificate_body: str
certificate_chain: str
name: str
def delete(self):
"""
Deletes the specified server certificate.
For more information about working with server certificates, see `Working with Server Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html>`__ in the *IAM User Guide* . This topic also includes a list of AWS services that can use the server certificates that you manage with IAM.
.. warning::
If you are using a server certificate with Elastic Load Balancing, deleting the certificate could have implications for your application. If Elastic Load Balancing doesn't detect the deletion of bound certificates, it may continue to use the certificates. This could cause Elastic Load Balancing to stop accepting traffic. We recommend that you remove the reference to the certificate from Elastic Load Balancing before using this command to delete the certificate. For more information, go to `DeleteLoadBalancerListeners <https://docs.aws.amazon.com/ElasticLoadBalancing/latest/APIReference/API_DeleteLoadBalancerListeners.html>`__ in the *Elastic Load Balancing API Reference* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteServerCertificate>`_
**Request Syntax**
::
response = server_certificate.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_server_certificate` to update the attributes of the ServerCertificate resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetServerCertificate>`_
**Request Syntax**
::
server_certificate.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_server_certificate` to update the attributes of the ServerCertificate resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetServerCertificate>`_
**Request Syntax**
::
server_certificate.reload()
:returns: None
"""
pass
def update(self, NewPath: str = None, NewServerCertificateName: str = None) -> 'ServerCertificate':
"""
Updates the name and/or the path of the specified server certificate stored in IAM.
For more information about working with server certificates, see `Working with Server Certificates <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html>`__ in the *IAM User Guide* . This topic also includes a list of AWS services that can use the server certificates that you manage with IAM.
.. warning::
You should understand the implications of changing a server certificate's path or name. For more information, see `Renaming a Server Certificate <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs_manage.html#RenamingServerCerts>`__ in the *IAM User Guide* .
.. note::
The person making the request (the principal), must have permission to change the server certificate with the old name and the new name. For example, to change the certificate named ``ProductionCert`` to ``ProdCert`` , the principal must have a policy that allows them to update both certificates. If the principal has permission to update the ``ProductionCert`` group, but not the ``ProdCert`` certificate, then the update fails. For more information about permissions, see `Access Management <https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate>`_
**Request Syntax**
::
server_certificate = server_certificate.update(
NewPath='string',
NewServerCertificateName='string'
)
:type NewPath: string
:param NewPath:
The new path for the server certificate. Include this only if you are updating the server certificate\'s path.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type NewServerCertificateName: string
:param NewServerCertificateName:
The new name for the server certificate. Include this only if you are updating the server certificate\'s name. The name of the certificate cannot contain any spaces.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:rtype: :py:class:`iam.ServerCertificate`
:returns: ServerCertificate resource
"""
pass
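Both parameters are validated server-side against the regex patterns quoted above; a client-side pre-check can fail fast before the API round trip. A sketch (the compiled patterns are transcriptions of the documented character classes, not the service's exact regexes):

```python
import re

# Name: alphanumerics plus _+=,.@- with no spaces, per the docs above.
_NAME_RE = re.compile(r"^[\w+=,.@-]+$")
# Path: "/" by itself, or a string beginning and ending with "/" whose
# interior is ASCII ! (\u0021) through DEL (\u007F).
_PATH_RE = re.compile(r"^/$|^/[\x21-\x7f]+/$")


def validate_update_args(new_path=None, new_name=None):
    """Client-side sanity check of UpdateServerCertificate arguments."""
    if new_path is not None and not _PATH_RE.match(new_path):
        raise ValueError(f"invalid NewPath: {new_path!r}")
    if new_name is not None and not _NAME_RE.match(new_name):
        raise ValueError(f"invalid NewServerCertificateName: {new_name!r}")
```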
class SigningCertificate(base.ServiceResource):
certificate_id: str
certificate_body: str
status: str
upload_date: datetime
user_name: str
id: str
def activate(self):
"""
Changes the status of the specified user signing certificate to ``Active``. This operation can be used to re-enable an IAM user's signing certificate as part of a certificate rotation workflow.
If the ``UserName`` field is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate>`_
**Request Syntax**
::
response = signing_certificate.activate()
:returns: None
"""
pass
def deactivate(self):
"""
Changes the status of the specified user signing certificate to ``Inactive``. This operation can be used to disable an IAM user's signing certificate as part of a certificate rotation workflow.
If the ``UserName`` field is not specified, the user name is determined implicitly based on the AWS access key ID used to sign the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated users.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate>`_
**Request Syntax**
::
response = signing_certificate.deactivate()
:returns: None
"""
pass
def delete(self):
"""
Deletes a signing certificate associated with the specified IAM user.
If you do not specify a user name, IAM determines the user name implicitly based on the AWS access key ID signing the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials even if the AWS account has no associated IAM users.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteSigningCertificate>`_
**Request Syntax**
::
response = signing_certificate.delete()
:returns: None
"""
pass
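The activate/deactivate/delete trio composes into the certificate-rotation workflow the descriptions above mention. A sketch, assuming ``old_cert`` and ``new_cert`` are SigningCertificate resources:

```python
def rotate_signing_certificates(old_cert, new_cert):
    """Bring the new certificate into service, retire the old one,
    then remove it.  Each step maps to a resource method above."""
    new_cert.activate()      # UpdateSigningCertificate -> Active
    old_cert.deactivate()    # UpdateSigningCertificate -> Inactive
    old_cert.delete()        # DeleteSigningCertificate
```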
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
class User(base.ServiceResource):
path: str
user_name: str
user_id: str
arn: str
create_date: datetime
password_last_used: datetime
permissions_boundary: Dict
tags: List
name: str
access_keys: 'access_keys'
attached_policies: 'attached_policies'
groups: 'groups'
mfa_devices: 'mfa_devices'
policies: 'policies'
signing_certificates: 'signing_certificates'
def add_group(self, GroupName: str):
"""
Adds the specified user to the specified group.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AddUserToGroup>`_
**Request Syntax**
::
response = user.add_group(
GroupName='string',
)
:type GroupName: string
:param GroupName: **[REQUIRED]**
The name of the group to update.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def attach_policy(self, PolicyArn: str):
"""
Attaches the specified managed policy to the specified user.
You use this API to attach a *managed* policy to a user. To embed an inline policy in a user, use PutUserPolicy .
For more information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/AttachUserPolicy>`_
**Request Syntax**
::
response = user.attach_policy(
PolicyArn='string'
)
:type PolicyArn: string
:param PolicyArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the IAM policy you want to attach.
For more information about ARNs, see `Amazon Resource Names (ARNs) and AWS Service Namespaces <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`__ in the *AWS General Reference* .
:returns: None
"""
pass
def create(self, Path: str = None, PermissionsBoundary: str = None, Tags: List = None) -> 'User':
"""
Creates a new IAM user for your AWS account.
For information about limitations on the number of IAM users you can create, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateUser>`_
**Request Syntax**
::
user = user.create(
Path='string',
PermissionsBoundary='string',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
:type Path: string
:param Path:
The path for the user name. For more information about paths, see `IAM Identifiers <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html>`__ in the *IAM User Guide* .
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type PermissionsBoundary: string
:param PermissionsBoundary:
The ARN of the policy that is used to set the permissions boundary for the user.
:type Tags: list
:param Tags:
A list of tags that you want to attach to the newly created user. Each tag consists of a key name and an associated value. For more information about tagging, see `Tagging IAM Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html>`__ in the *IAM User Guide* .
.. note::
If any one of the tags is invalid or if you exceed the allowed number of tags per user, then the entire request fails and the user is not created.
- *(dict) --*
A structure that represents user-provided metadata that can be associated with a resource such as an IAM user or role. For more information about tagging, see `Tagging IAM Identities <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html>`__ in the *IAM User Guide* .
- **Key** *(string) --* **[REQUIRED]**
The key name that can be used to look up or retrieve the associated value. For example, ``Department`` or ``Cost Center`` are common choices.
- **Value** *(string) --* **[REQUIRED]**
The value associated with this tag. For example, tags with a key name of ``Department`` could have values such as ``Human Resources`` , ``Accounting`` , and ``Support`` . Tags with a key name of ``Cost Center`` might have values that consist of the number associated with the different cost centers in your company. Typically, many resources have tags with the same key name but with different values.
.. note::
AWS always interprets the tag ``Value`` as a single string. If you need to store an array, you can store comma-separated values in the string. However, you must interpret the value in your code.
:rtype: :py:class:`iam.User`
:returns: User resource
"""
pass
def create_access_key_pair(self) -> 'AccessKeyPair':
"""
Creates a new AWS secret access key and corresponding AWS access key ID for the specified user. The default status for new keys is ``Active`` .
If you do not specify a user name, IAM determines the user name implicitly based on the AWS access key ID signing the request. This operation works for access keys under the AWS account. Consequently, you can use this operation to manage AWS account root user credentials. This is true even if the AWS account has no associated users.
For information about limits on the number of keys you can create, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. warning::
To ensure the security of your AWS account, the secret access key is accessible only during key and user creation. You must save the key (for example, in a text file) if you want to be able to access it again. If a secret key is lost, you can delete the access keys for the associated user and then create new keys.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateAccessKey>`_
**Request Syntax**
::
access_key_pair = user.create_access_key_pair()
:rtype: :py:class:`iam.AccessKeyPair`
:returns: AccessKeyPair resource
"""
pass
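Because the secret is retrievable only at creation time, a common pattern is to render a shared-credentials-file section immediately and persist it. A sketch (the ``id``/``secret`` attribute names on the AccessKeyPair resource are assumed):

```python
def credentials_profile(access_key_id, secret_access_key, profile="default"):
    """Render an AWS shared-credentials-file section for a freshly
    created key pair."""
    return (
        f"[{profile}]\n"
        f"aws_access_key_id = {access_key_id}\n"
        f"aws_secret_access_key = {secret_access_key}\n"
    )

# Hypothetical usage:
# pair = user.create_access_key_pair()
# print(credentials_profile(pair.id, pair.secret))
```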
def create_login_profile(self, Password: str, PasswordResetRequired: bool = None) -> 'LoginProfile':
"""
Creates a password for the specified user, giving the user the ability to access AWS services through the AWS Management Console. For more information about managing passwords, see `Managing Passwords <https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/CreateLoginProfile>`_
**Request Syntax**
::
login_profile = user.create_login_profile(
Password='string',
PasswordResetRequired=True|False
)
:type Password: string
:param Password: **[REQUIRED]**
The new password for the user.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ that is used to validate this parameter is a string of characters. That string can include almost any printable character from the space (\u0020) through the end of the Latin-1 range (\u00FF). You can also include the tab (\u0009), line feed (\u000A), and carriage return (\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
:type PasswordResetRequired: boolean
:param PasswordResetRequired:
Specifies whether the user is required to set a new password on next sign-in.
:rtype: :py:class:`iam.LoginProfile`
:returns: LoginProfile resource
"""
pass
def create_policy(self, PolicyName: str, PolicyDocument: str) -> 'UserPolicy':
"""
Adds or updates an inline policy document that is embedded in the specified IAM user.
An IAM user can also have a managed policy attached to it. To attach a managed policy to a user, use AttachUserPolicy . To create a new managed policy, use CreatePolicy . For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
For information about limits on the number of inline policies that you can embed in a user, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. note::
Because policy documents can be large, you should use POST rather than GET when calling ``PutUserPolicy`` . For general information about using the Query API with IAM, go to `Making Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutUserPolicy>`_
**Request Syntax**
::
user_policy = user.create_policy(
PolicyName='string',
PolicyDocument='string'
)
:type PolicyName: string
:param PolicyName: **[REQUIRED]**
The name of the policy document.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The policy document.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:rtype: :py:class:`iam.UserPolicy`
:returns: UserPolicy resource
"""
pass
def delete(self):
"""
Deletes the specified IAM user. Unlike the AWS Management Console, when you delete a user programmatically, you must delete the items attached to the user manually, or the deletion fails. For more information, see `Deleting an IAM User <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting_cli>`__ . Before attempting to delete a user, remove the following items:
* Password ( DeleteLoginProfile )
* Access keys ( DeleteAccessKey )
* Signing certificate ( DeleteSigningCertificate )
* SSH public key ( DeleteSSHPublicKey )
* Git credentials ( DeleteServiceSpecificCredential )
* Multi-factor authentication (MFA) device ( DeactivateMFADevice , DeleteVirtualMFADevice )
* Inline policies ( DeleteUserPolicy )
* Attached managed policies ( DetachUserPolicy )
* Group memberships ( RemoveUserFromGroup )
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUser>`_
**Request Syntax**
::
response = user.delete()
:returns: None
"""
pass
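The checklist above translates almost directly into the collections exposed on this User resource. A sketch that walks them before the final delete (login profile, SSH public keys, and service-specific credentials have no collection here and would need client-level calls; ``MfaDevice.disassociate()`` is assumed to map to DeactivateMFADevice):

```python
def delete_user_with_dependencies(user):
    """Remove the deletable items attached to *user*, then delete it."""
    for key in user.access_keys.all():
        key.delete()
    for cert in user.signing_certificates.all():
        cert.delete()
    for policy in user.policies.all():            # inline policies
        policy.delete()
    for policy in user.attached_policies.all():   # managed policies
        user.detach_policy(PolicyArn=policy.arn)
    for group in user.groups.all():
        user.remove_group(GroupName=group.name)
    for device in user.mfa_devices.all():
        device.disassociate()
    user.delete()
```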
def detach_policy(self, PolicyArn: str):
"""
Removes the specified managed policy from the specified user.
A user can also have inline policies embedded with it. To delete an inline policy, use the DeleteUserPolicy API. For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DetachUserPolicy>`_
**Request Syntax**
::
response = user.detach_policy(
PolicyArn='string'
)
:type PolicyArn: string
:param PolicyArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the IAM policy you want to detach.
For more information about ARNs, see `Amazon Resource Names (ARNs) and AWS Service Namespaces <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`__ in the *AWS General Reference* .
:returns: None
"""
pass
def enable_mfa(self, SerialNumber: str, AuthenticationCode1: str, AuthenticationCode2: str) -> 'MfaDevice':
"""
Enables the specified MFA device and associates it with the specified IAM user. When enabled, the MFA device is required for every subsequent login by the IAM user associated with the device.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/EnableMFADevice>`_
**Request Syntax**
::
mfa_device = user.enable_mfa(
SerialNumber='string',
AuthenticationCode1='string',
AuthenticationCode2='string'
)
:type SerialNumber: string
:param SerialNumber: **[REQUIRED]**
The serial number that uniquely identifies the MFA device. For virtual MFA devices, the serial number is the device ARN.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@:/-
:type AuthenticationCode1: string
:param AuthenticationCode1: **[REQUIRED]**
An authentication code emitted by the device.
The format for this parameter is a string of six digits.
.. warning::
Submit your request immediately after generating the authentication codes. If you generate the codes and then wait too long to submit the request, the MFA device successfully associates with the user but the MFA device becomes out of sync. This happens because time-based one-time passwords (TOTP) expire after a short period of time. If this happens, you can `resync the device <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_sync.html>`__ .
:type AuthenticationCode2: string
:param AuthenticationCode2: **[REQUIRED]**
A subsequent authentication code emitted by the device.
The format for this parameter is a string of six digits.
.. warning::
Submit your request immediately after generating the authentication codes. If you generate the codes and then wait too long to submit the request, the MFA device successfully associates with the user but the MFA device becomes out of sync. This happens because time-based one-time passwords (TOTP) expire after a short period of time. If this happens, you can `resync the device <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_sync.html>`__ .
:rtype: :py:class:`iam.MfaDevice`
:returns: MfaDevice resource
"""
pass
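Both authentication codes must be six-digit strings, and a malformed one wastes a TOTP window, so checking them client-side first is cheap insurance. A sketch:

```python
import re

_TOTP_RE = re.compile(r"^\d{6}$")


def enable_mfa_checked(user, serial_number, code1, code2):
    """Validate both TOTP codes (six digits each, per the parameter
    docs above) before making the EnableMFADevice call."""
    for code in (code1, code2):
        if not _TOTP_RE.match(code):
            raise ValueError(f"authentication codes must be six digits, got {code!r}")
    return user.enable_mfa(SerialNumber=serial_number,
                           AuthenticationCode1=code1,
                           AuthenticationCode2=code2)
```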
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_user` to update the attributes of the User resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetUser>`_
**Request Syntax**
::
user.load()
:returns: None
"""
pass
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_user` to update the attributes of the User resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/GetUser>`_
**Request Syntax**
::
user.reload()
:returns: None
"""
pass
def remove_group(self, GroupName: str):
"""
Removes the specified user from the specified group.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/RemoveUserFromGroup>`_
**Request Syntax**
::
response = user.remove_group(
GroupName='string',
)
:type GroupName: string
:param GroupName: **[REQUIRED]**
The name of the group to update.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:returns: None
"""
pass
def update(self, NewPath: str = None, NewUserName: str = None) -> 'User':
"""
Updates the name and/or the path of the specified IAM user.
.. warning::
You should understand the implications of changing an IAM user's path or name. For more information, see `Renaming an IAM User <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_renaming>`__ and `Renaming an IAM Group <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_rename.html>`__ in the *IAM User Guide* .
.. note::
To change a user name, the requester must have appropriate permissions on both the source object and the target object. For example, to change Bob to Robert, the entity making the request must have permission on Bob and Robert, or must have permission on all (*). For more information about permissions, see `Permissions and Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/PermissionsAndPolicies.html>`__ .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser>`_
**Request Syntax**
::
user = user.update(
NewPath='string',
NewUserName='string'
)
:type NewPath: string
:param NewPath:
New path for the IAM user. Include this parameter only if you\'re changing the user\'s path.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type NewUserName: string
:param NewUserName:
New name for the user. Include this parameter only if you\'re changing the user\'s name.
This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
:rtype: :py:class:`iam.User`
:returns: User resource
"""
pass
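The ``NewPath`` and ``NewUserName`` constraints described above can be pre-checked client-side before calling ``update``. A minimal sketch, with validators derived from the textual descriptions (a slash-delimited path of printable ASCII from ``!`` through DEL; a name of alphanumerics plus ``_+=,.@-``). The helper names are hypothetical, not part of the boto3 API, and AWS performs its own authoritative validation server-side.

```python
import re

# A path is either "/" alone, or begins and ends with "/" and contains
# ASCII characters from "!" (\x21) through DEL (\x7f) in between.
_PATH_RE = re.compile(r'/|/[\x21-\x7f]+/')

# A user name is alphanumerics plus any of _ + = , . @ - with no spaces.
# re.ASCII keeps \w from matching non-ASCII letters.
_NAME_RE = re.compile(r'[\w+=,.@-]+', re.ASCII)

def is_valid_path(path: str) -> bool:
    return _PATH_RE.fullmatch(path) is not None

def is_valid_user_name(name: str) -> bool:
    return _NAME_RE.fullmatch(name) is not None
```

These checks only mirror the documented character rules; they do not enforce length limits or uniqueness, which the service validates on the actual call.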
class UserPolicy(base.ServiceResource):
policy_name: str
policy_document: str
user_name: str
name: str
def delete(self):
"""
Deletes the specified inline policy that is embedded in the specified IAM user.
A user can also have managed policies attached to it. To detach a managed policy from a user, use DetachUserPolicy . For more information about policies, refer to `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUserPolicy>`_
**Request Syntax**
::
response = user_policy.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
def load(self):
"""
Calls :py:meth:`IAM.Client.get_user_policy` to update the attributes of the UserPolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
(The generated link above ends in ``/None``; the underlying operation is ``GetUserPolicy``.)
**Request Syntax**
::
user_policy.load()
:returns: None
"""
pass
def put(self, PolicyDocument: str):
"""
Adds or updates an inline policy document that is embedded in the specified IAM user.
An IAM user can also have a managed policy attached to it. To attach a managed policy to a user, use AttachUserPolicy . To create a new managed policy, use CreatePolicy . For information about policies, see `Managed Policies and Inline Policies <https://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html>`__ in the *IAM User Guide* .
For information about limits on the number of inline policies that you can embed in a user, see `Limitations on IAM Entities <https://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html>`__ in the *IAM User Guide* .
.. note::
Because policy documents can be large, you should use POST rather than GET when calling ``PutUserPolicy`` . For general information about using the Query API with IAM, go to `Making Query Requests <https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html>`__ in the *IAM User Guide* .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutUserPolicy>`_
**Request Syntax**
::
response = user_policy.put(
PolicyDocument='string'
)
:type PolicyDocument: string
:param PolicyDocument: **[REQUIRED]**
The policy document.
The `regex pattern <http://wikipedia.org/wiki/regex>`__ used to validate this parameter is a string of characters consisting of the following:
* Any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range
* The printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF)
* The special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D)
:returns: None
"""
pass
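``PolicyDocument`` is a JSON policy string, and building it with ``json.dumps`` from a dict sidesteps the quoting and character-set pitfalls described above. A sketch, assuming a typical identity-based policy shape; the action and resource ARN are illustrative placeholders, not taken from this documentation.

```python
import json

# Build an inline policy document as a Python dict, then serialize it.
# The statement below is illustrative; the resource ARN is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

policy_document = json.dumps(policy)
# user_policy.put(PolicyDocument=policy_document)  # real call; needs credentials
```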
def reload(self):
"""
Calls :py:meth:`IAM.Client.get_user_policy` to update the attributes of the UserPolicy resource. Note that the load and reload methods are the same method and can be used interchangeably.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/None>`_
(The generated link above ends in ``/None``; the underlying operation is ``GetUserPolicy``.)
**Request Syntax**
::
user_policy.reload()
:returns: None
"""
pass
class VirtualMfaDevice(base.ServiceResource):
base32_string_seed: bytes
qr_code_png: bytes
user_attribute: Dict
enable_date: datetime
serial_number: str
def delete(self):
"""
Deletes a virtual MFA device.
.. note::
You must deactivate a user's virtual MFA device before you can delete it. For information about deactivating MFA devices, see DeactivateMFADevice .
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteVirtualMFADevice>`_
**Request Syntax**
::
response = virtual_mfa_device.delete()
:returns: None
"""
pass
def get_available_subresources(self) -> List[str]:
"""
Returns a list of all the available sub-resources for this
Resource.
:returns: A list containing the name of each sub-resource for this
resource
:rtype: list of str
"""
pass
class groups(ResourceCollection):
@classmethod
def all(cls) -> List['Group']:
"""
Creates an iterable of all Group resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroups>`_
**Request Syntax**
::
group_iterator = iam.groups.all()
:rtype: list(:py:class:`iam.Group`)
:returns: A list of Group resources
"""
pass
@classmethod
def filter(cls, PathPrefix: str = None, Marker: str = None, MaxItems: int = None) -> List['Group']:
"""
Creates an iterable of all Group resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroups>`_
**Request Syntax**
::
group_iterator = iam.groups.filter(
PathPrefix='string',
Marker='string',
MaxItems=123
)
:type PathPrefix: string
:param PathPrefix:
The path prefix for filtering the results. For example, the prefix ``/division_abc/subdivision_xyz/`` gets all groups whose path starts with ``/division_abc/subdivision_xyz/`` .
This parameter is optional. If it is not included, it defaults to a slash (/), listing all groups. This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.Group`)
:returns: A list of Group resources
"""
pass
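The ``Marker`` / ``IsTruncated`` / ``MaxItems`` contract described above is the generic IAM pagination loop. A sketch against a simulated list operation — ``fake_list_groups`` is a stand-in for a real service call such as ``IAM.Client.list_groups``, not part of boto3 — showing how ``Marker`` threads through successive calls:

```python
# Simulated paginated IAM-style list operation, to illustrate the
# Marker / IsTruncated contract. Treat Marker as an opaque token.
GROUPS = [f"group-{i}" for i in range(7)]

def fake_list_groups(Marker=None, MaxItems=3):
    start = int(Marker) if Marker else 0
    page = GROUPS[start:start + MaxItems]
    resp = {"Groups": page, "IsTruncated": start + MaxItems < len(GROUPS)}
    if resp["IsTruncated"]:
        # Continuation token for the next call.
        resp["Marker"] = str(start + MaxItems)
    return resp

def list_all_groups():
    groups, marker = [], None
    while True:
        resp = fake_list_groups(Marker=marker)
        groups.extend(resp["Groups"])
        if not resp["IsTruncated"]:
            return groups
        marker = resp["Marker"]  # continue from where the last call stopped
```

In practice the resource collections (and client paginators) run this loop for you; the sketch only makes the truncation semantics concrete.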
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['Group']:
"""
Creates an iterable of up to a specified number of Group resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroups>`_
**Request Syntax**
::
group_iterator = iam.groups.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.Group`)
:returns: A list of Group resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['Group']:
"""
Creates an iterable of all Group resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListGroups>`_
**Request Syntax**
::
group_iterator = iam.groups.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.Group`)
:returns: A list of Group resources
"""
pass
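``limit`` and ``page_size`` control different things: ``limit`` caps the total number of items the iterable yields, while ``page_size`` caps how many items each underlying service call returns. A sketch of that distinction over a generic page source; the function names here are illustrative models, not boto3 internals.

```python
from itertools import islice

def fetch_pages(items, page_size):
    # One yielded list models one service call of at most page_size items.
    for i in range(0, len(items), page_size):
        yield items[i:i + page_size]

def iterate(items, page_size=100, limit=None):
    # page_size shapes the per-call batches; limit truncates the flat stream.
    flat = (item for page in fetch_pages(items, page_size) for item in page)
    return list(islice(flat, limit))
```

So ``page_size`` trades fewer, larger requests against more, smaller ones, without changing what is ultimately returned; ``limit`` changes what is returned without changing the request shape.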
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
:return: List of resource instances
"""
pass
class instance_profiles(ResourceCollection):
@classmethod
def all(cls) -> List['InstanceProfile']:
"""
Creates an iterable of all InstanceProfile resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfiles>`_
**Request Syntax**
::
instance_profile_iterator = iam.instance_profiles.all()
:rtype: list(:py:class:`iam.InstanceProfile`)
:returns: A list of InstanceProfile resources
"""
pass
@classmethod
def filter(cls, PathPrefix: str = None, Marker: str = None, MaxItems: int = None) -> List['InstanceProfile']:
"""
Creates an iterable of all InstanceProfile resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfiles>`_
**Request Syntax**
::
instance_profile_iterator = iam.instance_profiles.filter(
PathPrefix='string',
Marker='string',
MaxItems=123
)
:type PathPrefix: string
:param PathPrefix:
The path prefix for filtering the results. For example, the prefix ``/application_abc/component_xyz/`` gets all instance profiles whose path starts with ``/application_abc/component_xyz/`` .
This parameter is optional. If it is not included, it defaults to a slash (/), listing all instance profiles. This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.InstanceProfile`)
:returns: A list of InstanceProfile resources
"""
pass
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['InstanceProfile']:
"""
Creates an iterable of up to a specified number of InstanceProfile resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfiles>`_
**Request Syntax**
::
instance_profile_iterator = iam.instance_profiles.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.InstanceProfile`)
:returns: A list of InstanceProfile resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['InstanceProfile']:
"""
Creates an iterable of all InstanceProfile resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListInstanceProfiles>`_
**Request Syntax**
::
instance_profile_iterator = iam.instance_profiles.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.InstanceProfile`)
:returns: A list of InstanceProfile resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
:return: List of resource instances
"""
pass
class policies(ResourceCollection):
@classmethod
def all(cls) -> List['Policy']:
"""
Creates an iterable of all Policy resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicies>`_
**Request Syntax**
::
policy_iterator = iam.policies.all()
:rtype: list(:py:class:`iam.Policy`)
:returns: A list of Policy resources
"""
pass
@classmethod
def filter(cls, Scope: str = None, OnlyAttached: bool = None, PathPrefix: str = None, PolicyUsageFilter: str = None, Marker: str = None, MaxItems: int = None) -> List['Policy']:
"""
Creates an iterable of all Policy resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicies>`_
**Request Syntax**
::
policy_iterator = iam.policies.filter(
Scope='All'|'AWS'|'Local',
OnlyAttached=True|False,
PathPrefix='string',
PolicyUsageFilter='PermissionsPolicy'|'PermissionsBoundary',
Marker='string',
MaxItems=123
)
:type Scope: string
:param Scope:
The scope to use for filtering the results.
To list only AWS managed policies, set ``Scope`` to ``AWS`` . To list only the customer managed policies in your AWS account, set ``Scope`` to ``Local`` .
This parameter is optional. If it is not included, or if it is set to ``All`` , all policies are returned.
:type OnlyAttached: boolean
:param OnlyAttached:
A flag to filter the results to only the attached policies.
When ``OnlyAttached`` is ``true`` , the returned list contains only the policies that are attached to an IAM user, group, or role. When ``OnlyAttached`` is ``false`` , or when the parameter is not included, all policies are returned.
:type PathPrefix: string
:param PathPrefix:
The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies. This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type PolicyUsageFilter: string
:param PolicyUsageFilter:
The policy usage method to use for filtering the results.
To list only permissions policies, set ``PolicyUsageFilter`` to ``PermissionsPolicy`` . To list only the policies used to set permissions boundaries, set the value to ``PermissionsBoundary`` .
This parameter is optional. If it is not included, all policies are returned.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.Policy`)
:returns: A list of Policy resources
"""
pass
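``Scope``, ``OnlyAttached``, and ``PathPrefix`` above are server-side filters, but their semantics can be mirrored client-side over policy records, which makes them easy to reason about. A sketch using simplified record dicts — the field names are illustrative, not the exact ``ListPolicies`` response shape — relying on the fact that AWS managed policies live under the ``arn:aws:iam::aws:`` ARN prefix:

```python
# Client-side mirror of the ListPolicies filters, over simplified records.
def matches(policy, Scope="All", OnlyAttached=False, PathPrefix="/"):
    is_aws_managed = policy["Arn"].startswith("arn:aws:iam::aws:")
    if Scope == "AWS" and not is_aws_managed:
        return False
    if Scope == "Local" and is_aws_managed:
        return False
    if OnlyAttached and policy["AttachmentCount"] == 0:
        return False
    return policy["Path"].startswith(PathPrefix)

policies = [
    {"Arn": "arn:aws:iam::aws:policy/ReadOnlyAccess",
     "AttachmentCount": 1, "Path": "/"},
    {"Arn": "arn:aws:iam::123456789012:policy/team-policy",
     "AttachmentCount": 0, "Path": "/team/"},
]
```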
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['Policy']:
"""
Creates an iterable of up to a specified number of Policy resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicies>`_
**Request Syntax**
::
policy_iterator = iam.policies.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.Policy`)
:returns: A list of Policy resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['Policy']:
"""
Creates an iterable of all Policy resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListPolicies>`_
**Request Syntax**
::
policy_iterator = iam.policies.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.Policy`)
:returns: A list of Policy resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
:return: List of resource instances
"""
pass
class roles(ResourceCollection):
@classmethod
def all(cls) -> List['Role']:
"""
Creates an iterable of all Role resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoles>`_
**Request Syntax**
::
role_iterator = iam.roles.all()
:rtype: list(:py:class:`iam.Role`)
:returns: A list of Role resources
"""
pass
@classmethod
def filter(cls, PathPrefix: str = None, Marker: str = None, MaxItems: int = None) -> List['Role']:
"""
Creates an iterable of all Role resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoles>`_
**Request Syntax**
::
role_iterator = iam.roles.filter(
PathPrefix='string',
Marker='string',
MaxItems=123
)
:type PathPrefix: string
:param PathPrefix:
The path prefix for filtering the results. For example, the prefix ``/application_abc/component_xyz/`` gets all roles whose path starts with ``/application_abc/component_xyz/`` .
This parameter is optional. If it is not included, it defaults to a slash (/), listing all roles. This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.Role`)
:returns: A list of Role resources
"""
pass
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['Role']:
"""
Creates an iterable of up to a specified number of Role resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoles>`_
**Request Syntax**
::
role_iterator = iam.roles.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.Role`)
:returns: A list of Role resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['Role']:
"""
Creates an iterable of all Role resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoles>`_
**Request Syntax**
::
role_iterator = iam.roles.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.Role`)
:returns: A list of Role resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
:return: List of resource instances
"""
pass
class saml_providers(ResourceCollection):
@classmethod
def all(cls) -> List['SamlProvider']:
"""
Creates an iterable of all SamlProvider resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSAMLProviders>`_
**Request Syntax**
::
saml_provider_iterator = iam.saml_providers.all()
:rtype: list(:py:class:`iam.SamlProvider`)
:returns: A list of SamlProvider resources
"""
pass
@classmethod
def filter(cls) -> List['SamlProvider']:
"""
Creates an iterable of all SamlProvider resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSAMLProviders>`_
**Request Syntax**
::
saml_provider_iterator = iam.saml_providers.filter()
:rtype: list(:py:class:`iam.SamlProvider`)
:returns: A list of SamlProvider resources
"""
pass
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['SamlProvider']:
"""
Creates an iterable of up to a specified number of SamlProvider resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSAMLProviders>`_
**Request Syntax**
::
saml_provider_iterator = iam.saml_providers.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.SamlProvider`)
:returns: A list of SamlProvider resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['SamlProvider']:
"""
Creates an iterable of all SamlProvider resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListSAMLProviders>`_
**Request Syntax**
::
saml_provider_iterator = iam.saml_providers.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.SamlProvider`)
:returns: A list of SamlProvider resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
:return: List of resource instances
"""
pass
class server_certificates(ResourceCollection):
@classmethod
def all(cls) -> List['ServerCertificate']:
"""
Creates an iterable of all ServerCertificate resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServerCertificates>`_
**Request Syntax**
::
server_certificate_iterator = iam.server_certificates.all()
:rtype: list(:py:class:`iam.ServerCertificate`)
:returns: A list of ServerCertificate resources
"""
pass
@classmethod
def filter(cls, PathPrefix: str = None, Marker: str = None, MaxItems: int = None) -> List['ServerCertificate']:
"""
Creates an iterable of all ServerCertificate resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServerCertificates>`_
**Request Syntax**
::
server_certificate_iterator = iam.server_certificates.filter(
PathPrefix='string',
Marker='string',
MaxItems=123
)
:type PathPrefix: string
:param PathPrefix:
The path prefix for filtering the results. For example: ``/company/servercerts`` would get all server certificates for which the path starts with ``/company/servercerts`` .
This parameter is optional. If it is not included, it defaults to a slash (/), listing all server certificates. This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.ServerCertificate`)
:returns: A list of ServerCertificate resources
"""
pass
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['ServerCertificate']:
"""
Creates an iterable of up to a specified number of ServerCertificate resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServerCertificates>`_
**Request Syntax**
::
server_certificate_iterator = iam.server_certificates.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.ServerCertificate`)
:returns: A list of ServerCertificate resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['ServerCertificate']:
"""
Creates an iterable of all ServerCertificate resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListServerCertificates>`_
**Request Syntax**
::
server_certificate_iterator = iam.server_certificates.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.ServerCertificate`)
:returns: A list of ServerCertificate resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: list(:py:class:`~boto3.resources.base.ServiceResource`)
:return: List of resource instances
"""
pass
class users(ResourceCollection):
@classmethod
def all(cls) -> List['User']:
"""
Creates an iterable of all User resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers>`_
**Request Syntax**
::
user_iterator = iam.users.all()
:rtype: list(:py:class:`iam.User`)
:returns: A list of User resources
"""
pass
@classmethod
def filter(cls, PathPrefix: str = None, Marker: str = None, MaxItems: int = None) -> List['User']:
"""
Creates an iterable of all User resources in the collection, filtered by the keyword arguments passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers>`_
**Request Syntax**
::
user_iterator = iam.users.filter(
PathPrefix='string',
Marker='string',
MaxItems=123
)
:type PathPrefix: string
:param PathPrefix:
The path prefix for filtering the results. For example: ``/division_abc/subdivision_xyz/`` , which would get all user names whose path starts with ``/division_abc/subdivision_xyz/`` .
This parameter is optional. If it is not included, it defaults to a slash (/), listing all user names. This parameter allows (through its `regex pattern <http://wikipedia.org/wiki/regex>`__ ) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercased letters.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.User`)
:returns: A list of User resources
"""
pass
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['User']:
"""
Creates an iterable of up to a specified number of User resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers>`_
**Request Syntax**
::
user_iterator = iam.users.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.User`)
:returns: A list of User resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['User']:
"""
Creates an iterable of all User resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers>`_
**Request Syntax**
::
user_iterator = iam.users.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.User`)
:returns: A list of User resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: generator(list(:py:class:`~boto3.resources.base.ServiceResource`))
:return: A generator yielding pages (lists) of resource instances
"""
pass
class virtual_mfa_devices(ResourceCollection):
@classmethod
def all(cls) -> List['VirtualMfaDevice']:
"""
Creates an iterable of all VirtualMfaDevice resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListVirtualMFADevices>`_
**Request Syntax**
::
virtual_mfa_device_iterator = iam.virtual_mfa_devices.all()
:rtype: list(:py:class:`iam.VirtualMfaDevice`)
:returns: A list of VirtualMfaDevice resources
"""
pass
@classmethod
def filter(cls, AssignmentStatus: str = None, Marker: str = None, MaxItems: int = None) -> List['VirtualMfaDevice']:
"""
Creates an iterable of all VirtualMfaDevice resources in the collection, filtered by the kwargs passed to this method.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListVirtualMFADevices>`_
**Request Syntax**
::
virtual_mfa_device_iterator = iam.virtual_mfa_devices.filter(
AssignmentStatus='Assigned'|'Unassigned'|'Any',
Marker='string',
MaxItems=123
)
:type AssignmentStatus: string
:param AssignmentStatus:
The status (``Unassigned`` or ``Assigned`` ) of the devices to list. If you do not specify an ``AssignmentStatus`` , the operation defaults to ``Any`` , which lists both assigned and unassigned virtual MFA devices.
:type Marker: string
:param Marker:
Use this parameter only when paginating results and only after you receive a response indicating that the results are truncated. Set it to the value of the ``Marker`` element in the response that you received to indicate where the next call should start.
:type MaxItems: integer
:param MaxItems:
Use this only when paginating results to indicate the maximum number of items you want in the response. If additional items exist beyond the maximum you specify, the ``IsTruncated`` response element is ``true`` .
If you do not include this parameter, the number of items defaults to 100. Note that IAM might return fewer results, even when there are more results available. In that case, the ``IsTruncated`` response element returns ``true`` , and ``Marker`` contains a value to include in the subsequent call that tells the service where to continue from.
:rtype: list(:py:class:`iam.VirtualMfaDevice`)
:returns: A list of VirtualMfaDevice resources
"""
pass
@classmethod
def iterator(cls) -> ResourceCollection:
"""
Get a resource collection iterator from this manager.
:rtype: :py:class:`ResourceCollection`
:return: An iterable representing the collection of resources
"""
pass
@classmethod
def limit(cls, count: int = None) -> List['VirtualMfaDevice']:
"""
Creates an iterable of up to a specified number of VirtualMfaDevice resources in the collection.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListVirtualMFADevices>`_
**Request Syntax**
::
virtual_mfa_device_iterator = iam.virtual_mfa_devices.limit(
count=123
)
:type count: integer
:param count: The limit to the number of resources in the iterable.
:rtype: list(:py:class:`iam.VirtualMfaDevice`)
:returns: A list of VirtualMfaDevice resources
"""
pass
@classmethod
def page_size(cls, count: int = None) -> List['VirtualMfaDevice']:
"""
Creates an iterable of all VirtualMfaDevice resources in the collection, but limits the number of items returned by each service call to the specified amount.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListVirtualMFADevices>`_
**Request Syntax**
::
virtual_mfa_device_iterator = iam.virtual_mfa_devices.page_size(
count=123
)
:type count: integer
:param count: The number of items returned by each service call
:rtype: list(:py:class:`iam.VirtualMfaDevice`)
:returns: A list of VirtualMfaDevice resources
"""
pass
@classmethod
def pages(cls) -> List[base.ServiceResource]:
"""
A generator which yields pages of resource instances after
doing the appropriate service operation calls and handling
any pagination on your behalf. Non-paginated calls will
return a single page of items.
Page size, item limit, and filter parameters are applied
if they have previously been set.
>>> bucket = s3.Bucket('boto3')
>>> for page in bucket.objects.pages():
... for obj in page:
... print(obj.key)
'key1'
'key2'
:rtype: generator(list(:py:class:`~boto3.resources.base.ServiceResource`))
:return: A generator yielding pages (lists) of resource instances
"""
pass
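The ``filter()`` and ``pages()`` docstrings above describe IAM's token-based pagination contract: pass ``MaxItems`` to cap each page, and when ``IsTruncated`` is true, resend the returned ``Marker`` to continue. The sketch below is illustrative only — ``fake_list_users`` is a hypothetical stand-in for the real service call, not boto3's implementation — but the loop shows the contract the collection methods handle on your behalf.

```python
def paginate(list_call, max_items=100):
    """Yield pages of items, following Marker until IsTruncated is false."""
    marker = None
    while True:
        kwargs = {"MaxItems": max_items}
        if marker is not None:
            kwargs["Marker"] = marker
        page = list_call(**kwargs)
        yield page["Items"]
        if not page.get("IsTruncated"):
            break
        marker = page["Marker"]  # where the next call should start


def make_fake_list_users(names):
    """Build a fake ListUsers-style call over an in-memory name list.

    Hypothetical stand-in for the IAM service, used only to exercise
    the pagination loop above without any network calls.
    """
    def fake_list_users(MaxItems=100, Marker=None):
        start = int(Marker) if Marker is not None else 0
        end = start + MaxItems
        truncated = end < len(names)
        page = {"Items": names[start:end], "IsTruncated": truncated}
        if truncated:
            page["Marker"] = str(end)  # opaque continuation token
        return page
    return fake_list_users


users = ["user%d" % i for i in range(5)]
pages = list(paginate(make_fake_list_users(users), max_items=2))
# pages == [['user0', 'user1'], ['user2', 'user3'], ['user4']]
```

With a real collection, ``iam.users.page_size(2).pages()`` performs the equivalent loop internally, so callers never touch ``Marker`` directly.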
# File: src/bpp/migrations/0078_django110_py3k.py (iplweb/django-bpp, BSD-3-Clause)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.7 on 2017-07-08 19:45
from __future__ import unicode_literals
import autoslug.fields
import bpp.fields
from decimal import Decimal
import django.contrib.auth.validators
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('bpp', '0077_auto_20170624_0130'),
]
operations = [
migrations.AlterModelOptions(
name='autor',
options={'ordering': ['sort'], 'verbose_name': 'autor', 'verbose_name_plural': 'autorzy'},
),
migrations.AlterModelOptions(
name='charakter_formalny',
options={'ordering': ['nazwa'], 'verbose_name': 'charakter formalny', 'verbose_name_plural': 'charaktery formalne'},
),
migrations.AlterModelOptions(
name='funkcja_autora',
options={'ordering': ['nazwa'], 'verbose_name': 'funkcja w jednostce', 'verbose_name_plural': 'funkcje w jednostkach'},
),
migrations.AlterModelOptions(
name='jednostka',
options={'ordering': ['nazwa'], 'verbose_name': 'jednostka', 'verbose_name_plural': 'jednostki'},
),
migrations.AlterModelOptions(
name='patent',
options={'verbose_name': 'patent', 'verbose_name_plural': 'patenty'},
),
migrations.AlterModelOptions(
name='praca_doktorska',
options={'verbose_name': 'praca doktorska', 'verbose_name_plural': 'prace doktorskie'},
),
migrations.AlterModelOptions(
name='praca_habilitacyjna',
options={'verbose_name': 'praca habilitacyjna', 'verbose_name_plural': 'prace habilitacyjne'},
),
migrations.AlterModelOptions(
name='status_korekty',
options={'verbose_name': 'status korekty', 'verbose_name_plural': 'statusy korekty'},
),
migrations.AlterModelOptions(
name='typ_kbn',
options={'ordering': ['nazwa'], 'verbose_name': 'typ KBN', 'verbose_name_plural': 'typy KBN'},
),
migrations.AlterModelOptions(
name='uczelnia',
options={'verbose_name': 'uczelnia', 'verbose_name_plural': 'uczelnie'},
),
migrations.AlterModelOptions(
name='wydawnictwo_zwarte',
options={'verbose_name': 'wydawnictwo zwarte', 'verbose_name_plural': 'wydawnictwa zwarte'},
),
migrations.AlterField(
model_name='autor',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='autor',
name='aktualny',
field=models.BooleanField(db_index=True, default=False, help_text='Jeżeli zaznaczone, pole to oznacza,\n że autor jest aktualnie - na dziś dzień - przypisany do jakiejś jednostki w bazie danych i jego przypisanie\n do tej jednostki nie zostało zakończone wraz z konkretną datą w \n przeszłości.', verbose_name='Aktualny?'),
),
migrations.AlterField(
model_name='autor',
name='email',
field=models.EmailField(blank=True, max_length=128, null=True, verbose_name='E-mail'),
),
migrations.AlterField(
model_name='autor',
name='pbn_id',
field=models.IntegerField(blank=True, db_index=True, help_text='Identyfikator w systemie Polskiej Bibliografii Naukowej (PBN)', null=True, unique=True, verbose_name='Identyfikator PBN'),
),
migrations.AlterField(
model_name='autor',
name='pesel_md5',
field=models.CharField(blank=True, db_index=True, help_text='Hash MD5 numeru PESEL', max_length=32, null=True, verbose_name='PESEL MD5'),
),
migrations.AlterField(
model_name='autor',
name='poprzednie_nazwiska',
field=models.CharField(blank=True, db_index=True, help_text='Jeżeli ten\n autor(-ka) posiada nazwisko panieńskie, pod którym ukazywały\n się publikacje lub zmieniał nazwisko z innych powodów, wpisz tutaj\n wszystkie poprzednie nazwiska, oddzielając je przecinkami.', max_length=1024, null=True),
),
migrations.AlterField(
model_name='autor',
name='slug',
field=autoslug.fields.AutoSlugField(editable=False, max_length=1024, populate_from='get_full_name', unique=True),
),
migrations.AlterField(
model_name='autor',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='WWW'),
),
migrations.AlterField(
model_name='autor_jednostka',
name='rozpoczal_prace',
field=models.DateField(blank=True, db_index=True, null=True, verbose_name='Rozpoczął pracę'),
),
migrations.AlterField(
model_name='autor_jednostka',
name='zakonczyl_prace',
field=models.DateField(blank=True, db_index=True, null=True, verbose_name='Zakończył pracę'),
),
migrations.AlterField(
model_name='bppuser',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='bppuser',
name='multiseek_format',
field=models.CharField(blank=True, max_length=200, null=True, verbose_name='Ostatnio wybrany format wyświetlania w Multiseeku'),
),
migrations.AlterField(
model_name='bppuser',
name='multiseek_order_1',
field=models.CharField(blank=True, max_length=200, null=True, verbose_name='Ostatnio wybrane pole sortowania w Multiseeku'),
),
migrations.AlterField(
model_name='bppuser',
name='per_page',
field=models.IntegerField(default=20, verbose_name='Ilość wyświetlanych rekordów na stronie'),
),
migrations.AlterField(
model_name='bppuser',
name='username',
field=models.CharField(error_messages={'unique': 'A user with that username already exists.'}, help_text='Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only.', max_length=150, unique=True, validators=[django.contrib.auth.validators.UnicodeUsernameValidator()], verbose_name='username'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='artykul_pbn',
field=models.BooleanField(default=False, help_text='Wydawnictwa ciągłe posiadające\n ten charakter formalny zostaną włączone do eksportu PBN jako artykuły', verbose_name='Artykuł w PBN'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='charakter_pbn',
field=models.ForeignKey(blank=True, default=None, help_text='Wartość wybrana w tym polu zostanie użyta jako zawartość tagu <is>\n w plikach eksportu do PBN', null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Charakter_PBN', verbose_name='Charakter PBN'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='ksiazka_pbn',
field=models.BooleanField(default=False, help_text='Wydawnictwa zwarte posiadające ten\n charakter formalny zostaną włączone do eksportu PBN jako ksiażki', verbose_name='Książka w PBN'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='nazwa_w_primo',
field=models.CharField(blank=True, choices=[('', ''), ('Artykuł', 'Artykuł'), ('Książka', 'Książka'), ('Zasób tekstowy', 'Zasób tekstowy'), ('Rozprawa naukowa', 'Rozprawa naukowa'), ('Recenzja', 'Recenzja'), ('Artykuł prasowy', 'Artykuł prasowy'), ('Rozdział', 'Rozdział'), ('Czasopismo', 'Czasopismo'), ('Dane badawcze', 'Dane badawcze'), ('Materiał konferencyjny', 'Materiał konferencyjny'), ('Obraz', 'Obraz'), ('Baza', 'Baza'), ('Zestaw danych statystycznych', 'Zestaw danych statystycznych'), ('Multimedia', 'Multimedia'), ('Inny', 'Inny')], db_index=True, default='', help_text='\n Nazwa charakteru formalnego w wyszukiwarce Primo, eksponowana przez OAI-PMH. W przypadku,\n gdy to pole jest puste, prace o danym charakterze formalnym nie będą udostępniane przez\n protokół OAI-PMH.\n ', max_length=100, verbose_name='Nazwa w Primo'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='publikacja',
field=models.BooleanField(default=False, help_text='Jest charakterem dla publikacji'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='rozdzial_pbn',
field=models.BooleanField(default=False, help_text='Wydawnictwa zwarte posiadające ten\n charakter formalny zostaną włączone do eksportu PBN jako rozdziały', verbose_name='Rozdział w PBN'),
),
migrations.AlterField(
model_name='charakter_formalny',
name='streszczenie',
field=models.BooleanField(default=False, help_text='Jest charakterem dla streszczeń'),
),
migrations.AlterField(
model_name='charakter_pbn',
name='wlasciwy_dla',
field=models.CharField(choices=[('article', 'Artykuł'), ('book', 'Książka'), ('chapter', 'Rozdział')], max_length=20),
),
migrations.AlterField(
model_name='jednostka',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='jednostka',
name='aktualna',
field=models.BooleanField(default=False, help_text="Jeżeli dana jednostka wchodzi w struktury wydziału\n (czyli jej obecność w strukturach wydziału nie została zakończona z określoną datą), to pole to będzie miało\n wartość 'PRAWDA'."),
),
migrations.AlterField(
model_name='jednostka',
name='email',
field=models.EmailField(blank=True, max_length=128, null=True, verbose_name='E-mail'),
),
migrations.AlterField(
model_name='jednostka',
name='pbn_id',
field=models.IntegerField(blank=True, db_index=True, help_text='Identyfikator w systemie Polskiej Bibliografii Naukowej (PBN)', null=True, unique=True, verbose_name='Identyfikator PBN'),
),
migrations.AlterField(
model_name='jednostka',
name='skrot',
field=models.CharField(max_length=128, unique=True, verbose_name='Skrót'),
),
migrations.AlterField(
model_name='jednostka',
name='skupia_pracownikow',
field=models.BooleanField(default=True, help_text="Ta jednostka skupia osoby będące faktycznymi pracownikami uczelni. Odznacz dla jednostek\n typu 'Studenci', 'Doktoranci', 'Pracownicy emerytowani' itp.", verbose_name='Skupia pracowników'),
),
migrations.AlterField(
model_name='jednostka',
name='slug',
field=autoslug.fields.AutoSlugField(editable=False, populate_from='nazwa', unique=True),
),
migrations.AlterField(
model_name='jednostka',
name='wchodzi_do_raportow',
field=models.BooleanField(db_index=True, default=True, verbose_name='Wchodzi do raportów'),
),
migrations.AlterField(
model_name='jednostka',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='WWW'),
),
migrations.AlterField(
model_name='jednostka',
name='wydzial',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Wydzial', verbose_name='Wydział'),
),
migrations.AlterField(
model_name='jednostka',
name='zarzadzaj_automatycznie',
field=models.BooleanField(default=True, help_text='Jednostka ta będzie dowolnie modyfikowana przez procedury importujace dane z zewnętrznych\n systemów informatycznych', verbose_name='Zarządzaj automatycznie'),
),
migrations.AlterField(
model_name='jezyk',
name='skrot_dla_pbn',
field=models.CharField(blank=True, help_text='\n Skrót nazwy języka używany w plikach eksportu do PBN.', max_length=10, verbose_name='Skrót dla PBN'),
),
migrations.AlterField(
model_name='patent',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='patent',
name='dostep_dnia',
field=models.DateField(blank=True, help_text='Data dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (płatny dostęp)'),
),
migrations.AlterField(
model_name='patent',
name='index_copernicus',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Index Copernicus'),
),
migrations.AlterField(
model_name='patent',
name='informacje',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Informacje'),
),
migrations.AlterField(
model_name='patent',
name='kc_impact_factor',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', max_digits=6, null=True, verbose_name='KC: Impact factor'),
),
migrations.AlterField(
model_name='patent',
name='kc_index_copernicus',
field=models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Index Copernicus'),
),
migrations.AlterField(
model_name='patent',
name='kc_punkty_kbn',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Punkty KBN'),
),
migrations.AlterField(
model_name='patent',
name='opis_bibliograficzny_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='patent',
name='opis_bibliograficzny_zapisani_autorzy_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='patent',
name='public_dostep_dnia',
field=models.DateField(blank=True, help_text='Data wolnego dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (wolny dostęp)'),
),
migrations.AlterField(
model_name='patent',
name='public_www',
field=models.URLField(blank=True, max_length=2048, null=True, verbose_name='Adres WWW (wolny dostęp)'),
),
migrations.AlterField(
model_name='patent',
name='punktacja_wewnetrzna',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punktacja wewnętrzna'),
),
migrations.AlterField(
model_name='patent',
name='punkty_kbn',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punkty KBN'),
),
migrations.AlterField(
model_name='patent',
name='rok',
field=models.IntegerField(db_index=True, help_text='Rok uwzględniany przy wyszukiwaniu i raportach\n KBN/MNiSW)'),
),
migrations.AlterField(
model_name='patent',
name='slowa_kluczowe',
field=models.TextField(blank=True, null=True, verbose_name='Słowa kluczowe'),
),
migrations.AlterField(
model_name='patent',
name='szczegoly',
field=models.CharField(blank=True, help_text='Np. str. 23-45', max_length=512, null=True, verbose_name='Szczegóły'),
),
migrations.AlterField(
model_name='patent',
name='tytul_oryginalny',
field=models.TextField(db_index=True, verbose_name='Tytuł oryginalny'),
),
migrations.AlterField(
model_name='patent',
name='tytul_oryginalny_sort',
field=models.TextField(db_index=True, default=''),
),
migrations.AlterField(
model_name='patent',
name='utworzono',
field=models.DateTimeField(auto_now_add=True, null=True, verbose_name='Utworzono'),
),
migrations.AlterField(
model_name='patent',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='Adres WWW (płatny dostęp)'),
),
migrations.AlterField(
model_name='patent_autor',
name='kolejnosc',
field=models.IntegerField(default=0, verbose_name='Kolejność'),
),
migrations.AlterField(
model_name='patent_autor',
name='typ_odpowiedzialnosci',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_Odpowiedzialnosci', verbose_name='Typ odpowiedzialności'),
),
migrations.AlterField(
model_name='patent_autor',
name='zatrudniony',
field=models.BooleanField(default=False, help_text='Pracownik jednostki podanej w afiliacji'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='praca_doktorska',
name='doi',
field=bpp.fields.DOIField(blank=True, db_index=True, help_text='Digital Object Identifier (DOI)', max_length=2048, null=True, verbose_name='DOI'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='dostep_dnia',
field=models.DateField(blank=True, help_text='Data dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (płatny dostęp)'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='e_isbn',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, verbose_name='E-ISBN'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='index_copernicus',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Index Copernicus'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='informacje',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Informacje'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='isbn',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, verbose_name='ISBN'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='jezyk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Jezyk', verbose_name='Język'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='kc_impact_factor',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', max_digits=6, null=True, verbose_name='KC: Impact factor'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='kc_index_copernicus',
field=models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Index Copernicus'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='kc_punkty_kbn',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Punkty KBN'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='miejsce_i_rok',
field=models.CharField(blank=True, help_text='Przykładowo:\n Warszawa 2012. Wpisz proszę najpierw miejsce potem rok; oddziel\n spacją.', max_length=256, null=True),
),
migrations.AlterField(
model_name='praca_doktorska',
name='opis_bibliograficzny_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='praca_doktorska',
name='opis_bibliograficzny_zapisani_autorzy_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='praca_doktorska',
name='public_dostep_dnia',
field=models.DateField(blank=True, help_text='Data wolnego dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (wolny dostęp)'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='public_www',
field=models.URLField(blank=True, max_length=2048, null=True, verbose_name='Adres WWW (wolny dostęp)'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='pubmed_id',
field=models.BigIntegerField(blank=True, help_text='Identyfikator PubMed (PMID)', null=True, verbose_name='PubMed ID'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='punktacja_wewnetrzna',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punktacja wewnętrzna'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='punkty_kbn',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punkty KBN'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='rok',
field=models.IntegerField(db_index=True, help_text='Rok uwzględniany przy wyszukiwaniu i raportach\n KBN/MNiSW)'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='slowa_kluczowe',
field=models.TextField(blank=True, null=True, verbose_name='Słowa kluczowe'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='szczegoly',
field=models.CharField(blank=True, help_text='Np. str. 23-45', max_length=512, null=True, verbose_name='Szczegóły'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='typ_kbn',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_KBN', verbose_name='Typ KBN'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='tytul',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Tytuł'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='tytul_oryginalny',
field=models.TextField(db_index=True, verbose_name='Tytuł oryginalny'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='tytul_oryginalny_sort',
field=models.TextField(db_index=True, default=''),
),
migrations.AlterField(
model_name='praca_doktorska',
name='utworzono',
field=models.DateTimeField(auto_now_add=True, null=True, verbose_name='Utworzono'),
),
migrations.AlterField(
model_name='praca_doktorska',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='Adres WWW (płatny dostęp)'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='doi',
field=bpp.fields.DOIField(blank=True, db_index=True, help_text='Digital Object Identifier (DOI)', max_length=2048, null=True, verbose_name='DOI'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='dostep_dnia',
field=models.DateField(blank=True, help_text='Data dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (płatny dostęp)'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='e_isbn',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, verbose_name='E-ISBN'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='index_copernicus',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Index Copernicus'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='informacje',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Informacje'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='isbn',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, verbose_name='ISBN'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='jezyk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Jezyk', verbose_name='Język'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='kc_impact_factor',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', max_digits=6, null=True, verbose_name='KC: Impact factor'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='kc_index_copernicus',
field=models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Index Copernicus'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='kc_punkty_kbn',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Punkty KBN'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='miejsce_i_rok',
field=models.CharField(blank=True, help_text='Przykładowo:\n Warszawa 2012. Wpisz proszę najpierw miejsce potem rok; oddziel\n spacją.', max_length=256, null=True),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='opis_bibliograficzny_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='opis_bibliograficzny_zapisani_autorzy_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='public_dostep_dnia',
field=models.DateField(blank=True, help_text='Data wolnego dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (wolny dostęp)'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='public_www',
field=models.URLField(blank=True, max_length=2048, null=True, verbose_name='Adres WWW (wolny dostęp)'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='pubmed_id',
field=models.BigIntegerField(blank=True, help_text='Identyfikator PubMed (PMID)', null=True, verbose_name='PubMed ID'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='punktacja_wewnetrzna',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punktacja wewnętrzna'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='punkty_kbn',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punkty KBN'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='rok',
field=models.IntegerField(db_index=True, help_text='Rok uwzględniany przy wyszukiwaniu i raportach\n KBN/MNiSW)'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='slowa_kluczowe',
field=models.TextField(blank=True, null=True, verbose_name='Słowa kluczowe'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='szczegoly',
field=models.CharField(blank=True, help_text='Np. str. 23-45', max_length=512, null=True, verbose_name='Szczegóły'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='typ_kbn',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_KBN', verbose_name='Typ KBN'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='tytul',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Tytuł'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='tytul_oryginalny',
field=models.TextField(db_index=True, verbose_name='Tytuł oryginalny'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='tytul_oryginalny_sort',
field=models.TextField(db_index=True, default=''),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='utworzono',
field=models.DateTimeField(auto_now_add=True, null=True, verbose_name='Utworzono'),
),
migrations.AlterField(
model_name='praca_habilitacyjna',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='Adres WWW (płatny dostęp)'),
),
migrations.AlterField(
model_name='publikacja_habilitacyjna',
name='kolejnosc',
field=models.IntegerField(default=0, verbose_name='Kolejność'),
),
migrations.AlterField(
model_name='punktacja_zrodla',
name='index_copernicus',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Index Copernicus'),
),
migrations.AlterField(
model_name='punktacja_zrodla',
name='kc_impact_factor',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', max_digits=6, null=True, verbose_name='KC: Impact factor'),
),
migrations.AlterField(
model_name='punktacja_zrodla',
name='kc_index_copernicus',
field=models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Index Copernicus'),
),
migrations.AlterField(
model_name='punktacja_zrodla',
name='kc_punkty_kbn',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Punkty KBN'),
),
migrations.AlterField(
model_name='punktacja_zrodla',
name='punktacja_wewnetrzna',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punktacja wewnętrzna'),
),
migrations.AlterField(
model_name='punktacja_zrodla',
name='punkty_kbn',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punkty KBN'),
),
migrations.AlterField(
model_name='typ_kbn',
name='artykul_pbn',
field=models.BooleanField(default=False, help_text='Wydawnictwa ciągłe posiadające\n ten typ KBN zostaną włączone do eksportu PBN jako artykuły', verbose_name='Artykuł w PBN'),
),
migrations.AlterField(
model_name='uczelnia',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='uczelnia',
name='favicon_ico',
field=models.FileField(blank=True, null=True, upload_to='favicon', verbose_name='Ikona ulubionych (favicon)'),
),
migrations.AlterField(
model_name='uczelnia',
name='logo_svg',
field=models.FileField(blank=True, null=True, upload_to='logo_svg', verbose_name='Logo wektorowe (SVG)'),
),
migrations.AlterField(
model_name='uczelnia',
name='logo_www',
field=models.ImageField(blank=True, help_text='Plik w formacie bitmapowym, np. JPEG lub PNG,\n w rozdzielczości maks. 100x100', null=True, upload_to='logo', verbose_name='Logo na stronę WWW'),
),
migrations.AlterField(
model_name='uczelnia',
name='obca_jednostka',
field=models.ForeignKey(blank=True, help_text='\n Jednostka skupiająca autorów nieindeksowanych, nie będących pracownikami uczelni. Procedury importujące\n dane z zewnętrznych systemów informatycznych będą przypisywać do tej jednostki osoby, które zakończyły\n pracę na uczelni. ', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='obca_jednostka', to='bpp.Jednostka'),
),
migrations.AlterField(
model_name='uczelnia',
name='pbn_id',
field=models.IntegerField(blank=True, db_index=True, help_text='Identyfikator w systemie Polskiej Bibliografii Naukowej (PBN)', null=True, unique=True, verbose_name='Identyfikator PBN'),
),
migrations.AlterField(
model_name='uczelnia',
name='slug',
field=autoslug.fields.AutoSlugField(editable=False, populate_from='skrot', unique=True),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='charakter_formalny',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Charakter_Formalny', verbose_name='Charakter formalny'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='doi',
field=bpp.fields.DOIField(blank=True, db_index=True, help_text='Digital Object Identifier (DOI)', max_length=2048, null=True, verbose_name='DOI'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='dostep_dnia',
field=models.DateField(blank=True, help_text='Data dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (płatny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='e_issn',
field=models.CharField(blank=True, max_length=32, null=True, verbose_name='e-ISSN'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='index_copernicus',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Index Copernicus'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='informacje',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Informacje'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='issn',
field=models.CharField(blank=True, max_length=32, null=True, verbose_name='ISSN'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='jezyk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Jezyk', verbose_name='Język'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='kc_impact_factor',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', max_digits=6, null=True, verbose_name='KC: Impact factor'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='kc_index_copernicus',
field=models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Index Copernicus'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='kc_punkty_kbn',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Punkty KBN'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='liczba_znakow_wydawniczych',
field=models.IntegerField(blank=True, db_index=True, null=True, verbose_name='Liczba znaków wydawniczych'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='openaccess_czas_publikacji',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Czas_Udostepnienia_OpenAccess', verbose_name='OpenAccess: czas udostępnienia'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='openaccess_ilosc_miesiecy',
field=models.PositiveIntegerField(blank=True, help_text='Ilość miesięcy jakie upłynęły od momentu opublikowania do momentu udostępnienia', null=True, verbose_name='OpenAccess: ilość miesięcy'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='openaccess_licencja',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Licencja_OpenAccess', verbose_name='OpenAccess: licencja'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='openaccess_tryb_dostepu',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Tryb_OpenAccess_Wydawnictwo_Ciagle', verbose_name='OpenAccess: tryb dostępu'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='openaccess_wersja_tekstu',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Wersja_Tekstu_OpenAccess', verbose_name='OpenAccess: wersja tekstu'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='opis_bibliograficzny_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='opis_bibliograficzny_zapisani_autorzy_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='ostatnio_zmieniony_dla_pbn',
field=models.DateTimeField(auto_now_add=True, help_text='Moment ostatniej aktualizacji rekordu dla potrzeb PBN. To pole zmieni się automatycznie, gdy\n nastąpi zmiana dowolnego z pól za wyjątkiem bloków pól: „punktacja”, „punktacja komisji centralnej”,\n „adnotacje” oraz pole „status korekty”.', verbose_name='Ostatnio zmieniony (dla PBN)'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='public_dostep_dnia',
field=models.DateField(blank=True, help_text='Data wolnego dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (wolny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='public_www',
field=models.URLField(blank=True, max_length=2048, null=True, verbose_name='Adres WWW (wolny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='pubmed_id',
field=models.BigIntegerField(blank=True, help_text='Identyfikator PubMed (PMID)', null=True, verbose_name='PubMed ID'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='punktacja_wewnetrzna',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punktacja wewnętrzna'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='punkty_kbn',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punkty KBN'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='rok',
field=models.IntegerField(db_index=True, help_text='Rok uwzględniany przy wyszukiwaniu i raportach\n KBN/MNiSW)'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='slowa_kluczowe',
field=models.TextField(blank=True, null=True, verbose_name='Słowa kluczowe'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='szczegoly',
field=models.CharField(blank=True, help_text='Np. str. 23-45', max_length=512, null=True, verbose_name='Szczegóły'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='typ_kbn',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_KBN', verbose_name='Typ KBN'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='tytul',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Tytuł'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='tytul_oryginalny',
field=models.TextField(db_index=True, verbose_name='Tytuł oryginalny'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='tytul_oryginalny_sort',
field=models.TextField(db_index=True, default=''),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='utworzono',
field=models.DateTimeField(auto_now_add=True, null=True, verbose_name='Utworzono'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='Adres WWW (płatny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle',
name='zrodlo',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='bpp.Zrodlo', verbose_name='Źródło'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle_autor',
name='kolejnosc',
field=models.IntegerField(default=0, verbose_name='Kolejność'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle_autor',
name='typ_odpowiedzialnosci',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_Odpowiedzialnosci', verbose_name='Typ odpowiedzialności'),
),
migrations.AlterField(
model_name='wydawnictwo_ciagle_autor',
name='zatrudniony',
field=models.BooleanField(default=False, help_text='Pracownik jednostki podanej w afiliacji'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='calkowita_liczba_autorow',
field=models.PositiveIntegerField(blank=True, help_text='Jeżeli dodajesz monografię, wpisz tutaj całkowitą liczbę\n autorów monografii. Ta informacja zostanie użyta w eksporcie danych do PBN.', null=True),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='charakter_formalny',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Charakter_Formalny', verbose_name='Charakter formalny'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='doi',
field=bpp.fields.DOIField(blank=True, db_index=True, help_text='Digital Object Identifier (DOI)', max_length=2048, null=True, verbose_name='DOI'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='dostep_dnia',
field=models.DateField(blank=True, help_text='Data dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (płatny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='e_isbn',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, verbose_name='E-ISBN'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='index_copernicus',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Index Copernicus'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='informacje',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Informacje'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='isbn',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, verbose_name='ISBN'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='jezyk',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Jezyk', verbose_name='Język'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='kc_impact_factor',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa tego raportu.', max_digits=6, null=True, verbose_name='KC: Impact factor'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='kc_index_copernicus',
field=models.DecimalField(blank=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Index Copernicus'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='kc_punkty_kbn',
field=models.DecimalField(blank=True, db_index=True, decimal_places=2, default=None, help_text='Jeżeli wpiszesz\n wartość w to pole, to zostanie ona użyta w raporcie dla Komisji\n Centralnej w punkcie IXa i IXb tego raportu.', max_digits=6, null=True, verbose_name='KC: Punkty KBN'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='liczba_znakow_wydawniczych',
field=models.IntegerField(blank=True, db_index=True, null=True, verbose_name='Liczba znaków wydawniczych'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='miejsce_i_rok',
field=models.CharField(blank=True, help_text='Przykładowo:\n Warszawa 2012. Wpisz proszę najpierw miejsce potem rok; oddziel\n spacją.', max_length=256, null=True),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='openaccess_czas_publikacji',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Czas_Udostepnienia_OpenAccess', verbose_name='OpenAccess: czas udostępnienia'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='openaccess_ilosc_miesiecy',
field=models.PositiveIntegerField(blank=True, help_text='Ilość miesięcy jakie upłynęły od momentu opublikowania do momentu udostępnienia', null=True, verbose_name='OpenAccess: ilość miesięcy'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='openaccess_licencja',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Licencja_OpenAccess', verbose_name='OpenAccess: licencja'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='openaccess_wersja_tekstu',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Wersja_Tekstu_OpenAccess', verbose_name='OpenAccess: wersja tekstu'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='opis_bibliograficzny_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='opis_bibliograficzny_zapisani_autorzy_cache',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='ostatnio_zmieniony_dla_pbn',
field=models.DateTimeField(auto_now_add=True, help_text='Moment ostatniej aktualizacji rekordu dla potrzeb PBN. To pole zmieni się automatycznie, gdy\n nastąpi zmiana dowolnego z pól za wyjątkiem bloków pól: „punktacja”, „punktacja komisji centralnej”,\n „adnotacje” oraz pole „status korekty”.', verbose_name='Ostatnio zmieniony (dla PBN)'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='public_dostep_dnia',
field=models.DateField(blank=True, help_text='Data wolnego dostępu do strony WWW.', null=True, verbose_name='Dostęp dnia (wolny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='public_www',
field=models.URLField(blank=True, max_length=2048, null=True, verbose_name='Adres WWW (wolny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='pubmed_id',
field=models.BigIntegerField(blank=True, help_text='Identyfikator PubMed (PMID)', null=True, verbose_name='PubMed ID'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='punktacja_wewnetrzna',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punktacja wewnętrzna'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='punkty_kbn',
field=models.DecimalField(db_index=True, decimal_places=2, default=Decimal('0.00'), max_digits=6, verbose_name='Punkty KBN'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='rok',
field=models.IntegerField(db_index=True, help_text='Rok uwzględniany przy wyszukiwaniu i raportach\n KBN/MNiSW)'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='slowa_kluczowe',
field=models.TextField(blank=True, null=True, verbose_name='Słowa kluczowe'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='szczegoly',
field=models.CharField(blank=True, help_text='Np. str. 23-45', max_length=512, null=True, verbose_name='Szczegóły'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='typ_kbn',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_KBN', verbose_name='Typ KBN'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='tytul',
field=models.TextField(blank=True, db_index=True, null=True, verbose_name='Tytuł'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='tytul_oryginalny',
field=models.TextField(db_index=True, verbose_name='Tytuł oryginalny'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='tytul_oryginalny_sort',
field=models.TextField(db_index=True, default=''),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='utworzono',
field=models.DateTimeField(auto_now_add=True, null=True, verbose_name='Utworzono'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='www',
field=models.URLField(blank=True, max_length=1024, null=True, verbose_name='Adres WWW (płatny dostęp)'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte',
name='wydawnictwo_nadrzedne',
field=models.ForeignKey(blank=True, help_text='Jeżeli dodajesz rozdział,\n tu wybierz pracę, w ramach której dany rozdział występuje.', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='wydawnictwa_powiazane_set', to='bpp.Wydawnictwo_Zwarte'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte_autor',
name='kolejnosc',
field=models.IntegerField(default=0, verbose_name='Kolejność'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte_autor',
name='typ_odpowiedzialnosci',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='bpp.Typ_Odpowiedzialnosci', verbose_name='Typ odpowiedzialności'),
),
migrations.AlterField(
model_name='wydawnictwo_zwarte_autor',
name='zatrudniony',
field=models.BooleanField(default=False, help_text='Pracownik jednostki podanej w afiliacji'),
),
migrations.AlterField(
model_name='wydzial',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='wydzial',
name='kolejnosc',
field=models.IntegerField(default=0, verbose_name='Kolejność'),
),
migrations.AlterField(
model_name='wydzial',
name='nazwa',
field=models.CharField(help_text='Pełna nazwa wydziału, np. "Wydział Lekarski"', max_length=512, unique=True),
),
migrations.AlterField(
model_name='wydzial',
name='otwarcie',
field=models.DateField(blank=True, null=True, verbose_name='Data otwarcia wydziału'),
),
migrations.AlterField(
model_name='wydzial',
name='pbn_id',
field=models.IntegerField(blank=True, db_index=True, help_text='Identyfikator w systemie Polskiej Bibliografii Naukowej (PBN)', null=True, unique=True, verbose_name='Identyfikator PBN'),
),
migrations.AlterField(
model_name='wydzial',
name='poprzednie_nazwy',
field=models.CharField(blank=True, default='', max_length=4096, null=True),
),
migrations.AlterField(
model_name='wydzial',
name='skrot',
field=models.CharField(help_text='Skrót nazwy wydziału, wersja minimalna, np. "WL"', max_length=4, unique=True, verbose_name='Skrót'),
),
migrations.AlterField(
model_name='wydzial',
name='slug',
field=autoslug.fields.AutoSlugField(editable=False, max_length=512, populate_from='nazwa', unique=True),
),
migrations.AlterField(
model_name='wydzial',
name='widoczny',
field=models.BooleanField(default=True, help_text='Czy wydział ma być widoczny przy przeglądaniu strony dla zakładki "Uczelnia"?'),
),
migrations.AlterField(
model_name='wydzial',
name='zamkniecie',
field=models.DateField(blank=True, null=True, verbose_name='Data zamknięcia wydziału'),
),
migrations.AlterField(
model_name='wydzial',
name='zarzadzaj_automatycznie',
field=models.BooleanField(default=True, help_text="Wydział ten będzie dowolnie modyfikowany przez procedury importujace dane z zewnętrznych\n systemów informatycznych. W przypadku, gdy pole ma ustawioną wartość na 'fałsz', wydział ten może być", verbose_name='Zarządzaj automatycznie'),
),
migrations.AlterField(
model_name='wydzial',
name='zezwalaj_na_ranking_autorow',
field=models.BooleanField(default=True, verbose_name='Zezwalaj na generowanie rankingu autorów dla tego wydziału'),
),
migrations.AlterField(
model_name='zrodlo',
name='adnotacje',
field=models.TextField(blank=True, db_index=True, help_text='Pole do użytku wewnętrznego -\n wpisane tu informacje nie są wyświetlane na stronach WWW dostępnych\n dla użytkowników końcowych.', null=True),
),
migrations.AlterField(
model_name='zrodlo',
name='doi',
field=bpp.fields.DOIField(blank=True, db_index=True, help_text='Digital Object Identifier (DOI)', max_length=2048, null=True, verbose_name='DOI'),
),
migrations.AlterField(
model_name='zrodlo',
name='e_issn',
field=models.CharField(blank=True, max_length=32, null=True, verbose_name='e-ISSN'),
),
migrations.AlterField(
model_name='zrodlo',
name='issn',
field=models.CharField(blank=True, max_length=32, null=True, verbose_name='ISSN'),
),
migrations.AlterField(
model_name='zrodlo',
name='openaccess_licencja',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='bpp.Licencja_OpenAccess', verbose_name='OpenAccess: licencja'),
),
migrations.AlterField(
model_name='zrodlo',
name='openaccess_tryb_dostepu',
field=models.CharField(blank=True, choices=[('FULL', 'pełny'), ('PARTIAL', 'częściowy')], db_index=True, max_length=50, verbose_name='OpenAccess: tryb dostępu'),
),
migrations.AlterField(
model_name='zrodlo',
name='poprzednia_nazwa',
field=models.CharField(blank=True, db_index=True, max_length=1024, null=True, verbose_name='Poprzedni tytuł'),
),
migrations.AlterField(
model_name='zrodlo',
name='skrot',
field=models.CharField(db_index=True, max_length=512, verbose_name='Skrót'),
),
migrations.AlterField(
model_name='zrodlo',
name='slug',
field=autoslug.fields.AutoSlugField(editable=False, populate_from='nazwa', unique=True),
),
migrations.AlterField(
model_name='zrodlo',
name='www',
field=models.URLField(blank=True, db_index=True, max_length=1024, null=True, verbose_name='WWW'),
),
]
# --- 1400OS_11_Codes/demo_pca.py ---
# From repo radiumweilei/building-machine-learning-systems-with-python (Apache-2.0):
# toy demos comparing PCA and LDA projections of 2-D data down to 1-D.
import numpy as np
from matplotlib import pylab
from sklearn import linear_model, decomposition
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as lda

# Not used in the demos below; kept from the original listing.
logistic = linear_model.LogisticRegression()
CHART_DIR = os.path.join("..", "charts")
np.random.seed(3)
# Module-level toy data: x2 is a noisy copy of x1. plot_simple_demo_lda()
# reuses these directly; the two PCA demos regenerate their own local copies.
x1 = np.arange(0, 10, .2)
x2 = x1 + np.random.normal(loc=0, scale=1, size=len(x1))


def plot_simple_demo_1():
pylab.clf()
fig = pylab.figure(num=None, figsize=(10, 4))
pylab.subplot(121)
title = "Original feature space"
pylab.title(title)
pylab.xlabel("$X_1$")
pylab.ylabel("$X_2$")
x1 = np.arange(0, 10, .2)
x2 = x1 + np.random.normal(loc=0, scale=1, size=len(x1))
good = (x1 > 5) | (x2 > 5)
bad = ~good
x1g = x1[good]
x2g = x2[good]
pylab.scatter(x1g, x2g, edgecolor="blue", facecolor="blue")
x1b = x1[bad]
x2b = x2[bad]
pylab.scatter(x1b, x2b, edgecolor="red", facecolor="white")
pylab.grid(True)
pylab.subplot(122)
X = np.c_[(x1, x2)]
pca = decomposition.PCA(n_components=1)
Xtrans = pca.fit_transform(X)
Xg = Xtrans[good]
Xb = Xtrans[bad]
pylab.scatter(Xg[:, 0], np.zeros(len(Xg)), edgecolor="blue", facecolor="blue")
pylab.scatter(Xb[:, 0], np.zeros(len(Xb)), edgecolor="red", facecolor="white")
title = "Transformed feature space"
pylab.title(title)
pylab.xlabel("$X'$")
fig.axes[1].get_yaxis().set_visible(False)
print(pca.explained_variance_ratio_)
pylab.grid(True)
pylab.autoscale(tight=True)
filename = "pca_demo_1.png"
pylab.savefig(os.path.join(CHART_DIR, filename), bbox_inches="tight")


def plot_simple_demo_2():
pylab.clf()
fig = pylab.figure(num=None, figsize=(10, 4))
pylab.subplot(121)
title = "Original feature space"
pylab.title(title)
pylab.xlabel("$X_1$")
pylab.ylabel("$X_2$")
x1 = np.arange(0, 10, .2)
x2 = x1 + np.random.normal(loc=0, scale=1, size=len(x1))
good = x1 > x2
bad = ~good
x1g = x1[good]
x2g = x2[good]
pylab.scatter(x1g, x2g, edgecolor="blue", facecolor="blue")
x1b = x1[bad]
x2b = x2[bad]
pylab.scatter(x1b, x2b, edgecolor="red", facecolor="white")
pylab.grid(True)
pylab.subplot(122)
X = np.c_[(x1, x2)]
pca = decomposition.PCA(n_components=1)
Xtrans = pca.fit_transform(X)
Xg = Xtrans[good]
Xb = Xtrans[bad]
pylab.scatter(Xg[:, 0], np.zeros(len(Xg)), edgecolor="blue", facecolor="blue")
pylab.scatter(Xb[:, 0], np.zeros(len(Xb)), edgecolor="red", facecolor="white")
title = "Transformed feature space"
pylab.title(title)
pylab.xlabel("$X'$")
fig.axes[1].get_yaxis().set_visible(False)
print(pca.explained_variance_ratio_)
pylab.grid(True)
pylab.autoscale(tight=True)
filename = "pca_demo_2.png"
pylab.savefig(os.path.join(CHART_DIR, filename), bbox_inches="tight")
def plot_simple_demo_lda():
pylab.clf()
fig = pylab.figure(num=None, figsize=(10, 4))
pylab.subplot(121)
title = "Original feature space"
pylab.title(title)
pylab.xlabel("$X_1$")
pylab.ylabel("$X_2$")
good = x1 > x2
bad = ~good
x1g = x1[good]
x2g = x2[good]
pylab.scatter(x1g, x2g, edgecolor="blue", facecolor="blue")
x1b = x1[bad]
x2b = x2[bad]
pylab.scatter(x1b, x2b, edgecolor="red", facecolor="white")
pylab.grid(True)
pylab.subplot(122)
X = np.c_[(x1, x2)]
lda_inst = lda(n_components=1)
Xtrans = lda_inst.fit_transform(X, good)
Xg = Xtrans[good]
Xb = Xtrans[bad]
pylab.scatter(Xg[:, 0], np.zeros(len(Xg)), edgecolor="blue", facecolor="blue")
pylab.scatter(Xb[:, 0], np.zeros(len(Xb)), edgecolor="red", facecolor="white")
title = "Transformed feature space"
pylab.title(title)
pylab.xlabel("$X'$")
fig.axes[1].get_yaxis().set_visible(False)
pylab.grid(True)
pylab.autoscale(tight=True)
filename = "lda_demo.png"
pylab.savefig(os.path.join(CHART_DIR, filename), bbox_inches="tight")
if __name__ == '__main__':
plot_simple_demo_1()
plot_simple_demo_2()
plot_simple_demo_lda()
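For reference, the 2-D to 1-D projection that `decomposition.PCA(n_components=1).fit_transform` performs in the demos above can be sketched with numpy alone. This is a minimal sketch: `pca_1d` is a hypothetical helper, and the sign of the projected coordinates is arbitrary.

```python
import numpy as np

def pca_1d(X):
    # Center the data, then project onto the first principal axis,
    # i.e. the first right singular vector of the centered matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

# Perfectly collinear points project with no loss of variance.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
proj = pca_1d(X)
```

This mirrors why the `pca.explained_variance_ratio_` printed in `plot_simple_demo_1` is close to 1 when `x1` and `x2` are strongly correlated.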
| 24.081395 | 82 | 0.635925 | 597 | 4,142 | 4.294807 | 0.177554 | 0.056162 | 0.032761 | 0.051482 | 0.853744 | 0.853744 | 0.853744 | 0.853744 | 0.836583 | 0.836583 | 0 | 0.039711 | 0.197489 | 4,142 | 171 | 83 | 24.222222 | 0.731649 | 0 | 0 | 0.823529 | 0 | 0 | 0.0845 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02521 | false | 0 | 0.042017 | 0 | 0.067227 | 0.016807 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f4721b72d55b6c965726dd1aa1ec83b498db935f | 3,843 | py | Python | tests/formats/test_audacity.py | CostanzoPablo/audiomate | 080402eadaa81f77f64c8680510a2de64bc18e74 | [
"MIT"
] | 133 | 2018-05-18T13:54:10.000Z | 2022-02-15T02:14:20.000Z | tests/formats/test_audacity.py | CostanzoPablo/audiomate | 080402eadaa81f77f64c8680510a2de64bc18e74 | [
"MIT"
] | 68 | 2018-06-03T16:42:09.000Z | 2021-01-29T10:58:30.000Z | tests/formats/test_audacity.py | CostanzoPablo/audiomate | 080402eadaa81f77f64c8680510a2de64bc18e74 | [
"MIT"
] | 37 | 2018-11-02T02:40:29.000Z | 2021-11-30T07:44:50.000Z | import os.path
from audiomate import annotations
from audiomate.formats import audacity
class TestAudacityFormat:

    def test_read_label_file_en(self):
        path = os.path.join(os.path.dirname(__file__), 'audacity_labels_en.txt')
        labels = audacity.read_label_file(path)

        assert len(labels) == 2

        assert labels[0][0] == 43352.824046
        assert labels[0][1] == 43525.837661
        assert labels[0][2] == 'music'

        assert labels[1][0] == 43512.446969
        assert labels[1][1] == 43531.343483
        assert labels[1][2] == 'speech_male'

    def test_read_label_file_de(self):
        path = os.path.join(os.path.dirname(__file__), 'audacity_labels_de.txt')
        labels = audacity.read_label_file(path)

        assert len(labels) == 2

        assert labels[0][0] == 43352.824046
        assert labels[0][1] == 43525.837661
        assert labels[0][2] == 'music'

        assert labels[1][0] == 43512.446969
        assert labels[1][1] == 43531.343483
        assert labels[1][2] == 'speech_male'

    def test_read_label_file_with_empty_value(self):
        path = os.path.join(os.path.dirname(__file__), 'audacity_labels_empty_value.txt')
        labels = audacity.read_label_file(path)

        assert len(labels) == 3

        assert labels[0][0] == 1
        assert labels[0][1] == 4
        assert labels[0][2] == 'music'

        assert labels[1][0] == 4
        assert labels[1][1] == 7
        assert labels[1][2] == ''

        assert labels[2][0] == 7
        assert labels[2][1] == 9
        assert labels[2][2] == 'speech_male'

    def test_read_label_list_en(self):
        path = os.path.join(os.path.dirname(__file__), 'audacity_labels_en.txt')
        ll = audacity.read_label_list(path)

        assert ll == annotations.LabelList(labels=[
            annotations.Label('music', 43352.824046, 43525.837661),
            annotations.Label('speech_male', 43512.446969, 43531.343483),
        ])

    def test_read_label_list_de(self):
        path = os.path.join(os.path.dirname(__file__), 'audacity_labels_de.txt')
        ll = audacity.read_label_list(path)

        assert ll == annotations.LabelList(labels=[
            annotations.Label('music', 43352.824046, 43525.837661),
            annotations.Label('speech_male', 43512.446969, 43531.343483),
        ])

    def test_read_label_list_with_empty_value(self):
        path = os.path.join(os.path.dirname(__file__), 'audacity_labels_empty_value.txt')
        ll = audacity.read_label_list(path)

        assert ll == annotations.LabelList(labels=[
            annotations.Label('music', 1, 4),
            annotations.Label('', 4, 7),
            annotations.Label('speech_male', 7, 9),
        ])

    def test_write_label_file(self, tmpdir):
        path = os.path.join(tmpdir.strpath, 'audacity_labels.txt')

        entries = [
            [10.01, 11.08, 'music'],
            [11.08, 13.33, 'speech_male']
        ]

        audacity.write_label_file(path, entries)

        assert os.path.isfile(path)

        with open(path) as file:
            lines = file.readlines()

        assert len(lines) == 2
        assert lines[0] == '10.01\t11.08\tmusic\n'
        assert lines[1] == '11.08\t13.33\tspeech_male\n'

    def test_write_label_list(self, tmpdir):
        path = os.path.join(tmpdir.strpath, 'audacity_labels.txt')

        ll = annotations.LabelList(labels=[
            annotations.Label('music', 10.01, 11.08),
            annotations.Label('speech_male', 11.08, 13.33),
        ])

        audacity.write_label_list(path, ll)

        assert os.path.isfile(path)

        with open(path) as file:
            lines = file.readlines()

        assert len(lines) == 2
        assert lines[0] == '10.01\t11.08\tmusic\n'
        assert lines[1] == '11.08\t13.33\tspeech_male\n'
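The tests above assume Audacity's plain-text label layout: one `start<TAB>end<TAB>value` line per annotation. A minimal sketch of a parser for that layout (`parse_audacity_labels` is a hypothetical helper, not part of audiomate):

```python
def parse_audacity_labels(text):
    """Parse 'start<TAB>end<TAB>value' lines into (float, float, str) tuples."""
    labels = []
    for line in text.splitlines():
        if not line:
            continue
        start, end, value = line.split('\t')
        labels.append((float(start), float(end), value))
    return labels

sample = "10.01\t11.08\tmusic\n11.08\t13.33\tspeech_male\n"
parsed = parse_audacity_labels(sample)
```

Note that a line such as `"4\t7\t"` still splits into three fields, which is how the empty-value case above can round-trip.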
| 32.025 | 89 | 0.601353 | 504 | 3,843 | 4.386905 | 0.134921 | 0.113976 | 0.052917 | 0.050656 | 0.803709 | 0.792854 | 0.792854 | 0.75848 | 0.75848 | 0.743555 | 0 | 0.106728 | 0.261254 | 3,843 | 119 | 90 | 32.294118 | 0.672068 | 0 | 0 | 0.611765 | 0 | 0 | 0.107208 | 0.064012 | 0 | 0 | 0 | 0 | 0.411765 | 1 | 0.094118 | false | 0 | 0.035294 | 0 | 0.141176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f47cd4e56a822f4fea84e1bfaa72906e6d194525 | 302 | py | Python | graphs/functions/__init__.py | CSI-BennettUniversity/Sample-Project-1 | 23197352372b7ad00a026683477b5a95a4178e35 | [
"MIT"
] | 5 | 2020-07-30T16:47:30.000Z | 2021-02-15T16:44:59.000Z | graphs/functions/__init__.py | CSI-BennettUniversity/Sample-Project-1 | 23197352372b7ad00a026683477b5a95a4178e35 | [
"MIT"
] | 4 | 2021-06-04T23:42:41.000Z | 2021-09-11T03:17:12.000Z | graphs/functions/__init__.py | CSI-BennettUniversity/Sample-Project-1 | 23197352372b7ad00a026683477b5a95a4178e35 | [
"MIT"
] | 7 | 2020-07-05T14:29:17.000Z | 2021-06-05T14:34:20.000Z | from .graph_wrapper import return_ocean_descriptions_with_graph, return_comparison_graphs
from .dict_data import return_plot_and_view_data, return_share_box
__all__ = ['return_plot_and_view_data', 'return_share_box',
'return_ocean_descriptions_with_graph', 'return_comparison_graphs']
| 50.333333 | 90 | 0.831126 | 41 | 302 | 5.390244 | 0.439024 | 0.108597 | 0.208145 | 0.244344 | 0.80543 | 0.80543 | 0.80543 | 0.80543 | 0 | 0 | 0 | 0 | 0.109272 | 302 | 5 | 91 | 60.4 | 0.821561 | 0 | 0 | 0 | 0 | 0 | 0.340067 | 0.286195 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
be2f6f80b8d5bcf0cb698b5b06fc60e7f8e6dd7c | 9,169 | py | Python | terrascript/google/r.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/google/r.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/google/r.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | # terrascript/google/r.py
import terrascript
class google_access_context_manager_access_policy(terrascript.Resource):
    pass
class google_access_context_manager_access_level(terrascript.Resource):
    pass
class google_access_context_manager_service_perimeter(terrascript.Resource):
    pass
class google_app_engine_firewall_rule(terrascript.Resource):
    pass
class google_bigquery_dataset(terrascript.Resource):
    pass
class google_bigquery_data_transfer_config(terrascript.Resource):
    pass
class google_bigtable_app_profile(terrascript.Resource):
    pass
class google_binary_authorization_attestor(terrascript.Resource):
    pass
class google_binary_authorization_policy(terrascript.Resource):
    pass
class google_cloudbuild_trigger(terrascript.Resource):
    pass
class google_cloud_scheduler_job(terrascript.Resource):
    pass
class google_compute_address(terrascript.Resource):
    pass
class google_compute_autoscaler(terrascript.Resource):
    pass
class google_compute_backend_bucket(terrascript.Resource):
    pass
class google_compute_backend_bucket_signed_url_key(terrascript.Resource):
    pass
class google_compute_backend_service(terrascript.Resource):
    pass
class google_compute_region_backend_service(terrascript.Resource):
    pass
class google_compute_backend_service_signed_url_key(terrascript.Resource):
    pass
class google_compute_disk(terrascript.Resource):
    pass
class google_compute_firewall(terrascript.Resource):
    pass
class google_compute_forwarding_rule(terrascript.Resource):
    pass
class google_compute_global_address(terrascript.Resource):
    pass
class google_compute_global_forwarding_rule(terrascript.Resource):
    pass
class google_compute_http_health_check(terrascript.Resource):
    pass
class google_compute_https_health_check(terrascript.Resource):
    pass
class google_compute_health_check(terrascript.Resource):
    pass
class google_compute_image(terrascript.Resource):
    pass
class google_compute_interconnect_attachment(terrascript.Resource):
    pass
class google_compute_network(terrascript.Resource):
    pass
class google_compute_network_endpoint(terrascript.Resource):
    pass
class google_compute_network_endpoint_group(terrascript.Resource):
    pass
class google_compute_node_group(terrascript.Resource):
    pass
class google_compute_node_template(terrascript.Resource):
    pass
class google_compute_region_autoscaler(terrascript.Resource):
    pass
class google_compute_region_disk(terrascript.Resource):
    pass
class google_compute_route(terrascript.Resource):
    pass
class google_compute_router(terrascript.Resource):
    pass
class google_compute_snapshot(terrascript.Resource):
    pass
class google_compute_ssl_certificate(terrascript.Resource):
    pass
class google_compute_ssl_policy(terrascript.Resource):
    pass
class google_compute_subnetwork(terrascript.Resource):
    pass
class google_compute_target_http_proxy(terrascript.Resource):
    pass
class google_compute_target_https_proxy(terrascript.Resource):
    pass
class google_compute_target_instance(terrascript.Resource):
    pass
class google_compute_target_ssl_proxy(terrascript.Resource):
    pass
class google_compute_target_tcp_proxy(terrascript.Resource):
    pass
class google_compute_vpn_gateway(terrascript.Resource):
    pass
class google_compute_url_map(terrascript.Resource):
    pass
class google_compute_vpn_tunnel(terrascript.Resource):
    pass
class google_dns_managed_zone(terrascript.Resource):
    pass
class google_filestore_instance(terrascript.Resource):
    pass
class google_firestore_index(terrascript.Resource):
    pass
class google_kms_key_ring(terrascript.Resource):
    pass
class google_kms_crypto_key(terrascript.Resource):
    pass
class google_logging_metric(terrascript.Resource):
    pass
class google_ml_engine_model(terrascript.Resource):
    pass
class google_monitoring_alert_policy(terrascript.Resource):
    pass
class google_monitoring_group(terrascript.Resource):
    pass
class google_monitoring_notification_channel(terrascript.Resource):
    pass
class google_monitoring_uptime_check_config(terrascript.Resource):
    pass
class google_pubsub_topic(terrascript.Resource):
    pass
class google_pubsub_subscription(terrascript.Resource):
    pass
class google_redis_instance(terrascript.Resource):
    pass
class google_resource_manager_lien(terrascript.Resource):
    pass
class google_scc_source(terrascript.Resource):
    pass
class google_sourcerepo_repository(terrascript.Resource):
    pass
class google_spanner_instance(terrascript.Resource):
    pass
class google_spanner_database(terrascript.Resource):
    pass
class google_sql_database(terrascript.Resource):
    pass
class google_storage_object_access_control(terrascript.Resource):
    pass
class google_storage_default_object_access_control(terrascript.Resource):
    pass
class google_tpu_node(terrascript.Resource):
    pass
class google_app_engine_application(terrascript.Resource):
    pass
class google_bigquery_table(terrascript.Resource):
    pass
class google_bigtable_instance(terrascript.Resource):
    pass
class google_bigtable_table(terrascript.Resource):
    pass
class google_cloudfunctions_function(terrascript.Resource):
    pass
class google_cloudiot_registry(terrascript.Resource):
    pass
class google_composer_environment(terrascript.Resource):
    pass
class google_compute_attached_disk(terrascript.Resource):
    pass
class google_compute_instance(terrascript.Resource):
    pass
class google_compute_instance_from_template(terrascript.Resource):
    pass
class google_compute_instance_group(terrascript.Resource):
    pass
class google_compute_instance_group_manager(terrascript.Resource):
    pass
class google_compute_instance_template(terrascript.Resource):
    pass
class google_compute_network_peering(terrascript.Resource):
    pass
class google_compute_project_default_network_tier(terrascript.Resource):
    pass
class google_compute_project_metadata(terrascript.Resource):
    pass
class google_compute_project_metadata_item(terrascript.Resource):
    pass
class google_compute_region_instance_group_manager(terrascript.Resource):
    pass
class google_compute_router_interface(terrascript.Resource):
    pass
class google_compute_router_nat(terrascript.Resource):
    pass
class google_compute_router_peer(terrascript.Resource):
    pass
class google_compute_security_policy(terrascript.Resource):
    pass
class google_compute_shared_vpc_host_project(terrascript.Resource):
    pass
class google_compute_shared_vpc_service_project(terrascript.Resource):
    pass
class google_compute_target_pool(terrascript.Resource):
    pass
class google_container_cluster(terrascript.Resource):
    pass
class google_container_node_pool(terrascript.Resource):
    pass
class google_dataflow_job(terrascript.Resource):
    pass
class google_dataproc_cluster(terrascript.Resource):
    pass
class google_dataproc_job(terrascript.Resource):
    pass
class google_dns_record_set(terrascript.Resource):
    pass
class google_endpoints_service(terrascript.Resource):
    pass
class google_folder(terrascript.Resource):
    pass
class google_folder_organization_policy(terrascript.Resource):
    pass
class google_logging_billing_account_sink(terrascript.Resource):
    pass
class google_logging_organization_sink(terrascript.Resource):
    pass
class google_logging_folder_sink(terrascript.Resource):
    pass
class google_logging_project_sink(terrascript.Resource):
    pass
class google_service_networking_connection(terrascript.Resource):
    pass
class google_sql_database_instance(terrascript.Resource):
    pass
class google_sql_ssl_cert(terrascript.Resource):
    pass
class google_sql_user(terrascript.Resource):
    pass
class google_organization_iam_custom_role(terrascript.Resource):
    pass
class google_organization_policy(terrascript.Resource):
    pass
class google_project(terrascript.Resource):
    pass
class google_project_iam_policy(terrascript.Resource):
    pass
class google_project_service(terrascript.Resource):
    pass
class google_project_iam_custom_role(terrascript.Resource):
    pass
class google_project_organization_policy(terrascript.Resource):
    pass
class google_project_usage_export_bucket(terrascript.Resource):
    pass
class google_project_services(terrascript.Resource):
    pass
class google_runtimeconfig_config(terrascript.Resource):
    pass
class google_runtimeconfig_variable(terrascript.Resource):
    pass
class google_service_account(terrascript.Resource):
    pass
class google_service_account_key(terrascript.Resource):
    pass
class google_storage_bucket(terrascript.Resource):
    pass
class google_storage_bucket_acl(terrascript.Resource):
    pass
class google_storage_bucket_object(terrascript.Resource):
    pass
class google_storage_object_acl(terrascript.Resource):
    pass
class google_storage_default_object_acl(terrascript.Resource):
    pass
class google_storage_notification(terrascript.Resource):
    pass
class google_storage_transfer_job(terrascript.Resource):
    pass
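Each class above is an empty marker subclass: the Terraform resource type can be recovered from the class name itself. A minimal sketch of that pattern, where `Resource` is a hypothetical dict-backed stand-in, not the real `terrascript.Resource`:

```python
class Resource(dict):
    """Hypothetical stand-in: resource attributes are stored as dict items."""
    pass

class google_compute_address(Resource):
    pass

addr = google_compute_address(name="demo-addr", region="us-central1")
# The resource type name is just the subclass name.
type_name = type(addr).__name__
```

This is why the module can consist of nothing but `pass`-bodied classes: the class name carries all the information the serializer needs.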
| 22.528256 | 76 | 0.823427 | 1,071 | 9,169 | 6.680672 | 0.144725 | 0.20601 | 0.430748 | 0.520475 | 0.895597 | 0.85283 | 0.567156 | 0.272257 | 0.031866 | 0 | 0 | 0 | 0.117679 | 9,169 | 406 | 77 | 22.583744 | 0.884425 | 0.002508 | 0 | 0.498141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498141 | 0.003717 | 0 | 0.501859 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
be4daa454a2414047aac309b15de9a677695cb38 | 4,792 | py | Python | index2.py | yaovct/0004.findMedianSortedArrays | 8c574395437bdf01d41672ddaebd019dd61c6676 | [
"MIT"
] | null | null | null | index2.py | yaovct/0004.findMedianSortedArrays | 8c574395437bdf01d41672ddaebd019dd61c6676 | [
"MIT"
] | 1 | 2020-02-07T07:05:42.000Z | 2020-02-07T07:05:42.000Z | index2.py | yaovct/0004.findMedianSortedArrays | 8c574395437bdf01d41672ddaebd019dd61c6676 | [
"MIT"
] | null | null | null | # the fastest method
class Solution(object):
    def findMedianSortedArrays(self, nums1, nums2):
        """
        :type nums1: List[int]
        :type nums2: List[int]
        :rtype: float
        """
        A, B, m, n = nums1, nums2, len(nums1), len(nums2)
        if m > n:
            A, B, m, n = B, A, n, m
        # Floor division keeps the indices integers on Python 3.
        imin, imax, half_len = 0, m, (m + n + 1) // 2
        while imin <= imax:
            i = (imin + imax) // 2
            j = half_len - i
            if i < m and B[j - 1] > A[i]:
                # i is too small, must increase it
                imin = i + 1
            elif i > 0 and A[i - 1] > B[j]:
                # i is too big, must decrease it
                imax = i - 1
            else:
                # i is perfect
                if i == 0:
                    max_of_left = B[j - 1]
                elif j == 0:
                    max_of_left = A[i - 1]
                else:
                    max_of_left = max(A[i - 1], B[j - 1])
                if (m + n) % 2 == 1:
                    return max_of_left
                if i == m:
                    min_of_right = B[j]
                elif j == n:
                    min_of_right = A[i]
                else:
                    min_of_right = min(A[i], B[j])
                return (max_of_left + min_of_right) / 2.0
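A brute-force reference is handy for cross-checking the binary-search result; it simply sorts the merged arrays in O((m+n) log(m+n)). `median_brute` is a hypothetical helper used only for validation, not part of the solution:

```python
def median_brute(nums1, nums2):
    # Sort everything, then pick the middle element (odd length)
    # or average the two middle elements (even length).
    merged = sorted(nums1 + nums2)
    n = len(merged)
    mid = n // 2
    if n % 2:
        return float(merged[mid])
    return (merged[mid - 1] + merged[mid]) / 2.0
```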
my_test = Solution()

test_cases = [
    ([1, 2], []),
    ([1, 2, 3], []),
    ([1, 2, 3, 4], []),
    ([1, 2, 3, 3, 3, 4], []),
    ([1], [2]),
    ([1, 3], [2]),
    ([2, 3], [1]),
    ([2], [1, 3]),
    ([1], [2, 3]),
    ([1, 2], [3, 4]),
    ([1, 3], [2, 4]),
    ([1, 4], [2, 3]),
    ([2, 3], [1, 4]),
    ([2, 4], [1, 3]),
    ([1, 2, 2], [3, 4, 6]),
    ([1, 2, 7], [3, 4]),
    ([1, 2, 2, 2, 2, 2, 3, 6], [3, 4, 6, 7, 7, 8, 9, 9]),
    ([1, 2, 2, 2, 2, 2, 3, 3], [3, 4, 6, 7, 7, 8, 9, 9]),
    ([1, 2, 2, 2, 2, 2, 3], [3, 4, 6, 7, 7, 8, 9, 9]),
    ([1, 2, 2, 2, 2, 2, 2, 9], [3, 4, 6, 7, 7, 8, 9, 9]),
    ([100000], [100001]),
    ([1], [2, 3, 4]),
]

for nums1, nums2 in test_cases:
    print("nums1 = %s" % nums1)
    print("nums2 = %s" % nums2)
    print("Mid = %.2f" % (my_test.findMedianSortedArrays(nums1, nums2)))
| 28.35503 | 68 | 0.55217 | 705 | 4,792 | 3.692199 | 0.089362 | 0.099885 | 0.09297 | 0.135229 | 0.803304 | 0.799078 | 0.799078 | 0.799078 | 0.799078 | 0.799078 | 0 | 0.100138 | 0.241444 | 4,792 | 168 | 69 | 28.52381 | 0.615956 | 0.019825 | 0 | 0.679104 | 0 | 0 | 0.143697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.492537 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
be68311223175335c201c366c1520af4da21645c | 17,780 | py | Python | blahb/data.py | mvsaha/blahb | e4ea703fa0fc255f627057c07df4c51138299d8b | [
"MIT"
] | null | null | null | blahb/data.py | mvsaha/blahb | e4ea703fa0fc255f627057c07df4c51138299d8b | [
"MIT"
] | null | null | null | blahb/data.py | mvsaha/blahb | e4ea703fa0fc255f627057c07df4c51138299d8b | [
"MIT"
] | null | null | null | """
Functions that allow propagation of associated data through set operations.
"""
import numpy as np
import numba
from .flags import *
@numba.njit
def _merge_indirect_min(contrib_flag, data_in, data_out):
    """Take the minimum value of contributing pixels, propagating NaNs."""
    j = 0
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            if np.isnan(data_out[i]) or np.isnan(data_in[j]):
                data_out[i] = np.nan
            else:  # Both values are non-NaN
                data_out[i] = min(data_out[i], data_in[j])
            j += 1
        else:
            data_out[i] = np.nan
    assert j == data_in.shape[0]
@numba.njit
def _merge_indirect_nanmin(contrib_flag, data_in, data_out):
    """Take the minimum value of contributing pixels, ignoring NaNs."""
    j = 0
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            # fmin-like: a NaN on either side is ignored in favor of the
            # non-NaN value. Note data_out must be indexed by i, not j.
            if np.isnan(data_out[i]):
                data_out[i] = data_in[j]
            elif not np.isnan(data_in[j]):
                data_out[i] = min(data_out[i], data_in[j])
            j += 1
    assert j == data_in.shape[0]


@numba.njit
def _merge_indirect_max(contrib_flag, data_in, data_out):
    """Take the maximum value of contributing pixels, propagating NaNs."""
    j = 0
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            if np.isnan(data_out[i]) or np.isnan(data_in[j]):
                data_out[i] = np.nan
            else:  # Both values are non-NaN
                data_out[i] = max(data_out[i], data_in[j])
            j += 1
        else:
            data_out[i] = np.nan
    assert j == data_in.shape[0]


@numba.njit
def _merge_indirect_nanmax(contrib_flag, data_in, data_out):
    """Take the maximum value of contributing pixels, ignoring NaNs."""
    j = 0
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            # fmax-like: a NaN on either side is ignored in favor of the
            # non-NaN value. Note data_out must be indexed by i, not j.
            if np.isnan(data_out[i]):
                data_out[i] = data_in[j]
            elif not np.isnan(data_in[j]):
                data_out[i] = max(data_out[i], data_in[j])
            j += 1
    assert j == data_in.shape[0]
@numba.njit
def _merge_indirect_first(contrib_flag, data_in, data_out):
    """Take the first encountered non-NaN value of contributing pixels."""
    j = 0
    d = np.float32(0)
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            d = data_out[i]
            if np.isnan(d):
                data_out[i] = data_in[j]
            j += 1
    if not j == data_in.shape[0]:
        raise ValueError("Wrong input.")


@numba.njit
def _merge_indirect_last(contrib_flag, data_in, data_out):
    """Take the last encountered non-NaN value of contributing pixels."""
    j = 0
    d = np.float32(0)
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            d = data_in[j]
            if not np.isnan(d):
                data_out[i] = d
            j += 1
    assert j == data_in.shape[0]


@numba.njit
def _merge_indirect_sum(contrib_flag, data_in, data_out):
    """Take the sum of contributing pixels, propagating NaNs."""
    j = 0
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            data_out[i] += data_in[j]
            j += 1
        else:
            data_out[i] = np.nan
    assert j == data_in.shape[0]


@numba.njit
def _merge_indirect_nansum(contrib_flag, data_in, data_out):
    """Take the sum of contributing non-NaN pixels."""
    j = 0
    d = np.float32(0)
    for i in range(contrib_flag.shape[0]):
        if contrib_flag[i]:
            d = data_in[j]
            if not np.isnan(d):
                if np.isnan(data_out[i]):
                    data_out[i] = d
                else:
                    data_out[i] += d
            j += 1
    assert j == data_in.shape[0]
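The `_merge_indirect_*` loops above all follow one indexing convention: `data_in` is compact (one row per True in `contrib_flag`) while `data_out` spans every output position; the `_merge_contrib_*` functions later in this module invert that. Under that reading, the two layouts amount to a numpy scatter and gather (a sketch of the assumed semantics, not library code):

```python
import numpy as np

contrib = np.array([True, False, True, True])
compact = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # one row per True

# "indirect" layout: compact input scattered into a full-length output.
full = np.full(contrib.size, np.nan, dtype=np.float32)
full[contrib] = compact

# "direct" layout: full-length input gathered back into a compact output.
back = full[contrib]
```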
@numba.njit
def merge_data_column_indirect(contrib, data_in, data_out, MERGE_DATA):
    """Merge a single column of data using a MERGE_DATA rule.

    Modifies `data_out`."""
    if MERGE_DATA == DATA_NANFIRST:
        _merge_indirect_first(contrib, data_in, data_out)
    elif MERGE_DATA == DATA_NANLAST:
        _merge_indirect_last(contrib, data_in, data_out)
    elif MERGE_DATA == DATA_SUM:
        _merge_indirect_sum(contrib, data_in, data_out)
        # NOT equivalent to:
        # data_out[contrib] = np.add(data_in, data_out[contrib])
    elif MERGE_DATA == DATA_NANSUM:
        _merge_indirect_nansum(contrib, data_in, data_out)
    elif MERGE_DATA == DATA_MIN:
        _merge_indirect_min(contrib, data_in, data_out)
        # NOT equivalent to:
        # data_out[contrib] = np.minimum(data_in, data_out[contrib])
    elif MERGE_DATA == DATA_NANMIN:
        data_out[contrib] = np.fmin(data_in, data_out[contrib])
    elif MERGE_DATA == DATA_MAX:
        _merge_indirect_max(contrib, data_in, data_out)
        # NOT equivalent to:
        # data_out[contrib] = np.maximum(data_in, data_out[contrib])
    elif MERGE_DATA == DATA_NANMAX:
        data_out[contrib] = np.fmax(data_in, data_out[contrib])
    else:
        raise ValueError("Invalid MERGE_DATA flag.")
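The default rule dispatched above, DATA_NANFIRST, keeps the first non-NaN value written to each contributing output row. Its semantics can be sketched in plain Python/numpy (a sketch only; the numba-compiled `_merge_indirect_first` above is the real implementation):

```python
import numpy as np

def merge_nanfirst(contrib, data_in, data_out):
    # Walk output rows; consume one compact input row per True flag,
    # writing it only where the output is still NaN.
    j = 0
    for i in range(contrib.size):
        if contrib[i]:
            if np.isnan(data_out[i]):
                data_out[i] = data_in[j]
            j += 1
    return data_out

out = np.array([np.nan, 5.0, np.nan], dtype=np.float32)
out = merge_nanfirst(np.array([True, True, False]),
                     np.array([1.0, 2.0], dtype=np.float32), out)
```

Here the existing 5.0 wins over the incoming 2.0, and the non-contributing final row is left untouched.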
from .flags import DATA_NANFIRST
__DEFAULT_MERGE_ACTION = DATA_NANFIRST
@numba.njit(
    (numba.optional(numba.float32[:, :]),
     numba.boolean[:],
     numba.optional(numba.float32[:, :]),
     numba.optional(numba.uint8[:]))
)
def merge_data_indirect(data_in, contrib, existing_data, MERGE_ACTION=None):
    """
    Arguments
    ---------
    data_in : [None] | 2d float32 matrix
        Input dataset.
    contrib : 1d bool array
        The size of this array must match the lowest dimension shape of
        existing_data, if existing_data is not None. This indicates where
        values from data_in should be extracted; the number of True values
        should match data_in.shape[0]. If data_in is None then this
        argument is ignored. It must still be passed in (cannot be None)
        due to typing restrictions (a numba indexer cannot be optional,
        and this argument indexes existing_data).
    existing_data : None | 2d float32 matrix
        If this is not None, its lowest dimension shape should equal the
        size of contrib.
    MERGE_ACTION : [None] | 1d uint8 array
        The kind of merge to do on each column of existing_data to combine
        it with data_in. See the flag definitions in blahb.flags. The
        default when None is given is to keep the first non-NaN value in
        each position (DATA_NANFIRST). If an array with a single value is
        passed in then it will be applied to all columns of data_in. Due
        to typing restrictions this argument must be an array; it cannot
        be a scalar (uint8 or otherwise).

    Returns
    -------
    data : None | 2d float32 matrix
        The data extracted using contrib. If existing data was given,
        this array is the SAME array, and has been modified by this
        function. Otherwise a new array has been created. If the input
        data is None and no existing data was passed in then the return
        value is None.

    This is designed to be used sequentially on data from multiple
    IndexSets.
    """
    if data_in is None:
        if existing_data is not None:
            ndim = existing_data.shape[1]
            if MERGE_ACTION is None:
                if DATA_DEFAULT & DATA_NANS_PROPAGATE:
                    existing_data[:, :] = np.nan
                else:
                    pass  # Existing data does not change
            elif (MERGE_ACTION.size == 1 and
                  MERGE_ACTION[0] & DATA_NANS_PROPAGATE):
                existing_data[:, :] = np.nan
            elif MERGE_ACTION.size == ndim:
                for col, M in enumerate(MERGE_ACTION):
                    if M & DATA_NANS_PROPAGATE:
                        existing_data[:, col] = np.nan
            else:
                raise ValueError("Number of elements in MERGE do not match"
                                 " the dimensionality of input PixelSets.")
            return existing_data
        else:
            return None

    if not data_in.ndim == 2:
        raise ValueError("data_in must be 2d.")

    ncol = data_in.shape[1]
    if existing_data is None:  # Form the output array
        existing_data = np.full((contrib.size, ncol), np.nan, dtype=np.float32)
        existing_data[contrib] = data_in
        return existing_data

    if MERGE_ACTION is None:
        for col in range(ncol):
            merge_data_column_indirect(
                contrib, data_in[:, col], existing_data[:, col], DATA_DEFAULT)
    elif MERGE_ACTION.size == 1:
        for col in range(ncol):
            merge_data_column_indirect(
                contrib, data_in[:, col], existing_data[:, col],
                MERGE_ACTION[0])
    elif MERGE_ACTION.size == ncol:
        for col in range(ncol):
            merge_data_column_indirect(
                contrib, data_in[:, col], existing_data[:, col],
                MERGE_ACTION[col])
    else:
        raise ValueError("Number of elements in MERGE do not match"
                         " the dimensionality of input PixelSets.")
    return existing_data
@numba.njit
def _merge_contrib_nanmin(contrib_flag, data_in, data_out):
    """Find the minimum of all contributing pixels, ignoring NaNs."""
    # assert contrib_flag.size == data_in.shape[0]       # Should be True
    # assert np.sum(contrib_flag) == data_out.shape[0]   # Should be True
    n_out = 0
    for i in range(contrib_flag.size):
        if contrib_flag[i]:
            d = data_out[n_out]
            if np.isnan(d):
                data_out[n_out] = data_in[i]
            elif not np.isnan(data_in[i]):
                data_out[n_out] = min(d, data_in[i])
            # else: Both elements are NaN, no change to data_out needed.
            n_out += 1
    assert n_out == data_out.shape[0]


@numba.njit
def _merge_contrib_nanmax(contrib_flag, data_in, data_out):
    """Find the maximum of all contributing pixels, ignoring NaNs."""
    # assert contrib_flag.size == data_in.shape[0]       # Should be True
    # assert np.sum(contrib_flag) == data_out.shape[0]   # Should be True
    n_out = 0
    for i in range(contrib_flag.size):
        if contrib_flag[i]:
            d = data_out[n_out]
            if np.isnan(d):
                data_out[n_out] = data_in[i]
            elif not np.isnan(data_in[i]):
                data_out[n_out] = max(d, data_in[i])
            # else: Both elements are NaN, no change to data_out needed.
            n_out += 1
    assert n_out == data_out.shape[0]


@numba.njit
def _merge_contrib_nansum(contrib_flag, data_in, data_out):
    """Find the sum of all contributing pixels, ignoring NaNs."""
    # assert contrib_flag.size == data_in.shape[0]       # Should be True
    # assert np.sum(contrib_flag) == data_out.shape[0]   # Should be True
    n_out = 0
    for i in range(contrib_flag.size):
        if contrib_flag[i]:
            d = data_in[i]
            if not np.isnan(d):
                if np.isnan(data_out[n_out]):
                    data_out[n_out] = d
                else:
                    data_out[n_out] += d
            # else: Incoming element is NaN, no change to data_out needed.
            n_out += 1
    assert n_out == data_out.shape[0]
@numba.njit
def _merge_contrib_first(contrib_flag, data_in, data_out):
n_out = 0
for i in range(contrib_flag.size):
if contrib_flag[i]:
if np.isnan(data_out[n_out]):
data_out[n_out] = data_in[i]
n_out += 1
assert n_out == data_out.shape[0]
@numba.njit
def _merge_contrib_last(contrib_flag, data_in, data_out):
"""Keep the last non-NaN value encountered at each contributing position."""
n_out = 0
for i in range(contrib_flag.size):
if contrib_flag[i]:
if not np.isnan(data_in[i]):
data_out[n_out] = data_in[i]
n_out += 1
assert n_out == data_out.shape[0]
@numba.njit
def merge_data_column_direct(contrib, data_in, data_out, MERGE_DATA):
if MERGE_DATA == DATA_NANFIRST:
_merge_contrib_first(contrib, data_in, data_out)
elif MERGE_DATA == DATA_NANLAST:
_merge_contrib_last(contrib, data_in, data_out)
elif MERGE_DATA == DATA_SUM:
np.add(data_in[contrib], data_out, data_out)
elif MERGE_DATA == DATA_NANSUM:
_merge_contrib_nansum(contrib, data_in, data_out)
elif MERGE_DATA == DATA_MIN:
np.minimum(data_in[contrib], data_out, data_out)
elif MERGE_DATA == DATA_NANMIN:
np.fmin(data_in[contrib], data_out, data_out)
elif MERGE_DATA == DATA_MAX:
np.maximum(data_in[contrib], data_out, data_out)
elif MERGE_DATA == DATA_NANMAX:
np.fmax(data_in[contrib], data_out, data_out)
else:
raise ValueError("Invalid MERGE_DATA flag.")
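The DATA_MIN/DATA_NANMIN (and MAX) branches above differ only in NaN handling. A quick numpy check of `np.minimum` versus `np.fmin` (nothing here is part of the blahb API):

```python
import numpy as np

# np.minimum (DATA_MIN) propagates NaN from either operand, while
# np.fmin (DATA_NANMIN) returns the non-NaN operand when only one side is NaN.
existing = np.array([1.0, np.nan, 3.0], dtype=np.float32)
incoming = np.array([2.0, 5.0, np.nan], dtype=np.float32)

propagating = np.minimum(incoming, existing)  # NaN wherever either side is NaN
ignoring = np.fmin(incoming, existing)        # NaN only where both sides are NaN
```

The same distinction holds for `np.maximum` versus `np.fmax` in the MAX branches.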
@numba.njit(
(numba.optional(numba.float32[:, :]),
numba.boolean[:],
numba.optional(numba.float32[:, :]),
numba.optional(numba.uint8[:]))
)
def merge_data_direct(data_in, contrib, existing_data=None, MERGE_ACTION=None):
"""
Arguments
---------
data_in : [None] | 2d float32 matrix
Input dataset
contrib : 1d bool array
The size of this array must match the lowest dimension shape
of data_in. This indicates where values from data_in should be
extracted. If data_in is None then this argument is ignored. It must
still be passed in (cannot be None) due to typing restrictions
(a numba indexer cannot be optional and this argument indexes data_in).
existing_data : None | [2d float32 matrix]
If this is not None, it should have a lowest dimension shape
equal to the number of True values in contrib.
MERGE_ACTION : [None] | 1d uint8 array
The kind of merge to do on each column of existing_data to
combine it with data_in. See the definitions in blahb.flags.
The default when None is given is to keep the first non-NaN value in
each position (_BLAHB_DATA_FIRST). If an array with a single value
is passed in then it will be applied to all columns of data_in.
Due to typing restrictions this argument must be an array, it cannot
be a scalar (uint8 or otherwise).
Returns
-------
data : None | 2d float32 matrix
The data extracted using contrib. If existing data was given,
this array is the SAME array, and has been modified by this
function. Otherwise a new array has been created. If the input
data is None and no existing data was passed in then the return
value is None.
This is designed to be used sequentially on data from multiple
IndexSets.
"""
if data_in is None:
if existing_data is not None:
n_cols = existing_data.shape[1]
if MERGE_ACTION is None:
if DATA_DEFAULT & DATA_NANS_PROPAGATE:
existing_data[:, :] = np.nan
else:
pass
elif MERGE_ACTION.size == 1:
if MERGE_ACTION[0] & DATA_NANS_PROPAGATE:
existing_data[:, :] = np.nan
else:
pass # No change to existing data
elif MERGE_ACTION.size == n_cols:
for col, M in enumerate(MERGE_ACTION):
if M & DATA_NANS_PROPAGATE:
existing_data[:, col] = np.nan
else:
raise ValueError("Number of elements in MERGE does not match"
" the dimensionality of input PixelSets.")
return existing_data
else:
return None
if data_in.ndim != 2:
raise TypeError("Dimensionality of data must be 2d.")
if existing_data is None: # Form the array if all inputs
existing_data = data_in[contrib]
return existing_data
n_cols = data_in.shape[1]
if MERGE_ACTION is None:
for col in range(n_cols):
merge_data_column_direct(contrib, data_in[:, col],
existing_data[:, col], DATA_DEFAULT)
elif MERGE_ACTION.size == 1:
for col in range(n_cols):
merge_data_column_direct(contrib, data_in[:, col],
existing_data[:, col], MERGE_ACTION[0])
elif MERGE_ACTION.size == data_in.shape[1]:
for col in range(n_cols):
merge_data_column_direct(contrib, data_in[:, col],
existing_data[:, col], MERGE_ACTION[col])
else:
raise ValueError("Number of elements in MERGE does not match"
" the dimensionality of input PixelSets.")
return existing_data
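The per-column behaviour of merge_data_direct with the default keep-first-non-NaN action can be sketched in plain numpy. `nanfirst_merge` below is an illustrative helper, not part of the blahb API:

```python
import numpy as np

# Select the contributing rows with a boolean mask, then fill only the
# positions of the existing array that are still NaN (keep-first semantics).
def nanfirst_merge(contrib, data_in, existing):
    selected = data_in[contrib]   # rows flagged as contributing
    fill = np.isnan(existing)     # only positions that are still NaN
    existing[fill] = selected[fill]
    return existing

contrib = np.array([True, False, True])
data_in = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
existing = np.array([[np.nan], [7.0]], dtype=np.float32)
merged = nanfirst_merge(contrib, data_in, existing)  # [[1.0], [7.0]]
```

As in merge_data_direct, the existing array is modified in place and returned.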
@numba.njit( (numba.optional(numba.uint8[:]), ) )
def all_short_circuit_merges(MERGE=None):
"""Determine whether the data merge step can be skipped when data is None.
If every column's MERGE flag propagates NaNs, the data merge step can be
skipped whenever any of the contributing data is None. If any column's
flag does not propagate NaNs, all data columns must still be processed.
"""
if MERGE is None:
return numba.boolean(DATA_DEFAULT & DATA_NANS_PROPAGATE)
for M in MERGE:
if not (M & DATA_NANS_PROPAGATE):
return False
return True
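The short-circuit test reduces to a bit-mask check over the flags. A sketch with hypothetical flag values (the real constants live in blahb.flags and their numeric values may differ):

```python
# Hypothetical flag values for illustration only.
DATA_NANS_PROPAGATE = 0b1000

# Mirrors the loop in all_short_circuit_merges: True only if every
# column's flag carries the NANS_PROPAGATE bit.
short_circuit = all(bool(m & DATA_NANS_PROPAGATE) for m in [0b1001, 0b1010])
no_short_circuit = all(bool(m & DATA_NANS_PROPAGATE) for m in [0b1001, 0b0010])
```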
@numba.njit( (numba.optional(numba.uint8[:]), ) )
def order_does_not_matter(MERGE=None):
"""Determine from the input flags whether the merge order matters.
In some cases, like taking the MAX or SUM of multiple data values at the
same location, the order in which we introduce arguments does not change
the result. In such cases we are allowed to rearrange the input values
for more efficient computation.
"""
if MERGE is None:
M = DATA_DEFAULT
# NANFIRST/NANLAST depend on argument order, so order matters for them.
return not (M == DATA_NANFIRST or M == DATA_NANLAST)
for M in MERGE:
if M == DATA_NANFIRST or M == DATA_NANLAST:
return False
return True | 36.811594 | 79 | 0.615467 | 2,567 | 17,780 | 4.069731 | 0.087651 | 0.053412 | 0.027759 | 0.036087 | 0.845506 | 0.814588 | 0.794869 | 0.755528 | 0.746052 | 0.727482 | 0 | 0.010704 | 0.295894 | 17,780 | 483 | 80 | 36.811594 | 0.823788 | 0.311249 | 0 | 0.726115 | 0 | 0 | 0.03633 | 0 | 0 | 0 | 0 | 0 | 0.038217 | 1 | 0.06051 | false | 0.009554 | 0.012739 | 0 | 0.117834 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
be6a4f2f18e8ad5caef8e7fb571eff77e8724847 | 175 | py | Python | tests/compara/test_compara.py | paradoxcell/jcvi | 3b161796234670ce1c4894974eaeb590d35cf2a2 | [
"BSD-2-Clause"
] | 517 | 2015-01-04T13:09:55.000Z | 2022-03-30T13:42:20.000Z | tests/compara/test_compara.py | xuzhougeng/jcvi | 225a7f0375a2f8b31d3c44b8134d58b68befe3d6 | [
"BSD-2-Clause"
] | 342 | 2015-05-08T03:50:42.000Z | 2022-03-30T15:30:32.000Z | tests/compara/test_compara.py | xuzhougeng/jcvi | 225a7f0375a2f8b31d3c44b8134d58b68befe3d6 | [
"BSD-2-Clause"
] | 171 | 2015-01-16T13:01:51.000Z | 2022-03-19T11:28:11.000Z | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
from ..config import test_script, generate_tests
def pytest_generate_tests(metafunc):
generate_tests(metafunc, "compara")
| 19.444444 | 48 | 0.725714 | 23 | 175 | 5.304348 | 0.782609 | 0.319672 | 0.344262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006579 | 0.131429 | 175 | 8 | 49 | 21.875 | 0.796053 | 0.24 | 0 | 0 | 1 | 0 | 0.053435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
be78205e1b012c39bdc0bde7caf09fb908be2d13 | 176 | py | Python | moha/posthf/__init__.py | ZhaoYilin/moha | d701fd921839474380982db1478e66f0dc8cbd98 | [
"MIT"
] | 12 | 2019-12-07T18:37:34.000Z | 2022-03-30T14:23:38.000Z | moha/posthf/__init__.py | ZhaoYilin/moha | d701fd921839474380982db1478e66f0dc8cbd98 | [
"MIT"
] | null | null | null | moha/posthf/__init__.py | ZhaoYilin/moha | d701fd921839474380982db1478e66f0dc8cbd98 | [
"MIT"
] | 2 | 2019-12-08T05:48:47.000Z | 2021-10-31T21:40:21.000Z | from __future__ import division, print_function
from __future__ import absolute_import
from moha.posthf.ci import *
from moha.posthf.pt import *
from moha.posthf.cc import *
| 22 | 47 | 0.8125 | 26 | 176 | 5.115385 | 0.461538 | 0.225564 | 0.315789 | 0.451128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130682 | 176 | 7 | 48 | 25.142857 | 0.869281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
be95fb10cfddf0b4cdeb8893eb167f072820775a | 10,740 | py | Python | tests/test_tasks.py | julia-shenshina/LMS | 110364bb5c59055d584c314fd5c04158c637d0e6 | [
"MIT"
] | null | null | null | tests/test_tasks.py | julia-shenshina/LMS | 110364bb5c59055d584c314fd5c04158c637d0e6 | [
"MIT"
] | 4 | 2018-12-18T19:03:36.000Z | 2018-12-20T21:07:02.000Z | tests/test_tasks.py | julia-shenshina/LMS | 110364bb5c59055d584c314fd5c04158c637d0e6 | [
"MIT"
] | null | null | null | from django.urls import reverse
from django.utils import timezone
from rest_framework.test import APITestCase
from lms.models.models import Course, Faculty, Group, Professor, Student, Task
class TestSolution(APITestCase):
def test_get_tasks_professor(self):
professor = Professor.objects.create(
first_name="first", last_name="last", secret_key="123123"
)
courses = [
Course.objects.create(name="Course_1", description="Course_1 description"),
Course.objects.create(name="Course_2", description="Course_2 description")
]
courses[0].professor.set([professor])
today = timezone.now().date()
delta = timezone.timedelta(days=3)
tasks = [
Task.objects.create(
name="Task_1", text="Task_1 text", start_time=today, finish_time=today + delta, course=courses[0]
),
Task.objects.create(
name="Task_2", text="Task_2 text", start_time=today, finish_time=today + delta, course=courses[1]
)
]
for course, task in zip(courses, tasks):
task.refresh_from_db()
course.refresh_from_db()
professor.refresh_from_db()
response = self.client.get(
reverse('task-list'),
**{"HTTP_X_SECRET_KEY": professor.secret_key}
)
assert response.status_code == 200
assert response.json().get("count") == 1
assert response.json().get("results")[0]["id"] == tasks[0].id
def test_get_tasks_student(self):
faculty = Faculty.objects.create(name="Faculty_1")
group = Group.objects.create(name="Group_1", faculty=faculty, level=1)
student = Student.objects.create(
first_name="first", last_name="last", group=group, secret_key="123123", start_year=2017
)
courses = [
Course.objects.create(name="Course_1", description="Course_1 description"),
Course.objects.create(name="Course_2", description="Course_2 description")
]
courses[0].groups.set([group])
today = timezone.now().date()
delta = timezone.timedelta(days=3)
tasks = [
Task.objects.create(
name="Task_1", text="Task_1 text", start_time=today, finish_time=today + delta, course=courses[0]
),
Task.objects.create(
name="Task_3", text="Task_3 text", start_time=today - 2 * delta, finish_time=today - delta, course=courses[0]
),
Task.objects.create(
name="Task_2", text="Task_2 text", start_time=today + delta, finish_time=today + 2 * delta, course=courses[0]
),
Task.objects.create(
name="Task_1", text="Task_1 text", start_time=today, finish_time=today + delta, course=courses[1]
),
]
for course in courses:
course.refresh_from_db()
for task in tasks:
task.refresh_from_db()
group.refresh_from_db()
student.refresh_from_db()
response = self.client.get(
reverse('task-list'),
**{"HTTP_X_SECRET_KEY": student.secret_key}
)
assert response.status_code == 200
assert response.json().get("count") == 1
assert response.json().get("results")[0]["id"] == tasks[0].id
def test_create_tasks_professor_ok(self):
professor = Professor.objects.create(
first_name="first", last_name="last", secret_key="123123"
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.professor.set([professor])
course.refresh_from_db()
professor.refresh_from_db()
today = timezone.now().date()
delta = timezone.timedelta(days=3)
response = self.client.post(
reverse('task-list'),
data={
'name': 'Assignment from professor',
'text': 'Assignment from professor text',
'start_time': today,
'finish_time': today + delta,
'course': course.id
},
**{"HTTP_X_SECRET_KEY": professor.secret_key}
)
assert response.status_code == 201
def test_create_tasks_professor_failed(self):
professor = Professor.objects.create(
first_name="first", last_name="last", secret_key="123123"
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.refresh_from_db()
professor.refresh_from_db()
today = timezone.now().date()
delta = timezone.timedelta(days=3)
response = self.client.post(
reverse('task-list'),
data={
'name': 'Assignment from professor',
'text': 'Assignment from professor text',
'start_time': today,
'finish_time': today + delta,
'course': course.id
},
**{"HTTP_X_SECRET_KEY": professor.secret_key}
)
assert response.status_code == 403
def test_create_tasks_student(self):
faculty = Faculty.objects.create(name="Faculty_1")
group = Group.objects.create(name="Group_1", faculty=faculty, level=1)
student = Student.objects.create(
first_name="first", last_name="last", group=group, secret_key="123123", start_year=2017
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.refresh_from_db()
student.refresh_from_db()
today = timezone.now().date()
delta = timezone.timedelta(days=3)
response = self.client.post(
reverse('task-list'),
data={
'name': 'Assignment from professor',
'text': 'Assignment from professor text',
'start_time': today,
'finish_time': today + delta,
'course': course.id
},
**{"HTTP_X_SECRET_KEY": student.secret_key}
)
assert response.status_code == 403
def test_delete_task_professor(self):
professor = Professor.objects.create(
first_name="first", last_name="last", secret_key="123123"
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.professor.set([professor])
today = timezone.now().date()
delta = timezone.timedelta(days=3)
task = Task.objects.create(
name='Assignment', start_time=today, finish_time=today + delta, course=course
)
task.refresh_from_db()
course.refresh_from_db()
professor.refresh_from_db()
response = self.client.delete(
reverse('task-detail', args=[task.id]),
**{"HTTP_X_SECRET_KEY": professor.secret_key}
)
assert response.status_code == 204
def test_delete_tasks_student(self):
faculty = Faculty.objects.create(name="Faculty_1")
group = Group.objects.create(name="Group_1", faculty=faculty, level=1)
student = Student.objects.create(
first_name="first", last_name="last", group=group, secret_key="123123", start_year=2017
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.groups.set([group])
today = timezone.now().date()
delta = timezone.timedelta(days=3)
task = Task.objects.create(
name='Assignment', start_time=today, finish_time=today + delta, course=course
)
task.refresh_from_db()
course.refresh_from_db()
student.refresh_from_db()
response = self.client.delete(
reverse('task-detail', args=[task.id]),
**{"HTTP_X_SECRET_KEY": student.secret_key}
)
assert response.status_code == 403
def test_update_task_professor_ok(self):
professor = Professor.objects.create(
first_name="first", last_name="last", secret_key="123123"
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.professor.set([professor])
today = timezone.now().date()
delta = timezone.timedelta(days=3)
task = Task.objects.create(
name='Assignment', start_time=today, finish_time=today + delta, course=course
)
task.refresh_from_db()
course.refresh_from_db()
professor.refresh_from_db()
response = self.client.patch(
reverse('task-detail', args=[task.id]),
data={'name': 'New name'},
**{"HTTP_X_SECRET_KEY": professor.secret_key}
)
assert response.status_code == 200
def test_update_task_professor_failed(self):
professor = Professor.objects.create(
first_name="first", last_name="last", secret_key="123123"
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
today = timezone.now().date()
delta = timezone.timedelta(days=3)
task = Task.objects.create(
name='Assignment', start_time=today, finish_time=today + delta, course=course
)
task.refresh_from_db()
course.refresh_from_db()
professor.refresh_from_db()
response = self.client.patch(
reverse('task-detail', args=[task.id]),
data={'name': 'New name'},
**{"HTTP_X_SECRET_KEY": professor.secret_key}
)
assert response.status_code == 404
def test_update_tasks_student(self):
faculty = Faculty.objects.create(name="Faculty_1")
group = Group.objects.create(name="Group_1", faculty=faculty, level=1)
student = Student.objects.create(
first_name="first", last_name="last", group=group, secret_key="123123", start_year=2017
)
course = Course.objects.create(name='Course_1', description='Course_1 description')
course.groups.set([group])
today = timezone.now().date()
delta = timezone.timedelta(days=3)
task = Task.objects.create(
name='Assignment', start_time=today, finish_time=today + delta, course=course
)
task.refresh_from_db()
course.refresh_from_db()
student.refresh_from_db()
response = self.client.patch(
reverse('task-detail', args=[task.id]),
data={'name': 'New name'},
**{"HTTP_X_SECRET_KEY": student.secret_key}
)
assert response.status_code == 403
| 34.423077 | 130 | 0.58892 | 1,203 | 10,740 | 5.052369 | 0.081463 | 0.087693 | 0.086706 | 0.042777 | 0.939125 | 0.925469 | 0.925469 | 0.925469 | 0.924317 | 0.919546 | 0 | 0.023511 | 0.287151 | 10,740 | 311 | 131 | 34.533762 | 0.770376 | 0 | 0 | 0.747934 | 0 | 0 | 0.115456 | 0 | 0 | 0 | 0 | 0 | 0.057851 | 1 | 0.041322 | false | 0 | 0.016529 | 0 | 0.061983 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fe46a2a3b79efc05b40edf4b2750a85c918e33f8 | 55,016 | py | Python | ku/backprop/gan.py | tonandr/keras_unsupervised | fd2a2494bca2eb745027178e220b42b5e5882f94 | [
"BSD-3-Clause"
] | 4 | 2019-07-28T11:56:01.000Z | 2021-11-06T02:50:58.000Z | ku/backprop/gan.py | tonandr/keras_unsupervised | fd2a2494bca2eb745027178e220b42b5e5882f94 | [
"BSD-3-Clause"
] | 2 | 2021-06-30T01:00:07.000Z | 2021-07-21T08:04:40.000Z | ku/backprop/gan.py | tonandr/keras_unsupervised | fd2a2494bca2eb745027178e220b42b5e5882f94 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from abc import ABC, abstractmethod
import os
import warnings
import shutil
import copy
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.models import load_model, Model
from tensorflow.keras.utils import Sequence, GeneratorEnqueuer, OrderedEnqueuer
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Lambda
from tensorflow.keras.losses import BinaryCrossentropy, MeanSquaredError
from tensorflow.python.keras.utils.generic_utils import to_list, CustomObjectScope
from tensorflow.python.keras.utils.data_utils import iter_sequence_infinite
from tensorflow.python.keras import callbacks as cbks
from tensorflow.python.keras.engine import training_utils
from tensorflow.python.keras.utils.mode_keys import ModeKeys #?
from tensorflow.python.profiler import trace  # used by trace.Trace in fit_generator
from ..engine_ext import ModelExt
from ..loss_ext import WGANLoss, WGANGPLoss
from ..loss_ext import SoftPlusInverseLoss, SoftPlusLoss, RPenaltyLoss
# GAN mode.
STYLE_GAN_REGULAR = 0
STYLE_GAN_WGAN_GP = 1
STYLE_GAN_SOFTPLUS_INVERSE_R1_GP = 2
LSGAN = 3
PIX2PIX_GAN = 4
# Loss configuration type.
LOSS_CONF_TYPE_NON_SATURATION_REGULAR = 0
LOSS_CONF_TYPE_WGAN_GP = 1
LOSS_CONF_TYPE_NON_SATURATION_SOFTPLUS_R1_GP = 2
LOSS_CONF_TYPE_LS = 3
def get_loss_conf(hps, lc_type, *args, **kwargs):
"""Get the GAN loss configuration.
Parameters
----------
hps: Dict.
GAN model's hyper-parameters.
lc_type: Integer.
Loss configuration type.
Returns
-------
Loss configuration.
Dict.
"""
loss_conf = {}
if lc_type == LOSS_CONF_TYPE_NON_SATURATION_REGULAR:
loss_conf = {'disc_ext_losses': [BinaryCrossentropy(from_logits=True), BinaryCrossentropy(from_logits=True)]
, 'disc_ext_loss_weights': [1.0, 1.0]
, 'gen_disc_losses': [BinaryCrossentropy(from_logits=True)]
, 'gen_disc_loss_weights': [1.0]}
elif lc_type == LOSS_CONF_TYPE_WGAN_GP:
loss_conf = {'disc_ext_losses': [WGANLoss()
, WGANLoss()
, WGANGPLoss(model=kwargs['model']
, input_variable_orders=kwargs['input_variable_orders']
, wgan_lambda=hps['wgan_lambda']
, wgan_target=hps['wgan_target'])]
, 'disc_ext_loss_weights': [-1.0, 1.0, 1.0]
, 'gen_disc_losses': [WGANLoss()]
, 'gen_disc_loss_weights': [-1.0]}
elif lc_type == LOSS_CONF_TYPE_NON_SATURATION_SOFTPLUS_R1_GP:
loss_conf = {'disc_ext_losses': [SoftPlusInverseLoss()
, RPenaltyLoss(model=kwargs['model']
, input_variable_orders=kwargs['input_variable_orders']
, r_gamma=hps['r_gamma'])
, SoftPlusLoss()]
, 'disc_ext_loss_weights': [1.0, 1.0, 1.0]
, 'gen_disc_losses': [SoftPlusInverseLoss()]
, 'gen_disc_loss_weights': [1.0]}
elif lc_type == LOSS_CONF_TYPE_LS:
loss_conf = {'disc_ext_losses': [MeanSquaredError(), MeanSquaredError()]
, 'disc_ext_loss_weights': [1.0, 1.0]
, 'gen_disc_losses': [MeanSquaredError()]
, 'gen_disc_loss_weights': [1.0]}
else:
raise ValueError('type is not valid.')
return loss_conf
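In the regular non-saturating configuration, `BinaryCrossentropy(from_logits=True)` evaluated against all-ones labels reduces to softplus(-logits), which (assuming SoftPlusInverseLoss follows the usual StyleGAN convention of softplus(-x)) is the same quantity the softplus configuration computes for real samples. A numpy-only check of that identity:

```python
import numpy as np

# BCE from logits with target 1 is -log(sigmoid(x)),
# which equals softplus(-x) = log(1 + exp(-x)).
logits = np.array([-2.0, 0.0, 3.0])
bce_real = -np.log(1.0 / (1.0 + np.exp(-logits)))  # -log(sigmoid(x))
softplus_neg = np.log1p(np.exp(-logits))           # softplus(-x)
```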
class AbstractGAN(ABC):
"""Abstract generative adversarial network."""
# Constants.
GEN_DISC_PATH = 'gen_disc.h5'
DISC_EXT_PATH = 'disc_ext.h5'
def __init__(self, conf):
"""
Parameters
----------
conf: Dict.
Configuration.
"""
self.conf = conf #?
if self.conf['model_loading']:
if not hasattr(self, 'custom_objects'):
raise RuntimeError('custom_objects must be created before loading models.')
self.custom_objects['ModelExt'] = ModelExt
# gen_disc.
self.gen_disc = load_model(self.GEN_DISC_PATH
, custom_objects=self.custom_objects
, compile=False) #?
# gen, disc.
self.gen = self.gen_disc.get_layer('gen')
self.disc = self.gen_disc.get_layer('disc')
@property
def is_gan_compiled(self):
return self._is_gan_compiled
@abstractmethod
def _create_generator(self):
"""Create the generator."""
raise NotImplementedError('_create_generator is not implemented.')
@abstractmethod
def _create_discriminator(self):
"""Create the discriminator."""
raise NotImplementedError('_create_discriminator is not implemented.')
def compose_gan(self):
"""Compose the GAN model."""
raise NotImplementedError('compose_gan is not implemented.')
def compose_gan_with_mode(self, mode):
"""Compose gan with mode.
Parameters
----------
mode: Integer.
GAN model composing mode.
"""
disc_ext, gen_disc = compose_gan_with_mode(self.gen, self.disc, mode)
self.disc_ext = disc_ext
self.gen_disc = gen_disc
def compile(self
, disc_ext_opt
, disc_ext_losses
, disc_ext_loss_weights
, gen_disc_opt
, gen_disc_losses
, gen_disc_loss_weights
, disc_ext_metrics=None
, gen_disc_metrics=None):
"""compile."""
# Check exception.
assert hasattr(self, 'disc_ext') and hasattr(self, 'gen_disc')
self.gen.trainable = False
for layer in self.gen.layers: layer.trainable = False
self.disc.trainable = True
for layer in self.disc.layers: layer.trainable = True
self.disc_ext.compile(optimizer=disc_ext_opt
, loss=disc_ext_losses
, loss_weights=disc_ext_loss_weights
, metrics=disc_ext_metrics
, run_eagerly=True) # run_eagerly?
self.gen.trainable = True
for layer in self.gen.layers: layer.trainable = True
self.disc.trainable = False
for layer in self.disc.layers: layer.trainable = False
self.gen_disc.compile(optimizer=gen_disc_opt
, loss=gen_disc_losses
, loss_weights=gen_disc_loss_weights
, metrics=gen_disc_metrics
, run_eagerly=True)
self._is_gan_compiled = True
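compile() relies on freezing one sub-network before compiling each composite model. The freeze/unfreeze pattern can be illustrated with stub objects (plain Python, not Keras layers):

```python
# Stand-ins for layers; only the trainable flag matters for this sketch.
class StubLayer:
    def __init__(self):
        self.trainable = True

gen_layers = [StubLayer() for _ in range(2)]
disc_layers = [StubLayer() for _ in range(2)]

# disc_ext phase: generator frozen, discriminator trainable.
for layer in gen_layers:
    layer.trainable = False
for layer in disc_layers:
    layer.trainable = True
```

The gen_disc phase inverts both loops before its own compile call, so each composite only updates one sub-network's weights.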
@abstractmethod
def gen_disc_ext_data_fun(self, generator, gen_prog_depth=None, disc_prog_depth=None, *args, **kwargs):
"""Generate disc_ext data.
Parameters
----------
generator: Generator.
Data generator.
gen_prog_depth: Integer.
Partial generator model's layer depth (default: None).
disc_prog_depth: Integer.
Partial discriminator model's layer depth (default: None).
"""
raise NotImplementedError('gen_disc_ext_data_fun is not implemented.')
@abstractmethod
def gen_gen_disc_data_fun(self, generator, gen_prog_depth=None, disc_prog_depth=None, *args, **kwargs):
"""Generate gen_disc data.
Parameters
----------
generator: Generator.
Data generator.
gen_prog_depth: Integer.
Partial generator model's layer depth (default: None).
disc_prog_depth: Integer.
Partial discriminator model's layer depth (default: None).
"""
raise NotImplementedError('gen_gen_disc_data_fun is not implemented.')
def fit_generator(self
, generator
, gen_disc_ext_data_fun
, gen_gen_disc_data_fun
, verbose=1
, callbacks_disc_ext_raw=None
, callbacks_gen_disc_raw=None
, validation_data_gen=None # ?
, validation_steps=None
, validation_freq=1 # ?
, class_weight=None # ?
, max_queue_size=10
, workers=1
, use_multiprocessing=False # ?
, shuffle=True
, initial_epoch=0
, save_f=True): # ?
"""Train the GAN model with the generator.
Parameters
----------
generator: Generator.
Training data generator.
gen_disc_ext_data_fun: Function.
Data generating function for disc_ext.
gen_gen_disc_data_fun: Function.
Data generating function for gen_disc.
verbose: Integer.
Verbose mode (default=1).
callback_disc_ext_raw: list.
disc_ext callbacks (default=None).
callback_gen_disc_raw: list.
gen_disc callbacks (default=None).
validation_data_gen: Generator or Sequence.
Validation generator or sequence (default=None).
validation_steps: Integer.
Validation steps (default=None).
validation_freq: Integer.
Validation frequency (default=1).
class_weight: Numpy array. ?
Class weight (default=None).
max_queue_size: Integer.
Maximum size for the generator queue (default: 10).
workers: Integer.
Maximum number of processes to get samples (default: 1, 0: main thread).
use_multiprocessing: Boolean.
Multi-processing flag (default: False).
shuffle: Boolean.
Batch shuffling flag (default: True).
initial_epoch: Integer.
Initial epoch (default: 0).
save_f: Boolean.
Model saving flag (default: True).
Returns
-------
Training history.
Tuple.
"""
'''
_keras_api_gauge.get_cell('fit').set(True)
# Legacy graph support is contained in `training_v1.Model`.
version_utils.disallow_legacy_graph('Model', 'fit')
self._assert_compile_was_called()
self._check_call_args('fit')
_disallow_inside_tf_function('fit')
'''
# Check exception.
do_validation = bool(validation_data_gen)
if do_validation:
assert hasattr(validation_data_gen, 'next') or \
hasattr(validation_data_gen, '__next') or \
isinstance(validation_data_gen, Sequence)
if not isinstance(validation_data_gen, Sequence):
assert validation_steps # ?
assert isinstance(validation_freq, int)
if not isinstance(generator, Sequence) and use_multiprocessing and workers > 1:
warnings.warn(UserWarning('For multiprocessing, use the instance of Sequence.'))
# Initialize the results directory
if not os.path.isdir(os.path.join('results')):
os.mkdir(os.path.join('results'))
else:
shutil.rmtree(os.path.join('results'))
os.mkdir(os.path.join('results'))
enq = None
val_enq = None
try:
# Get the validation generator and output generator.
if do_validation:
if workers > 0:
if isinstance(validation_data_gen, Sequence):
val_enq = OrderedEnqueuer(validation_data_gen
, use_multiprocessing=use_multiprocessing) # shuffle?
validation_steps = validation_steps or len(validation_data_gen)
else:
val_enq = GeneratorEnqueuer(validation_data_gen
, use_multiprocessing=use_multiprocessing)
val_enq.start(workers=workers, max_queue_size=max_queue_size)
val_generator = val_enq.get()
else:
if isinstance(validation_data_gen, Sequence):
val_generator = iter_sequence_infinite(validation_data_gen)
validation_steps = validation_steps or len(validation_data_gen)
else:
val_generator = validation_data_gen
if workers > 0:
if isinstance(generator, Sequence):
enq = OrderedEnqueuer(generator
, use_multiprocessing=use_multiprocessing
, shuffle=shuffle)
else:
enq = GeneratorEnqueuer(generator
, use_multiprocessing=use_multiprocessing)
enq.start(workers=workers, max_queue_size=max_queue_size)
output_generator = enq.get()
else:
if isinstance(generator, Sequence):
output_generator = iter_sequence_infinite(generator)
else:
output_generator = generator
# Callbacks.
# disc_ext.
if not isinstance(callbacks_disc_ext_raw, cbks.CallbackList):
callbacks_disc_ext = cbks.CallbackList(callbacks_disc_ext_raw
, add_history=True
, add_progbar=verbose != 0
, model=self.disc_ext
, verbose=verbose
, epochs=self.hps['epochs']
, steps=self.hps['batch_step'] * self.hps['disc_k_step'])
else:
callbacks_disc_ext = callbacks_disc_ext_raw
# gen_disc.
if not isinstance(callbacks_disc_ext_raw, cbks.CallbackList):
callbacks_gen_disc = cbks.CallbackList(callbacks_gen_disc_raw
, add_history=True
, add_progbar=verbose != 0
, model=self.gen_disc
, verbose=verbose
, epochs=self.hps['epochs']
, steps=self.hps['batch_step'])
else:
callbacks_gen_disc = callbacks_gen_disc_raw
# Train.
self.disc_ext.stop_training = False
self.disc_ext._train_counter.assign(0)
self.gen_disc.stop_training = False
self.gen_disc._train_counter.assign(0)
disc_ext_training_logs = None
gen_disc_training_logs = None
callbacks_disc_ext.on_train_begin()
callbacks_gen_disc.on_train_begin()
initial_epoch = self.disc_ext._maybe_load_initial_epoch_from_ckpt(initial_epoch) # ?
pre_e_i = -1
for e_i in range(initial_epoch, self.hps['epochs']):
if callbacks_disc_ext.model.stop_training or callbacks_gen_disc.model.stop_training:
break
self.disc_ext.reset_metrics() # ?
self.gen_disc.reset_metrics()
epochs_log_disc_ext = {}
epochs_log_gen_disc = {}
callbacks_disc_ext.on_epoch_begin(e_i, epochs_log_disc_ext)
callbacks_gen_disc.on_epoch_begin(e_i, epochs_log_gen_disc)
for s_i in range(self.hps['batch_step']):
for k_i in range(self.hps['disc_k_step']):
step = self.hps['disc_k_step'] * s_i + k_i - 1 # ?
with trace.Trace('TraceContext'
, graph_type='train'
, epoch_num=e_i
, step_num=step
, batch_size=self.hps['batch_size']):
callbacks_disc_ext.on_train_batch_begin(step)
inputs, outputs = gen_disc_ext_data_fun(output_generator)
self.gen.trainable = False
for layer in self.gen.layers: layer.trainable = False
self.disc.trainable = True
for layer in self.disc.layers: layer.trainable = True
disc_ext_step_logs = self.disc_ext.train_on_batch(inputs
, outputs
, class_weight=class_weight
, reset_metrics=False
, return_dict=True) # ?
del inputs, outputs
end_step = step + 1
callbacks_disc_ext.on_train_batch_end(end_step, disc_ext_step_logs)
step = s_i - 1 # ?
with trace.Trace('TraceContext'
, graph_type='train'
, epoch_num=e_i
, step_num=step
, batch_size=self.hps['batch_size']):
inputs, outputs = gen_gen_disc_data_fun(output_generator)
self.gen.trainable = True
for layer in self.gen.layers: layer.trainable = True
self.disc.trainable = False
for layer in self.disc.layers: layer.trainable = False
gen_disc_step_logs = self.gen_disc.train_on_batch(inputs
, outputs
, class_weight=class_weight
, reset_metrics=False
, return_dict=True)
del inputs, outputs
end_step = step + 1
callbacks_gen_disc.on_train_batch_end(end_step, gen_disc_step_logs)
disc_ext_epoch_logs = copy.copy(disc_ext_step_logs) # ?
gen_disc_epoch_logs = copy.copy(gen_disc_step_logs) # ?
# Do validation.
if do_validation: # ?
if e_i % validation_freq == 0: # ?
# disc_ext.
val_outs_disc_ext = self._evaluate_disc_ext(self.disc_ext
, val_generator # ?
, gen_disc_ext_data_fun
, callbacks_raw=None # callbacks_disc_ext
, workers=1
, verbose=1)
# gen_disc.
val_outs_gen_disc = self._evaluate_gen_disc(self.gen_disc
, val_generator
, gen_gen_disc_data_fun
, callbacks_raw=None # callbacks_gen_disc
, workers=1
, verbose=1)
# Make epochs logs.
epochs_log_disc_ext.update(val_outs_disc_ext)
epochs_log_gen_disc.update(val_outs_gen_disc)
callbacks_disc_ext.on_epoch_end(e_i, epochs_log_disc_ext)
callbacks_gen_disc.on_epoch_end(e_i, epochs_log_gen_disc)
disc_ext_training_logs = epochs_log_disc_ext
gen_disc_training_logs = epochs_log_gen_disc
if save_f:
self.save_gan_model()
pre_e_i = e_i
callbacks_disc_ext.on_train_end(logs=disc_ext_training_logs)
callbacks_gen_disc.on_train_end(logs=gen_disc_training_logs)
finally:
try:
if enq:
enq.stop()
finally:
if val_enq:
val_enq.stop()
return self.disc_ext.history, self.gen_disc.history
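The nested loops above follow the usual GAN schedule: disc_k_step discriminator updates per generator update. The call ordering can be sketched in isolation, with string markers standing in for train_on_batch:

```python
# Reproduces the update schedule of fit_generator's inner loops.
def alternating_schedule(batch_step, disc_k_step):
    calls = []
    for _ in range(batch_step):
        calls.extend(['disc'] * disc_k_step)  # k discriminator updates
        calls.append('gen')                   # then one generator update
    return calls

schedule = alternating_schedule(batch_step=2, disc_k_step=3)
```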
def fit_generator_progressively(self
, generator
, gen_disc_ext_data_fun
, gen_gen_disc_data_fun
, verbose=1
, callbacks_disc_ext=None
, callbacks_gen_disc=None
, validation_data_gen=None # ?
, validation_steps=None
, validation_freq=1 # ?
, class_weight=None # ?
, max_queue_size=10
, workers=1
, use_multiprocessing=False # ?
, shuffle=True
, initial_epoch=0
, save_f=True): # ?
"""Train the GAN model with the generator progressively.
Parameters
----------
generator: Generator.
Training data generator.
gen_disc_ext_data_fun: Function.
Data generating function for disc_ext.
gen_gen_disc_data_fun: Function.
Data generating function for gen_disc.
verbose: Integer.
Verbose mode (default=1).
callbacks_disc_ext: list.
disc_ext callbacks (default=None).
callbacks_gen_disc: list.
gen_disc callbacks (default=None).
validation_data_gen: Generator or Sequence.
Validation generator or sequence (default=None).
validation_steps: Integer.
Validation steps (default=None).
validation_freq: Integer.
Validation frequency (default=1).
class_weight: Numpy array. ?
Class weight (default=None).
max_queue_size: Integer.
Maximum size for the generator queue (default: 10).
workers: Integer.
Maximum number of processes to get samples (default: 1, 0: main thread).
use_multiprocessing: Boolean.
Multi-processing flag (default: False).
shuffle: Boolean.
Batch shuffling flag (default: True).
initial_epoch: Integer.
Initial epoch (default: 0).
save_f: Boolean.
Model saving flag (default: True).
Returns
-------
Training history.
Tuple.
"""
'''
_keras_api_gauge.get_cell('fit').set(True)
# Legacy graph support is contained in `training_v1.Model`.
version_utils.disallow_legacy_graph('Model', 'fit')
self._assert_compile_was_called()
self._check_call_args('fit')
_disallow_inside_tf_function('fit')
'''
# Check exception.
do_validation = bool(validation_data_gen)
if do_validation:
assert hasattr(validation_data_gen, 'next') or \
hasattr(validation_data_gen, '__next__') or \
isinstance(validation_data_gen, Sequence)
if not isinstance(validation_data_gen, Sequence):
assert validation_steps # ?
assert isinstance(validation_freq, int)
if not isinstance(generator, Sequence) and use_multiprocessing and workers > 1:
warnings.warn(UserWarning('For multiprocessing, use the instance of Sequence.'))
# Initialize the results directory.
results_dir = os.path.join('results')
if os.path.isdir(results_dir):
shutil.rmtree(results_dir)
os.mkdir(results_dir)
enq = None
val_enq = None
try:
# Get the validation generator and output generator.
if do_validation:
if workers > 0:
if isinstance(validation_data_gen, Sequence):
val_enq = OrderedEnqueuer(validation_data_gen
, use_multiprocessing=use_multiprocessing) # shuffle?
validation_steps = validation_steps or len(validation_data_gen)
else:
val_enq = GeneratorEnqueuer(validation_data_gen
, use_multiprocessing=use_multiprocessing)
val_enq.start(workers=workers, max_queue_size=max_queue_size)
val_generator = val_enq.get()
else:
if isinstance(validation_data_gen, Sequence):
val_generator = iter_sequence_infinite(validation_data_gen)
validation_steps = validation_steps or len(validation_data_gen)
else:
val_generator = validation_data_gen
if workers > 0:
if isinstance(generator, Sequence):
enq = OrderedEnqueuer(generator
, use_multiprocessing=use_multiprocessing
, shuffle=shuffle)
else:
enq = GeneratorEnqueuer(generator
, use_multiprocessing=use_multiprocessing)
enq.start(workers=workers, max_queue_size=max_queue_size)
output_generator = enq.get()
else:
if isinstance(generator, Sequence):
output_generator = iter_sequence_infinite(generator)
else:
output_generator = generator
# Callbacks.
# disc_ext.
if not isinstance(callbacks_disc_ext, cbks.CallbackList):
callbacks_disc_ext = cbks.CallbackList(callbacks_disc_ext
, add_history=True
, add_progbar=verbose != 0
, model=self.disc_ext
, verbose=verbose
, epochs=self.hps['epochs']
, steps=self.hps['batch_step'] * self.hps['disc_k_step'])
# gen_disc.
if not isinstance(callbacks_gen_disc, cbks.CallbackList):
callbacks_gen_disc = cbks.CallbackList(callbacks_gen_disc
, add_history=True
, add_progbar=verbose != 0
, model=self.gen_disc
, verbose=verbose
, epochs=self.hps['epochs']
, steps=self.hps['batch_step'])
# Train.
self.disc_ext.model.stop_training = False
self.disc_ext._train_counter.assign(0)
self.gen_disc.model.stop_training = False
self.gen_disc._train_counter.assign(0)
disc_ext_training_logs = None
gen_disc_training_logs = None
callbacks_disc_ext.on_train_begin()
callbacks_gen_disc.on_train_begin()
initial_epoch = self.disc_ext._maybe_load_initial_epoch_from_ckpt(
initial_epoch) # ?
pre_e_i = -1
for e_i in range(initial_epoch, self.hps['epochs']):
if callbacks_disc_ext.model.stop_training or callbacks_gen_disc.model.stop_training:
break
self.disc_ext.reset_metrics() # ?
self.gen_disc.reset_metrics()
epochs_log_disc_ext = {}
epochs_log_gen_disc = {}
callbacks_disc_ext.on_epoch_begin(e_i, epochs_log_disc_ext)
callbacks_gen_disc.on_epoch_begin(e_i, epochs_log_gen_disc)
# Train disc_ext, gen_disc models progressively according to the schedule for epochs.
# Make partial disc_ext, gen_disc.
partial_gen = self.gen.create_prog_model(ModelExt.PROGRESSIVE_MODE_FORWARD
, self.nn_arch['gen_prog_depths'][e_i]
, self.nn_arch['gen_prog_fixed_layer_names'])
partial_disc = self.disc.create_prog_model(ModelExt.PROGRESSIVE_MODE_BACKWARD
, self.nn_arch['disc_prog_depths'][e_i]
, self.nn_arch['disc_prog_fixed_layer_names'])
partial_disc_ext, partial_gen_disc = compose_gan_with_mode(partial_gen
, partial_disc
, self.nn_arch['composing_mode']
, multi_gpu=self.conf['multi_gpu']
, num_gpus=self.conf['num_gpus'])
for s_i in range(self.hps['batch_step']):
for k_i in range(self.hps['disc_k_step']):
step = self.hps['disc_k_step'] * s_i + k_i - 1 # ?
with trace.Trace('TraceContext'
, graph_type='train'
, epoch_num=e_i
, step_num=step
, batch_size=self.hps['batch_size']):
callbacks_disc_ext.on_train_batch_begin(step)
inputs, outputs = gen_disc_ext_data_fun(output_generator)
self.gen.trainable = False
for layer in self.gen.layers: layer.trainable = False
self.disc.trainable = True
for layer in self.disc.layers: layer.trainable = True
disc_ext_step_logs = partial_disc_ext.train_on_batch(inputs
, outputs
, class_weight=class_weight
, reset_metrics=False
, return_dict=True) # ?
del inputs, outputs
end_step = step + 1
callbacks_disc_ext.on_train_batch_end(end_step, disc_ext_step_logs)
step = s_i - 1 # ?
with trace.Trace('TraceContext'
, graph_type='train'
, epoch_num=e_i
, step_num=step
, batch_size=self.hps['batch_size']):
inputs, outputs = gen_gen_disc_data_fun(output_generator)
self.gen.trainable = True
for layer in self.gen.layers: layer.trainable = True
self.disc.trainable = False
for layer in self.disc.layers: layer.trainable = False
gen_disc_step_logs = partial_gen_disc.train_on_batch(inputs
, outputs
, class_weight=class_weight
, reset_metrics=False
, return_dict=True)
del inputs, outputs
end_step = step + 1
callbacks_gen_disc.on_train_batch_end(end_step, gen_disc_step_logs)
disc_ext_epoch_logs = copy.copy(disc_ext_step_logs) # ?
gen_disc_epoch_logs = copy.copy(gen_disc_step_logs) # ?
# Do validation.
if do_validation: # ?
if e_i % validation_freq == 0: # ?
# disc_ext.
val_outs_disc_ext = self._evaluate_disc_ext(self.disc_ext
, val_generator # ?
, gen_disc_ext_data_fun
, callbacks_raw=None # callbacks_disc_ext
, workers=1
, verbose=1)
# gen_disc.
val_outs_gen_disc = self._evaluate_gen_disc(self.gen_disc
, val_generator
, gen_gen_disc_data_fun
, callbacks_raw=None # callbacks_gen_disc
, workers=1
, verbose=1)
# Make epochs logs.
epochs_log_disc_ext.update(val_outs_disc_ext)
epochs_log_gen_disc.update(val_outs_gen_disc)
callbacks_disc_ext.on_epoch_end(e_i, epochs_log_disc_ext)
callbacks_gen_disc.on_epoch_end(e_i, epochs_log_gen_disc)
disc_ext_training_logs = epochs_log_disc_ext
gen_disc_training_logs = epochs_log_gen_disc
if save_f:
self.save_gan_model()
pre_e_i = e_i
callbacks_disc_ext.on_train_end(logs=disc_ext_training_logs) # progress bar?
callbacks_gen_disc.on_train_end(logs=gen_disc_training_logs)
finally:
try:
if enq:
enq.stop()
finally:
if val_enq:
val_enq.stop()
return self.disc_ext.history, self.gen_disc.history
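The nested loop above derives a single global step counter for the discriminator callbacks from the batch index and the inner discriminator-update index. A minimal pure-Python sketch of that indexing, using hypothetical `disc_k_step` and `batch_step` values (not the actual hyperparameters):

```python
disc_k_step = 2  # discriminator updates per batch step (assumed value)
batch_step = 3   # batch steps per epoch (assumed value)

# Mirrors step = disc_k_step * s_i + k_i - 1 in the training loop above.
steps = [disc_k_step * s_i + k_i - 1
         for s_i in range(batch_step)
         for k_i in range(disc_k_step)]

print(steps)  # [-1, 0, 1, 2, 3, 4]
```

The `- 1` offset means the first `on_train_batch_begin` fires with step -1, while `end_step = step + 1` keeps `on_train_batch_end` aligned with completed batches.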
def _evaluate_disc_ext(self
, disc_ext
, generator
, gen_disc_ext_data_func
, verbose=1
, callbacks_raw=None
, max_queue_size=10
, workers=1
, use_multiprocessing=False):
"""Evaluate the extended discriminator.
Parameters
----------
disc_ext: ModelExt.
Discriminator extension.
generator: Generator.
Test data generator.
gen_disc_ext_data_func: Function.
Data generating function for disc_ext.
verbose: Integer.
Verbose mode (default=1).
callbacks_raw: list.
Callbacks (default=None).
max_queue_size: Integer.
Maximum size for the generator queue (default: 10).
workers: Integer.
Maximum number of processes to get samples (default: 1, 0: main thread).
use_multiprocessing: Boolean.
Multi-processing flag (default: False).
Returns
-------
Evaluating result.
Dictionary.
"""
'''
_keras_api_gauge.get_cell('evaluate').set(True)
version_utils.disallow_legacy_graph('Model', 'evaluate')
self._assert_compile_was_called()
self._check_call_args('evaluate')
_disallow_inside_tf_function('evaluate')
'''
# Check exception.
if not isinstance(generator, Sequence) and use_multiprocessing and workers > 1:
warnings.warn(UserWarning('For multiprocessing, use the instance of Sequence.'))
# Callbacks.
if not isinstance(callbacks_raw, cbks.CallbackList):
callbacks = cbks.CallbackList(callbacks_raw
, add_history=True
, add_progbar=verbose != 0
, model=disc_ext
, verbose=verbose
, epochs=1
, steps=self.hps['batch_step'])
else:
callbacks = callbacks_raw
# Evaluate.
logs = {}
disc_ext._test_counter.assign(0)
callbacks.on_test_begin()
disc_ext.reset_metrics()
for k_i in range(self.hps['batch_step']): # ?
step = k_i - 1
with trace.Trace('TraceContext', graph_type='test', step_num=step):
callbacks.on_test_batch_begin(step)
inputs, outputs = gen_disc_ext_data_func(generator)
tmp_logs = disc_ext.test_on_batch(inputs
, outputs
, reset_metrics=False
, return_dict=True)
del inputs, outputs
logs = tmp_logs # No error, now safe to assign to logs.
end_step = step + 1
callbacks.on_test_batch_end(end_step, logs)
logs = tf_utils.to_numpy_or_python_type(logs)
callbacks.on_test_end(logs=logs)
return logs
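Because `test_on_batch` is called with `reset_metrics=False`, the logs returned after each batch reflect metrics accumulated over every batch so far, so keeping only the last `tmp_logs` yields whole-run values. A pure-Python running-mean stand-in (illustrative only, not the Keras implementation):

```python
# Each iteration's dict plays the role of the logs returned after that batch
# when metrics are not reset in between: a running mean over all batches so far.
batch_losses = [0.9, 0.7, 0.5]  # hypothetical per-batch losses

running_logs = []
total = 0.0
for i, loss in enumerate(batch_losses, start=1):
    total += loss
    running_logs.append({'loss': total / i})

final_logs = running_logs[-1]  # mean over all batches, like the final `logs`
print(round(final_logs['loss'], 6))  # 0.7
```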
def _evaluate_gen_disc(self
, gen_disc
, generator
, gen_gen_disc_data_func
, verbose=1
, callbacks_raw=None
, max_queue_size=10
, workers=1
, use_multiprocessing=False):
"""Evaluate the generator via discriminator. #?
Parameters
----------
gen_disc: ModelExt.
Generator and discriminator composite model.
generator: Generator.
Test data generator.
gen_gen_disc_data_func: Function.
Data generating function for gen_disc.
verbose: Integer.
Verbose mode (default=1).
callbacks_raw: list.
Callbacks (default=None).
max_queue_size: Integer.
Maximum size for the generator queue (default: 10).
workers: Integer.
Maximum number of processes to get samples (default: 1, 0: main thread).
use_multiprocessing: Boolean.
Multi-processing flag (default: False).
Returns
-------
Evaluating result.
Dictionary.
"""
'''
_keras_api_gauge.get_cell('evaluate').set(True)
version_utils.disallow_legacy_graph('Model', 'evaluate')
self._assert_compile_was_called()
self._check_call_args('evaluate')
_disallow_inside_tf_function('evaluate')
'''
# Check exception.
if not isinstance(generator, Sequence) and use_multiprocessing and workers > 1:
warnings.warn(UserWarning('For multiprocessing, use the instance of Sequence.'))
# Callbacks.
if not isinstance(callbacks_raw, cbks.CallbackList):
callbacks = cbks.CallbackList(callbacks_raw
, add_history=True
, add_progbar=verbose != 0
, model=gen_disc
, verbose=verbose
, epochs=1
, steps=self.hps['batch_step'])
else:
callbacks = callbacks_raw
# Evaluate.
logs = {}
gen_disc._test_counter.assign(0)
callbacks.on_test_begin()
gen_disc.reset_metrics()
for s_i in range(self.hps['batch_step']):
step = s_i - 1
with trace.Trace('TraceContext', graph_type='test', step_num=step):
callbacks.on_test_batch_begin(step)
inputs, outputs = gen_gen_disc_data_func(generator)
tmp_logs = gen_disc.test_on_batch(inputs
, outputs
, reset_metrics=False
, return_dict=True)
del inputs, outputs
logs = tmp_logs # No error, now safe to assign to logs.
end_step = step + 1
callbacks.on_test_batch_end(end_step, logs)
logs = tf_utils.to_numpy_or_python_type(logs)
callbacks.on_test_end(logs=logs)
return logs
def save_gan_model(self):
"""Save the GAN model."""
assert hasattr(self, 'disc_ext') and hasattr(self, 'gen_disc')
with CustomObjectScope(self.custom_objects):
self.disc_ext.save(self.DISC_EXT_PATH, save_format='h5')
self.gen_disc.save(self.GEN_DISC_PATH, save_format='h5')
def generate(self, inputs, *args, **kwargs):
"""Generate.
Parameters
----------
inputs: Numpy array, list or tuple.
Inputs.
"""
inputs = inputs if isinstance(inputs, (list, tuple)) else [inputs]
return self.gen.predict(inputs)
def compose_gan_with_mode(gen, disc, mode, multi_gpu=False, num_gpus=1):
"""Compose the GAN model with mode.
Parameters
----------
gen: ModelExt.
Generator model.
disc: ModelExt.
Discriminator model.
mode: Integer.
GAN composing mode.
multi_gpu: Boolean.
Multiple GPU flag (default: False).
num_gpus: Integer.
Number of GPUs (default: 1).
Returns
-------
disc_ext and gen_disc models.
Tuple.
"""
assert isinstance(gen, (Model, ModelExt)) and isinstance(disc, (Model, ModelExt))
if mode == STYLE_GAN_REGULAR or mode == LSGAN:
# Compose gan.
# Compose disc_ext.
# disc.
x_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs]
x_outputs = [disc(x_inputs)] if len(disc.outputs) == 1 else disc(x_inputs) #?
# gen and disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = False
for layer in gen.layers: layer.trainable = False
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = True
for layer in disc.layers: layer.trainable = True
x2_outputs = [disc(z_outputs + [z_inputs[1]])] if len(disc.outputs) == 1 else disc(z_outputs)
disc_ext = ModelExt(inputs=x_inputs + z_inputs
, outputs=x_outputs + x2_outputs
, name='disc_ext')
# Compose gen_disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = True
for layer in gen.layers: layer.trainable = True
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = False
for layer in disc.layers: layer.trainable = False
z_p_outputs = [disc(z_outputs + [z_inputs[1]])] if len(disc.outputs) == 1 else disc(z_outputs)
gen_disc = ModelExt(inputs=z_inputs
, outputs=z_p_outputs
, name='gen_disc')
elif mode == STYLE_GAN_WGAN_GP:
# Compose gan.
# Compose disc_ext.
# disc.
x_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs]
x_outputs = [disc(x_inputs)] if len(disc.outputs) == 1 else disc(x_inputs) #?
# gen and disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = False
for layer in gen.layers: layer.trainable = False
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = True
for layer in disc.layers: layer.trainable = True
x2_outputs = [disc(z_outputs + [z_inputs[1]])] if len(disc.outputs) == 1 else disc(z_outputs)
x3_inputs = [[tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs][0]]
x3_outputs = [disc(x3_inputs + [x_inputs[1]])]
disc_ext = ModelExt(inputs=x_inputs + z_inputs + x3_inputs
, outputs=x_outputs + x2_outputs + x3_outputs
, name='disc_ext')
# Compose gen_disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = True
for layer in gen.layers: layer.trainable = True
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = False
for layer in disc.layers: layer.trainable = False
z_p_outputs = [disc(z_outputs + [z_inputs[1]])] if len(disc.outputs) == 1 else disc(z_outputs)
gen_disc = ModelExt(inputs=z_inputs
, outputs=z_p_outputs
, name='gen_disc')
elif mode == STYLE_GAN_SOFTPLUS_INVERSE_R1_GP:
# Compose gan.
# Compose disc_ext.
# disc.
x_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs]
x_outputs = [disc(x_inputs)] if len(disc.outputs) == 1 else disc(x_inputs) #?
# gen and disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = False
for layer in gen.layers: layer.trainable = False
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = True
for layer in disc.layers: layer.trainable = True
x2_outputs = [disc(z_outputs + [z_inputs[1]])] if len(disc.outputs) == 1 else disc(z_outputs)
disc_ext = ModelExt(inputs=x_inputs + z_inputs
, outputs=x_outputs + x_outputs + x2_outputs
, name='disc_ext')
# Compose gen_disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = True
for layer in gen.layers: layer.trainable = True
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = False
for layer in disc.layers: layer.trainable = False
z_p_outputs = [disc(z_outputs + [z_inputs[1]])] if len(disc.outputs) == 1 else disc(z_outputs)
gen_disc = ModelExt(inputs=z_inputs
, outputs=z_p_outputs
, name='gen_disc')
elif mode == PIX2PIX_GAN:
# Compose gan.
# Compose disc_ext.
# disc.
x_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs]
x_outputs = [disc(x_inputs)] if len(disc.outputs) == 1 else disc(x_inputs) #?
# gen and disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = False
for layer in gen.layers: layer.trainable = False
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = True
for layer in disc.layers: layer.trainable = True
# Get condition inputs.
cond_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs \
if 'cond' in t.name]
x2_outputs = [disc(cond_inputs + z_outputs)] \
if len(disc.outputs) == 1 else disc(cond_inputs + z_outputs)
disc_ext = ModelExt(inputs=x_inputs + z_inputs + cond_inputs
, outputs=x_outputs + x2_outputs
, name='disc_ext')
# Compose gen_disc.
z_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in gen.inputs]
gen.trainable = True
for layer in gen.layers: layer.trainable = True
z_outputs = [gen(z_inputs)] if len(gen.outputs) == 1 else gen(z_inputs)
disc.trainable = False
for layer in disc.layers: layer.trainable = False
# Get condition inputs.
cond_inputs = [tf.keras.Input(shape=K.int_shape(t)[1:], dtype=t.dtype) for t in disc.inputs \
if 'cond' in t.name]
z_p_outputs = [disc(cond_inputs + z_outputs)] \
if len(disc.outputs) == 1 else disc(cond_inputs + z_outputs)
gen_disc = ModelExt(inputs=z_inputs + cond_inputs
, outputs=z_p_outputs + z_outputs
, name='gen_disc')
else:
raise ValueError('mode is not valid.')
return disc_ext, gen_disc
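compose_gan_with_mode repeatedly freezes one sub-model while unfreezing the other before wiring them into a composite. A dependency-free sketch of that toggling pattern, using dummy stand-ins rather than real Keras models:

```python
# Dummy stand-ins for Keras models/layers, used only to illustrate the
# freeze/unfreeze pattern applied to gen and disc above.
class DummyLayer:
    def __init__(self):
        self.trainable = True

class DummyModel:
    def __init__(self, num_layers):
        self.trainable = True
        self.layers = [DummyLayer() for _ in range(num_layers)]

def set_trainable(model, flag):
    # Set the model flag and every layer flag together, as the code above does.
    model.trainable = flag
    for layer in model.layers:
        layer.trainable = flag

gen, disc = DummyModel(3), DummyModel(2)

# disc_ext composition: freeze gen, train disc.
set_trainable(gen, False)
set_trainable(disc, True)
print([l.trainable for l in gen.layers])   # [False, False, False]
print([l.trainable for l in disc.layers])  # [True, True]
```

For real Keras models the flags must be set before the composite model is built, since trainability is captured at model-construction time.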
# tests/test_observable_single.py, from the rlugojr/RxPy repository (licenses: ECL-2.0, Apache-2.0).
from rx import Observable
from rx.testing import TestScheduler, ReactiveTest, is_prime, MockDisposable
from rx.disposables import Disposable, SerialDisposable
on_next = ReactiveTest.on_next
on_completed = ReactiveTest.on_completed
on_error = ReactiveTest.on_error
subscribe = ReactiveTest.subscribe
subscribed = ReactiveTest.subscribed
disposed = ReactiveTest.disposed
created = ReactiveTest.created
# def test_Materialize_Never():
# var results, scheduler
# scheduler = TestScheduler()
# results = scheduler.start(create)
# return Rx.Observable.never().materialize()
# results.messages.assert_equal()
# def test_Materialize_Empty():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))
# results = scheduler.start(create)
# return xs.materialize()
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].value.value.kind == 'C' and results[0].time == 250)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# def test_Materialize_Return():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.materialize()
# }).messages
# equal(3, results.length)
# assert(results[0].value.kind == 'N' and results[0].value.value.kind == 'N' and results[0].value.value.value == 2 and results[0].time == 210)
# assert(results[1].value.kind == 'N' and results[1].value.value.kind == 'C' and results[1].time == 250)
# assert(results[2].value.kind == 'C' and results[1].time == 250)
# def test_Materialize_Throw():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))
# results = scheduler.start(create)
# return xs.materialize()
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].value.value.kind == 'E' and results[0].value.value.exception == ex)
# assert(results[1].value.kind == 'C')
# def test_Materialize_Dematerialize_Never():
# var results, scheduler
# scheduler = TestScheduler()
# results = scheduler.start(create)
# return Rx.Observable.never().materialize().dematerialize()
# results.messages.assert_equal()
# def test_Materialize_Dematerialize_Empty():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))
# results = scheduler.start(create)
# return xs.materialize().dematerialize()
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'C' and results[0].time == 250)
# def test_Materialize_Dematerialize_Return():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.materialize().dematerialize()
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].value.value == 2 and results[0].time == 210)
# assert(results[1].value.kind == 'C')
# def test_Materialize_Dematerialize_Throw():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))
# results = scheduler.start(create)
# return xs.materialize().dematerialize()
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'E' and results[0].value.exception == ex and results[0].time == 250)
# def test_StartWith():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(220, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.startWith(1)
# }).messages
# equal(3, results.length)
# assert(results[0].value.kind == 'N' and results[0].value.value == 1 and results[0].time == 200)
# assert(results[1].value.kind == 'N' and results[1].value.value == 2 and results[1].time == 220)
# assert(results[2].value.kind == 'C')
# var sequenceEqual
# sequenceEqual = function (arr1, arr2) {
# var i
# if (arr1.length !== arr2.length) {
# return false
# }
# for (i = 0 i < arr1.length i++) {
# if (arr1[i] !== arr2[i]) {
# return false
# }
# }
# return true
# }
# def test_Buffer_Count_PartialWindow():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.bufferWithCount(5)
# }).messages
# equal(2, results.length)
# assert(sequenceEqual(results[0].value.value, [2, 3, 4, 5]) and results[0].time == 250)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# def test_Buffer_Count_FullWindows():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.bufferWithCount(2)
# }).messages
# equal(3, results.length)
# assert(sequenceEqual(results[0].value.value, [2, 3]) and results[0].time == 220)
# assert(sequenceEqual(results[1].value.value, [4, 5]) and results[1].time == 240)
# assert(results[2].value.kind == 'C' and results[2].time == 250)
# def test_Buffer_Count_FullAndPartialWindows():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.bufferWithCount(3)
# }).messages
# equal(3, results.length)
# assert(sequenceEqual(results[0].value.value, [2, 3, 4]) and results[0].time == 230)
# assert(sequenceEqual(results[1].value.value, [5]) and results[1].time == 250)
# assert(results[2].value.kind == 'C' and results[2].time == 250)
# def test_Buffer_Count_Error():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_error(250, 'ex'))
# results = scheduler.start(create)
# return xs.bufferWithCount(5)
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'E' and results[0].time == 250)
# def test_Buffer_Count_Skip_Less():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.bufferWithCount(3, 1)
# }).messages
# equal(5, results.length)
# assert(sequenceEqual(results[0].value.value, [2, 3, 4]) and results[0].time == 230)
# assert(sequenceEqual(results[1].value.value, [3, 4, 5]) and results[1].time == 240)
# assert(sequenceEqual(results[2].value.value, [4, 5]) and results[2].time == 250)
# assert(sequenceEqual(results[3].value.value, [5]) and results[3].time == 250)
# assert(results[4].value.kind == 'C' and results[4].time == 250)
# def test_Buffer_Count_Skip_More():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.bufferWithCount(2, 3)
# }).messages
# equal(3, results.length)
# assert(sequenceEqual(results[0].value.value, [2, 3]) and results[0].time == 220)
# assert(sequenceEqual(results[1].value.value, [5]) and results[1].time == 250)
# assert(results[2].value.kind == 'C' and results[2].time == 250)
# def test_AsObservable_Hides():
# var someObservable
# someObservable = Rx.Observable.empty()
# assert(someObservable.asObservable() !== someObservable)
# def test_AsObservable_Never():
# var results, scheduler
# scheduler = TestScheduler()
# results = scheduler.start(create)
# return Rx.Observable.never().asObservable()
# results.messages.assert_equal()
# def test_AsObservable_Empty():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))
# results = scheduler.start(create)
# return xs.asObservable()
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'C' and results[0].time == 250)
# def test_AsObservable_Throw():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))
# results = scheduler.start(create)
# return xs.asObservable()
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'E' and results[0].value.exception == ex and results[0].time == 250)
# def test_AsObservable_Return():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(220, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.asObservable()
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].value.value == 2 and results[0].time == 220)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# def test_AsObservable_IsNotEager():
# var scheduler, subscribed, xs
# scheduler = TestScheduler()
# subscribed = false
# xs = Rx.Observable.create(function (obs) {
# var disp
# subscribed = true
# disp = scheduler.create_hot_observable(on_next(150, 1), on_next(220, 2), on_completed(250)).subscribe(obs)
# return function () {
# return disp.dispose()
# }
# xs.asObservable()
# assert(!subscribed)
# scheduler.start(create)
# return xs.asObservable()
# assert(subscribed)
def test_scan_seed_never():
scheduler = TestScheduler()
seed = 42
def create():
def func(acc, x):
return acc + x
return Observable.never().scan(seed=seed, accumulator=func)
results = scheduler.start(create)
results.messages.assert_equal()
def test_scan_seed_empty():
scheduler = TestScheduler()
seed = 42
xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))
def create():
return xs.scan(lambda acc, x: acc + x, seed=seed)
results = scheduler.start(create).messages
assert(len(results) == 1)
assert(results[0].value.kind == 'C' and results[0].time == 250)
def test_scan_seed_return():
scheduler = TestScheduler()
seed = 42
xs = scheduler.create_hot_observable(on_next(150, 1), on_next(220, 2), on_completed(250))
def create():
return xs.scan(lambda acc, x: acc + x, seed=seed)
results = scheduler.start(create).messages
assert(len(results) == 2)
assert(results[0].value.kind == 'N' and results[0].value.value == seed + 2 and results[0].time == 220)
assert(results[1].value.kind == 'C' and results[1].time == 250)
def test_scan_seed_throw():
ex = 'ex'
scheduler = TestScheduler()
seed = 42
xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))
def create():
return xs.scan(seed, lambda acc, x: acc + x)
results = scheduler.start(create).messages
assert(len(results) == 1)
assert(results[0].value.kind == 'E' and results[0].value.exception == ex and results[0].time == 250)
def test_scan_seed_somedata():
scheduler = TestScheduler()
seed = 1
xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
def create():
return xs.scan(lambda acc, x: acc + x, seed=seed)
results = scheduler.start(create).messages
assert(len(results) == 5)
assert(results[0].value.kind == 'N' and results[0].value.value == seed + 2 and results[0].time == 210)
assert(results[1].value.kind == 'N' and results[1].value.value == seed + 2 + 3 and results[1].time == 220)
assert(results[2].value.kind == 'N' and results[2].value.value == seed + 2 + 3 + 4 and results[2].time == 230)
assert(results[3].value.kind == 'N' and results[3].value.value == seed + 2 + 3 + 4 + 5 and results[3].time == 240)
assert(results[4].value.kind == 'C' and results[4].time == 250)
def test_scan_noseed_never():
scheduler = TestScheduler()
def create():
return Observable.never().scan(lambda acc, x: acc + x)
results = scheduler.start(create)
results.messages.assert_equal()
def test_scan_noseed_empty():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))

    def create():
        return xs.scan(lambda acc, x: acc + x)

    results = scheduler.start(create).messages
    assert(len(results) == 1)
    assert(results[0].value.kind == 'C' and results[0].time == 250)
def test_scan_noseed_return():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(150, 1), on_next(220, 2), on_completed(250))

    def create():
        def func(acc, x):
            if acc is None:
                acc = 0
            return acc + x
        return xs.scan(accumulator=func)

    results = scheduler.start(create).messages
    assert(len(results) == 2)
    assert(results[0].value.kind == 'N' and results[0].time == 220 and results[0].value.value == 2)
    assert(results[1].value.kind == 'C' and results[1].time == 250)
def test_scan_noseed_throw():
    ex = 'ex'
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))

    def create():
        def func(acc, x):
            if acc is None:
                acc = 0
            return acc + x
        return xs.scan(func)

    results = scheduler.start(create).messages
    assert(len(results) == 1)
    assert(results[0].value.kind == 'E' and results[0].time == 250 and results[0].value.exception == ex)
def test_scan_noseed_somedata():
    scheduler = TestScheduler()
    xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))

    def create():
        def func(acc, x):
            if acc is None:
                acc = 0
            return acc + x
        return xs.scan(func)

    results = scheduler.start(create).messages
    assert(len(results) == 5)
    assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
    assert(results[1].value.kind == 'N' and results[1].time == 220 and results[1].value.value == 2 + 3)
    assert(results[2].value.kind == 'N' and results[2].time == 230 and results[2].value.value == 2 + 3 + 4)
    assert(results[3].value.kind == 'N' and results[3].time == 240 and results[3].value.value == 2 + 3 + 4 + 5)
    assert(results[4].value.kind == 'C' and results[4].time == 250)
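# The seed/no-seed behaviour the scan tests above assert can be sketched with
# a plain-list helper. `scan_list` is a hypothetical illustration of the
# expected accumulation order, not part of the Rx API under test:

def scan_list(items, accumulator, seed=None):
    """List-based sketch of scan: emit every intermediate accumulation.

    Without a seed, the first item passes through unchanged and becomes the
    initial accumulation, matching the noseed tests above (first emission
    equals the first source value). With a seed, every item is accumulated.
    """
    results = []
    has_accumulation = seed is not None
    acc = seed
    for x in items:
        if has_accumulation:
            acc = accumulator(acc, x)
        else:
            acc = x
            has_accumulation = True
        results.append(acc)
    return results

# Seeded run mirrors test_scan_seed_somedata: seed + running sum.
# scan_list([2, 3, 4, 5], lambda acc, x: acc + x, seed=1) → [3, 6, 10, 15]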
# def test_DistinctUntilChanged_Never():
# var results, scheduler
# scheduler = TestScheduler()
# results = scheduler.start(create)
# return Rx.Observable.never().distinctUntilChanged()
# results.messages.assert_equal()
# def test_DistinctUntilChanged_Empty():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged()
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'C' and results[0].time == 250)
# def test_DistinctUntilChanged_Return():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(220, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged()
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 220 and results[0].value.value == 2)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# def test_DistinctUntilChanged_Throw():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))
# results = scheduler.start(create)
# return xs.distinctUntilChanged()
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'E' and results[0].time == 250 and results[0].value.exception == ex)
# def test_DistinctUntilChanged_AllChanges():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged()
# }).messages
# equal(5, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'N' and results[1].time == 220 and results[1].value.value == 3)
# assert(results[2].value.kind == 'N' and results[2].time == 230 and results[2].value.value == 4)
# assert(results[3].value.kind == 'N' and results[3].time == 240 and results[3].value.value == 5)
# assert(results[4].value.kind == 'C' and results[4].time == 250)
# def test_DistinctUntilChanged_AllSame():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 2), on_next(230, 2), on_next(240, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged()
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# def test_DistinctUntilChanged_SomeChanges():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(215, 3), on_next(220, 3), on_next(225, 2), on_next(230, 2), on_next(230, 1), on_next(240, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged()
# }).messages
# equal(6, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'N' and results[1].time == 215 and results[1].value.value == 3)
# assert(results[2].value.kind == 'N' and results[2].time == 225 and results[2].value.value == 2)
# assert(results[3].value.kind == 'N' and results[3].time == 230 and results[3].value.value == 1)
# assert(results[4].value.kind == 'N' and results[4].time == 240 and results[4].value.value == 2)
# assert(results[5].value.kind == 'C' and results[5].time == 250)
# def test_DistinctUntilChanged_Comparer_AllEqual():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged(void 0, function (x, y) {
# return true
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# def test_DistinctUntilChanged_Comparer_AllDifferent():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 2), on_next(230, 2), on_next(240, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged(void 0, function (x, y) {
# return false
# }).messages
# equal(5, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'N' and results[1].time == 220 and results[1].value.value == 2)
# assert(results[2].value.kind == 'N' and results[2].time == 230 and results[2].value.value == 2)
# assert(results[3].value.kind == 'N' and results[3].time == 240 and results[3].value.value == 2)
# assert(results[4].value.kind == 'C' and results[4].time == 250)
# def test_DistinctUntilChanged_KeySelector_Div2():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 4), on_next(230, 3), on_next(240, 5), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged(function (x) {
# return x % 2
# }).messages
# equal(3, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'N' and results[1].time == 230 and results[1].value.value == 3)
# assert(results[2].value.kind == 'C' and results[2].time == 250)
# def test_DistinctUntilChanged_KeySelectorThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged(function (x) {
# throw ex
# results.messages.assert_equal(on_error(210, ex))
# def test_DistinctUntilChanged_ComparerThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_completed(250))
# results = scheduler.start(create)
# return xs.distinctUntilChanged(void 0, function (x, y) {
# throw ex
# results.messages.assert_equal(on_next(210, 2), on_error(220, ex))
# def test_Finally_OnlyCalledOnce_Empty():
# var d, invokeCount, someObservable
# invokeCount = 0
# someObservable = Rx.Observable.empty().finallyAction(function () {
# return invokeCount++
# d = someObservable.subscribe()
# d.dispose()
# d.dispose()
# equal(1, invokeCount)
# def test_Finally_Empty():
# var invoked, results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_completed(250))
# invoked = false
# results = scheduler.start(create)
# return xs.finallyAction(function () {
# return invoked = true
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'C' and results[0].time == 250)
# assert(invoked)
# def test_Finally_Return():
# var invoked, results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# invoked = false
# results = scheduler.start(create)
# return xs.finallyAction(function () {
# return invoked = true
# }).messages
# equal(2, results.length)
# assert(results[0].value.kind == 'N' and results[0].time == 210 and results[0].value.value == 2)
# assert(results[1].value.kind == 'C' and results[1].time == 250)
# assert(invoked)
# def test_Finally_Throw():
# var ex, invoked, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(250, ex))
# invoked = false
# results = scheduler.start(create)
# return xs.finallyAction(function () {
# return invoked = true
# }).messages
# equal(1, results.length)
# assert(results[0].value.kind == 'E' and results[0].time == 250 and results[0].value.exception == ex)
# assert(invoked)
# def test_Do_ShouldSeeAllValues():
# var i, scheduler, sum, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# i = 0
# sum = 2 + 3 + 4 + 5
# scheduler.start(create)
# return xs.doAction(function (x) {
# i++
# return sum -= x
# equal(4, i)
# equal(0, sum)
# def test_Do_PlainAction():
# var i, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# i = 0
# scheduler.start(create)
# return xs.doAction(function (x) {
# return i++
# equal(4, i)
# def test_Do_NextCompleted():
# var completed, i, scheduler, sum, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# i = 0
# sum = 2 + 3 + 4 + 5
# completed = false
# scheduler.start(create)
# return xs.doAction(function (x) {
# i++
# sum -= x
# }, undefined, function () {
# completed = true
# equal(4, i)
# equal(0, sum)
# assert(completed)
# def test_Do_NextCompleted_Never():
# var completed, i, scheduler
# scheduler = TestScheduler()
# i = 0
# completed = false
# scheduler.start(create)
# return Rx.Observable.never().doAction(function (x) {
# i++
# }, undefined, function () {
# completed = true
# equal(0, i)
# assert(!completed)
# def test_Do_NextError():
# var ex, i, sawError, scheduler, sum, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_error(250, ex))
# i = 0
# sum = 2 + 3 + 4 + 5
# sawError = false
# scheduler.start(create)
# return xs.doAction(function (x) {
# i++
# sum -= x
# }, function (e) {
# sawError = e == ex
# equal(4, i)
# equal(0, sum)
# assert(sawError)
# def test_Do_NextErrorNot():
# var i, sawError, scheduler, sum, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# i = 0
# sum = 2 + 3 + 4 + 5
# sawError = false
# scheduler.start(create)
# return xs.doAction(function (x) {
# i++
# sum -= x
# }, function (e) {
# sawError = true
# equal(4, i)
# equal(0, sum)
# assert(!sawError)
# def test_Do_NextErrorCompleted():
# var hasCompleted, i, sawError, scheduler, sum, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# i = 0
# sum = 2 + 3 + 4 + 5
# sawError = false
# hasCompleted = false
# scheduler.start(create)
# return xs.doAction(function (x) {
# i++
# sum -= x
# }, function (e) {
# sawError = true
# }, function () {
# hasCompleted = true
# equal(4, i)
# equal(0, sum)
# assert(!sawError)
# assert(hasCompleted)
# def test_Do_NextErrorCompletedError():
# var ex, hasCompleted, i, sawError, scheduler, sum, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_error(250, ex))
# i = 0
# sum = 2 + 3 + 4 + 5
# sawError = false
# hasCompleted = false
# scheduler.start(create)
# return xs.doAction(function (x) {
# i++
# sum -= x
# }, function (e) {
# sawError = ex == e
# }, function () {
# hasCompleted = true
# equal(4, i)
# equal(0, sum)
# assert(sawError)
# assert(!hasCompleted)
# def test_Do_NextErrorCompletedNever():
# var hasCompleted, i, sawError, scheduler
# scheduler = TestScheduler()
# i = 0
# sawError = false
# hasCompleted = false
# scheduler.start(create)
# return Rx.Observable.never().doAction(function (x) {
# i++
# }, function (e) {
# sawError = true
# }, function () {
# hasCompleted = true
# equal(0, i)
# assert(!sawError)
# assert(!hasCompleted)
# def test_Do_Observer_SomeDataWithError():
# var ex, hasCompleted, i, sawError, scheduler, sum, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_error(250, ex))
# i = 0
# sum = 2 + 3 + 4 + 5
# sawError = false
# hasCompleted = false
# scheduler.start(create)
# return xs.doAction(Rx.Observer.create(function (x) {
# i++
# sum -= x
# }, function (e) {
# sawError = e == ex
# }, function () {
# hasCompleted = true
# }))
# equal(4, i)
# equal(0, sum)
# assert(sawError)
# assert(!hasCompleted)
# def test_Do_Observer_SomeDataWithoutError():
# var hasCompleted, i, sawError, scheduler, sum, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
# i = 0
# sum = 2 + 3 + 4 + 5
# sawError = false
# hasCompleted = false
# scheduler.start(create)
# return xs.doAction(Rx.Observer.create(function (x) {
# i++
# sum -= x
# }, function (e) {
# sawError = true
# }, function () {
# hasCompleted = true
# }))
# equal(4, i)
# equal(0, sum)
# assert(!sawError)
# assert(hasCompleted)
# def test_Do1422_Next_NextThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(function () {
# throw ex
# results.messages.assert_equal(on_error(210, ex))
# def test_Do1422_NextCompleted_NextThrows():
# var ex, results, scheduler, xs, _undefined
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(function () {
# throw ex
# }, _undefined, function () {
# results.messages.assert_equal(on_error(210, ex))
# def test_Do1422_NextCompleted_CompletedThrows():
# var ex, results, scheduler, xs, _undefined
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(function () { }, _undefined, function () {
# throw ex
# results.messages.assert_equal(on_next(210, 2), on_error(250, ex))
# def test_Do1422_NextError_NextThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(function () {
# throw ex
# }, function () {
# results.messages.assert_equal(on_error(210, ex))
# def test_Do1422_NextError_ErrorThrows():
# var ex1, ex2, results, scheduler, xs
# ex1 = 'ex1'
# ex2 = 'ex2'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(210, ex1))
# results = scheduler.start(create)
# return xs.doAction(function () { }, function () {
# throw ex2
# results.messages.assert_equal(on_error(210, ex2))
# def test_Do1422_NextErrorCompleted_NextThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(function () {
# throw ex
# }, function () { }, function () {
# results.messages.assert_equal(on_error(210, ex))
# def test_Do1422_NextErrorCompleted_ErrorThrows():
# var ex1, ex2, results, scheduler, xs
# ex1 = 'ex1'
# ex2 = 'ex2'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(210, ex1))
# results = scheduler.start(create)
# return xs.doAction(function () { }, function () {
# throw ex2
# }, function () {
# results.messages.assert_equal(on_error(210, ex2))
# def test_Do1422_NextErrorCompleted_CompletedThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(function () { }, function () { }, function () {
# throw ex
# results.messages.assert_equal(on_next(210, 2), on_error(250, ex))
# def test_Do1422_Observer_NextThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(Rx.Observer.create(function () {
# throw ex
# }, function () { }, function () { }))
# results.messages.assert_equal(on_error(210, ex))
# def test_Do1422_Observer_ErrorThrows():
# var ex1, ex2, results, scheduler, xs
# ex1 = 'ex1'
# ex2 = 'ex2'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_error(210, ex1))
# results = scheduler.start(create)
# return xs.doAction(Rx.Observer.create(function () { }, function () {
# throw ex2
# }, function () { }))
# results.messages.assert_equal(on_error(210, ex2))
# def test_Do1422_Observer_CompletedThrows():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_completed(250))
# results = scheduler.start(create)
# return xs.doAction(Rx.Observer.create(function () { }, function () { }, function () {
# throw ex
# }))
# results.messages.assert_equal(on_next(210, 2), on_error(250, ex))
# def test_TakeLast_Zero_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# results = scheduler.start(create)
# return xs.takeLast(0)
# results.messages.assert_equal(on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLast_Zero_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# results = scheduler.start(create)
# return xs.takeLast(0)
# results.messages.assert_equal(on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLast_Zero_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.takeLast(0)
# results.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_TakeLast_One_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# results = scheduler.start(create)
# return xs.takeLast(1)
# results.messages.assert_equal(on_next(650, 9), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLast_One_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# results = scheduler.start(create)
# return xs.takeLast(1)
# results.messages.assert_equal(on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLast_One_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.takeLast(1)
# results.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_TakeLast_Three_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# results = scheduler.start(create)
# return xs.takeLast(3)
# results.messages.assert_equal(on_next(650, 7), on_next(650, 8), on_next(650, 9), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLast_Three_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# results = scheduler.start(create)
# return xs.takeLast(3)
# results.messages.assert_equal(on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLast_Three_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.takeLast(3)
# results.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_SkipLast_Zero_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# results = scheduler.start(create)
# return xs.skipLast(0)
# results.messages.assert_equal(on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_SkipLast_Zero_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# results = scheduler.start(create)
# return xs.skipLast(0)
# results.messages.assert_equal(on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_SkipLast_Zero_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.skipLast(0)
# results.messages.assert_equal(on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_SkipLast_One_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# results = scheduler.start(create)
# return xs.skipLast(1)
# results.messages.assert_equal(on_next(250, 2), on_next(270, 3), on_next(310, 4), on_next(360, 5), on_next(380, 6), on_next(410, 7), on_next(590, 8), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_SkipLast_One_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# results = scheduler.start(create)
# return xs.skipLast(1)
# results.messages.assert_equal(on_next(250, 2), on_next(270, 3), on_next(310, 4), on_next(360, 5), on_next(380, 6), on_next(410, 7), on_next(590, 8), on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_SkipLast_One_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.skipLast(1)
# results.messages.assert_equal(on_next(250, 2), on_next(270, 3), on_next(310, 4), on_next(360, 5), on_next(380, 6), on_next(410, 7), on_next(590, 8))
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_SkipLast_Three_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# results = scheduler.start(create)
# return xs.skipLast(3)
# results.messages.assert_equal(on_next(310, 2), on_next(360, 3), on_next(380, 4), on_next(410, 5), on_next(590, 6), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_SkipLast_Three_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# results = scheduler.start(create)
# return xs.skipLast(3)
# results.messages.assert_equal(on_next(310, 2), on_next(360, 3), on_next(380, 4), on_next(410, 5), on_next(590, 6), on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_SkipLast_Three_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.skipLast(3)
# results.messages.assert_equal(on_next(310, 2), on_next(360, 3), on_next(380, 4), on_next(410, 5), on_next(590, 6))
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_IgnoreValues_Basic():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# results = scheduler.start(create)
# return xs.ignoreElements()
# results.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_IgnoreValues_Completed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(610))
# results = scheduler.start(create)
# return xs.ignoreElements()
# results.messages.assert_equal(on_completed(610))
# xs.subscriptions.assert_equal(subscribe(200, 610))
# def test_IgnoreValues_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(610, ex))
# results = scheduler.start(create)
# return xs.ignoreElements()
# results.messages.assert_equal(on_error(610, ex))
# xs.subscriptions.assert_equal(subscribe(200, 610))
# def test_WindowWithCount_Basic():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(100, 1), on_next(210, 2), on_next(240, 3), on_next(280, 4), on_next(320, 5), on_next(350, 6), on_next(380, 7), on_next(420, 8), on_next(470, 9), on_completed(600))
# results = scheduler.start(create)
# return xs.windowWithCount(3, 2).select(function (w, i) {
# return w.select(function (x) {
# return i.toString() + ' ' + x.toString()
# }).mergeObservable()
# results.messages.assert_equal(on_next(210, "0 2"), on_next(240, "0 3"), on_next(280, "0 4"), on_next(280, "1 4"), on_next(320, "1 5"), on_next(350, "1 6"), on_next(350, "2 6"), on_next(380, "2 7"), on_next(420, "2 8"), on_next(420, "3 8"), on_next(470, "3 9"), on_completed(600))
# xs.subscriptions.assert_equal(subscribe(200, 600))
# def test_WindowWithCount_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(100, 1), on_next(210, 2), on_next(240, 3), on_next(280, 4), on_next(320, 5), on_next(350, 6), on_next(380, 7), on_next(420, 8), on_next(470, 9), on_completed(600))
# results = scheduler.startWithDispose(function () {
# return xs.windowWithCount(3, 2).select(function (w, i) {
# return w.select(function (x) {
# return i.toString() + ' ' + x.toString()
# }).mergeObservable()
# }, 370)
# results.messages.assert_equal(on_next(210, "0 2"), on_next(240, "0 3"), on_next(280, "0 4"), on_next(280, "1 4"), on_next(320, "1 5"), on_next(350, "1 6"), on_next(350, "2 6"))
# xs.subscriptions.assert_equal(subscribe(200, 370))
# def test_WindowWithCount_Error():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(100, 1), on_next(210, 2), on_next(240, 3), on_next(280, 4), on_next(320, 5), on_next(350, 6), on_next(380, 7), on_next(420, 8), on_next(470, 9), on_error(600, ex))
# results = scheduler.start(create)
# return xs.windowWithCount(3, 2).select(function (w, i) {
# return w.select(function (x) {
# return i.toString() + ' ' + x.toString()
# }).mergeObservable()
# results.messages.assert_equal(on_next(210, "0 2"), on_next(240, "0 3"), on_next(280, "0 4"), on_next(280, "1 4"), on_next(320, "1 5"), on_next(350, "1 6"), on_next(350, "2 6"), on_next(380, "2 7"), on_next(420, "2 8"), on_next(420, "3 8"), on_next(470, "3 9"), on_error(600, ex))
# xs.subscriptions.assert_equal(subscribe(200, 600))
# def test_BufferWithCount_Basic():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(100, 1), on_next(210, 2), on_next(240, 3), on_next(280, 4), on_next(320, 5), on_next(350, 6), on_next(380, 7), on_next(420, 8), on_next(470, 9), on_completed(600))
# results = scheduler.start(create)
# return xs.bufferWithCount(3, 2).select(function (x) {
# return x.toString()
# results.messages.assert_equal(on_next(280, "2,3,4"), on_next(350, "4,5,6"), on_next(420, "6,7,8"), on_next(600, "8,9"), on_completed(600))
# xs.subscriptions.assert_equal(subscribe(200, 600))
# def test_BufferWithCount_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(100, 1), on_next(210, 2), on_next(240, 3), on_next(280, 4), on_next(320, 5), on_next(350, 6), on_next(380, 7), on_next(420, 8), on_next(470, 9), on_completed(600))
# results = scheduler.startWithDispose(function () {
# return xs.bufferWithCount(3, 2).select(function (x) {
# return x.toString()
# }, 370)
# results.messages.assert_equal(on_next(280, "2,3,4"), on_next(350, "4,5,6"))
# xs.subscriptions.assert_equal(subscribe(200, 370))
# def test_DefaultIfEmpty_NonEmpty1():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 42), on_next(360, 43), on_completed(420))
# results = scheduler.start(create)
# return xs.defaultIfEmpty()
# results.messages.assert_equal(on_next(280, 42), on_next(360, 43), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_DefaultIfEmpty_NonEmpty2():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 42), on_next(360, 43), on_completed(420))
# results = scheduler.start(create)
# return xs.defaultIfEmpty(-1)
# results.messages.assert_equal(on_next(280, 42), on_next(360, 43), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_DefaultIfEmpty_Empty1():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_completed(420))
# results = scheduler.start(create)
# return xs.defaultIfEmpty(null)
# results.messages.assert_equal(on_next(420, null), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_DefaultIfEmpty_Empty2():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_completed(420))
# results = scheduler.start(create)
# return xs.defaultIfEmpty(-1)
# results.messages.assert_equal(on_next(420, -1), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_Distinct_DefaultComparer_AllDistinct():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 4), on_next(300, 2), on_next(350, 1), on_next(380, 3), on_next(400, 5), on_completed(420))
# results = scheduler.start(create)
# return xs.distinct()
# results.messages.assert_equal(on_next(280, 4), on_next(300, 2), on_next(350, 1), on_next(380, 3), on_next(400, 5), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_Distinct_DefaultComparer_SomeDuplicates():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 4), on_next(300, 2), on_next(350, 2), on_next(380, 3), on_next(400, 4), on_completed(420))
# results = scheduler.start(create)
# return xs.distinct()
# results.messages.assert_equal(on_next(280, 4), on_next(300, 2), on_next(380, 3), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_Distinct_KeySelector_AllDistinct():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 8), on_next(300, 4), on_next(350, 2), on_next(380, 6), on_next(400, 10), on_completed(420))
# results = scheduler.start(create)
# return xs.distinct(function (x) {
# return x / 2
# results.messages.assert_equal(on_next(280, 8), on_next(300, 4), on_next(350, 2), on_next(380, 6), on_next(400, 10), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_Distinct_KeySelector_SomeDuplicates():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 4), on_next(300, 2), on_next(350, 3), on_next(380, 7), on_next(400, 5), on_completed(420))
# results = scheduler.start(create)
# return xs.distinct(function (x) {
# return Math.floor(x / 2)
# results.messages.assert_equal(on_next(280, 4), on_next(300, 2), on_next(380, 7), on_completed(420))
# xs.subscriptions.assert_equal(subscribe(200, 420))
# def test_Distinct_KeySelector_Throws():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(280, 3), on_next(300, 2), on_next(350, 1), on_next(380, 0), on_next(400, 4), on_completed(420))
# results = scheduler.start(create)
# return xs.distinct(function (x) {
# if (x == 0) {
# throw ex
# } else {
# return Math.floor(x / 2)
# }
# results.messages.assert_equal(on_next(280, 3), on_next(350, 1), on_error(380, ex))
# xs.subscriptions.assert_equal(subscribe(200, 380))
# // TakeLastBuffer
# function arrayEqual(arr1, arr2) {
# if (arr1.length !== arr2.length) return false
# for (var i = 0, len = arr1.length; i < len; i++) {
# if (arr1[i] !== arr2[i]) return false
# }
# return true
# }
# def test_TakeLastBuffer_Zero_Completed():
# var res, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# res = scheduler.start(create)
# return xs.takeLastBuffer(0)
# res.messages.assert_equal(on_next(650, function (lst) {
# return lst.length == 0
# }), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLastBuffer_Zero_Error():
# var ex, res, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# res = scheduler.start(create)
# return xs.takeLastBuffer(0)
# res.messages.assert_equal(on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLastBuffer_Zero_Disposed():
# var res, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# res = scheduler.start(create)
# return xs.takeLastBuffer(0)
# res.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_TakeLastBuffer_One_Completed():
# var res, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# res = scheduler.start(create)
# return xs.takeLastBuffer(1)
# res.messages.assert_equal(on_next(650, function (lst) {
# return arrayEqual(lst, [9])
# }), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLastBuffer_One_Error():
# var ex, res, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# res = scheduler.start(create)
# return xs.takeLastBuffer(1)
# res.messages.assert_equal(on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLastBuffer_One_Disposed():
# var res, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# res = scheduler.start(create)
# return xs.takeLastBuffer(1)
# res.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
# def test_TakeLastBuffer_Three_Completed():
# var res, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_completed(650))
# res = scheduler.start(create)
# return xs.takeLastBuffer(3)
# res.messages.assert_equal(on_next(650, function (lst) {
# return arrayEqual(lst, [7, 8, 9])
# }), on_completed(650))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLastBuffer_Three_Error():
# var ex, res, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9), on_error(650, ex))
# res = scheduler.start(create)
# return xs.takeLastBuffer(3)
# res.messages.assert_equal(on_error(650, ex))
# xs.subscriptions.assert_equal(subscribe(200, 650))
# def test_TakeLastBuffer_Three_Disposed():
# var res, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(180, 1), on_next(210, 2), on_next(250, 3), on_next(270, 4), on_next(310, 5), on_next(360, 6), on_next(380, 7), on_next(410, 8), on_next(590, 9))
# res = scheduler.start(create)
# return xs.takeLastBuffer(3)
# res.messages.assert_equal()
# xs.subscriptions.assert_equal(subscribe(200, 1000))
"""Integrator steps for different schemes.
Implement as many stages as needed.
"""
###############################################################################
# `IntegratorStep` class
###############################################################################
class IntegratorStep(object):
"""Subclass this and implement the methods ``initialize``, ``stage1`` etc.
Use the same conventions as the equations.
"""
def __repr__(self):
return '%s()'%(self.__class__.__name__)
###############################################################################
# `EulerStep` class
###############################################################################
class EulerStep(IntegratorStep):
"""Fast but inaccurate integrator. Use this for testing"""
def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y,
d_z, d_rho, d_arho, dt):
d_u[d_idx] += dt*d_au[d_idx]
d_v[d_idx] += dt*d_av[d_idx]
d_w[d_idx] += dt*d_aw[d_idx]
d_x[d_idx] += dt*d_u[d_idx]
d_y[d_idx] += dt*d_v[d_idx]
d_z[d_idx] += dt*d_w[d_idx]
d_rho[d_idx] += dt*d_arho[d_idx]
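The update above is plain forward Euler. As a standalone sketch outside the SPH machinery (toy ODE, names invented for illustration), the same one-sided update and its first-order error behaviour look like this:

```python
import math

# Forward Euler on dy/dt = -y, mirroring d_u[d_idx] += dt*d_au[d_idx].
def euler_integrate(y0, dt, n_steps):
    y = y0
    for _ in range(n_steps):
        y += dt * (-y)
    return y

# Halving dt roughly halves the error: the scheme is first-order accurate,
# which is why EulerStep is recommended only for testing.
err_coarse = abs(euler_integrate(1.0, 0.1, 10) - math.exp(-1.0))
err_fine = abs(euler_integrate(1.0, 0.05, 20) - math.exp(-1.0))
```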
###############################################################################
# `WCSPHStep` class
###############################################################################
class WCSPHStep(IntegratorStep):
"""Standard Predictor Corrector integrator for the WCSPH formulation
Use this integrator for WCSPH formulations. In the predictor step,
the particles are advanced to `t + dt/2`. The particles are then
advanced with the new force computed at this position.
This integrator can be used in PEC or EPEC mode.
The same integrator can be used for other problems. Like for
example solid mechanics (see SolidMechStep)
"""
def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho):
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_rho0[d_idx] = d_rho[d_idx]
def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av,
d_aw, d_ax, d_ay, d_az, d_arho, dt):
dtb2 = 0.5*dt
d_u[d_idx] = d_u0[d_idx] + dtb2*d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dtb2*d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dtb2*d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dtb2 * d_ax[d_idx]
d_y[d_idx] = d_y0[d_idx] + dtb2 * d_ay[d_idx]
d_z[d_idx] = d_z0[d_idx] + dtb2 * d_az[d_idx]
# Update densities and smoothing lengths from the accelerations
d_rho[d_idx] = d_rho0[d_idx] + dtb2 * d_arho[d_idx]
def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av,
d_aw, d_ax, d_ay, d_az, d_arho, dt):
d_u[d_idx] = d_u0[d_idx] + dt*d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dt*d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dt*d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx]
d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx]
d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx]
# Update densities and smoothing lengths from the accelerations
d_rho[d_idx] = d_rho0[d_idx] + dt * d_arho[d_idx]
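In PEC mode the pattern above amounts to: save the state, predict to the midpoint, re-evaluate the accelerations there, then take the full step from the saved state. A scalar sketch of one such step (illustrative names, not part of pysph):

```python
# Predict-evaluate-correct on dy/dt = f(y), mirroring
# initialize/stage1/stage2 of WCSPHStep for a scalar state.
def pec_step(y, f, dt):
    y0 = y                    # initialize(): save the state at t
    a = f(y0)                 # evaluate accelerations at t
    y = y0 + 0.5 * dt * a     # stage1: predict to t + dt/2
    a = f(y)                  # re-evaluate accelerations at the midpoint
    return y0 + dt * a        # stage2: full step from the saved state

# One step of dy/dt = -y from y = 1 with dt = 0.1 gives the midpoint value
# 1 - 0.1*0.95 = 0.905, close to exp(-0.1) ~= 0.9048.
y1 = pec_step(1.0, lambda y: -y, 0.1)
```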
###############################################################################
# `WCSPHTVDRK3` Integrator
###############################################################################
class WCSPHTVDRK3Step(IntegratorStep):
r"""TVD RK3 stepper for WCSPH
This integrator requires :math:`2` stages for the storage of the
acceleration variables.
"""
def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho):
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_rho0[d_idx] = d_rho[d_idx]
def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho,
d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho,
dt):
# update velocities
d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx]
# update positions
d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx]
d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx]
d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx]
# update density
d_rho[d_idx] = d_rho0[d_idx] + dt * d_arho[d_idx]
def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av,
d_aw, d_ax, d_ay, d_az, d_arho, dt):
# update velocities
d_u[d_idx] = 0.75*d_u0[d_idx] + 0.25*( d_u[d_idx] + dt * d_au[d_idx] )
d_v[d_idx] = 0.75*d_v0[d_idx] + 0.25*( d_v[d_idx] + dt * d_av[d_idx] )
d_w[d_idx] = 0.75*d_w0[d_idx] + 0.25*( d_w[d_idx] + dt * d_aw[d_idx] )
# update positions
d_x[d_idx] = 0.75*d_x0[d_idx] + 0.25*( d_x[d_idx] + dt * d_ax[d_idx] )
d_y[d_idx] = 0.75*d_y0[d_idx] + 0.25*( d_y[d_idx] + dt * d_ay[d_idx] )
d_z[d_idx] = 0.75*d_z0[d_idx] + 0.25*( d_z[d_idx] + dt * d_az[d_idx] )
# Update density
d_rho[d_idx] = 0.75*d_rho0[d_idx] + 0.25*( d_rho[d_idx] + dt * d_arho[d_idx] )
def stage3(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av,
d_aw, d_ax, d_ay, d_az, d_arho, dt):
oneby3 = 1./3.
twoby3 = 2./3.
# update velocities
d_u[d_idx] = oneby3*d_u0[d_idx] + twoby3*( d_u[d_idx] + dt * d_au[d_idx] )
d_v[d_idx] = oneby3*d_v0[d_idx] + twoby3*( d_v[d_idx] + dt * d_av[d_idx] )
d_w[d_idx] = oneby3*d_w0[d_idx] + twoby3*( d_w[d_idx] + dt * d_aw[d_idx] )
# update positions
d_x[d_idx] = oneby3*d_x0[d_idx] + twoby3*( d_x[d_idx] + dt * d_ax[d_idx] )
d_y[d_idx] = oneby3*d_y0[d_idx] + twoby3*( d_y[d_idx] + dt * d_ay[d_idx] )
d_z[d_idx] = oneby3*d_z0[d_idx] + twoby3*( d_z[d_idx] + dt * d_az[d_idx] )
# Update density
d_rho[d_idx] = oneby3*d_rho0[d_idx] + twoby3*( d_rho[d_idx] + dt * d_arho[d_idx] )
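The three stages above are the Shu-Osher TVD RK3 convex combinations. A scalar sketch of one full step (toy ODE, illustrative names only):

```python
# TVD RK3 on dy/dt = f(y): the same 0.75/0.25 and 1/3, 2/3 combinations
# as stage1, stage2 and stage3 above, applied to a scalar state.
def tvd_rk3_step(y0, f, dt):
    y1 = y0 + dt * f(y0)                                         # stage1
    y2 = 0.75 * y0 + 0.25 * (y1 + dt * f(y1))                    # stage2
    return (1.0 / 3.0) * y0 + (2.0 / 3.0) * (y2 + dt * f(y2))    # stage3

# For dy/dt = -y with dt = 0.1, one step gives ~0.9048333,
# very close to exp(-0.1) ~= 0.9048374: third-order accuracy.
y = tvd_rk3_step(1.0, lambda s: -s, 0.1)
```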
###############################################################################
# `SolidMechStep` class
###############################################################################
class SolidMechStep(IntegratorStep):
"""Predictor corrector Integrator for solid mechanics problems"""
def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho,
d_s00, d_s01, d_s02, d_s11, d_s12, d_s22,
d_s000, d_s010, d_s020, d_s110, d_s120, d_s220,
d_e0, d_e):
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_rho0[d_idx] = d_rho[d_idx]
d_e0[d_idx] = d_e[d_idx]
d_s000[d_idx] = d_s00[d_idx]
d_s010[d_idx] = d_s01[d_idx]
d_s020[d_idx] = d_s02[d_idx]
d_s110[d_idx] = d_s11[d_idx]
d_s120[d_idx] = d_s12[d_idx]
d_s220[d_idx] = d_s22[d_idx]
def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av,
d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_e0, d_ae,
d_s00, d_s01, d_s02, d_s11, d_s12, d_s22,
d_s000, d_s010, d_s020, d_s110, d_s120, d_s220,
d_as00, d_as01, d_as02, d_as11, d_as12, d_as22,
dt):
dtb2 = 0.5*dt
d_u[d_idx] = d_u0[d_idx] + dtb2*d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dtb2*d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dtb2*d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dtb2 * d_ax[d_idx]
d_y[d_idx] = d_y0[d_idx] + dtb2 * d_ay[d_idx]
d_z[d_idx] = d_z0[d_idx] + dtb2 * d_az[d_idx]
# Update densities and smoothing lengths from the accelerations
d_rho[d_idx] = d_rho0[d_idx] + dtb2 * d_arho[d_idx]
d_e[d_idx] = d_e0[d_idx] + dtb2 * d_ae[d_idx]
# update deviatoric stress components
d_s00[d_idx] = d_s000[d_idx] + dtb2 * d_as00[d_idx]
d_s01[d_idx] = d_s010[d_idx] + dtb2 * d_as01[d_idx]
d_s02[d_idx] = d_s020[d_idx] + dtb2 * d_as02[d_idx]
d_s11[d_idx] = d_s110[d_idx] + dtb2 * d_as11[d_idx]
d_s12[d_idx] = d_s120[d_idx] + dtb2 * d_as12[d_idx]
d_s22[d_idx] = d_s220[d_idx] + dtb2 * d_as22[d_idx]
def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av,
d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_ae, d_e0,
d_s00, d_s01, d_s02, d_s11, d_s12, d_s22,
d_s000, d_s010, d_s020, d_s110, d_s120, d_s220,
d_as00, d_as01, d_as02, d_as11, d_as12, d_as22,
dt):
d_u[d_idx] = d_u0[d_idx] + dt*d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dt*d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dt*d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx]
d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx]
d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx]
# Update densities and smoothing lengths from the accelerations
d_rho[d_idx] = d_rho0[d_idx] + dt * d_arho[d_idx]
d_e[d_idx] = d_e0[d_idx] + dt * d_ae[d_idx]
# update deviatoric stress components
d_s00[d_idx] = d_s000[d_idx] + dt * d_as00[d_idx]
d_s01[d_idx] = d_s010[d_idx] + dt * d_as01[d_idx]
d_s02[d_idx] = d_s020[d_idx] + dt * d_as02[d_idx]
d_s11[d_idx] = d_s110[d_idx] + dt * d_as11[d_idx]
d_s12[d_idx] = d_s120[d_idx] + dt * d_as12[d_idx]
d_s22[d_idx] = d_s220[d_idx] + dt * d_as22[d_idx]
###############################################################################
# `TransportVelocityStep` class
###############################################################################
class TransportVelocityStep(IntegratorStep):
"""Integrator defined in 'A transport velocity formulation for
smoothed particle hydrodynamics', 2013, JCP, 241, pp 292--307
For a predictor-corrector style of integrator, this integrator
should operate only in PEC mode.
"""
def initialize(self):
pass
def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_uhat, d_auhat, d_vhat,
d_avhat, d_what, d_awhat, d_x, d_y, d_z, dt):
dtb2 = 0.5*dt
# velocity update eqn (14)
d_u[d_idx] += dtb2*d_au[d_idx]
d_v[d_idx] += dtb2*d_av[d_idx]
d_w[d_idx] += dtb2*d_aw[d_idx]
# advection velocity update eqn (15)
d_uhat[d_idx] = d_u[d_idx] + dtb2*d_auhat[d_idx]
d_vhat[d_idx] = d_v[d_idx] + dtb2*d_avhat[d_idx]
d_what[d_idx] = d_w[d_idx] + dtb2*d_awhat[d_idx]
# position update eqn (16)
d_x[d_idx] += dt*d_uhat[d_idx]
d_y[d_idx] += dt*d_vhat[d_idx]
d_z[d_idx] += dt*d_what[d_idx]
def stage2(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_vmag2, dt):
dtb2 = 0.5*dt
# corrector update eqn (17)
d_u[d_idx] += dtb2*d_au[d_idx]
d_v[d_idx] += dtb2*d_av[d_idx]
d_w[d_idx] += dtb2*d_aw[d_idx]
# magnitude of velocity squared
d_vmag2[d_idx] = (d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] +
d_w[d_idx]*d_w[d_idx])
###############################################################################
# `AdamiVerletStep` class
###############################################################################
class AdamiVerletStep(IntegratorStep):
"""Verlet time integration described in `A generalized wall
boundary condition for smoothed particle hydrodynamics` 2012, JCP,
231, pp 7057--7075
This integrator can operate in either PEC mode or in EPEC mode as
described in the paper.
"""
def initialize(self):
pass
def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y, d_z, dt):
dtb2 = 0.5*dt
# velocity predictor eqn (14)
d_u[d_idx] += dtb2*d_au[d_idx]
d_v[d_idx] += dtb2*d_av[d_idx]
d_w[d_idx] += dtb2*d_aw[d_idx]
# position predictor eqn (15)
d_x[d_idx] += dtb2*d_u[d_idx]
d_y[d_idx] += dtb2*d_v[d_idx]
d_z[d_idx] += dtb2*d_w[d_idx]
def stage2(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y, d_z,
d_rho, d_arho, d_vmag2, dt):
dtb2 = 0.5*dt
# position corrector eqn (17)
d_x[d_idx] += dtb2*d_u[d_idx]
d_y[d_idx] += dtb2*d_v[d_idx]
d_z[d_idx] += dtb2*d_w[d_idx]
# velocity corrector eqn (18)
d_u[d_idx] += dtb2*d_au[d_idx]
d_v[d_idx] += dtb2*d_av[d_idx]
d_w[d_idx] += dtb2*d_aw[d_idx]
# density corrector eqn (16)
d_rho[d_idx] += dt * d_arho[d_idx]
# magnitude of velocity squared
d_vmag2[d_idx] = (d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] +
d_w[d_idx]*d_w[d_idx])
###############################################################################
# `GasDFluidStep` class
###############################################################################
class GasDFluidStep(IntegratorStep):
"""Predictor Corrector integrator for Gas-dynamics"""
def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_h,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_h0,
d_converged, d_omega, d_rho, d_rho0, d_alpha1, d_alpha2,
d_alpha10, d_alpha20):
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_e0[d_idx] = d_e[d_idx]
d_h0[d_idx] = d_h[d_idx]
d_rho0[d_idx] = d_rho[d_idx]
# set the converged attribute to 0 at the beginning of a Group
d_converged[d_idx] = 0
# likewise, we set the default omega (grad-h) terms to 1 at
# the beginning of this Group.
d_omega[d_idx] = 1.0
d_alpha10[d_idx] = d_alpha1[d_idx]
d_alpha20[d_idx] = d_alpha2[d_idx]
def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av,
d_aw, d_ae, d_rho, d_rho0, d_arho, d_h, d_h0, d_ah,
d_alpha1, d_aalpha1, d_alpha10,
d_alpha2, d_aalpha2, d_alpha20,
dt):
dtb2 = 0.5*dt
d_u[d_idx] = d_u0[d_idx] + dtb2 * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dtb2 * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dtb2 * d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dtb2 * d_u[d_idx]
d_y[d_idx] = d_y0[d_idx] + dtb2 * d_v[d_idx]
d_z[d_idx] = d_z0[d_idx] + dtb2 * d_w[d_idx]
# update thermal energy
d_e[d_idx] = d_e0[d_idx] + dtb2 * d_ae[d_idx]
# predict density and smoothing lengths for faster
# convergence. NNPS need not be explicitly updated since it
# will be called at the end of the predictor stage.
d_h[d_idx] = d_h0[d_idx] + dtb2 * d_ah[d_idx]
d_rho[d_idx] = d_rho0[d_idx] + dtb2 * d_arho[d_idx]
# update viscosity coefficients
d_alpha1[d_idx] = d_alpha10[d_idx] + dtb2*d_aalpha1[d_idx]
d_alpha2[d_idx] = d_alpha20[d_idx] + dtb2*d_aalpha2[d_idx]
def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av,
d_alpha1, d_aalpha1, d_alpha10,
d_alpha2, d_aalpha2, d_alpha20,
d_aw, d_ae, dt):
d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dt * d_u[d_idx]
d_y[d_idx] = d_y0[d_idx] + dt * d_v[d_idx]
d_z[d_idx] = d_z0[d_idx] + dt * d_w[d_idx]
# Update densities and smoothing lengths from the accelerations
d_e[d_idx] = d_e0[d_idx] + dt * d_ae[d_idx]
# update viscosity coefficients
d_alpha1[d_idx] = d_alpha10[d_idx] + dt*d_aalpha1[d_idx]
d_alpha2[d_idx] = d_alpha20[d_idx] + dt*d_aalpha2[d_idx]
class ADKEStep(IntegratorStep):
"""Predictor Corrector integrator for Gas-dynamics ADKE"""
def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0,
d_rho, d_rho0):
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_e0[d_idx] = d_e[d_idx]
d_rho0[d_idx] = d_rho[d_idx]
def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av,
d_aw, d_ae, d_rho, d_rho0, d_arho, dt):
dtb2 = 0.5*dt
d_u[d_idx] = d_u0[d_idx] + dtb2 * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dtb2 * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dtb2 * d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dtb2 * d_u[d_idx]
d_y[d_idx] = d_y0[d_idx] + dtb2 * d_v[d_idx]
d_z[d_idx] = d_z0[d_idx] + dtb2 * d_w[d_idx]
# update thermal energy
d_e[d_idx] = d_e0[d_idx] + dtb2 * d_ae[d_idx]
def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av,
d_aw, d_ae, dt):
d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx]
d_x[d_idx] = d_x0[d_idx] + dt * d_u[d_idx]
d_y[d_idx] = d_y0[d_idx] + dt * d_v[d_idx]
d_z[d_idx] = d_z0[d_idx] + dt * d_w[d_idx]
# Update densities and smoothing lengths from the accelerations
d_e[d_idx] = d_e0[d_idx] + dt * d_ae[d_idx]
###############################################################################
# `TwoStageRigidBodyStep` class
###############################################################################
class TwoStageRigidBodyStep(IntegratorStep):
"""Simple rigid-body motion
At each stage of the integrator, the prescribed velocity and
accelerations are incremented by dt/2.
Note that the time centered velocity is used for updating the
particle positions. This ensures exact motion for a constant
acceleration.
"""
def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0,
d_u, d_v, d_w, d_u0, d_v0, d_w0):
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
def stage1(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0,
d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw,
dt):
dtb2 = 0.5*dt
d_u[d_idx] = d_u0[d_idx] + dtb2 * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dtb2 * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dtb2 * d_aw[d_idx]
# positions are updated based on the time centered velocity
d_x[d_idx] = d_x0[d_idx] + dtb2 * 0.5 * (d_u[d_idx] + d_u0[d_idx])
d_y[d_idx] = d_y0[d_idx] + dtb2 * 0.5 * (d_v[d_idx] + d_v0[d_idx])
d_z[d_idx] = d_z0[d_idx] + dtb2 * 0.5 * (d_w[d_idx] + d_w0[d_idx])
def stage2(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0,
d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw,
dt):
d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx]
d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx]
d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx]
# positions are updated based on the time centered velocity
d_x[d_idx] = d_x0[d_idx] + dt * 0.5 * (d_u[d_idx] + d_u0[d_idx])
d_y[d_idx] = d_y0[d_idx] + dt * 0.5 * (d_v[d_idx] + d_v0[d_idx])
d_z[d_idx] = d_z0[d_idx] + dt * 0.5 * (d_w[d_idx] + d_w0[d_idx])
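The exactness claim in the docstring is easy to check in one dimension (a standalone sketch with invented names): for constant acceleration, the time-centered position update reproduces x0 + u0*t + a*t**2/2 to machine precision.

```python
# One-dimensional version of the two-stage update above, with constant
# acceleration a.  Each stage restarts from the saved initial state (x0, u0).
def two_stage_rigid_step(x0, u0, a, dt):
    # stage1: advance to t + dt/2 using the time-centered velocity
    u = u0 + 0.5 * dt * a
    x = x0 + 0.5 * dt * 0.5 * (u + u0)
    # stage2: advance to t + dt, again from the saved state
    u = u0 + dt * a
    x = x0 + dt * 0.5 * (u + u0)
    return x, u

x, u = two_stage_rigid_step(0.0, 2.0, 3.0, 0.5)
exact = 2.0 * 0.5 + 0.5 * 3.0 * 0.5 ** 2   # u0*t + a*t**2/2 = 1.375
```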
###############################################################################
# `OneStageRigidBodyStep` class
###############################################################################
class OneStageRigidBodyStep(IntegratorStep):
"""Simple one stage rigid-body motion """
def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0,
d_u, d_v, d_w, d_u0, d_v0, d_w0):
d_u0[d_idx] = d_u[d_idx]
d_v0[d_idx] = d_v[d_idx]
d_w0[d_idx] = d_w[d_idx]
d_x0[d_idx] = d_x[d_idx]
d_y0[d_idx] = d_y[d_idx]
d_z0[d_idx] = d_z[d_idx]
def stage1(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0,
d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw,
dt):
pass
def stage2(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0,
d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw,
dt):
# update velocities
d_u[d_idx] += dt * d_au[d_idx]
d_v[d_idx] += dt * d_av[d_idx]
d_w[d_idx] += dt * d_aw[d_idx]
# update positions using time-centered velocity
d_x[d_idx] += dt * 0.5 * (d_u[d_idx] + d_u0[d_idx])
d_y[d_idx] += dt * 0.5 * (d_v[d_idx] + d_v0[d_idx])
d_z[d_idx] += dt * 0.5 * (d_w[d_idx] + d_w0[d_idx])
###############################################################################
# `VerletSymplecticWCSPHStep` class
###############################################################################
class VerletSymplecticWCSPHStep(IntegratorStep):
"""Symplectic second order integrator described in the review
paper by Monaghan:
J. Monaghan, "Smoothed Particle Hydrodynamics", Reports on
Progress in Physics, 2005, 68, pp 1703--1759 [JM05]
Notes:
This integrator should run in PEC mode since in the first stage,
the positions are updated using the current velocity. The
accelerations are then computed to advance to the full time step
values.
This version of the integrator does not update the density. That
is, the summation density is used instead of the continuity
equation.
"""
def initialize(self):
pass
def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, dt):
dtb2 = 0.5 * dt
# Eq. (5.39) in [JM05]
d_x[d_idx] += dtb2 * d_u[d_idx]
d_y[d_idx] += dtb2 * d_v[d_idx]
d_z[d_idx] += dtb2 * d_w[d_idx]
def stage2(self, d_idx, d_x, d_y, d_z, d_ax, d_ay, d_az,
d_u, d_v, d_w, d_au, d_av, d_aw, dt):
dtb2 = 0.5 * dt
# Eq. (5.40) in [JM05]
d_u[d_idx] += dt * d_au[d_idx]
d_v[d_idx] += dt * d_av[d_idx]
d_w[d_idx] += dt * d_aw[d_idx]
# Eq. (5.41) in [JM05] using XSPH velocity correction
d_x[d_idx] += dtb2 * d_ax[d_idx]
d_y[d_idx] += dtb2 * d_ay[d_idx]
d_z[d_idx] += dtb2 * d_az[d_idx]
###############################################################################
# `VelocityVerletSymplecticWCSPHStep` class
###############################################################################
class VelocityVerletSymplecticWCSPHStep(IntegratorStep):
"""Another symplectic second order integrator described in the
review paper by Monaghan:
J. Monaghan, "Smoothed Particle Hydrodynamics", Reports on
Progress in Physics, 2005, 68, pp 1703--1759 [JM05]
Kick--drift--kick form of the Verlet integrator.
"""
def initialize(self):
pass
def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, dt):
dtb2 = 0.5 * dt
# Eq. (5.51) in [JM05]
d_u[d_idx] += dtb2 * d_au[d_idx]
d_v[d_idx] += dtb2 * d_av[d_idx]
d_w[d_idx] += dtb2 * d_aw[d_idx]
def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w,
d_au, d_av, d_aw, dt):
dtb2 = 0.5 * dt
# Eq. (5.52) in [JM05]
d_x[d_idx] += dt * d_u[d_idx]
d_y[d_idx] += dt * d_v[d_idx]
d_z[d_idx] += dt * d_w[d_idx]
# Eq. (5.53) in [JM05]
d_u[d_idx] += dtb2 * d_au[d_idx]
d_v[d_idx] += dtb2 * d_av[d_idx]
d_w[d_idx] += dtb2 * d_aw[d_idx]
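The kick-drift-kick scheme above can be exercised standalone on a 1-D harmonic oscillator; the sketch below is illustrative (the step size, step count, and tolerance are choices made here, not part of PySPH), but it follows the same half-kick/drift/half-kick ordering as Eqs. (5.51)-(5.53):

```python
import math

def kick_drift_kick(x, u, accel, dt):
    """One KDK velocity-Verlet step: half kick, full drift, half kick."""
    u += 0.5 * dt * accel(x)   # half-step velocity update, cf. Eq. (5.51)
    x += dt * u                # full-step position update, cf. Eq. (5.52)
    u += 0.5 * dt * accel(x)   # half-step velocity update, cf. Eq. (5.53)
    return x, u

# Harmonic oscillator a(x) = -x has the exact solution x(t) = cos(t).
x, u = 1.0, 0.0
dt, steps = 1e-3, 1000
for _ in range(steps):
    x, u = kick_drift_kick(x, u, lambda s: -s, dt)

print(abs(x - math.cos(dt * steps)))   # global error scales as dt**2
```

Because the scheme is symplectic, the oscillator energy `0.5*(u**2 + x**2)` stays bounded near its initial value of 0.5 instead of drifting.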
###############################################################################
# `InletOutletStep` class
###############################################################################
class InletOutletStep(IntegratorStep):
"""A trivial integrator for the inlet/outlet particles
"""
def initialize(self):
pass
def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, dt):
dtb2 = 0.5*dt
d_x[d_idx] += dtb2 * d_u[d_idx]
d_y[d_idx] += dtb2 * d_v[d_idx]
d_z[d_idx] += dtb2 * d_w[d_idx]
def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, dt):
dtb2 = 0.5*dt
d_x[d_idx] += dtb2 * d_u[d_idx]
d_y[d_idx] += dtb2 * d_v[d_idx]
d_z[d_idx] += dtb2 * d_w[d_idx]
###############################################################################
class LeapFrogStep(IntegratorStep):
r"""Using this stepper with XSPH as implemented in
`pysph.base.basic_equations.XSPHCorrection` is not directly possible and
requires a nicer implementation where the correction alone is added to ``ax,
ay, az``.
"""
def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_ax, d_ay, d_az,
dt):
d_x[d_idx] += 0.5 * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += 0.5 * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += 0.5 * dt * (d_w[d_idx] + d_az[d_idx])
def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av,
d_w, d_aw, d_ax, d_ay, d_az,
d_rho, d_arho, d_e, d_ae, dt):
d_u[d_idx] += dt * d_au[d_idx]
d_v[d_idx] += dt * d_av[d_idx]
d_w[d_idx] += dt * d_aw[d_idx]
d_rho[d_idx] += dt * d_arho[d_idx]
d_e[d_idx] += dt * d_ae[d_idx]
d_x[d_idx] += 0.5 * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += 0.5 * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += 0.5 * dt * (d_w[d_idx] + d_az[d_idx])
###############################################################################
class PEFRLStep(IntegratorStep):
r"""Using this stepper with XSPH as implemented in
`pysph.base.basic_equations.XSPHCorrection` is not directly possible and
requires a nicer implementation where the correction alone is added to ``ax,
ay, az``.
"""
def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_ax, d_ay,
d_az, dt):
xi = 0.1786178958448091
d_x[d_idx] += xi * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += xi * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += xi * dt * (d_w[d_idx] + d_az[d_idx])
def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av,
d_w, d_aw, d_ax, d_ay, d_az,
d_rho, d_arho, d_e, d_ae, dt=0.0):
lamda = -0.2123418310626054
fac = (1. - 2.*lamda) / 2.
d_u[d_idx] += fac * dt * d_au[d_idx]
d_v[d_idx] += fac * dt * d_av[d_idx]
d_w[d_idx] += fac * dt * d_aw[d_idx]
d_rho[d_idx] += fac * dt * d_arho[d_idx]
d_e[d_idx] += fac * dt * d_ae[d_idx]
chi = -0.06626458266981849
d_x[d_idx] += chi * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += chi * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += chi * dt * (d_w[d_idx] + d_az[d_idx])
def stage3(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av,
d_w, d_aw, d_ax, d_ay, d_az,
d_rho, d_arho, d_e, d_ae, dt=0.0):
lamda = -0.2123418310626054
d_u[d_idx] += lamda * dt * d_au[d_idx]
d_v[d_idx] += lamda * dt * d_av[d_idx]
d_w[d_idx] += lamda * dt * d_aw[d_idx]
d_rho[d_idx] += lamda * dt * d_arho[d_idx]
d_e[d_idx] += lamda * dt * d_ae[d_idx]
xi = +0.1786178958448091
chi = -0.06626458266981849
fac = 1. - 2.*(xi + chi)
d_x[d_idx] += fac * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += fac * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += fac * dt * (d_w[d_idx] + d_az[d_idx])
def stage4(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av,
d_w, d_aw, d_ax, d_ay, d_az,
d_rho, d_arho, d_e, d_ae, dt=0.0):
lamda = -0.2123418310626054
d_u[d_idx] += lamda * dt * d_au[d_idx]
d_v[d_idx] += lamda * dt * d_av[d_idx]
d_w[d_idx] += lamda * dt * d_aw[d_idx]
d_rho[d_idx] += lamda * dt * d_arho[d_idx]
d_e[d_idx] += lamda * dt * d_ae[d_idx]
chi = -0.06626458266981849
d_x[d_idx] += chi * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += chi * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += chi * dt * (d_w[d_idx] + d_az[d_idx])
def stage5(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av,
d_w, d_aw, d_ax, d_ay, d_az,
d_rho, d_arho, d_e, d_ae, dt=0.0):
lamda = -0.2123418310626054
fac = (1. - 2.*lamda) / 2.
d_u[d_idx] += fac * dt * d_au[d_idx]
d_v[d_idx] += fac * dt * d_av[d_idx]
d_w[d_idx] += fac * dt * d_aw[d_idx]
d_rho[d_idx] += fac * dt * d_arho[d_idx]
d_e[d_idx] += fac * dt * d_ae[d_idx]
xi = +0.1786178958448091
d_x[d_idx] += xi * dt * (d_u[d_idx] + d_ax[d_idx])
d_y[d_idx] += xi * dt * (d_v[d_idx] + d_ay[d_idx])
d_z[d_idx] += xi * dt * (d_w[d_idx] + d_az[d_idx])
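A quick consistency check on the PEFRL constants above: over the five stages the position (drift) weights and the velocity (kick) weights must each sum to one full `dt`. This standalone sketch copies the constants from the stepper and verifies that property (the list layout mirrors the stage order; nothing here is PySPH API):

```python
xi = 0.1786178958448091
lamda = -0.2123418310626054
chi = -0.06626458266981849

# Drift weights applied in stages 1-5 and kick weights applied in
# stages 2-5 of PEFRLStep, in stage order.
drift_weights = [xi, chi, 1.0 - 2.0 * (xi + chi), chi, xi]
kick_weights = [(1.0 - 2.0 * lamda) / 2.0, lamda, lamda, (1.0 - 2.0 * lamda) / 2.0]

print(sum(drift_weights), sum(kick_weights))  # both sums equal 1.0
```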
| 37.003676 | 90 | 0.526743 | 5,690 | 30,195 | 2.415466 | 0.059402 | 0.222352 | 0.160797 | 0.043801 | 0.75422 | 0.740323 | 0.729482 | 0.710346 | 0.703434 | 0.699141 | 0 | 0.052349 | 0.273092 | 30,195 | 815 | 91 | 37.04908 | 0.57383 | 0.16314 | 0 | 0.731441 | 0 | 0 | 0.000176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.100437 | false | 0.0131 | 0 | 0.002183 | 0.137555 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
430bc49a4bdf38cafd714ec20d84ab7431e1b09d | 13,224 | py | Python | src/data_reader/dataset.py | tomato1mule/Contrastive-Predictive-Coding-PyTorch | c87290bbd9bfabeacc413401b179f2b9a0cb2825 | [
"MIT"
] | null | null | null | src/data_reader/dataset.py | tomato1mule/Contrastive-Predictive-Coding-PyTorch | c87290bbd9bfabeacc413401b179f2b9a0cb2825 | [
"MIT"
] | null | null | null | src/data_reader/dataset.py | tomato1mule/Contrastive-Predictive-Coding-PyTorch | c87290bbd9bfabeacc413401b179f2b9a0cb2825 | [
"MIT"
] | null | null | null | import numpy as np
import torch
from torch.utils import data
import h5py
from scipy.io import wavfile
from collections import defaultdict
from random import randint
class ForwardLibriSpeechRawXXreverseDataset(data.Dataset):
def __init__(self, raw_file, list_file):
""" raw_file: train-clean-100.h5
list_file: list/training.txt
"""
self.raw_file = raw_file
self.utts = []
with open(list_file) as f:
temp = f.readlines()
temp = [x.strip() for x in temp]
self.h5f = h5py.File(self.raw_file, 'r')
for i in temp: # sanity check
utt_len = self.h5f[i].shape[0]
self.utts.append(i)
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
original = self.h5f[utt_id][:]
return utt_id, self.h5f[utt_id][:], original[::-1].copy()
class ForwardLibriSpeechReverseRawDataset(data.Dataset):
def __init__(self, raw_file, list_file):
""" raw_file: train-clean-100.h5
list_file: list/training.txt
"""
self.raw_file = raw_file
self.utts = []
with open(list_file) as f:
temp = f.readlines()
temp = [x.strip() for x in temp]
self.h5f = h5py.File(self.raw_file, 'r')
for i in temp: # sanity check
utt_len = self.h5f[i].shape[0]
self.utts.append(i)
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
original = self.h5f[utt_id][:]
return utt_id, original[::-1].copy() # reverse
class ForwardLibriSpeechRawDataset(data.Dataset):
def __init__(self, raw_file, list_file):
""" raw_file: train-clean-100.h5
list_file: list/training.txt
"""
self.raw_file = raw_file
self.utts = []
with open(list_file) as f:
temp = f.readlines()
temp = [x.strip() for x in temp]
self.h5f = h5py.File(self.raw_file, 'r')
for i in temp: # sanity check
utt_len = self.h5f[i].shape[0]
self.utts.append(i)
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
return utt_id, self.h5f[utt_id][:]
class ReverseRawDataset(data.Dataset):
def __init__(self, raw_file, list_file, audio_window):
""" RawDataset with the sampled audio window returned time-reversed;
raw_file: train-clean-100.h5
list_file: list/training.txt
audio_window: 20480
"""
self.raw_file = raw_file
self.audio_window = audio_window
self.utts = []
with open(list_file) as f:
temp = f.readlines()
temp = [x.strip() for x in temp]
self.h5f = h5py.File(self.raw_file, 'r')
for i in temp: # sanity check
utt_len = self.h5f[i].shape[0]
if utt_len > 20480:
self.utts.append(i)
"""
with open(index_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
self.spk2idx = {}
for i in content:
spk = i.split(' ')[0]
idx = i.split(' ')[1]
self.spk2idx[spk] = int(idx)
"""
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
utt_len = self.h5f[utt_id].shape[0] # get the number of data points in the utterance
index = np.random.randint(utt_len - self.audio_window + 1) # get the index to read part of the utterance into memory
#speaker = utt_id.split('-')[0]
#label = self.spk2idx[speaker]
original = self.h5f[utt_id][index:index+self.audio_window]
return original[::-1].copy() # reverse
class ForwardDatasetSITWSilence(data.Dataset):
''' dataset for forward passing sitw without vad '''
def __init__(self, wav_file):
""" wav_file: /export/c01/jlai/thesis/data/sitw_dev_enroll/wav.scp
"""
self.wav_file = wav_file
with open(wav_file) as f:
temp = f.readlines()
self.utts = [x.strip().split(' ')[0] for x in temp]
self.wavs = [x.strip().split(' ')[1] for x in temp]
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
wav_path = self.wavs[index] # get the wav file path
fs, wav_data = wavfile.read(wav_path) # renamed so the torch.utils.data import is not shadowed
return self.utts[index], wav_data
class ForwardDatasetSwbdSreSilence(data.Dataset):
''' dataset for forward passing swbd_sre or sre16 without vad '''
def __init__(self, wav_dir, scp_file):
""" wav_dir: /export/c01/jlai/thesis/data/swbd_sre_combined/wav/
list_file: /export/c01/jlai/thesis/data/swbd_sre_combined/list/log/swbd_sre_utt.{1..50}.scp
"""
self.wav_dir = wav_dir
with open(scp_file) as f:
temp = f.readlines()
self.utts = [x.strip() for x in temp]
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
path = self.wav_dir + utt_id
fs, wav_data = wavfile.read(path) # renamed so the torch.utils.data import is not shadowed
return utt_id, wav_data
class RawDatasetSwbdSreOne(data.Dataset):
''' dataset for swbd_sre with vad ; for training cpc with ONE voiced segment per recording '''
def __init__(self, raw_file, list_file):
""" raw_file: swbd_sre_combined_20k_20480.h5
list_file: list/training3.txt, list/val3.txt
"""
self.raw_file = raw_file
with open(list_file) as f:
temp = f.readlines()
all_utt = [x.strip() for x in temp]
# dictionary mapping unique utt id to its number of voiced segments
self.utts = defaultdict(lambda: 0)
for i in all_utt:
count = i.split('-')[-1]
utt_uniq = i[:-(len(count)+1)]
self.utts[utt_uniq] += 1 # count
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = list(self.utts.keys())[index] # get the utterance id; dict views are not subscriptable in Python 3
count = self.utts[utt_id] # number of voiced segments for the utterance id
select = randint(1, count)
h5f = h5py.File(self.raw_file, 'r')
return h5f[utt_id+'-'+str(select)][:]
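One Python 3 pitfall worth noting here: dict views are not subscriptable, so positional access to a `{utt_id: segment_count}` map requires materialising the keys first. A minimal sketch with a toy dict (the entries are illustrative):

```python
counts = {"utt-a": 3, "utt-b": 1, "utt-c": 2}  # toy utt -> segment-count map

# dict views preserve insertion order (Python 3.7+) but cannot be indexed;
# wrap in list() for the positional access a Dataset __getitem__ needs.
utt_id = list(counts.keys())[1]
print(utt_id, counts[utt_id])  # utt-b 1
```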
class RawDatasetSwbdSreSilence(data.Dataset):
''' dataset for swbd_sre without vad; for training cpc with ONE voiced/unvoiced segment per recording '''
def __init__(self, raw_file, list_file, audio_window):
""" raw_file: swbd_sre_combined_20k_20480.h5
list_file: list/training2.txt, list/val2.txt
"""
self.raw_file = raw_file
self.audio_window = audio_window
with open(list_file) as f:
temp = f.readlines()
self.utts = [x.strip() for x in temp]
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
h5f = h5py.File(self.raw_file, 'r')
utt_len = h5f[utt_id].shape[0] # get the number of data points in the utterance
index = np.random.randint(utt_len - self.audio_window + 1) # get the index to read part of the utterance into memory
return h5f[utt_id][index:index+self.audio_window]
class RawDatasetSwbdSre(data.Dataset):
''' dataset for swbd_sre with vad ; for training cpc with ONE voiced segment per recording '''
def __init__(self, raw_file, list_file):
""" raw_file: swbd_sre_combined_20k_20480.h5
list_file: list/training.txt
"""
self.raw_file = raw_file
with open(list_file) as f:
temp = f.readlines()
self.utts = [x.strip() for x in temp]
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
h5f = h5py.File(self.raw_file, 'r')
return h5f[utt_id][:]
class RawDatasetSpkClass(data.Dataset):
def __init__(self, raw_file, list_file, index_file, audio_window, frame_window):
""" raw_file: train-clean-100.h5
list_file: list/training.txt
index_file: spk2idx
audio_window: 20480
"""
self.raw_file = raw_file
self.audio_window = audio_window
self.frame_window = frame_window
with open(list_file) as f:
temp = f.readlines()
self.utts = [x.strip() for x in temp]
with open(index_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
self.spk2idx = {}
for i in content:
spk = i.split(' ')[0]
idx = int(i.split(' ')[1])
self.spk2idx[spk] = idx
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
h5f = h5py.File(self.raw_file, 'r')
utt_len = h5f[utt_id].shape[0] # get the number of data points in the utterance
index = np.random.randint(utt_len - self.audio_window + 1) # get the index to read part of the utterance into memory
speaker = utt_id.split('-')[0]
label = torch.tensor(self.spk2idx[speaker])
return h5f[utt_id][index:index+self.audio_window], label.repeat(self.frame_window)
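`label.repeat(self.frame_window)` tiles the scalar speaker label once per output frame so the loss can be applied framewise. The same shape logic, torch-free and with an illustrative toy `spk2idx` map:

```python
spk2idx = {"1272": 0, "1462": 1}   # toy speaker -> index map
frame_window = 4

utt_id = "1272-128104-0000"
speaker = utt_id.split("-")[0]      # speaker id is the leading field of the utt id
frame_labels = [spk2idx[speaker]] * frame_window  # one label per frame
print(frame_labels)  # [0, 0, 0, 0]
```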
class RawXXreverseDataset(data.Dataset):
''' RawDataset but returns sequence twice: x, x_reverse '''
def __init__(self, raw_file, list_file, audio_window):
""" raw_file: train-clean-100.h5
list_file: list/training.txt
audio_window: 20480
"""
self.raw_file = raw_file
self.audio_window = audio_window
self.utts = []
with open(list_file) as f:
temp = f.readlines()
temp = [x.strip() for x in temp]
self.h5f = h5py.File(self.raw_file, 'r')
for i in temp: # sanity check
utt_len = self.h5f[i].shape[0]
if utt_len > 20480:
self.utts.append(i)
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
utt_len = self.h5f[utt_id].shape[0] # get the number of data points in the utterance
index = np.random.randint(utt_len - self.audio_window + 1) # get the index to read part of the utterance into memory
#speaker = utt_id.split('-')[0]
#label = self.spk2idx[speaker]
original = self.h5f[utt_id][index:index+self.audio_window]
return original, original[::-1].copy() # reverse
class RawDataset(data.Dataset):
def __init__(self, raw_file, list_file, audio_window):
""" raw_file: train-clean-100.h5
list_file: list/training.txt
audio_window: 20480
"""
self.raw_file = raw_file
self.audio_window = audio_window
self.utts = [] # list of file names (1 utterance = 1 file)
with open(list_file) as f:
temp = f.readlines()
temp = [x.strip() for x in temp]
self.h5f = h5py.File(self.raw_file, 'r')
for i in temp: # sanity check
utt_len = self.h5f[i].shape[0]
if utt_len > 20480:
self.utts.append(i)
"""
with open(index_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
self.spk2idx = {}
for i in content:
spk = i.split(' ')[0]
idx = i.split(' ')[1]
self.spk2idx[spk] = int(idx)
"""
def __len__(self):
"""Denotes the total number of utterances
"""
return len(self.utts)
def __getitem__(self, index):
utt_id = self.utts[index] # get the utterance id
utt_len = self.h5f[utt_id].shape[0] # get the number of data points in the utterance
index = np.random.randint(utt_len - self.audio_window + 1) # get the index to read part of the utterance into memory
#speaker = utt_id.split('-')[0]
#label = self.spk2idx[speaker]
return self.h5f[utt_id][index:index+self.audio_window]
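The random-crop arithmetic in `__getitem__` above, sketched on a plain list so the window bounds are easy to check (the window size is shrunk from 20480 for this toy example; `random.randint` is inclusive on both ends, which mirrors `numpy.random.randint(utt_len - window + 1)`):

```python
import random

utterance = list(range(100))   # stands in for h5f[utt_id]
audio_window = 16

# The last valid start index keeps the crop fully inside the utterance.
start = random.randint(0, len(utterance) - audio_window)
crop = utterance[start:start + audio_window]
print(len(crop))  # always audio_window samples
```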
| 34.259067 | 125 | 0.585375 | 1,774 | 13,224 | 4.157835 | 0.086809 | 0.047451 | 0.04474 | 0.01898 | 0.829176 | 0.81006 | 0.779555 | 0.775488 | 0.760439 | 0.739832 | 0 | 0.022779 | 0.302858 | 13,224 | 385 | 126 | 34.348052 | 0.777308 | 0.249849 | 0 | 0.705069 | 0 | 0 | 0.001936 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.165899 | false | 0 | 0.032258 | 0 | 0.364055 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
430f88440b682017b530edfcd55f9cfb9ef73bd4 | 137 | py | Python | elf/__init__.py | awesome-archive/ELF | ba956fbfc74d28d6df26472f5b464d2d038c040c | [
"BSD-3-Clause"
] | 1 | 2021-09-29T07:34:27.000Z | 2021-09-29T07:34:27.000Z | elf/__init__.py | awesome-archive/ELF | ba956fbfc74d28d6df26472f5b464d2d038c040c | [
"BSD-3-Clause"
] | null | null | null | elf/__init__.py | awesome-archive/ELF | ba956fbfc74d28d6df26472f5b464d2d038c040c | [
"BSD-3-Clause"
] | 1 | 2019-11-01T02:20:55.000Z | 2019-11-01T02:20:55.000Z | from .utils_elf import GCWrapper
from .utils_elf import Batch
from .context_utils import ContextArgs
from .more_labels import MoreLabels
| 27.4 | 38 | 0.854015 | 20 | 137 | 5.65 | 0.55 | 0.159292 | 0.212389 | 0.318584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116788 | 137 | 4 | 39 | 34.25 | 0.933884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4320982737348cf88963ad5b159b7bb9678c4dba | 71 | py | Python | auctions/land.py | sodapopinsky/dfk | be48e89d4b054ad8abbb009d0e1ea4c10f559af5 | [
"MIT"
] | 90 | 2021-10-17T19:36:45.000Z | 2022-03-31T17:19:43.000Z | auctions/land.py | sodapopinsky/dfk | be48e89d4b054ad8abbb009d0e1ea4c10f559af5 | [
"MIT"
] | 13 | 2021-11-13T00:19:31.000Z | 2022-03-20T15:13:22.000Z | auctions/land.py | sodapopinsky/dfk | be48e89d4b054ad8abbb009d0e1ea4c10f559af5 | [
"MIT"
] | 71 | 2021-11-05T03:00:41.000Z | 2022-03-30T06:16:25.000Z | AUCTION_CONTRACT_ADDRESS = '0x77D991987ca85214f9686131C58c1ABE4C93E547' | 71 | 71 | 0.929577 | 4 | 71 | 16 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.434783 | 0.028169 | 71 | 1 | 71 | 71 | 0.492754 | 0 | 0 | 0 | 0 | 0 | 0.583333 | 0.583333 | 0 | 0 | 0.583333 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4343eff500df43dd97aebf6c5f160504f6064166 | 27,804 | py | Python | tests/test__bootstrap.py | SteamPeKa/krippendorffs_alpha | c3d4f3eacaf418aeb22d30759594ca567dce9ecb | [
"MIT"
] | 1 | 2020-10-28T09:37:13.000Z | 2020-10-28T09:37:13.000Z | tests/test__bootstrap.py | SteamPeKa/krippendorffs_alpha | c3d4f3eacaf418aeb22d30759594ca567dce9ecb | [
"MIT"
] | null | null | null | tests/test__bootstrap.py | SteamPeKa/krippendorffs_alpha | c3d4f3eacaf418aeb22d30759594ca567dce9ecb | [
"MIT"
] | null | null | null | # coding=utf-8
# Creation date: 04 Nov. 2020
# Creation time: 21:24
# Creator: SteamPeKa
import csv
import os
import numpy
import krippendorffs_alpha
import testing_utils
class TestObserverwiseJackknife(object):
def test_values_nominal_ram(self):
metric = "nominal"
strategy = "ram-hungry"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
users_count = len(input_table)
expected_outed_alphas = []
for user_index in range(users_count):
jackknife_table = input_table[:user_index] + input_table[user_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_outed_alphas.append(outer_alpha)
prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(input_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
actual_outer_alphas = krippendorffs_alpha._bootstrap.observerwise_jackknife(prepared_data=prepared_input_table,
metric=metric,
strategy=strategy)
expected_outed_alphas = numpy.array(expected_outed_alphas)
assert len(expected_outed_alphas) == len(actual_outer_alphas), (len(expected_outed_alphas),
len(actual_outer_alphas),
actual_outer_alphas)
testing_utils.assert_equal_tensors(expected_outed_alphas, actual_outer_alphas)
def test_values_interval_ram(self):
metric = "interval"
strategy = "ram-hungry"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
users_count = len(input_table)
expected_outed_alphas = []
for user_index in range(users_count):
jackknife_table = input_table[:user_index] + input_table[user_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_outed_alphas.append(outer_alpha)
prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(input_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
actual_outer_alphas = krippendorffs_alpha._bootstrap.observerwise_jackknife(prepared_data=prepared_input_table,
metric=metric,
strategy=strategy)
expected_outed_alphas = numpy.array(expected_outed_alphas)
assert len(expected_outed_alphas) == len(actual_outer_alphas), (len(expected_outed_alphas),
len(actual_outer_alphas),
actual_outer_alphas)
testing_utils.assert_equal_tensors(expected_outed_alphas, actual_outer_alphas)
def test_values_ratio_ram(self):
metric = "ratio"
strategy = "ram-hungry"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
users_count = len(input_table)
expected_outed_alphas = []
for user_index in range(users_count):
jackknife_table = input_table[:user_index] + input_table[user_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_outed_alphas.append(outer_alpha)
prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(input_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
actual_outer_alphas = krippendorffs_alpha._bootstrap.observerwise_jackknife(prepared_data=prepared_input_table,
metric=metric,
strategy=strategy)
expected_outed_alphas = numpy.array(expected_outed_alphas)
assert len(expected_outed_alphas) == len(actual_outer_alphas), (len(expected_outed_alphas),
len(actual_outer_alphas),
actual_outer_alphas)
testing_utils.assert_equal_tensors(expected_outed_alphas, actual_outer_alphas)
def test_values_nominal_time(self):
metric = "nominal"
strategy = "time-hungry"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
users_count = len(input_table)
expected_outed_alphas = []
for user_index in range(users_count):
jackknife_table = input_table[:user_index] + input_table[user_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_outed_alphas.append(outer_alpha)
prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(input_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
actual_outer_alphas = krippendorffs_alpha._bootstrap.observerwise_jackknife(prepared_data=prepared_input_table,
metric=metric,
strategy=strategy)
expected_outed_alphas = numpy.array(expected_outed_alphas)
assert len(expected_outed_alphas) == len(actual_outer_alphas), (len(expected_outed_alphas),
len(actual_outer_alphas),
actual_outer_alphas)
testing_utils.assert_equal_tensors(expected_outed_alphas, actual_outer_alphas)
def test_values_interval_time(self):
metric = "interval"
strategy = "time-hungry"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
users_count = len(input_table)
expected_outed_alphas = []
for user_index in range(users_count):
jackknife_table = input_table[:user_index] + input_table[user_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_outed_alphas.append(outer_alpha)
prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(input_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
actual_outer_alphas = krippendorffs_alpha._bootstrap.observerwise_jackknife(prepared_data=prepared_input_table,
metric=metric,
strategy=strategy)
expected_outed_alphas = numpy.array(expected_outed_alphas)
assert len(expected_outed_alphas) == len(actual_outer_alphas), (len(expected_outed_alphas),
len(actual_outer_alphas),
actual_outer_alphas)
testing_utils.assert_equal_tensors(expected_outed_alphas, actual_outer_alphas)
def test_values_ratio_time(self):
metric = "ratio"
strategy = "time-hungry"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
users_count = len(input_table)
expected_outed_alphas = []
for user_index in range(users_count):
jackknife_table = input_table[:user_index] + input_table[user_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_outed_alphas.append(outer_alpha)
prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(input_table,
header=False,
row_legend=False,
upper_level="observer",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
actual_outer_alphas = krippendorffs_alpha._bootstrap.observerwise_jackknife(prepared_data=prepared_input_table,
metric=metric,
strategy=strategy)
expected_outed_alphas = numpy.array(expected_outed_alphas)
assert len(expected_outed_alphas) == len(actual_outer_alphas), (len(expected_outed_alphas),
len(actual_outer_alphas),
actual_outer_alphas)
testing_utils.assert_equal_tensors(expected_outed_alphas, actual_outer_alphas)
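The leave-one-out pattern these tests exercise — drop one observer, recompute alpha, collect the estimates — is ordinary jackknife resampling. The resampling skeleton in isolation (the `statistic` here is just the mean, not Krippendorff's alpha):

```python
def jackknife(samples, statistic):
    """Return the leave-one-out estimates of `statistic` over `samples`."""
    return [statistic(samples[:i] + samples[i + 1:]) for i in range(len(samples))]

mean = lambda xs: sum(xs) / len(xs)
estimates = jackknife([1.0, 2.0, 3.0], mean)
print(estimates)  # [2.5, 2.0, 1.5]
```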
class TestUnitwiseJackknife(object):
def test_e_nominal(self):
metric = "nominal"
with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
input_table = [row for row in csv.reader(f, delimiter="\t")]
input_table = input_table[1:]
input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
input_table = numpy.transpose(input_table).tolist()
assert isinstance(input_table, list)
expected_alphas = []
for unit_index in range(len(input_table)):
jackknife_table = input_table[:unit_index] + input_table[unit_index + 1:]
prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
jackknife_table,
header=False,
row_legend=False,
upper_level="unit",
value_constructor=None,
possible_values=[1, 2, 3, 4, 5])
outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
expected_alphas.append(outer_alpha)
        prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(
            input_table,
            header=False,
            row_legend=False,
            upper_level="unit",
            value_constructor=None,
            possible_values=[1, 2, 3, 4, 5])
        actual_outer_alphas = krippendorffs_alpha._bootstrap.unitwise_jackknife(
            prepared_data=prepared_input_table,
            metric=metric)
        expected_alphas = numpy.array(expected_alphas)
        testing_utils.assert_equal_tensors(expected_alphas, actual_outer_alphas)

    def test_e_interval(self):
        metric = "interval"
        with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
            input_table = [row for row in csv.reader(f, delimiter="\t")]
        input_table = input_table[1:]
        input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
        input_table = numpy.transpose(input_table).tolist()
        assert isinstance(input_table, list)
        expected_alphas = []
        for unit_index in range(len(input_table)):
            jackknife_table = input_table[:unit_index] + input_table[unit_index + 1:]
            prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
                jackknife_table,
                header=False,
                row_legend=False,
                upper_level="unit",
                value_constructor=None,
                possible_values=[1, 2, 3, 4, 5])
            outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
            expected_alphas.append(outer_alpha)
        prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(
            input_table,
            header=False,
            row_legend=False,
            upper_level="unit",
            value_constructor=None,
            possible_values=[1, 2, 3, 4, 5])
        actual_outer_alphas = krippendorffs_alpha._bootstrap.unitwise_jackknife(
            prepared_data=prepared_input_table,
            metric=metric)
        expected_alphas = numpy.array(expected_alphas)
        testing_utils.assert_equal_tensors(expected_alphas, actual_outer_alphas)

    def test_e_ratio(self):
        metric = "ratio"
        with open(os.path.join("tests", "example_E_data.tsv"), "r") as f:
            input_table = [row for row in csv.reader(f, delimiter="\t")]
        input_table = input_table[1:]
        input_table = [[int(val) if val.strip() != "NULL" else None for val in row[1:]] for row in input_table]
        input_table = numpy.transpose(input_table).tolist()
        assert isinstance(input_table, list)
        expected_alphas = []
        for unit_index in range(len(input_table)):
            jackknife_table = input_table[:unit_index] + input_table[unit_index + 1:]
            prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
                jackknife_table,
                header=False,
                row_legend=False,
                upper_level="unit",
                value_constructor=None,
                possible_values=[1, 2, 3, 4, 5])
            outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
            expected_alphas.append(outer_alpha)
        prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(
            input_table,
            header=False,
            row_legend=False,
            upper_level="unit",
            value_constructor=None,
            possible_values=[1, 2, 3, 4, 5])
        actual_outer_alphas = krippendorffs_alpha._bootstrap.unitwise_jackknife(
            prepared_data=prepared_input_table,
            metric=metric)
        expected_alphas = numpy.array(expected_alphas)
        testing_utils.assert_equal_tensors(expected_alphas, actual_outer_alphas)

    def test_wikipedia_nominal(self):
        metric = "nominal"
        with open(os.path.join("tests", "example_wikipedia.csv"), "r") as f:
            input_table = [row for row in csv.reader(f, delimiter=",")]
        input_table = input_table[1:]
        input_table = [[int(val) if val.strip() != "*" else None for val in row[1:]] for row in input_table]
        input_table = numpy.transpose(input_table).tolist()
        assert isinstance(input_table, list)
        expected_alphas = []
        for unit_index in range(len(input_table)):
            jackknife_table = input_table[:unit_index] + input_table[unit_index + 1:]
            prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
                jackknife_table,
                header=False,
                row_legend=False,
                upper_level="unit",
                value_constructor=None,
                possible_values=[1, 2, 3, 4, 5])
            outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
            expected_alphas.append(outer_alpha)
        prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(
            input_table,
            header=False,
            row_legend=False,
            upper_level="unit",
            value_constructor=None,
            possible_values=[1, 2, 3, 4, 5])
        actual_outer_alphas = krippendorffs_alpha._bootstrap.unitwise_jackknife(
            prepared_data=prepared_input_table,
            metric=metric)
        expected_alphas = numpy.array(expected_alphas)
        testing_utils.assert_equal_tensors(expected_alphas, actual_outer_alphas)

    def test_wikipedia_interval(self):
        metric = "interval"
        with open(os.path.join("tests", "example_wikipedia.csv"), "r") as f:
            input_table = [row for row in csv.reader(f, delimiter=",")]
        input_table = input_table[1:]
        input_table = [[int(val) if val.strip() != "*" else None for val in row[1:]] for row in input_table]
        input_table = numpy.transpose(input_table).tolist()
        assert isinstance(input_table, list)
        expected_alphas = []
        for unit_index in range(len(input_table)):
            jackknife_table = input_table[:unit_index] + input_table[unit_index + 1:]
            prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
                jackknife_table,
                header=False,
                row_legend=False,
                upper_level="unit",
                value_constructor=None,
                possible_values=[1, 2, 3, 4, 5])
            outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
            expected_alphas.append(outer_alpha)
        prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(
            input_table,
            header=False,
            row_legend=False,
            upper_level="unit",
            value_constructor=None,
            possible_values=[1, 2, 3, 4, 5])
        actual_outer_alphas = krippendorffs_alpha._bootstrap.unitwise_jackknife(
            prepared_data=prepared_input_table,
            metric=metric)
        expected_alphas = numpy.array(expected_alphas)
        testing_utils.assert_equal_tensors(expected_alphas, actual_outer_alphas)

    def test_wikipedia_ratio(self):
        metric = "ratio"
        with open(os.path.join("tests", "example_wikipedia.csv"), "r") as f:
            input_table = [row for row in csv.reader(f, delimiter=",")]
        input_table = input_table[1:]
        input_table = [[int(val) if val.strip() != "*" else None for val in row[1:]] for row in input_table]
        input_table = numpy.transpose(input_table).tolist()
        assert isinstance(input_table, list)
        expected_alphas = []
        for unit_index in range(len(input_table)):
            jackknife_table = input_table[:unit_index] + input_table[unit_index + 1:]
            prepared_jackknife_table = krippendorffs_alpha.data_converters.from_list_of_lists(
                jackknife_table,
                header=False,
                row_legend=False,
                upper_level="unit",
                value_constructor=None,
                possible_values=[1, 2, 3, 4, 5])
            outer_alpha = krippendorffs_alpha._calculation.calc_alpha(prepared_jackknife_table, metric=metric)
            expected_alphas.append(outer_alpha)
        prepared_input_table = krippendorffs_alpha.data_converters.from_list_of_lists(
            input_table,
            header=False,
            row_legend=False,
            upper_level="unit",
            value_constructor=None,
            possible_values=[1, 2, 3, 4, 5])
        actual_outer_alphas = krippendorffs_alpha._bootstrap.unitwise_jackknife(
            prepared_data=prepared_input_table,
            metric=metric)
        expected_alphas = numpy.array(expected_alphas)
        testing_utils.assert_equal_tensors(expected_alphas, actual_outer_alphas)
#!/usr/bin/env python
# encoding: utf-8
"""
@author: jingjingli
@software: PyCharm
@file: generateProxiesProcessor.py
@time: 2019/1/21
@describe: proxypool -- proxy IP crawler
"""
import os
import sys
from pyquery import PyQuery
import requests
import json
import time
import re
sys.path.append(os.path.abspath(os.path.dirname(__file__) + '/' + '..'))
sys.path.append("..")
from BaseFile.ReadConfig import ReadConfig
from Common.RedisHelperLongConncet import RedisHelperConnect
from Config.HEADERS import HEADERS
from Common.MySqlHelper import MysqlHelper
SUM = 0
CHECKOUTCONFIG = ReadConfig().get_conf("../Config/PROXYCONFIG.yaml")["spiderConfig"]
# Fetch one proxy up front so the target sites do not block the crawler itself
PROXIESIP = CHECKOUTCONFIG["proxiesIP"]['proxies']
# Shared fetch helper used by all site-specific crawlers
def spider_common(url, headers, timeout, encoding):
    # timeout is given in seconds by the caller
    try:
        html = requests.get(url, headers=headers, timeout=timeout)
        html.encoding = encoding
        html_headers = html.headers
        html_text = html.text
        print("0 == html code ==>", html.status_code)
        print("1 == html headers ==>", html_headers)
        print("2 == html url ==>", url)
        return html_text
    except Exception as e:
        print("SPIDER COMMON:", e)
# Fetch through a rotating proxy, for sites that block direct requests
def spider_common_ip(proxiesIP, url, headers, timeout, encoding):
    # timeout is given in seconds by the caller
    try:
        # html = requests.get("http://123.207.35.36:5010/get/")
        html = requests.get(proxiesIP)
        proxies = {"http": "http://%s" % html.text}
        html = requests.get(url, headers=headers, timeout=timeout, proxies=proxies)
        html.encoding = encoding
        html_headers = html.headers
        html_text = html.text
        print("0 == html code ==>", html.status_code)
        print("1 == html headers ==>", html_headers)
        print("2 == html url ==>", url)
        return html_text
    except Exception as e:
        global NETWORK_STATUS
        # the request timed out; flip the status flag
        NETWORK_STATUS = False
        if NETWORK_STATUS == False:
            for i in range(1, 10):  # retry on timeout
                try:
                    # html = requests.get("http://123.207.35.36:5010/get/")
                    html = requests.get(proxiesIP)
                    proxies = {"http": "http://%s" % html.text}
                    print('Request timed out, retry attempt %s' % i, proxies)
                    html = requests.get(url, headers=headers, timeout=timeout, proxies=proxies)
                    if html.status_code == 200:
                        html_headers = html.headers
                        html_text = html.text
                        print("0 == html code ==>", html.status_code)
                        print("1 == html headers ==>", html_headers)
                        print("2 == html url ==>", url)
                        return html_text
                    else:
                        print("********************", html.text)
                        NETWORK_STATUS = True
                except:
                    print("*************")
    return -1
def get_qydaili_html(url):
    try:
        html_text = spider_common(url, HEADERS["qydaili"], 5, 'utf-8')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_qydaili_html(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc("table.table.table-bordered.table-striped > tbody > tr").items()
        ipInfo_list = []
        for i in trs:
            td = i.text()
            tdlist = td.split('\n')
            ip = tdlist[0]
            port = tdlist[1]
            anonymity = tdlist[2]
            if anonymity == "高匿":
                anonymity = "2"
            elif anonymity == "匿名":
                anonymity = "1"
            else:
                anonymity = "0"
            iptype = tdlist[3]
            if "unchina" in url:
                country = tdlist[4].split(" ", 1)[0]
                area = tdlist[4].split(" ", 1)[1]
            else:
                country = "中国"
                area = tdlist[4].replace("中国", "").strip()
            source = "qydaili"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (str(ip), str(port), str(anonymity), str(iptype), country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_xicidaili_html(url):
    try:
        html_text = spider_common(url, HEADERS["xicidaili"], 5, 'utf-8')
        print(html_text)
        doc = PyQuery(html_text)
        trs = doc("table#ip_list > tr").items()
        ipInfo_list = []
        for i in trs:
            td = i("td").text()
            if len(td) != 0:
                tdlist = i("td").text().split(" ")
                # print(tdlist)
                ip = tdlist[1]
                port = tdlist[2]
                anonymity = tdlist[4]
                if anonymity == "高匿":
                    anonymity = "2"
                elif anonymity == "透明":
                    anonymity = "0"
                else:
                    anonymity = "1"
                iptype = tdlist[5]
                country = "中国"
                area = tdlist[3]
                source = "xicidaili"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
                    # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                    # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                    # print(prames)
                    # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        # print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_data5u_html(url):
    try:
        html_text = spider_common(url, HEADERS["data5u"], 5, 'utf-8')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_data5u_html(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc("body > div:nth-child(7) > ul > li:nth-child(2) > ul.l2").items()
        ipInfo_list = []
        for i in trs:
            li_list = i.text().split("\n")
            ip = li_list[0]
            port = li_list[1]
            anonymity = li_list[2]
            if anonymity == "高匿":
                anonymity = "2"
            elif anonymity == "透明":
                anonymity = "0"
            else:
                anonymity = "1"
            iptype = li_list[3]
            country = li_list[4]
            area = li_list[5]
            source = "data5u"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_66ip_country_html(url):
    try:
        html_text = spider_common(url, HEADERS["66ip"], 5, 'gbk')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_66ip_country_html(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc('body > div:nth-child(3) > table > tr > td > ul > li').items()
        for i in trs:
            li = i("a").attr("href")
            if "areaindex" in li:
                url = "http://www.66ip.cn" + li
                get_66ip_city_ipInfo(url)
            if "areaindex" not in li:
                url = "http://www.66ip.cn"
                get_66ip_abroad_ipInfo(url)
    except Exception as e:
        print("SPIDER COMMON:", e)
# Regional (per-city) proxies
def get_66ip_city_ipInfo(url):
    try:
        html_text = spider_common(url, HEADERS["66ip"], 5, 'gbk')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_66ip_city_ipInfo(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc('#footer > div > table > tr').items()
        ipInfo_list = []
        for i in trs:
            td = i("td").text()
            if "端口号" not in td:
                td_list = td.split(" ")
                ip = td_list[0]
                port = td_list[1]
                anonymity = td_list[3]
                if anonymity == "高匿代理":
                    anonymity = "2"
                elif anonymity == "匿名":
                    anonymity = "1"
                else:
                    anonymity = "0"
                iptype = ""
                country = "中国"
                area = td_list[2]
                source = '66ip'
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
                    # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                    # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                    # print(prames)
                    # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        # print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# Foreign proxies
def get_66ip_abroad_ipInfo(url):
    try:
        html_text = spider_common(url, HEADERS["66ip"], 5, 'gbk')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_66ip_abroad_ipInfo(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc('#main > div > div:nth-child(1) > table > tr').items()
        ipInfo_list = []
        for i in trs:
            td = i("td").text()
            if "端口号" not in td:
                td_list = td.split(" ")
                ip = td_list[0]
                port = td_list[1]
                anonymity = td_list[3]
                if anonymity == "高匿代理":
                    anonymity = "2"
                elif anonymity == "匿名":
                    anonymity = "1"
                else:
                    anonymity = "0"
                iptype = ""
                country = "中国"
                area = td_list[2]
                source = '66ip'
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
                    # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                    # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                    # print(prames)
                    # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        # print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_kuaidaili_html(url):
    try:
        html_text = spider_common(url, HEADERS["kuaidaili"], 5, 'utf-8')
        print(html_text)
        doc = PyQuery(html_text)
        trs = doc("#freelist > table > tbody > tr").items()
        ipInfo_list = []
        for i in trs:
            td = i("td").text().replace("中国", "").strip()
            if 'HTTP, HTTPS' in td:
                td = td.replace("HTTP, HTTPS", "HTTP")
            else:
                td = td
            td_list = td.split(" ")
            while '' in td_list:
                td_list.remove('')
            # print(td_list)
            ip = td_list[0]
            port = td_list[1]
            anonymity = td_list[2]
            if anonymity == "高匿名":
                anonymity = "2"
            elif anonymity == "匿名":
                anonymity = "1"
            else:
                anonymity = "0"
            iptype = td_list[3]
            country = "中国"
            area = td_list[6]
            source = "kuaidaili"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_goubanjia_html(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["goubanjia"], 10, 'utf-8')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_goubanjia_html(url)
            else:
                return
        doc = PyQuery(html_text)
        tds = doc("#services > div > div.row > div > div > div > table > tbody > tr > td.ip").items()
        trs = doc("#services > div > div.row > div > div > div > table > tbody > tr").items()
        ipInfo_list = []
        for i in trs:
            ip_str = str(i.html())
            re_style = re.compile(r'<\s*p[^>]*>[^<]*<\s*/\s*p\s*>', re.I)  # decoy <p> elements used for obfuscation
            s = re.sub(re_style, '', ip_str)  # strip them before reading the IP
            ip_doc = PyQuery(s).text()
            ip_list = ip_doc.replace("\n", "")
            td_list = ip_list.split(":")
            ip = td_list[0]
            td = i("td").text()
            if i("td").attr('class') == 'ip':
                td = td.replace("\n", "")
            else:
                td = td
            td_list = td.split()
            # ip = td_list[0].split(":")[0]
            port = td_list[0].split(":")[1]
            anonymity = td_list[1]
            if anonymity == "透明":
                anonymity = "0"
            elif anonymity == "匿名":
                anonymity = "1"
            else:
                anonymity = "2"
            iptype = td_list[2]
            country = td_list[3]
            area = td_list[4] + td_list[5]
            source = "quanwangdailiip"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_ip3366_html(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["ip3366"], 10, 'gbk')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text or "网站防火墙" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_ip3366_html(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc('#list > table > tbody > tr').items()
        ipInfo_list = []
        for i in trs:
            ipInfoList = i('td').text().split(' ')
            ip = ipInfoList[0]
            port = ipInfoList[1]
            anonymity = ipInfoList[2]
            if anonymity == '透明代理IP':
                anonymity = '0'
            elif anonymity == '普通代理IP':
                anonymity = '1'
            else:
                anonymity = '2'
            iptype = ipInfoList[3]
            if 'stype=1' in url or 'stype=2' in url:
                country = '中国'
            else:
                country = ""
            area = ipInfoList[4].replace("高匿_", "").replace("SSL高匿_", "")
            source = 'ip3366'
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), anonymity, iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_zdaye_html(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["zdaye"], 10, 'gb2312')
        print(html_text)
        doc = PyQuery(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_zdaye_html(url)
            else:
                return
        alis = doc(
            'body > div.container.mt40 > div.container.mt40 > div.col-md-3.admin_arrow_box > div:nth-child(1) > a:nth-child(2)').items()
        for i in alis:
            url = 'http://ip.zdaye.com' + i.attr('href')
            print(url)
            get_zdaye_ipUrl(url)
    except Exception as e:
        print("SPIDER COMMON:", e)
def get_zdaye_ipUrl(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["zdaye"], 10, 'gb2312')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_zdaye_ipUrl(url)
            else:
                return
        doc = PyQuery(html_text)
        a1 = doc(
            'body > div.container.mt40 > div.container.mt40 > div.col-md-9 > div > div > div.panel.panel-success > div.panel-body > div.row > div:nth-child(2) > div:nth-child(2) > div.title > a').attr(
            "href")
        a2 = doc(
            'body > div.container.mt40 > div.container.mt40 > div.col-md-9 > div > div > div.panel.panel-success > div.panel-body > div.row > div:nth-child(2) > div:nth-child(3) > div.title > a').attr(
            "href")
        url1 = "http://ip.zdaye.com" + str(a1)
        url2 = "http://ip.zdaye.com" + str(a2)
        get_zdaye_ipInfo(url1)
        get_zdaye_ipInfo(url2)
    except Exception as e:
        print("SPIDER COMMON:", e)
def get_zdaye_ipInfo(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["zdaye"], 5, 'gb2312')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_zdaye_ipInfo(url)
            else:
                return
        doc = PyQuery(html_text)
        brs = doc('div.col-md-9 > div > div > div > div.panel-body > div > div:nth-child(2) > div.cont').text()
        ipInfo_list = []
        iplist = brs.split("\n")
        for i in iplist:
            ip = i.split(':')[0]
            port = i.split(":")[1].split("@")[0]
            iptype = i.split(":")[1].split("@")[1].split("#")[0]
            area = i.split(":")[1].split("@")[1].split("#")[1].split("]")[1].split(' ')[0]
            anonymity = i.split(":")[1].split("@")[1].split("#")[1].split("]")[0].lstrip('[')
            if anonymity == '透明':
                anonymity = '0'
            elif anonymity == '高匿名':
                anonymity = '2'
            else:
                anonymity = '1'
            title = doc(
                'div.taglineWrap > div > div.col-md-9 > div > div > div > div.panel-body > div > div:nth-child(2) > div.title').text()
            if '国内' in title:
                country = '中国'
            else:
                country = area
                area = ''
            source = "zdaye"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        # print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_89ip_html(url):
    try:
        html_text = spider_common(url, HEADERS["89ip"], 5, 'utf-8')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_89ip_html(url)
            else:
                return
        doc = PyQuery(html_text)
        tr = doc(
            'div.layui-row.layui-col-space15 > div.layui-col-md8 > div > div.layui-form > table > tbody > tr').items()
        ipInfo_list = []
        for i in tr:
            ipInfo = i('td').text().split(' ')
            ip = ipInfo[0]
            port = ipInfo[1]
            anonymity = ""
            iptype = ""
            country = "中国"
            area = ipInfo[2]
            source = "89ip"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_xsdaili_html(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["xsdaili"], 5, 'utf-8')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text or "404 Not Found" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_xsdaili_html(url)
            else:
                return
        doc = PyQuery(html_text)
        alis = doc('div.taglineWrap > div > div.col-md-3.admin_arrow_box > div > a:nth-child(2)').items()
        for i in alis:
            if i.attr('href') != '/':
                url = "http://www.xsdaili.com/" + i.attr('href')
                get_xsdaili_ipUrl(url)
    except Exception as e:
        print("SPIDER COMMON:", e)
def get_xsdaili_ipUrl(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["xsdaili"], 5, 'utf-8')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_xsdaili_ipUrl(url)
            else:
                return
        doc = PyQuery(html_text)
        divs = doc(
            'div.taglineWrap > div > div.col-md-9 > div > div > div > div.panel-body > div > div:nth-child(2)').items()
        for i in divs:
            url1 = 'http://www.xsdaili.com' + i('div:nth-child(2) > div.cont > a').attr('href')
            url2 = 'http://www.xsdaili.com' + i('div:nth-child(3) > div.cont > a').attr('href')
            url3 = 'http://www.xsdaili.com' + i('div:nth-child(4) > div.cont > a').attr('href')
            url4 = 'http://www.xsdaili.com' + i('div:nth-child(5) > div.cont > a').attr('href')
            get_xsdaili_ipInfo(url1)
            get_xsdaili_ipInfo(url2)
            get_xsdaili_ipInfo(url3)
            get_xsdaili_ipInfo(url4)
    except Exception as e:
        print("SPIDER COMMON:", e)
def get_xsdaili_ipInfo(url):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["xsdaili"], 5, 'gbk')
        print(html_text)
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_xsdaili_ipInfo(url)
            else:
                return
        doc = PyQuery(html_text)
        brs = doc('div.col-md-9 > div > div > div > div.panel-body > div > div:nth-child(2) > div.cont').text()
        ipInfo_list = []
        iplist = brs.split("\n")
        for i in iplist:
            ip = i.split(':')[0]
            port = i.split(":")[1].split("@")[0]
            iptype = i.split(":")[1].split("@")[1].split("#")[0]
            area = i.split(":")[1].split("@")[1].split("#")[1].split("]")[1].split(' ')[0]
            anonymity = i.split(":")[1].split("@")[1].split("#")[1].split("]")[0].lstrip('[')
            if anonymity == '透明':
                anonymity = '0'
            elif anonymity == '高匿名':
                anonymity = '2'
            else:
                anonymity = '1'
            title = doc(
                'div.taglineWrap > div > div.col-md-9 > div > div > div > div.panel-body > div > div:nth-child(2) > div.title').text()
            if '国内' in title:
                country = '中国'
            else:
                country = area
                area = ''
            source = "xsdaili"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
                # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                # print(prames)
                # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_31f_html(url):
    html_text = spider_common_ip(PROXIESIP, url, HEADERS["31f"], 10, 'utf-8')
    print(html_text)
    try:
        i = 0
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            i += 1
            if i != 10:
                time.sleep(5)
                get_31f_html(url)
            else:
                return
        doc = PyQuery(html_text)
        trs = doc("body > div > table.table.table-striped > tr").items()
        ipInfo_list = []
        for i in trs:
            td = i("td").text().strip()
            td_list = td.split(" ")
            while '' in td_list:
                td_list.remove('')
            if len(td_list) != 0:
                ip = td_list[1]
                port = td_list[2]
                anonymity = td_list[6]
                if anonymity == "transparent":
                    anonymity = "0"
                elif anonymity == "anonymous":
                    anonymity = "1"
                else:
                    anonymity = "2"
                iptype = ""
                country = "中国"
                area = td_list[3] + td_list[4]
                source = "31f"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
                    # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                    # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                    # print(prames)
                    # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        # print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_ab57_html(url):
    try:
        html_text = spider_common(url, HEADERS["ab57"], 5, 'utf-8')
        print(html_text)
        ipInfo = html_text.split("\n")
        ipInfo_list = []
        for i in ipInfo:
            if i != "" and len(i) != 0 and i != ":":
                ipAndPort = i.split(":")
                ip = ipAndPort[0]
                port = ipAndPort[1]
                anonymity = ""
                iptype = ""
                country = ""
                area = ""
                source = "ab57"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
                    # sql = "insert into proxiesinitial_test(ip,port,anonymity,iptype,country,area,source) VALUES (%s,%s,%s,%s,%s,%s,%s)"
                    # prames = (ip, str(port), str(anonymity), iptype, country, area, source)
                    # print(prames)
                    # MysqlHelper("proxiesinitial_test").insert(sql, *prames)
        # print(ipInfo_list)
        g = RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_atomintersoft_html(url):
    try:
        html_text = spider_common(url, HEADERS["atomintersoft"], 5, 'utf-8')
        doc = PyQuery(html_text)
        tr = doc('#node-20 > div.content.clear-block > fieldset:nth-child(2) > div > table > tbody > tr > td').items()
        for i in tr:
            url = "http://www.atomintersoft.com" + i("a").attr("href")
            print(url)
            get_atomintersoft_ipInfo(url)
    except Exception as e:
        print("SPIDER COMMON:", e)

def get_atomintersoft_ipInfo(url):
    try:
        html_text = spider_common(url, HEADERS["atomintersoft"], 5, 'utf-8')
        doc = PyQuery(html_text)
        tr = doc('div.node > div.content > fieldset.collapsible:eq(1) > div.form-item > table > thead > tr').items()
        ipInfo_list = []
        for i in tr:
            td = i('td:nth-child(1)').items()
            for j in td:
                ele = j.text().split('\n')
                ip = ele[0].split(":")[0]
                port = ele[0].split(":")[1]
                anonymity = ele[3]
                if anonymity == "Transparent":
                    anonymity = "0"
                elif anonymity == "High anonymity":
                    anonymity = "2"
                else:
                    anonymity = "1"
                iptype = ""
                country = ele[2]
                area = ""
                source = "atomintersoft"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_rmccurdy_html(url):
    try:
        html_text = spider_common(url, HEADERS["rmccurdy"], 5, 'utf-8')
        ipInfo = html_text.split("\n")
        ipInfo_list = []
        for i in ipInfo:
            if i and i != ":":
                ipAndPort = i.split(":")
                ip = ipAndPort[0]
                port = ipAndPort[1]
                anonymity = ""
                iptype = ""
                country = ""
                area = ""
                source = "rmccurdy"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
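Every collector above follows the same dedup-then-queue flow: `redis_sadd` on the `proxiesDuplicates` set returns 1 only for IPs not seen before, and only those rows are batched onto `proxiesInitial`. A sketch of that flow with a plain Python `set` standing in for Redis (the `seen` set and the function name are illustrative stand-ins, not the real `RedisHelperConnect` API):

```python
def collect_new_proxies(rows, seen):
    """Keep only rows whose IP has not been seen before.

    `row["ip"] not in seen` mimics SADD's 1-for-new / 0-for-duplicate reply.
    """
    fresh = []
    for row in rows:
        is_new = row["ip"] not in seen  # stand-in for redis_sadd(...) == 1
        seen.add(row["ip"])
        if is_new:
            fresh.append(row)
    return fresh
```

Because the dedup set persists across calls, re-running a collector on the same page contributes nothing new to the queue.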
def get_iphai_html(url):
    try:
        html_text = spider_common(url, HEADERS["iphai"], 5, 'utf-8')
        # 0 - parse the page with PyQuery
        doc = PyQuery(html_text)
        tr = doc('body > div.container.main-container > div.table-responsive.module > table > tr').items()
        ipInfo_list = []
        for i in tr:
            ths = i("td").text().strip().replace("\n", "")
            if ths:
                parts = ths.split(" ")
                print(parts)
                ip = parts[0]
                port = parts[1]
                anonymity = parts[2]
                if anonymity == "普匿":
                    anonymity = "1"
                else:
                    anonymity = "2"
                iptype = parts[3]
                if "HTTP" in iptype and "HTTPS" in iptype or iptype == '':
                    iptype = "http"
                raw_country = parts[4]
                if "中国" in raw_country:
                    # Keep "中国" as the country and the remainder as the area.
                    country = "中国"
                    area = raw_country.replace("中国", "")
                else:
                    country = raw_country
                    area = ""
                source = "iphai"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_jiangxianli_html(url, retry=0):
    try:
        html_text = spider_common(url, HEADERS["jiangxianli"], 5, 'utf-8')
        # Anti-bot challenge page: wait and retry, giving up after 10 attempts.
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            if retry < 10:
                time.sleep(5)
                return get_jiangxianli_html(url, retry + 1)
            return
        print(html_text)
        doc = PyQuery(html_text)
        trs = doc(
            "body > div.row > div > div.box > div.box-body.table-responsive.no-padding > table > tbody > tr").items()
        ipInfo_list = []
        for tr in trs:
            td_list = tr("td").text().split(" ")
            ip = td_list[1]
            port = td_list[2]
            anonymity = td_list[3]
            if anonymity == "透明":
                anonymity = "0"
            elif anonymity == "高匿":
                anonymity = "2"
            else:
                anonymity = "1"
            iptype = td_list[4]
            country = td_list[5]
            if '' in td_list:
                area = ""
            else:
                area = td_list[6] + td_list[7]
            source = "jiangxianli"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_proxylistplus_html(url):
    try:
        html_text = spider_common(url, HEADERS["proxylistplus"], 5, 'utf-8')
        print(html_text)
        doc = PyQuery(html_text)
        h1 = doc('#page > table.bg > tr').items()
        ipInfo_list = []
        for j in h1:
            msg = j('td').text().strip().replace("\n", "")
            if msg:
                parts = msg.split(" ")
                ip = parts[0]
                port = parts[1]
                anonymity = parts[2]
                if anonymity == "transparent":
                    anonymity = "0"
                elif anonymity == "elite":
                    anonymity = "1"
                else:
                    anonymity = "2"
                country = parts[3]
                iptype = parts[5]
                if iptype == "yes":
                    iptype = "https"
                else:
                    iptype = "http"
                area = ""
                source = "proxylistplus"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
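Each source maps its own anonymity labels onto the same three-level code ("0" transparent, "1" anonymous, "2" elite/high-anonymity), and every source also has its own fallback level, so the if/elif ladders are easy to get out of sync. One way to centralise the mapping; the tables below only cover the sources and labels visible in this file and are a sketch, not a drop-in replacement:

```python
# source: (label -> code, fallback code), labels as each site renders them
ANONYMITY_CODES = {
    "31f": ({"transparent": "0", "anonymous": "1"}, "2"),
    "proxylistplus": ({"transparent": "0", "elite": "1"}, "2"),
    "jiangxianli": ({"透明": "0", "高匿": "2"}, "1"),
}


def anonymity_code(source, label):
    """Translate a source-specific anonymity label to the shared 0/1/2 code."""
    table, fallback = ANONYMITY_CODES[source]
    return table.get(label, fallback)
```

Adding a source then means adding one table entry rather than another if/elif chain.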
def get_github_html(url):
    try:
        html_text = spider_common(url, HEADERS["githubusercontent"], 5, 'utf-8')
        ipInfo_list = []
        for line in html_text.split("\n"):
            if line.replace("\n", "").replace(" ", "") != "":
                record = json.loads(line)  # parse each JSON line once
                ip = record["host"]
                country = record["country"]
                iptype = record.get("type", "")
                port = record["port"]
                anonymity = record["anonymity"]
                export_address = tuple(record["export_address"])
                if anonymity == "transparent" or len(export_address) == 2 or "unknown" in export_address:
                    anonymity = 0
                elif anonymity == "anonymous":
                    anonymity = 1
                else:
                    anonymity = 2
                area = ""
                source = "githubusercontent"
                trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
                if trueOrFalse == 1:
                    ipInfo_list.append(
                        {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                         "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
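`get_github_html` treats the feed as JSON Lines, one JSON object per line. The parsing step can be isolated and tested without any network or Redis; `parse_proxy_lines` is an illustrative helper whose field names (`host`, `port`, `anonymity`, `export_address`) mirror the ones read above:

```python
import json


def parse_proxy_lines(text):
    """Parse a JSON-Lines proxy feed, skipping blank lines."""
    records = []
    for line in text.split("\n"):
        if not line.strip():
            continue
        record = json.loads(line)
        export_address = tuple(record.get("export_address", ()))
        # Same classification rule as get_github_html above.
        if record.get("anonymity") == "transparent" or len(export_address) == 2 or "unknown" in export_address:
            anonymity = 0
        elif record.get("anonymity") == "anonymous":
            anonymity = 1
        else:
            anonymity = 2
        records.append({"ip": record["host"], "port": str(record["port"]), "anonymity": str(anonymity)})
    return records
```

Keeping the parser pure makes the classification rule easy to unit-test against a two-line sample feed.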
def get_thebigproxylist_html(url, retry=0):
    try:
        html_text = spider_common_ip(PROXIESIP, url, HEADERS["thebigproxylist"], 5, 'utf-8')
        print(html_text)
        # Anti-bot challenge page or 404: wait and retry, giving up after 10 attempts.
        if ('''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text
                or "function setCookie(name,value)" in html_text
                or "404 Not Found" in html_text):
            if retry < 10:
                time.sleep(5)
                return get_thebigproxylist_html(url, retry + 1)
            return
        ipInfoList = html_text.split("\n")
        ipInfo_list = []
        while '' in ipInfoList:
            ipInfoList.remove('')
        for i in ipInfoList:
            ipInfo = i.split(",")
            ip = ipInfo[0].split(":")[0]
            port = ipInfo[0].split(":")[1]
            anonymity = ""
            iptype = ipInfo[1]
            country = ""
            area = ""
            source = "thebigproxylist"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_freeproxylist_html(url):
    try:
        html_text = spider_common(url, HEADERS["free-proxy-list"], 5, 'utf-8')
        doc = PyQuery(html_text)
        tr = doc('body > div.wrapper > div.container > div > div > div.table-responsive > table > tbody > tr').items()
        ipInfo_list = []
        for i in tr:
            ipInfo = i("td").items()
            tdlist = []
            for j in ipInfo:
                tdlist.append(j.text())
            ip = tdlist[0]
            port = tdlist[2]
            country = tdlist[3]
            area = tdlist[4]
            iptype = tdlist[8]
            anonymity = tdlist[9]
            if anonymity == "High Anonymous":
                anonymity = "2"
            source = "free-proxy-list"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_cnProxy_html(url, retry=0):
    html_text = spider_common_ip(PROXIESIP, url, HEADERS["cnProxy"], 10, 'utf-8')
    print(html_text)
    try:
        # Anti-bot challenge page: wait and retry, giving up after 10 attempts.
        if '''setTimeout("location.replace(location.href.split(\\"#\\")[0])",2000);''' in html_text or "function setCookie(name,value)" in html_text:
            if retry < 10:
                time.sleep(5)
                return get_cnProxy_html(url, retry + 1)
            return
        doc = PyQuery(html_text)
        trs = doc("div.table-container > table.sortable > tbody > tr").items()
        ipInfo_list = []
        for tr in trs:
            ip_list = tr("td").text().split(" ")
            ip = ip_list[0]
            port = ip_list[1]
            anonymity = ""
            iptype = ""
            country = "中国"
            area = ip_list[2]
            source = "cnProxy"
            trueOrFalse = RedisHelperConnect().redis_sadd("proxiesDuplicates", ip)
            if trueOrFalse == 1:
                ipInfo_list.append(
                    {"ip": ip, "port": str(port), "anonymity": str(anonymity), "type": iptype, "country": country,
                     "area": area, "source": source})
        print(ipInfo_list)
        RedisHelperConnect().redis_rpush_batch("proxiesInitial", ipInfo_list)
    except Exception as e:
        print("SPIDER COMMON:", e)
# ***********************************************************************************************************************
def get_ihuan_html(url):
    html_text = spider_common_ip(PROXIESIP, url, HEADERS["ihuan"], 5, 'utf-8')
    print(html_text)
    if "输入验证码证明您不是机器人,输入后可以暂时浏览网站" in html_text:
        time.sleep(5)
        print("需要验证码")  # CAPTCHA required; wait and fetch again
        get_ihuan_html(url)
    else:
        doc = PyQuery(html_text)
        options = doc('div.col-md-10 > div.panel.panel-default > a').items()
        for i in options:
            countryUrl = "https://ip.ihuan.me" + str(i.attr("href"))
            print(countryUrl)


def get_ihuan_ipInfo(url):
    html_text = spider_common_ip(PROXIESIP, url, HEADERS["ihuan"], 5, 'utf-8')
    print(html_text)
    if "输入验证码证明您不是机器人,输入后可以暂时浏览网站" in html_text:
        time.sleep(5)
        print("需要验证码")  # CAPTCHA required; wait and fetch again
        get_ihuan_ipInfo(url)
    else:
        doc = PyQuery(html_text)
        options = doc('div.col-md-10 > div.panel.panel-default > a').items()
        for i in options:
            countryUrl = "https://ip.ihuan.me" + str(i.attr("href"))
            print(countryUrl)
if __name__ == '__main__':
    print("sum:==>", SUM)
# ***********************************************************************************************************************
# File: test/test_chart_processing.py (repo: kevindurston21/YANOM-Note-O-Matic, license: MIT)
import os
import re
import pytest
import chart_processing
import conversion_settings
class Note:
    """Fake Note class to support testing"""
    def __init__(self):
        self.attachments = {}
        self.attachment_count = 0
        self.nsx_file = 'hello'
        self.note_json = {}
        self.notebook_folder_name = 'notebook_folder_name'
        self.conversion_settings = conversion_settings.ConversionSettings()
        self.image_count = 0
        self.title = 'title'
        self.parent_notebook = 'parent'
@pytest.mark.parametrize(
'input_html, expected', [
("""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"bar chart title","chartType":"bar","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
"""<p><img src="attachments/replaced_id_number.png"></p><p><a href="attachments/replaced_id_number.csv">Chart data file</a></p><p><table border="1" class="dataframe"><thead><tr style="text-align: right;"><th><strong></strong></th><th><strong>Number 1</strong></th><th><strong>Number 2</strong></th><th><strong>Number 3</strong></th><th><strong>Number 4</strong></th></tr></thead><tbody><tr><th><strong>Category A</strong></th><td>500</td><td>520</td><td>540</td><td>520</td></tr><tr><th><strong>Category B</strong></th><td>520</td><td>540</td><td>560</td><td>540</td></tr><tr><th><strong>Category C</strong></th><td>540</td><td>560</td><td>580</td><td>560</td></tr></tbody></table></p>""",
),
("""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Line Chart Title","chartType":"line","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
"""<p><img src="attachments/replaced_id_number.png"></p><p><a href="attachments/replaced_id_number.csv">Chart data file</a></p><p><table border="1" class="dataframe"><thead><tr style="text-align: right;"><th><strong></strong></th><th><strong>Number 1</strong></th><th><strong>Number 2</strong></th><th><strong>Number 3</strong></th><th><strong>Number 4</strong></th></tr></thead><tbody><tr><th><strong>Category A</strong></th><td>500</td><td>520</td><td>540</td><td>520</td></tr><tr><th><strong>Category B</strong></th><td>520</td><td>540</td><td>560</td><td>540</td></tr><tr><th><strong>Category C</strong></th><td>540</td><td>560</td><td>580</td><td>560</td></tr></tbody></table></p>""",
),
("""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Pie chart title","chartType":"pie","xAxisTitle":"x-axis title","yAxisTitle":"y axis ttile"}' chart-data='[["","cost","price","value","total value"],["something",500,520,540,520],["something else",520,540,560,540],["another thing",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
"""<p><img src="attachments/replaced_id_number.png"></p><p><a href="attachments/replaced_id_number.csv">Chart data file</a></p><p><table border="1" class="dataframe"><thead><tr style="text-align: right;"><th><strong></strong></th><th><strong>cost</strong></th><th><strong>price</strong></th><th><strong>value</strong></th><th><strong>total value</strong></th><th><strong>sum</strong></th><th><strong>percent</strong></th></tr></thead><tbody><tr><th><strong>something</strong></th><td>500</td><td>520</td><td>540</td><td>520</td><td>2080</td><td>32.10</td></tr><tr><th><strong>something else</strong></th><td>520</td><td>540</td><td>560</td><td>540</td><td>2160</td><td>33.33</td></tr><tr><th><strong>another thing</strong></th><td>540</td><td>560</td><td>580</td><td>560</td><td>2240</td><td>34.57</td></tr></tbody></table></p>""",
),
], ids=['bar-chart', 'line-chart', 'pie-chart']
)
def test_nsx_chart_processor_check_produced_html(input_html, expected):
    note = Note()
    chart_processor = chart_processing.NSXChartProcessor(note, input_html)

    # Attachment ids embed a timestamp whose digit count differs by platform.
    if os.name == 'nt':
        regex = r"\d{13}"
    else:
        regex = r"\d{15}"
    test_string = chart_processor.processed_html
    substitute_text = 'replaced_id_number'
    result = re.sub(regex, substitute_text, test_string, 0, re.MULTILINE)

    assert result == expected
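The platform-dependent id substitution above (13-digit timestamps on Windows, 15 elsewhere) could be factored into a helper shared by tests; `normalize_ids` is an illustrative name, not part of the module under test:

```python
import os
import re


def normalize_ids(html, placeholder="replaced_id_number"):
    """Replace platform-dependent numeric attachment ids with a fixed token."""
    digits = 13 if os.name == "nt" else 15
    return re.sub(r"\d{%d}" % digits, placeholder, html)
```

Any test can then compare `normalize_ids(chart_processor.processed_html)` against the expected HTML without repeating the `os.name` branch.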
def test_nsx_chart_processor_check_produced_html_does_not_include_chart_elements():
    note = Note()
    input_html = """<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"bar chart title","chartType":"bar","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div><div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Line Chart Title","chartType":"line","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div><div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Pie chart title","chartType":"pie","xAxisTitle":"x-axis title","yAxisTitle":"y axis ttile"}' chart-data='[["","cost","price","value","total value"],["something",500,520,540,520],["something else",520,540,560,540],["another thing",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div><div>Hello World</div>"""
    expected = '<div>Hello World</div>'

    chart_processor = chart_processing.NSXChartProcessor(note, input_html, create_image=False, create_csv=False, create_data_table=False)
    result = chart_processor.processed_html

    assert result == expected
@pytest.mark.parametrize(
'input_html', [
"""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"bar chart title","chartType":"bar","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
"""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Line Chart Title","chartType":"line","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
"""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Pie chart title","chartType":"pie","xAxisTitle":"x-axis title","yAxisTitle":"y axis ttile"}' chart-data='[["","cost","price","value","total value"],["something",500,520,540,520],["something else",520,540,560,540],["another thing",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
], ids=['bar-chart', 'line-chart', 'pie-chart']
)
def test_plot_chart_image_regression_test(input_html, image_regression):
    note = Note()
    chart_processor = chart_processing.NSXChartProcessor(note, input_html)
    for chart in chart_processor.charts:
        image_regression.check(chart._png_img_buffer.getvalue())
@pytest.mark.parametrize(
'input_html, csv', [
("""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"bar chart title","chartType":"bar","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
',Number 1,Number 2,Number 3,Number 4\nCategory A,500,520,540,520\nCategory B,520,540,560,540\nCategory C,540,560,580,560\n'
),
("""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Line Chart Title","chartType":"line","xAxisTitle":"x-axis title","yAxisTitle":"y-axis title"}' chart-data='[["","Number 1","Number 2","Number 3","Number 4"],["Category A",500,520,540,520],["Category B",520,540,560,540],["Category C",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
',Number 1,Number 2,Number 3,Number 4\nCategory A,500,520,540,520\nCategory B,520,540,560,540\nCategory C,540,560,580,560\n'
),
("""<div chart-config='{"range":"A1:E4","direction":"row","rowHeaderExisted":true,"columnHeaderExisted":true,"title":"Pie chart title","chartType":"pie","xAxisTitle":"x-axis title","yAxisTitle":"y axis ttile"}' chart-data='[["","cost","price","value","total value"],["something",500,520,540,520],["something else",520,540,560,540],["another thing",540,560,580,560]]' class="syno-ns-chart-object" style="width: 520px; height: 350px;"></div>""",
',cost,price,value,total value,sum,percent\nsomething,500,520,540,520,2080,32.098765432098766\nsomething else,520,540,560,540,2160,33.33333333333333\nanother thing,540,560,580,560,2240,34.5679012345679\n'
)
], ids=['bar-chart', 'line-chart', 'pie-chart']
)
def test_plot_chart_check_csv_content(input_html, csv):
    note = Note()
    chart_processor = chart_processing.NSXChartProcessor(note, input_html)
    for chart in chart_processor.charts:
        # Note - stripping '\r' keeps the comparison stable on Windows
        assert chart.csv_chart_data_string.replace('\r', '') == csv
# ***********************************************************************************************************************
# File: OnlineDelivery/migrations/0001_initial.py (repo: fallprojects/FoodDelivery, license: MIT)
# Generated by Django 3.1.3 on 2020-11-11 11:57
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Actions',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('first', models.CharField(max_length=200, null=True)),
                ('second', models.CharField(max_length=200, null=True)),
                ('third', models.CharField(max_length=200, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='Adresses',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('adress', models.CharField(max_length=200, null=True)),
                ('phone', models.CharField(max_length=200, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='Bowl',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50, null=True)),
                ('price', models.FloatField(null=True)),
                ('category', models.CharField(choices=[('Материал', 'Материал'), ('Производитель', 'Производитель'), ('Вид', 'Вид')], max_length=50, null=True)),
                ('decsription', models.CharField(blank=True, max_length=50, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='Coals',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50, null=True)),
                ('price', models.FloatField(null=True)),
                ('category', models.CharField(choices=[('Размер', 'Размер'), ('Производитель', 'Производитель')], max_length=50, null=True)),
                ('decsription', models.CharField(blank=True, max_length=50, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='Comments',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
            ],
        ),
        migrations.CreateModel(
            name='Hookah',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50, null=True)),
                ('price', models.FloatField(null=True)),
                ('category', models.CharField(choices=[('Материал', 'Материал'), ('Производитель', 'Производитель'), ('Высота шахты', 'Высота шахты')], max_length=50, null=True)),
                ('decsription', models.CharField(blank=True, max_length=50, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='Tabacco',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50, null=True)),
                ('price', models.FloatField(null=True)),
                ('category', models.CharField(choices=[('Крепость', 'Крепость'), ('Вкусопередача', 'Вкусопередача'), ('Производитель', 'Производитель')], max_length=50, null=True)),
                ('decsription', models.CharField(blank=True, max_length=50, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='Complect',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('bowl', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='OnlineDelivery.bowl')),
                ('coals', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='OnlineDelivery.coals')),
                ('hookah', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='OnlineDelivery.hookah')),
                ('tabacco', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='OnlineDelivery.tabacco')),
            ],
        ),
    ]
# ***********************************************************************************************************************
# File: Matrix_Inverse_Using_Numpy-Scipy.py (repo: ThomIves/MatrixInverse, license: Unlicense)
import numpy as np
from numpy.linalg import inv
a = np.array([[5., 3., 1.], [3., 9., 4.], [1., 3., 5.]])
print(a, '\n')
ainv = inv(a)
ainv = ainv.round(3)
print(ainv)
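A quick sanity check for the inversion snippet above (added here for illustration, not part of the original file): multiplying the matrix by its computed inverse should recover the identity matrix, up to floating-point tolerance.

```python
import numpy as np
from numpy.linalg import inv

a = np.array([[5., 3., 1.], [3., 9., 4.], [1., 3., 5.]])
ainv = inv(a)

# a @ ainv should be (numerically) the 3x3 identity matrix
assert np.allclose(a @ ainv, np.eye(3))
```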
| 16.269231 | 57 | 0.548463 | 68 | 423 | 3.411765 | 0.294118 | 0.068966 | 0.112069 | 0.12931 | 0.810345 | 0.810345 | 0.810345 | 0.810345 | 0.810345 | 0.810345 | 0 | 0.122449 | 0.189125 | 423 | 25 | 58 | 16.92 | 0.553936 | 0 | 0 | 0.823529 | 0 | 0 | 0.009709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.235294 | null | null | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
608aad4315ce2a7f4da81a405a4f995cc2afc452 | 3,649 | py | Python | array_api_tests/function_stubs/elementwise_functions.py | leofang/array-api-tests | 6789a05da5595e1e53b30db22fc51206dd825c1d | [
"MIT"
] | 1 | 2021-07-07T14:50:28.000Z | 2021-07-07T14:50:28.000Z | array_api_tests/function_stubs/elementwise_functions.py | leofang/array-api-tests | 6789a05da5595e1e53b30db22fc51206dd825c1d | [
"MIT"
] | null | null | null | array_api_tests/function_stubs/elementwise_functions.py | leofang/array-api-tests | 6789a05da5595e1e53b30db22fc51206dd825c1d | [
"MIT"
] | null | null | null | """
Function stubs for elementwise functions.
NOTE: This file is generated automatically by the generate_stubs.py script. Do
not modify it directly.
See
https://github.com/data-apis/array-api/blob/master/spec/API_specification/elementwise_functions.md
"""
from __future__ import annotations
from ._types import array
def abs(x: array, /) -> array:
pass
def acos(x: array, /) -> array:
pass
def acosh(x: array, /) -> array:
pass
def add(x1: array, x2: array, /) -> array:
pass
def asin(x: array, /) -> array:
pass
def asinh(x: array, /) -> array:
pass
def atan(x: array, /) -> array:
pass
def atan2(x1: array, x2: array, /) -> array:
pass
def atanh(x: array, /) -> array:
pass
def bitwise_and(x1: array, x2: array, /) -> array:
pass
def bitwise_left_shift(x1: array, x2: array, /) -> array:
pass
def bitwise_invert(x: array, /) -> array:
pass
def bitwise_or(x1: array, x2: array, /) -> array:
pass
def bitwise_right_shift(x1: array, x2: array, /) -> array:
pass
def bitwise_xor(x1: array, x2: array, /) -> array:
pass
def ceil(x: array, /) -> array:
pass
def cos(x: array, /) -> array:
pass
def cosh(x: array, /) -> array:
pass
def divide(x1: array, x2: array, /) -> array:
pass
def equal(x1: array, x2: array, /) -> array:
pass
def exp(x: array, /) -> array:
pass
def expm1(x: array, /) -> array:
pass
def floor(x: array, /) -> array:
pass
def floor_divide(x1: array, x2: array, /) -> array:
pass
def greater(x1: array, x2: array, /) -> array:
pass
def greater_equal(x1: array, x2: array, /) -> array:
pass
def isfinite(x: array, /) -> array:
pass
def isinf(x: array, /) -> array:
pass
def isnan(x: array, /) -> array:
pass
def less(x1: array, x2: array, /) -> array:
pass
def less_equal(x1: array, x2: array, /) -> array:
pass
def log(x: array, /) -> array:
pass
def log1p(x: array, /) -> array:
pass
def log2(x: array, /) -> array:
pass
def log10(x: array, /) -> array:
pass
def logical_and(x1: array, x2: array, /) -> array:
pass
def logical_not(x: array, /) -> array:
pass
def logical_or(x1: array, x2: array, /) -> array:
pass
def logical_xor(x1: array, x2: array, /) -> array:
pass
def multiply(x1: array, x2: array, /) -> array:
pass
def negative(x: array, /) -> array:
pass
def not_equal(x1: array, x2: array, /) -> array:
pass
def positive(x: array, /) -> array:
pass
def pow(x1: array, x2: array, /) -> array:
pass
def remainder(x1: array, x2: array, /) -> array:
pass
def round(x: array, /) -> array:
pass
def sign(x: array, /) -> array:
pass
def sin(x: array, /) -> array:
pass
def sinh(x: array, /) -> array:
pass
def square(x: array, /) -> array:
pass
def sqrt(x: array, /) -> array:
pass
def subtract(x1: array, x2: array, /) -> array:
pass
def tan(x: array, /) -> array:
pass
def tanh(x: array, /) -> array:
pass
def trunc(x: array, /) -> array:
pass
__all__ = ['abs', 'acos', 'acosh', 'add', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'bitwise_and', 'bitwise_left_shift', 'bitwise_invert', 'bitwise_or', 'bitwise_right_shift', 'bitwise_xor', 'ceil', 'cos', 'cosh', 'divide', 'equal', 'exp', 'expm1', 'floor', 'floor_divide', 'greater', 'greater_equal', 'isfinite', 'isinf', 'isnan', 'less', 'less_equal', 'log', 'log1p', 'log2', 'log10', 'logical_and', 'logical_not', 'logical_or', 'logical_xor', 'multiply', 'negative', 'not_equal', 'positive', 'pow', 'remainder', 'round', 'sign', 'sin', 'sinh', 'square', 'sqrt', 'subtract', 'tan', 'tanh', 'trunc']
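Since the file above is machine-generated by `generate_stubs.py`, a compact sketch of how such stubs can be emitted from a (name, arity) table may be useful context. This is an illustration only, not the actual generator's logic; the function lists here are abbreviated examples.

```python
UNARY = ["abs", "acos", "sin"]    # unary elementwise functions (sample)
BINARY = ["add", "pow"]           # binary elementwise functions (sample)

def emit_stub(name, arity):
    # Positional-only parameters, matching the spec's signatures.
    if arity > 1:
        params = ", ".join(f"x{i + 1}: array" for i in range(arity))
    else:
        params = "x: array"
    return f"def {name}({params}, /) -> array:\n    pass\n"

stubs = "".join(emit_stub(n, 1) for n in UNARY)
stubs += "".join(emit_stub(n, 2) for n in BINARY)
print(stubs)
```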
| 20.160221 | 601 | 0.59578 | 508 | 3,649 | 4.192913 | 0.187008 | 0.258216 | 0.361502 | 0.430986 | 0.614085 | 0.39061 | 0.322066 | 0.23662 | 0.035681 | 0 | 0 | 0.019642 | 0.21869 | 3,649 | 180 | 602 | 20.272222 | 0.727464 | 0.068238 | 0 | 0.486726 | 1 | 0 | 0.109375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.486726 | false | 0.486726 | 0.017699 | 0 | 0.504425 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
60a8cc7ad1fa85cf47e7b3db700884047e800520 | 129 | py | Python | virtual-scada/__init__.py | slaclab/VADER-Analytics | 9d2dd5b11b4f632eb511278c52aa8236f9e252f5 | [
"BSD-3-Clause-LBNL"
] | 6 | 2019-10-14T12:17:49.000Z | 2021-12-19T14:34:00.000Z | virtual-scada/__init__.py | slaclab/VADER-Analytics | 9d2dd5b11b4f632eb511278c52aa8236f9e252f5 | [
"BSD-3-Clause-LBNL"
] | null | null | null | virtual-scada/__init__.py | slaclab/VADER-Analytics | 9d2dd5b11b4f632eb511278c52aa8236f9e252f5 | [
"BSD-3-Clause-LBNL"
] | 3 | 2020-06-24T10:46:15.000Z | 2021-06-29T19:24:25.000Z | from virtualscada.vs import removeRows
from virtualscada.vs import removeValues
from virtualscada.vs import fillValuesMLPFForward | 43 | 49 | 0.891473 | 15 | 129 | 7.666667 | 0.466667 | 0.417391 | 0.469565 | 0.626087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085271 | 129 | 3 | 49 | 43 | 0.974576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
60cb5c817d12c41ebc7cd7de834d32d195e67522 | 8,506 | py | Python | app/tests/routers/test_messages.py | kalaspuff/newshades-api | e22a8875b6e50f71e67dfdaf7b1b3e85817fb5b9 | [
"CC0-1.0"
] | null | null | null | app/tests/routers/test_messages.py | kalaspuff/newshades-api | e22a8875b6e50f71e67dfdaf7b1b3e85817fb5b9 | [
"CC0-1.0"
] | null | null | null | app/tests/routers/test_messages.py | kalaspuff/newshades-api | e22a8875b6e50f71e67dfdaf7b1b3e85817fb5b9 | [
"CC0-1.0"
] | null | null | null | import pytest
from fastapi import FastAPI
from httpx import AsyncClient
from pymongo.database import Database
from app.models.channel import Channel
from app.models.message import Message, MessageReaction
from app.models.server import Server
from app.models.user import User
from app.services.crud import get_item_by_id
from app.services.messages import get_messages
class TestMessagesRoutes:
@pytest.mark.asyncio
async def test_create_message(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
):
data = {"content": "gm!", "server": str(server.id), "channel": str(server_channel.id)}
response = await authorized_client.post("/messages", json=data)
assert response.status_code == 201
json_response = response.json()
assert json_response != {}
assert "content" in json_response
assert json_response["content"] == data["content"]
assert json_response["server"] == data["server"] == str(server.id)
assert json_response["channel"] == data["channel"] == str(server_channel.id)
@pytest.mark.asyncio
async def test_add_reaction_to_message(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
channel_message: Message,
):
messages = await get_messages(channel_id=str(server_channel.id), current_user=current_user, size=100)
assert len(messages) == 1
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert message == channel_message
assert len(message.reactions) == 0
response = await authorized_client.post(f"/messages/{str(message.id)}/reactions/🙌")
assert response.status_code == 204
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert len(message.reactions) == 1
reaction = message.reactions[0]
assert reaction.emoji == "🙌"
assert reaction.count == 1
assert [user.pk for user in reaction.users] == [current_user.id]
@pytest.mark.asyncio
async def test_add_same_reaction_to_message(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
channel_message: Message,
guest_user: User,
):
emoji = "😍"
channel_message.reactions = [MessageReaction(emoji=emoji, count=1, users=[guest_user.pk])]
await channel_message.commit()
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert message == channel_message
assert len(message.reactions) == 1
response = await authorized_client.post(f"/messages/{str(message.id)}/reactions/{emoji}")
assert response.status_code == 204
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert len(message.reactions) == 1
reaction = message.reactions[0]
assert reaction.emoji == emoji
assert reaction.count == 2
assert [user.pk for user in reaction.users] == [guest_user.id, current_user.id]
@pytest.mark.asyncio
async def test_add_same_user_reaction_to_message(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
channel_message: Message,
guest_user: User,
):
emoji = "😍"
channel_message.reactions = [MessageReaction(emoji=emoji, count=1, users=[current_user.pk])]
await channel_message.commit()
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert message == channel_message
assert len(message.reactions) == 1
response = await authorized_client.post(f"/messages/{str(message.id)}/reactions/{emoji}")
assert response.status_code == 204
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert len(message.reactions) == 1
reaction = message.reactions[0]
assert reaction.emoji == emoji
assert reaction.count == 1
assert [user.pk for user in reaction.users] == [current_user.id]
@pytest.mark.asyncio
async def test_add_new_reaction_to_message_with_reactions(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
channel_message: Message,
guest_user: User,
):
channel_message.reactions = [MessageReaction(emoji="😍", count=1, users=[guest_user.pk])]
await channel_message.commit()
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert message == channel_message
assert len(message.reactions) == 1
new_emoji = "💪"
response = await authorized_client.post(f"/messages/{str(message.id)}/reactions/{new_emoji}")
assert response.status_code == 204
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert len(message.reactions) == 2
first_reaction = message.reactions[0]
assert first_reaction.emoji == "😍"
assert first_reaction.count == 1
assert [user.pk for user in first_reaction.users] == [guest_user.id]
second_reaction = message.reactions[1]
assert second_reaction.emoji == new_emoji
assert second_reaction.count == 1
assert [user.pk for user in second_reaction.users] == [current_user.id]
response = await authorized_client.get(f"/channels/{str(server_channel.id)}/messages")
assert response.status_code == 200
json_response = response.json()
json_message = json_response[0]
assert "reactions" in json_message
assert len(json_message["reactions"]) == 2
@pytest.mark.asyncio
async def test_remove_reaction_from_message(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
channel_message: Message,
guest_user: User,
):
channel_message.reactions = [MessageReaction(emoji="😍", count=1, users=[current_user.pk])]
await channel_message.commit()
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert message == channel_message
assert len(message.reactions) == 1
response = await authorized_client.delete(f"/messages/{str(message.id)}/reactions/😍")
assert response.status_code == 204
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert len(message.reactions) == 0
@pytest.mark.asyncio
async def test_remove_reaction_from_message_with_multiple_reactions(
self,
app: FastAPI,
db: Database,
current_user: User,
authorized_client: AsyncClient,
server: Server,
server_channel: Channel,
channel_message: Message,
guest_user: User,
):
channel_message.reactions = [MessageReaction(emoji="😍", count=2, users=[current_user.pk, guest_user.pk])]
await channel_message.commit()
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert message == channel_message
assert len(message.reactions) == 1
response = await authorized_client.delete(f"/messages/{str(message.id)}/reactions/😍")
assert response.status_code == 204
message = await get_item_by_id(id_=channel_message.id, result_obj=Message, current_user=current_user)
assert len(message.reactions) == 1
reaction = message.reactions[0]
assert reaction.emoji == "😍"
assert reaction.count == 1
assert [user.pk for user in reaction.users] == [guest_user.id]
| 39.018349 | 113 | 0.66941 | 1,030 | 8,506 | 5.304854 | 0.079612 | 0.080527 | 0.021413 | 0.026171 | 0.806369 | 0.771047 | 0.765739 | 0.765739 | 0.759517 | 0.746706 | 0 | 0.009038 | 0.232542 | 8,506 | 217 | 114 | 39.198157 | 0.826134 | 0 | 0 | 0.707447 | 0 | 0 | 0.047613 | 0.035152 | 0 | 0 | 0 | 0 | 0.276596 | 1 | 0 | false | 0 | 0.053191 | 0 | 0.058511 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
60ee9f298ffd28245bf3e775a808d4458340ba49 | 15,497 | py | Python | pyEX/premium/brain/__init__.py | andrescevp/pyEX | 4c8daa411b01133a292d341a78f6e1b80cc2be99 | [
"Apache-2.0"
] | null | null | null | pyEX/premium/brain/__init__.py | andrescevp/pyEX | 4c8daa411b01133a292d341a78f6e1b80cc2be99 | [
"Apache-2.0"
] | null | null | null | pyEX/premium/brain/__init__.py | andrescevp/pyEX | 4c8daa411b01133a292d341a78f6e1b80cc2be99 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from functools import wraps
from ...stocks import timeSeries, timeSeriesDF
from ...common import _expire, _UTC
@_expire(hour=8, tz=_UTC)
def _base(id, symbol="", **kwargs):
"""internal"""
kwargs["id"] = id
kwargs["key"] = symbol or kwargs.pop("key", "")
return timeSeries(**kwargs)
@_expire(hour=8, tz=_UTC)
def _baseDF(id, symbol="", **kwargs):
"""internal"""
kwargs["id"] = id
kwargs["key"] = symbol or kwargs.pop("key", "")
return timeSeriesDF(**kwargs)
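The two helpers above simply inject the time-series `id` and resolve the lookup key (an explicit `symbol` argument takes precedence over any `key` already in `kwargs`) before delegating. A minimal standalone illustration of that forwarding pattern, using a stand-in `fake_time_series` instead of the real `timeSeries`:

```python
def fake_time_series(**kwargs):
    # Stand-in for pyEX's timeSeries: just echo back the kwargs it receives.
    return kwargs

def base(id, symbol="", **kwargs):
    kwargs["id"] = id
    kwargs["key"] = symbol or kwargs.pop("key", "")
    return fake_time_series(**kwargs)

# An explicit symbol wins over any 'key' passed via kwargs.
result = base("PREMIUM_BRAIN_SENTIMENT_30_DAYS", symbol="AAPL", key="ignored")
print(result)
```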
@wraps(timeSeries)
def brain30DaySentiment(symbol="", **kwargs):
"""Brain Company’s Sentiment Indicator monitors the stock sentiment from the last 30 days of public financial news for about 3,500 US stocks. The sentiment scoring technology is based on a combination of various natural language processing techniques. The sentiment score assigned to each stock is a value ranging from -1 (most negative) to +1 (most positive) that is updated daily.
https://iexcloud.io/docs/api/#brain-companys-30-day-sentiment-indicator
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_SENTIMENT_30_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain30DaySentimentDF(symbol="", **kwargs):
"""Brain Company’s Sentiment Indicator monitors the stock sentiment from the last 30 days of public financial news for about 3,500 US stocks. The sentiment scoring technology is based on a combination of various natural language processing techniques. The sentiment score assigned to each stock is a value ranging from -1 (most negative) to +1 (most positive) that is updated daily.
https://iexcloud.io/docs/api/#brain-companys-30-day-sentiment-indicator
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_SENTIMENT_30_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain7DaySentiment(symbol="", **kwargs):
"""Brain Company’s Sentiment Indicator monitors the stock sentiment from the last 7 days of public financial news for about 3,500 US stocks. The sentiment scoring technology is based on a combination of various natural language processing techniques. The sentiment score assigned to each stock is a value ranging from -1 (most negative) to +1 (most positive) that is updated daily.
https://iexcloud.io/docs/api/#brain-companys-7-day-sentiment-indicator
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_SENTIMENT_7_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain7DaySentimentDF(symbol="", **kwargs):
"""Brain Company’s Sentiment Indicator monitors the stock sentiment from the last 7 days of public financial news for about 3,500 US stocks. The sentiment scoring technology is based on a combination of various natural language processing techniques. The sentiment score assigned to each stock is a value ranging from -1 (most negative) to +1 (most positive) that is updated daily.
https://iexcloud.io/docs/api/#brain-companys-7-day-sentiment-indicator
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_SENTIMENT_7_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain21DayMLReturnRanking(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 21 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-21-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_RANKING_21_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain21DayMLReturnRankingDF(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 21 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-21-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_RANKING_21_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain10DayMLReturnRanking(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-10-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_RANKING_10_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain10DayMLReturnRankingDF(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-10-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_RANKING_10_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain5DayMLReturnRanking(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-5-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_RANKING_5_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain5DayMLReturnRankingDF(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-5-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_RANKING_5_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain3DayMLReturnRanking(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-3-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_RANKING_3_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain3DayMLReturnRankingDF(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-3-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_RANKING_3_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain2DayMLReturnRanking(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-2-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_RANKING_2_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brain2DayMLReturnRankingDF(symbol="", **kwargs):
"""Brain Company’s Machine Learning proprietary platform is used to generate a daily stock ranking based on the predicted future returns of a universe of around 1,000 stocks over 10 days. The model implements a voting scheme of machine learning classifiers that non linearly combine a variety of features with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal to noise ratio.
https://iexcloud.io/docs/api/#brain-companys-2-day-machine-learning-estimated-return-ranking
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_RANKING_2_DAYS", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsAll(symbol="", **kwargs):
"""Metrics about the language used in a company’s most recent annual or quarterly filings (10Ks and 10Qs). Includes metrics on the financial sentiment and the scores based on the prevalence of words in the statement categorized into four themes: constraining language, interesting language, litigious language, and language indicating uncertainty.
https://iexcloud.io/docs/api/#brain-companys-language-metrics-on-company-filings-quarterly-and-annual
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_LANGUAGE_METRICS_ALL", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsAllDF(symbol="", **kwargs):
"""Metrics about the language used in a company’s most recent annual or quarterly filings (10Ks and 10Qs). Includes metrics on the financial sentiment and the scores based on the prevalence of words in the statement categorized into four themes: constraining language, interesting language, litigious language, and language indicating uncertainty.
https://iexcloud.io/docs/api/#brain-companys-language-metrics-on-company-filings-quarterly-and-annual
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_LANGUAGE_METRICS_ALL", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilings(symbol="", **kwargs):
"""Metrics about the language used in a company’s most recent annual filing (10Ks). Includes metrics on the financial sentiment and the scores based on the prevalence of words in the statement categorized into four themes: constraining language, interesting language, litigious language, and language indicating uncertainty.
https://iexcloud.io/docs/api/#brain-companys-language-metrics-on-company-filings-annual-only
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_LANGUAGE_METRICS_10K", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsDF(symbol="", **kwargs):
"""Metrics about the language used in a company’s most recent annual filing (10Ks). Includes metrics on the financial sentiment and the scores based on the prevalence of words in the statement categorized into four themes: constraining language, interesting language, litigious language, and language indicating uncertainty.
https://iexcloud.io/docs/api/#brain-companys-language-metrics-on-company-filings-annual-only
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_LANGUAGE_METRICS_10K", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsDifferenceAll(symbol="", **kwargs):
"""Compares Brain’s sentiment and language metrics from the company’s most recent repot (annual or quarterly) to the report from last year (10Ks) or the corresponding quarter the prior year (10Qs).
https://iexcloud.io/docs/api/#brain-companys-differences-in-language-metrics-on-company-filings-quarterly-and-annual-from-prior-period
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_LANGUAGE_DIFFERENCES_ALL", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsDifferenceAllDF(symbol="", **kwargs):
"""Compares Brain’s sentiment and language metrics from the company’s most recent repot (annual or quarterly) to the report from last year (10Ks) or the corresponding quarter the prior year (10Qs).
https://iexcloud.io/docs/api/#brain-companys-differences-in-language-metrics-on-company-filings-quarterly-and-annual-from-prior-period
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_LANGUAGE_DIFFERENCES_ALL", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsDifference(symbol="", **kwargs):
"""Compares Brain’s sentiment and language metrics from the company’s most recent annual filing (10K) to the report from last year.
https://iexcloud.io/docs/api/#brain-companys-differences-in-language-metrics-on-company-annual-filings-from-prior-year
Args:
symbol (str): symbol to use
"""
return _base(id="PREMIUM_BRAIN_LANGUAGE_DIFFERENCES_10K", symbol=symbol, **kwargs)
@wraps(timeSeries)
def brainLanguageMetricsOnCompanyFilingsDifferenceDF(symbol="", **kwargs):
"""Compares Brain’s sentiment and language metrics from the company’s most recent annual filing (10K) to the report from last year.
https://iexcloud.io/docs/api/#brain-companys-differences-in-language-metrics-on-company-annual-filings-from-prior-year
Args:
symbol (str): symbol to use
"""
return _baseDF(id="PREMIUM_BRAIN_LANGUAGE_DIFFERENCES_10K", symbol=symbol, **kwargs)
| 58.923954 | 444 | 0.762406 | 2,167 | 15,497 | 5.397785 | 0.08491 | 0.047192 | 0.039497 | 0.04514 | 0.928272 | 0.928272 | 0.925024 | 0.923485 | 0.923485 | 0.919894 | 0 | 0.014075 | 0.151836 | 15,497 | 262 | 445 | 59.148855 | 0.875837 | 0.703104 | 0 | 0.35443 | 0 | 0 | 0.172918 | 0.169021 | 0 | 0 | 0 | 0 | 0 | 1 | 0.303797 | false | 0 | 0.037975 | 0 | 0.64557 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7156d0fe38f43047708b7b62caa3e0f8bb7b81b8 | 190 | py | Python | mincrawler/pipelines/stages/__init__.py | altescy/mincrawler | 36d28172b37c6825d74ec9887bfabe440838d50f | [
"MIT"
] | 1 | 2020-05-31T02:16:40.000Z | 2020-05-31T02:16:40.000Z | mincrawler/pipelines/stages/__init__.py | altescy/mincrawler | 36d28172b37c6825d74ec9887bfabe440838d50f | [
"MIT"
] | null | null | null | mincrawler/pipelines/stages/__init__.py | altescy/mincrawler | 36d28172b37c6825d74ec9887bfabe440838d50f | [
"MIT"
] | 1 | 2021-09-21T22:36:42.000Z | 2021-09-21T22:36:42.000Z | from mincrawler.pipelines.stages.drop_duplicate import DropDuplicate
from mincrawler.pipelines.stages.stage import PipelineStage
from mincrawler.pipelines.stages.store_item import StoreItem
| 47.5 | 68 | 0.889474 | 23 | 190 | 7.26087 | 0.565217 | 0.251497 | 0.413174 | 0.520958 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063158 | 190 | 3 | 69 | 63.333333 | 0.938202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
e08a3091db00e41c71e104158295690d7ebe1631 | 33,725 | py | Python | tests/codegen.py | mernst/cozy | d7b2c0ee575057dea4ebec201d579f0ecd785b1b | [
"Apache-2.0"
] | 188 | 2017-11-27T18:59:34.000Z | 2021-12-31T02:28:33.000Z | tests/codegen.py | mernst/cozy | d7b2c0ee575057dea4ebec201d579f0ecd785b1b | [
"Apache-2.0"
] | 95 | 2017-11-13T01:21:48.000Z | 2020-10-30T06:38:14.000Z | tests/codegen.py | mernst/cozy | d7b2c0ee575057dea4ebec201d579f0ecd785b1b | [
"Apache-2.0"
] | 16 | 2018-02-13T04:49:09.000Z | 2021-02-06T13:26:46.000Z | from collections import OrderedDict, defaultdict
import io
import os
import subprocess
import tempfile
import unittest
from cozy.target_syntax import *
from cozy.structures.heaps import *
from cozy.syntax_tools import pprint, inline_calls
from cozy.codegen import CxxPrinter, JavaPrinter
from cozy.codegen.optimization import simplify_and_optimize, simplify_and_optimize_expression
from cozy.structures.rewriting import rewrite_extensions
CODE_GENERATORS = (
    lambda out: CxxPrinter(out=out),
    lambda out: JavaPrinter(out=out, boxed=True),
    lambda out: JavaPrinter(out=out, boxed=False),
)
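CODE_GENERATORS stores factories rather than printer instances so each test can bind a fresh output stream at call time. A minimal sketch of the same deferred-construction pattern, using hypothetical stand-in classes (not the cozy printer API):

```python
import io

# Stand-in printer classes (hypothetical): they only record a name and stream.
class FakeCxxPrinter:
    def __init__(self, out):
        self.name, self.out = "cxx", out

class FakeJavaPrinter:
    def __init__(self, out, boxed):
        self.name, self.out = ("java-boxed" if boxed else "java-unboxed"), out

# Factories defer construction until an output stream exists,
# mirroring the shape of CODE_GENERATORS above.
factories = (
    lambda out: FakeCxxPrinter(out=out),
    lambda out: FakeJavaPrinter(out=out, boxed=True),
    lambda out: FakeJavaPrinter(out=out, boxed=False),
)

with io.StringIO() as f:
    names = [make(f).name for make in factories]
print(names)  # ['cxx', 'java-boxed', 'java-unboxed']
```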
class TestCodegen(unittest.TestCase):
    def trove_path(self):
        dir = "/tmp"
        path = os.path.join(dir, "trove-3.0.3.jar")
        if not os.path.exists(path):
            subprocess.run(["curl", "-LO", "https://bitbucket.org/trove4j/trove/downloads/trove-3.0.3.tar.gz"], cwd=dir)
            subprocess.run(["tar", "xf", "trove-3.0.3.tar.gz"], cwd=dir)
            subprocess.run(["ln", "3.0.3/lib/trove-3.0.3.jar", path], cwd=dir)
        return path
    def check(self, impl, state_map, share_info, codegen, target_languages=None):
        with io.StringIO() as f:
            codegen = codegen(f)
            if target_languages is not None and type(codegen) not in target_languages:
                return
            codegen.visit(impl, state_map, share_info)
            code = f.getvalue()
        ext = "java" if isinstance(codegen, JavaPrinter) else "cpp"
        compile = ["javac"] if isinstance(codegen, JavaPrinter) else ["c++", "-std=c++11", "-w", "-c", "-o", "/dev/null"]
        if isinstance(codegen, JavaPrinter) and not codegen.boxed:
            compile.extend(["-classpath", self.trove_path()])
        dir = tempfile.mkdtemp()
        print("Writing impls to {}".format(dir))
        filename = os.path.join(dir, "{}.{}".format(impl.name, ext))
        args = compile + [filename]
        print("Running `{}`".format(" ".join(args)))
        with open(filename, "w") as f:
            f.write(code)
        res = subprocess.run(args)
        assert res.returncode == 0
    def test_regression01(self):
impl = Spec('MaxBag', [], [], [('_var656', TInt()), ('_var753', TMap(TInt(), TBool())), ('_var1653', TBool()), ('_var2642', TMaxHeap(TInt(), TInt())), ('_var4385', TInt())], (), (Query('get_max', 'pubilc', [], (), EVar('_var656').with_type(TInt()), ''), Op('add', [('x', TInt())], [], SSeq(SSeq(SDecl(EVar('_var32322').with_type(TInt()), ECond(EVar('_var1653').with_type(TBool()), EArgMax(EBinOp(ESingleton(EVar('_var656').with_type(TInt())).with_type(TBag(TInt())), '+', ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), ELambda(EVar('_var662').with_type(TInt()), EVar('_var662').with_type(TInt()))).with_type(TInt()), EVar('x').with_type(TInt())).with_type(TInt())), SSeq(SDecl(EVar('_var32323').with_type(TInt()), EBinOp(EVar('_var4385').with_type(TInt()), '+', ENum(1).with_type(TInt())).with_type(TInt())), SAssign(EVar('_var656').with_type(TInt()), EVar('_var32322').with_type(TInt())))), SSeq(SSeq(SAssign(EVar('_var1653').with_type(TBool()), EBool(True).with_type(TBool())), SSeq(SCall(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt())), 'remove_all', (EVar('_var4385').with_type(TInt()), EEmptyList().with_type(TBag(TInt())))), SSeq(SCall(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt())), 'add_all', (EVar('_var4385').with_type(TInt()), ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt())))), SForEach(EVar('_var3970').with_type(TInt()), EEmptyList().with_type(TBag(TInt())), SCall(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt())), 'update', (EVar('_var3970').with_type(TInt()), EVar('_var3970').with_type(TInt()))))))), SSeq(SAssign(EVar('_var4385').with_type(TInt()), EVar('_var32323').with_type(TInt())), SSeq(SForEach(EVar('_var1135').with_type(TInt()), EEmptyList().with_type(TBag(TInt())), SMapDel(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('_var1135').with_type(TInt()))), SForEach(EVar('_var1135').with_type(TInt()), ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt())), 
SMapUpdate(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('_var1135').with_type(TInt()), EVar('_var1136').with_type(TBool()), SAssign(EVar('_var1136').with_type(TBool()), EBool(True).with_type(TBool())))))))), ''), Op('remove', [('x', TInt())], [], SSeq(SSeq(SSeq(SDecl(EVar('_var32324').with_type(TInt()), ECond(EBinOp(EVar('x').with_type(TInt()), '==', EVar('_var656').with_type(TInt())).with_type(TBool()), EHeapPeek2(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt()))).with_type(TInt()), EVar('_var656').with_type(TInt())).with_type(TInt())), SDecl(EVar('_var32325').with_type(TInt()), EBinOp(EVar('_var4385').with_type(TInt()), '-', EUnaryOp('len', ECond(EHasKey(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('x').with_type(TInt())).with_type(TBool()), ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TInt())).with_type(TInt()))), SSeq(SDecl(EVar('_var32326').with_type(TBag(TInt())), ECond(EHasKey(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('x').with_type(TInt())).with_type(TBool()), ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))), SAssign(EVar('_var656').with_type(TInt()), EVar('_var32324').with_type(TInt())))), SSeq(SSeq(SAssign(EVar('_var1653').with_type(TBool()), EBinOp(EBinOp(EVar('_var4385').with_type(TInt()), '-', ECond(EHasKey(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('x').with_type(TInt())).with_type(TBool()), ENum(1).with_type(TInt()), ENum(0).with_type(TInt())).with_type(TInt())).with_type(TInt()), '>', ENum(0).with_type(TInt())).with_type(TBool())), SSeq(SCall(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt())), 'remove_all', (EVar('_var4385').with_type(TInt()), ECond(EHasKey(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('x').with_type(TInt())).with_type(TBool()), ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt())), 
EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt())))), SSeq(SCall(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt())), 'add_all', (EBinOp(EVar('_var4385').with_type(TInt()), '-', EUnaryOp('len', ECond(EHasKey(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('x').with_type(TInt())).with_type(TBool()), ESingleton(EVar('x').with_type(TInt())).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TInt())).with_type(TInt()), EEmptyList().with_type(TBag(TInt())))), SForEach(EVar('_var6136').with_type(TInt()), EEmptyList().with_type(TBag(TInt())), SCall(EVar('_var2642').with_type(TMaxHeap(TInt(), TInt())), 'update', (EVar('_var6136').with_type(TInt()), EVar('_var6136').with_type(TInt()))))))), SSeq(SAssign(EVar('_var4385').with_type(TInt()), EVar('_var32325').with_type(TInt())), SSeq(SForEach(EVar('_var2481').with_type(TInt()), EVar('_var32326').with_type(TBag(TInt())), SMapDel(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('_var2481').with_type(TInt()))), SForEach(EVar('_var2481').with_type(TInt()), EEmptyList().with_type(TBag(TInt())), SMapUpdate(EVar('_var753').with_type(TMap(TInt(), TBool())), EVar('_var2481').with_type(TInt()), EVar('_var2482').with_type(TBool()), SNoOp())))))), '')), '', '', '')
        state_map = OrderedDict([('_var656', EArgMax(EVar('l').with_type(TBag(TInt())),
ELambda(EVar('x').with_type(TInt()),
EVar('x').with_type(TInt()))).with_type(TInt())), (
'_var753', EMakeMap2(EVar('l').with_type(TBag(TInt())),
ELambda(EVar('_var690').with_type(TInt()),
EBool(True).with_type(TBool()))).with_type(
TMap(TInt(), TBool()))),
('_var1653', EUnaryOp('exists', EVar('l').with_type(TBag(TInt()))).with_type(TBool())),
('_var2642', EMakeMaxHeap(EVar('l').with_type(TBag(TInt())),
ELambda(EVar('_var758').with_type(TInt()),
EVar('_var758').with_type(TInt()))).with_type(
TMaxHeap(TInt(), TInt()))),
('_var4385', EUnaryOp('len', EVar('l').with_type(TBag(TInt()))).with_type(TInt()))])
        impl = inline_calls(impl)
        impl, state_map = rewrite_extensions(impl, state_map)
        for v, e in state_map.items():
            print(" - {} = {}".format(v, pprint(e)))
        print(pprint(impl))
        for codegen in CODE_GENERATORS:
            share_info = defaultdict(list, {})
            self.check(impl, state_map, share_info, lambda out: codegen(out=out))
    def test_regression02(self):
impl = Spec('EagerMapper', [], [ExternFunc('log', [('f', TFloat())], TFloat(), 'std::log({f})')], [('_var1492', TInt()), ('_var4588', TList(TFloat())), ('_var6562', TList(TFloat()))], (), (Query('take', Visibility.Public, [], (), EVar('_var6562').with_type(TList(TFloat())), ''), Op('rm', [('index', TInt())], [], SAssign(EVar('_var6562').with_type(TList(TFloat())), EBinOp(EListSlice(EVar('_var4588').with_type(TList(TFloat())), ENum(0).with_type(TInt()), EVar('index').with_type(TInt())).with_type(TList(TFloat())), '+', EListSlice(EVar('_var4588').with_type(TList(TFloat())), EBinOp(EVar('index').with_type(TInt()), '+', ENum(1).with_type(TInt())).with_type(TInt()), EVar('_var1492').with_type(TInt())).with_type(TList(TFloat()))).with_type(TList(TFloat()))), ''), Op('restore', [], [], SAssign(EVar('_var6562').with_type(TList(TFloat())), EVar('_var4588').with_type(TList(TFloat()))), '')), '\n#include <cmath>\n', '', '')
state_map = OrderedDict([('_var1492', EUnaryOp('len', EVar('xs').with_type(TList(TFloat()))).with_type(TInt())), ('_var4588', EMap(EVar('xs').with_type(TList(TFloat())), ELambda(EVar('x').with_type(TFloat()), EBinOp(ECall('log', (EBinOp(ENum(1.0).with_type(TFloat()), '+', EVar('x').with_type(TFloat())).with_type(TFloat()),)).with_type(TFloat()), '+', ECall('log', (ENum(1.5).with_type(TFloat()),)).with_type(TFloat())).with_type(TFloat()))).with_type(TList(TFloat()))), ('_var6562', ECond(EVar('isDropped').with_type(TBool()), EMap(EBinOp(EListSlice(EVar('xs').with_type(TList(TFloat())), ENum(0).with_type(TInt()), EVar('dropped').with_type(TInt())).with_type(TList(TFloat())), '+', EListSlice(EVar('xs').with_type(TList(TFloat())), EBinOp(EVar('dropped').with_type(TInt()), '+', ENum(1).with_type(TInt())).with_type(TInt()), EUnaryOp('len', EVar('xs').with_type(TList(TFloat()))).with_type(TInt())).with_type(TList(TFloat()))).with_type(TList(TFloat())), ELambda(EVar('x').with_type(TFloat()), EBinOp(ECall('log', (EBinOp(ENum(1.0).with_type(TFloat()), '+', EVar('x').with_type(TFloat())).with_type(TFloat()),)).with_type(TFloat()), '+', ECall('log', (ENum(1.5).with_type(TFloat()),)).with_type(TFloat())).with_type(TFloat()))).with_type(TList(TFloat())), EMap(EVar('xs').with_type(TList(TFloat())), ELambda(EVar('x').with_type(TFloat()), EBinOp(ECall('log', (EBinOp(ENum(1.0).with_type(TFloat()), '+', EVar('x').with_type(TFloat())).with_type(TFloat()),)).with_type(TFloat()), '+', ECall('log', (ENum(1.5).with_type(TFloat()),)).with_type(TFloat())).with_type(TFloat()))).with_type(TList(TFloat()))).with_type(TList(TFloat())))])
        share_info = defaultdict(list, {})
        impl = inline_calls(impl)
        impl, state_map = rewrite_extensions(impl, state_map)
        for v, e in state_map.items():
            print(" - {} = {}".format(v, pprint(e)))
        print(pprint(impl))
        for codegen in CODE_GENERATORS:
            self.check(impl, state_map, share_info, lambda out: codegen(out=out), target_languages=[CxxPrinter])
    def test_regression04(self):
impl = Spec('Basic', [], [], [('_var12', TList(TInt())), ('_var895', TMap(TInt(), TList(TInt()))), ('_var9841', TMap(TInt(), TList(TInt()))), ('_var10947', TMap(TInt(), TBool()))], [], [Query('elems', 'public', [], (), EVar('_var12').with_type(TList(TInt())), ""), Query('_name13', 'internal', [('n', TInt())], (), ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt())), ""), Query('_name14', 'internal', [('n', TInt())], (), EEmptyList().with_type(TBag(TInt())), ""), Query('_name35', 'internal', [('n', TInt())], (), EMapGet(EVar('_var895').with_type(TMap(TInt(), TList(TInt()))), EVar('n').with_type(TInt())).with_type(TList(TInt())), ""), Query('_name911', 'internal', [('_var905', TInt()), ('n', TInt())], (), ESingleton(EVar('_var905').with_type(TInt())).with_type(TBag(TInt())), ""), Query('_name912', 'internal', [('_var905', TInt()), ('n', TInt())], (), EEmptyList().with_type(TBag(TInt())), ""), Query('_name914', 'internal', [('n', TInt())], (), EFilter(ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt())), ELambda(EVar('_var1498').with_type(TInt()), EUnaryOp('not', EMapGet(EVar('_var10947').with_type(TMap(TInt(), TBool())), EVar('_var1498').with_type(TInt())).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), ""), Query('_name1497', 'internal', [('_var1492', TInt()), ('n', TInt())], (), EEmptyList().with_type(TBag(TInt())), ""), Query('_name1506', 'internal', [('_var1492', TInt()), ('n', TInt())], (), ESingleton(EVar('_var1492').with_type(TInt())).with_type(TBag(TInt())), ""), Query('_name1519', 'internal', [('n', TInt())], (), EMapGet(EVar('_var9841').with_type(TMap(TInt(), TList(TInt()))), EVar('n').with_type(TInt())).with_type(TList(TInt())), ""), Query('_name9848', 'internal', [('_var9842', TInt()), ('n', TInt())], (), EBinOp(ECond(EBinOp(EVar('_var9842').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '+', 
ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBinOp(EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), '+', EFilter(ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ECond(EBinOp(EVar('_var9842').with_type(TInt()), 'in', EVar('_var12').with_type(TList(TInt()))).with_type(TBool()), EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), ""), Query('_name9855', 'internal', [('_var9842', TInt()), ('n', TInt())], (), EBinOp(ECond(EBinOp(EVar('_var9842').with_type(TInt()), 'in', EVar('_var12').with_type(TList(TInt()))).with_type(TBool()), 
EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ECond(EBinOp(EVar('_var9842').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBinOp(EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), '+', EFilter(ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), ""), Query('_name9863', 'internal', [('n', TInt())], (), EFilter(EUnaryOp('distinct', EBinOp(EUnaryOp('distinct', EVar('_var12').with_type(TList(TInt()))).with_type(TList(TInt())), '+', EUnaryOp('distinct', 
EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), ELambda(EVar('_var9842').with_type(TInt()), EUnaryOp('not', EBinOp(ECond(EBinOp(EVar('_var9842').with_type(TInt()), 'in', EVar('_var12').with_type(TList(TInt()))).with_type(TBool()), EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt())), '==', ECond(EBinOp(EVar('_var9842').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBinOp(EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), '+', EFilter(ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', 
ESingleton(EVar('_var9842').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), ""), Query('_name16354', 'internal', [('_var16321', TInt()), ('n', TInt())], (), EBinOp(ECond(EBinOp(EVar('_var16321').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EFilter(EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var16321').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ECond(EBinOp(EVar('_var16321').with_type(TInt()), 'in', EVar('_var12').with_type(TList(TInt()))).with_type(TBool()), EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('_var16321').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), ""), Query('_name16357', 'internal', [('_var16321', TInt()), ('n', TInt())], (), EBinOp(ECond(EBinOp(EVar('_var16321').with_type(TInt()), 'in', 
EVar('_var12').with_type(TList(TInt()))).with_type(TBool()), EFilter(EVar('_var12').with_type(TList(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('_var16321').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ECond(EBinOp(EVar('_var16321').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EFilter(EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt())), '-', ESingleton(EVar('_var16321').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())), EEmptyList().with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBag(TInt())), ""), Query('_name24791', 'internal', [('_var24789', TInt()), ('n', TInt())], (), ECond(EBinOp(EVar('_var24789').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBinOp(EVar('_var24789').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '+', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBool(False).with_type(TBool())).with_type(TBool()), ""), Query('_name28311', 'internal', 
[('_var28307', TInt()), ('n', TInt())], (), ECond(EBinOp(EVar('_var28307').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBinOp(EVar('_var28307').with_type(TInt()), 'in', EBinOp(EVar('_var12').with_type(TList(TInt())), '-', ESingleton(EVar('n').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool()), EBool(False).with_type(TBool())).with_type(TBool()), ""), Op('add', [('n', TInt())], [], SSeq(SSeq(SSeq(SSeq(SForEach(EVar('_var15').with_type(TInt()), ECall('_name14', [EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var12').with_type(TList(TInt())), 'remove', [EVar('_var15').with_type(TInt())])), SForEach(EVar('_var15').with_type(TInt()), ECall('_name13', [EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var12').with_type(TList(TInt())), 'add', [EVar('_var15').with_type(TInt())]))), SForEach(EVar('_var905').with_type(TInt()), ECall('_name914', [EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SMapUpdate(EVar('_var895').with_type(TMap(TInt(), TList(TInt()))), EVar('_var905').with_type(TInt()), EVar('_var906').with_type(TList(TInt())), SSeq(SForEach(EVar('_var913').with_type(TInt()), ECall('_name912', [EVar('_var905').with_type(TInt()), EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var906').with_type(TList(TInt())), 'remove', [EVar('_var913').with_type(TInt())])), SForEach(EVar('_var913').with_type(TInt()), ECall('_name911', [EVar('_var905').with_type(TInt()), EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var906').with_type(TList(TInt())), 'add', [EVar('_var913').with_type(TInt())])))))), SForEach(EVar('_var9842').with_type(TInt()), ECall('_name9863', [EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SMapUpdate(EVar('_var9841').with_type(TMap(TInt(), TList(TInt()))), EVar('_var9842').with_type(TInt()), 
EVar('_var9843').with_type(TList(TInt())), SSeq(SForEach(EVar('_var9856').with_type(TInt()), ECall('_name9855', [EVar('_var9842').with_type(TInt()), EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var9843').with_type(TList(TInt())), 'remove', [EVar('_var9856').with_type(TInt())])), SForEach(EVar('_var9856').with_type(TInt()), ECall('_name9848', [EVar('_var9842').with_type(TInt()), EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var9843').with_type(TList(TInt())), 'add', [EVar('_var9856').with_type(TInt())])))))), SForEach(EVar('_var24789').with_type(TInt()), ECall('_name914', (EVar('n').with_type(TInt()),)).with_type(TBag(TInt())), SMapUpdate(EVar('_var10947').with_type(TMap(TInt(), TBool())), EVar('_var24789').with_type(TInt()), EVar('_var24790').with_type(TBool()), SAssign(EVar('_var24790').with_type(TBool()), ECall('_name24791', (EVar('_var24789').with_type(TInt()), EVar('n').with_type(TInt()))).with_type(TBool()))))), ""), Op('remove', [('n', TInt())], [], SSeq(SSeq(SSeq(SSeq(SForEach(EVar('_var36').with_type(TInt()), ECall('_name35', (EVar('n').with_type(TInt()),)).with_type(TList(TInt())), SCall(EVar('_var12').with_type(TList(TInt())), 'remove', [EVar('_var36').with_type(TInt())])), SForEach(EVar('_var36').with_type(TInt()), ECall('_name14', (EVar('n').with_type(TInt()),)).with_type(TBag(TInt())), SCall(EVar('_var12').with_type(TList(TInt())), 'add', [EVar('_var36').with_type(TInt())]))), SForEach(EVar('_var1492').with_type(TInt()), ECall('_name1519', [EVar('n').with_type(TInt())]).with_type(TList(TInt())), SMapUpdate(EVar('_var895').with_type(TMap(TInt(), TList(TInt()))), EVar('_var1492').with_type(TInt()), EVar('_var1493').with_type(TList(TInt())), SSeq(SForEach(EVar('_var1507').with_type(TInt()), ECall('_name1506', [EVar('_var1492').with_type(TInt()), EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var1493').with_type(TList(TInt())), 'remove', [EVar('_var1507').with_type(TInt())])), 
SForEach(EVar('_var1507').with_type(TInt()), ECall('_name1497', [EVar('_var1492').with_type(TInt()), EVar('n').with_type(TInt())]).with_type(TBag(TInt())), SCall(EVar('_var1493').with_type(TList(TInt())), 'add', [EVar('_var1507').with_type(TInt())])))))), SForEach(EVar('_var16321').with_type(TInt()), ECall('_name35', (EVar('n').with_type(TInt()),)).with_type(TList(TInt())), SMapUpdate(EVar('_var9841').with_type(TMap(TInt(), TList(TInt()))), EVar('_var16321').with_type(TInt()), EVar('_var16322').with_type(TList(TInt())), SSeq(SForEach(EVar('_var16358').with_type(TInt()), ECall('_name16357', (EVar('_var16321').with_type(TInt()), EVar('n').with_type(TInt()))).with_type(TBag(TInt())), SCall(EVar('_var16322').with_type(TList(TInt())), 'remove', [EVar('_var16358').with_type(TInt())])), SForEach(EVar('_var16358').with_type(TInt()), ECall('_name16354', (EVar('_var16321').with_type(TInt()), EVar('n').with_type(TInt()))).with_type(TBag(TInt())), SCall(EVar('_var16322').with_type(TList(TInt())), 'add', [EVar('_var16358').with_type(TInt())])))))), SForEach(EVar('_var28307').with_type(TInt()), ECall('_name1519', (EVar('n').with_type(TInt()),)).with_type(TList(TInt())), SMapUpdate(EVar('_var10947').with_type(TMap(TInt(), TBool())), EVar('_var28307').with_type(TInt()), EVar('_var28309').with_type(TBool()), SAssign(EVar('_var28309').with_type(TBool()), ECall('_name28311', (EVar('_var28307').with_type(TInt()), EVar('n').with_type(TInt()))).with_type(TBool()))))), "")], "", "", "")
state_map = {'_var12': EVar('l').with_type(TBag(TInt())), '_var895': EMakeMap2(EVar('l').with_type(TBag(TInt())), ELambda(EVar('_var116').with_type(TInt()), ESingleton(EVar('_var116').with_type(TInt())).with_type(TBag(TInt())))).with_type(TMap(TInt(), TBag(TInt()))), '_var9841': EMakeMap2(EVar('l').with_type(TBag(TInt())), ELambda(EVar('_var2516').with_type(TInt()), EFilter(EVar('l').with_type(TBag(TInt())), ELambda(EVar('_var2515').with_type(TInt()), EUnaryOp('not', EBinOp(EVar('_var2515').with_type(TInt()), 'in', EBinOp(EVar('l').with_type(TBag(TInt())), '-', ESingleton(EVar('_var2516').with_type(TInt())).with_type(TBag(TInt()))).with_type(TBag(TInt()))).with_type(TBool())).with_type(TBool()))).with_type(TBag(TInt())))).with_type(TMap(TInt(), TBag(TInt()))), '_var10947': EMakeMap2(EVar('l').with_type(TBag(TInt())), ELambda(EVar('_var1498').with_type(TInt()), EBinOp(EVar('_var1498').with_type(TInt()), 'in', EVar('l').with_type(TBag(TInt()))).with_type(TBool()))).with_type(TMap(TInt(), TBool()))}
        impl = inline_calls(impl)
        impl, state_map = rewrite_extensions(impl, state_map)
        for v, e in state_map.items():
            print(" - {} = {}".format(v, pprint(e)))
        print(pprint(impl))
        for codegen in CODE_GENERATORS:
            share_info = {}
            self.check(impl, state_map, share_info, lambda out: codegen(out=out))
    def test_distinct_foreach(self):
        with io.StringIO() as f:
            for codegen_type in CODE_GENERATORS:
                codegen = codegen_type(out=f)
                bag = EFilter(EVar("v").with_type(TBag(INT)), mk_lambda(INT, lambda x: EBinOp(x, ">", ZERO))).with_type(TBag(INT))
                x = fresh_var(INT)
                v = fresh_var(INT)
                stm = SForEach(x, EUnaryOp(UOp.Distinct, bag).with_type(TSet(INT)), SAssign(v, x))
                stm = simplify_and_optimize(stm)
                codegen.visit(stm)
    def test_distinct(self):
        with io.StringIO() as f:
            for codegen_type in CODE_GENERATORS:
                codegen = codegen_type(out=f)
                bag = EFilter(EVar("v").with_type(TBag(INT)), mk_lambda(INT, lambda x: EBinOp(x, ">", ZERO))).with_type(TBag(INT))
                e = EUnaryOp(UOp.Distinct, bag).with_type(TSet(INT))
                stm, e = simplify_and_optimize_expression(e)
                codegen.visit(stm)
                print(codegen.visit(e))
    def test_len(self):
        with io.StringIO() as f:
            for codegen_type in CODE_GENERATORS:
                codegen = codegen_type(out=f)
                bag = EFilter(EVar("v").with_type(TBag(INT)), mk_lambda(INT, lambda x: EBinOp(x, ">", ZERO))).with_type(TBag(INT))
                e = EUnaryOp(UOp.Length, bag).with_type(INT)
                stm, e = simplify_and_optimize_expression(e)
                codegen.visit(stm)
                print(codegen.visit(e))
    def test_all(self):
        with io.StringIO() as f:
            for codegen_type in CODE_GENERATORS:
                codegen = codegen_type(out=f)
                bag = EMap(EVar("v").with_type(TBag(INT)), mk_lambda(INT, lambda x: EBinOp(x, ">", ZERO))).with_type(TBag(BOOL))
                e = EUnaryOp(UOp.All, bag).with_type(BOOL)
                stm, e = simplify_and_optimize_expression(e)
                codegen.visit(stm)
                print(codegen.visit(e))
    def test_any(self):
        with io.StringIO() as f:
            for codegen_type in CODE_GENERATORS:
                codegen = codegen_type(out=f)
                bag = EMap(EVar("v").with_type(TBag(INT)), mk_lambda(INT, lambda x: EBinOp(x, ">", ZERO).with_type(BOOL))).with_type(TBag(BOOL))
                e = EUnaryOp(UOp.Any, bag).with_type(BOOL)
                stm, e = simplify_and_optimize_expression(e)
                codegen.visit(stm)
                print(codegen.visit(e))
    def test_argmin(self):
        with io.StringIO() as f:
            for codegen_type in CODE_GENERATORS:
                codegen = codegen_type(out=f)
                bag = EFilter(EVar("v").with_type(TBag(INT)), mk_lambda(INT, lambda x: EBinOp(x, ">", ZERO).with_type(BOOL))).with_type(TBag(INT))
                e = EArgMin(bag, mk_lambda(INT, lambda x: EUnaryOp("-", x).with_type(x.type))).with_type(INT)
                stm, e = simplify_and_optimize_expression(e)
                codegen.visit(stm)
                print(codegen.visit(e))
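Each expression test above follows the same two-phase lowering shape: split an expression into a setup statement plus a residual expression, emit the setup, then emit the residual. A toy sketch of that shape with a hypothetical stand-in (not cozy's actual `simplify_and_optimize_expression`):

```python
# Hypothetical lowering: hoist a subexpression into a temporary and return
# (setup statement, residual expression), mirroring the two-phase pattern.
def lower(expr):
    setup = "int _tmp = {};".format(expr)
    return setup, "_tmp"

stm, e = lower("x + 1")
print(stm)  # int _tmp = x + 1;
print(e)    # _tmp
```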
| 208.179012 | 17,192 | 0.64172 | 4,523 | 33,725 | 4.543666 | 0.060137 | 0.256143 | 0.144227 | 0.137025 | 0.878449 | 0.828476 | 0.793489 | 0.742251 | 0.710963 | 0.68746 | 0 | 0.040254 | 0.093966 | 33,725 | 161 | 17,193 | 209.47205 | 0.632314 | 0 | 0 | 0.424658 | 0 | 0.006849 | 0.103988 | 0.000741 | 0.006849 | 0 | 0 | 0 | 0.006849 | 1 | 0.075342 | false | 0 | 0.082192 | 0 | 0.178082 | 0.09589 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
# Source: Code/LPfunctions.py (repo: rohit-konda/kround, license: MIT)
import numpy as np
from itertools import product
def kroundLPnonempty(w, f, k):
    n = len(w)-1
    n_ac = k+2
    partition = list(product(product([1, 0], repeat=n_ac), repeat=n))[:-1]
    n_c = len(partition)
    C = np.zeros((n_c,), dtype='float')
    br_cons = np.zeros((n*(n_ac-1)*(n_ac-2), n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    a_k = -2
    a_opt = -1
    for i, p in enumerate(partition):
        opt_i = [pl for pl in range(n) if p[pl][a_opt] == 1]
        kbest_i = [pl for pl in range(n) if p[pl][a_k] == 1]
        C[i] = w[len(kbest_i)]
        A[0, i] = w[len(opt_i)]
        c = 0
        for j in range(n):
            for b in range(k):
                bnext = b + 1
                nextac = [pl for pl in range(j+1, n) if p[pl][b] == 1]
                prevac = [pl for pl in range(j) if p[pl][bnext] == 1]
                other_ac = nextac + prevac
                for a_other in range(n_ac):
                    if a_other == bnext:
                        continue
                    if p[j][bnext] == 1 and p[j][a_other] == 0:
                        br_cons[c][i] = f[len(other_ac + [j])]
                    elif p[j][bnext] == 0 and p[j][a_other] == 1:
                        br_cons[c][i] = -f[len(other_ac + [j])]
                    c += 1
    cons_2 = np.identity(n_c)
    G = -np.vstack((br_cons, cons_2))
    H = np.zeros((len(G), 1))
    B = np.ones((1, 1))
    return (C, G, H, A, B), partition
def kroundLPempty(w, f, k, reduced=False):
    n = len(w)-1
    n_ac = k+1
    partition = list(product(product([1, 0], repeat=n_ac), repeat=n))[:-1]
    n_c = len(partition)
    C = np.zeros((n_c,), dtype='float')
    br_cons = np.zeros((n*n_ac*k, n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    a_k = -2
    a_opt = -1
    for i, p in enumerate(partition):
        opt_i = [pl for pl in range(n) if p[pl][a_opt] == 1]
        kbest_i = [pl for pl in range(n) if p[pl][a_k] == 1]
        if reduced:
            C[i] = w[len(kbest_i)] + .0001*sum([sum(pi) for pi in p])
        else:
            C[i] = w[len(kbest_i)]
        A[0, i] = w[len(opt_i)]
        c = 0
        for j in range(n):
            for b in range(k):
                prevac = [pl for pl in range(j) if p[pl][b] == 1]
                nextac = [pl for pl in range(j+1, n) if p[pl][b-1] == 1] if b > 0 else []
                other_ac = nextac + prevac
                for a_other in range(n_ac):
                    if a_other == b:
                        continue
                    if p[j][b] == 1 and p[j][a_other] == 0:
                        br_cons[c][i] = f[len(other_ac + [j])]
                    elif p[j][b] == 0 and p[j][a_other] == 1:
                        br_cons[c][i] = -f[len(other_ac + [j])]
                    c += 1
    cons_2 = np.identity(n_c)
    print(np.shape(br_cons), np.shape(cons_2))
    G = -np.vstack((br_cons, cons_2))
    H = np.zeros((len(G), 1))
    B = np.ones((1, 1))
    return (C, G, H, A, B), partition
def oneemptydual(w, f, mindual=False):
    n = len(w) - 1
    partition = list(product(product([1, 0], repeat=2), repeat=n))
    C = np.zeros((n+1,), dtype='float')
    C[0] = 1
    if mindual:
        C += .0001
    G = np.zeros((len(partition), n+1))
    cons_2 = -np.hstack((np.zeros((n, 1)), np.identity(n)))
    G = np.vstack((G, cons_2))
    H = np.zeros((len(G), 1))
    for i, p in enumerate(partition):
        brall = [e[0] for e in p]
        optall = [e[1] for e in p]
        G[i, 0] = -w[sum(brall)]
        H[i, 0] = -w[sum(optall)]
        for j in range(n):
            brsubj = int(sum(brall[:j]))
            allj = p[j]
            if allj == (1, 0):
                G[i, j+1] = f[brsubj+1]
            elif allj == (0, 1):
                G[i, j+1] = -f[brsubj+1]
    return C, G, H
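Every builder in this file enumerates joint player configurations with the same nested `itertools.product` pattern before filling in the LP matrices. A minimal standalone sketch (the value `n = 3` is an illustrative choice, not from the source):

```python
from itertools import product

n = 3  # illustrative number of players
# Each player is assigned a pair of 0/1 flags (e.g. membership in the
# best-response allocation and in the optimal allocation). The trailing
# all-zero configuration is dropped by the [:-1] slice, leaving
# 4**n - 1 joint configurations, one LP variable per configuration.
partition = list(product(product([1, 0], repeat=2), repeat=n))[:-1]
print(len(partition))  # 63
```

With `repeat=n_ac` pairs per player (as in `kroundLPnonempty`), the count generalizes to `(2**n_ac)**n - 1`.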
def oneemptyprimal(w, f):
    def nashfunc(jselect, NJ):
        if jselect == 1:
            return f[len(NJ + [j])]
        elif jselect == 3:
            return -f[len(NJ + [j])]
        else:
            return 0

    n = len(w)-1  # number of players
    n_c = 4**n - 1  # number of allocation types
    partition = list(product([1, 2, 3, 4], repeat=n))[:-1]  # resource types
    c = np.zeros((n_c,), dtype='float')
    cons_1 = np.zeros((n, n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    for j in range(n):
        for i, p in enumerate(partition):
            # allocations for all agents
            Na = [k for k in range(n) if p[k] == 1]
            Nx = [k for k in range(n) if p[k] == 2]
            Nb = [k for k in range(n) if p[k] == 3]
            # allocations for first j agents
            NJ = [k for k in range(j) if p[k] <= 2]
            jselect = p[j]
            c[i] = -w[len(Nb + Nx)]  # maximize welfare of optimal allocation
            A[0, i] = w[len(Na + Nx)]  # set welfare of 1 round best response to 1
            cons_1[j][i] = nashfunc(jselect, NJ)  # best response constraint
    cons_2 = np.identity(n_c)
    G = -np.vstack((cons_1, cons_2))
    h = np.zeros((n_c+n, 1))
    b = np.array([[1]], dtype='float')
    return (c, G, h, A, b), partition
def oneemptyprimalsum(w, f):
    def nashfunc(jselect, NJ):
        if jselect == 1:
            return f[len(NJ + [j])]
        elif jselect == 3:
            return -f[len(NJ + [j])]
        else:
            return 0

    n = len(w)-1  # number of players
    n_c = 4**n - 1  # number of allocation types
    partition = list(product([1, 2, 3, 4], repeat=n))[:-1]  # resource types
    c = np.zeros((n_c,), dtype='float')
    cons_1 = np.zeros((n, n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    for j in range(n):
        for i, p in enumerate(partition):
            # allocations for all agents
            Na = [k for k in range(n) if p[k] == 1]
            Nx = [k for k in range(n) if p[k] == 2]
            Nb = [k for k in range(n) if p[k] == 3]
            # allocations for first j agents
            NJ = [k for k in range(j) if p[k] <= 2]
            jselect = p[j]
            c[i] = -w[len(Nb + Nx)]  # maximize welfare of optimal allocation
            A[0, i] = w[len(Na + Nx)]  # set welfare of 1 round best response to 1
            cons_1[j][i] = nashfunc(jselect, NJ)  # best response constraint
    cons_1 = np.sum(cons_1, 0)
    cons_2 = np.identity(n_c)
    G = -np.vstack((cons_1, cons_2))
    h = np.zeros((n_c+1, 1))
    b = np.array([[1]], dtype='float')
    return (c, G, h, A, b), partition
def oneroundbest(w, f):
    n = len(w)-1
    n_ac = 2
    partition = list(product(product([1, 0], repeat=n_ac), repeat=n))[:-1]
    n_c = len(partition)
    C = np.zeros((n_c,), dtype='float')
    br_cons = np.zeros((n, n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    for i, p in enumerate(partition):
        startac = [pl for pl in range(n) if p[pl][0] == 1]
        endac = [pl for pl in range(n) if p[pl][1] == 1]
        C[i] = w[len(endac)]
        A[0, i] = w[len(startac)]
        for j in range(n):
            startalloc = [pl for pl in range(j+1, n) if p[pl][0] == 1]
            endalloc = [pl for pl in range(j) if p[pl][1] == 1]
            otheralloc = startalloc + endalloc
            if p[j][1] == 1 and p[j][0] == 0:
                br_cons[j][i] = f[len(otheralloc + [j])]
            elif p[j][1] == 0 and p[j][0] == 1:
                br_cons[j][i] = -f[len(otheralloc + [j])]
    cons_2 = np.identity(n_c)
    G = -np.vstack((br_cons, cons_2))
    H = np.zeros((len(G), 1))
    B = np.ones((1, 1))
    return C, G, H, A, B
def kroundbest(w, f, k):
    n = len(w)-1
    n_ac = k+1
    partition = list(product(product([1, 0], repeat=n_ac), repeat=n))[:-1]
    n_c = len(partition)
    C = np.zeros((n_c,), dtype='float')
    br_cons = np.zeros((n*k*(n_ac+1), n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    for i, p in enumerate(partition):
        startac = [pl for pl in range(n) if p[pl][0] == 1]
        endac = [pl for pl in range(n) if p[pl][1] == 1]
        C[i] = w[len(endac)]
        A[0, i] = w[len(startac)]
        c = 0
        for j in range(n):
            for b in range(k):
                bnext = b + 1
                nextac = [pl for pl in range(j+1, n) if p[pl][b] == 1]
                prevac = [pl for pl in range(j) if p[pl][bnext] == 1]
                other_ac = nextac + prevac
                for a_other in range(n_ac):
                    if a_other == bnext:
                        continue
                    if p[j][bnext] == 1 and p[j][a_other] == 0:
                        br_cons[c][i] = f[len(other_ac + [j])]
                    elif p[j][bnext] == 0 and p[j][a_other] == 1:
                        br_cons[c][i] = -f[len(other_ac + [j])]
                    c += 1
    cons_2 = np.identity(n_c)
    G = -np.vstack((br_cons, cons_2))
    H = np.zeros((len(G), 1))
    B = np.ones((1, 1))
    return C, G, H, A, B
def reducedoneLPempty(w, f, reduced=False, expand=False):
    def optless(p):
        p = list(p)
        optall = sum([1 for e in p if e[1] == 1])
        nashall = sum([1 for e in p if e[0] == 1])
        if (nashall < 1 and optall < 2) or optall < 1:
            return False
        for i in range(len(p)):
            if p[i][1] == 1:
                nashbef = sum([1 for e in p[:i] if e[0] == 1])
                if optall > nashbef:
                    return True
        return False

    def nashlast(p):
        p = list(p)[::-1]
        nashall = sum([1 for e in p if e[0] == 1])
        if nashall < 2:
            return False
        for e in p:
            if e == (0, 1):
                return False
            elif e[0] == 1:
                return True

    def par_reduce(p):
        if optless(p) or nashlast(p):
            return None
        else:
            return p

    n = len(w)-1
    partition = list(product(product([1, 0], repeat=2), repeat=n))[:-1]
    if reduced:
        for i, p in enumerate(partition):
            partition[i] = par_reduce(p)
        partition = [e for e in partition if e is not None]
    n_c = len(partition)
    C = np.zeros((n_c,), dtype='float')
    br_cons = np.zeros((n, n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    for i, p in enumerate(partition):
        opt_i = [pl for pl in range(n) if p[pl][1] == 1]
        best_i = [pl for pl in range(n) if p[pl][0] == 1]
        if expand:
            C[i] = -w[len(opt_i)] - .0001*sum([sum(pi) for pi in p])
            A[0, i] = w[len(best_i)]
        else:
            C[i] = w[len(best_i)]
            A[0, i] = w[len(opt_i)]
        for j in range(n):
            other_ac = [pl for pl in range(j) if p[pl][0] == 1]
            if p[j][0] == 1 and p[j][1] == 0:
                br_cons[j][i] = f[len(other_ac)+1]
            elif p[j][0] == 0 and p[j][1] == 1:
                br_cons[j][i] = -f[len(other_ac)+1]
    cons_2 = np.identity(n_c)
    G = -np.vstack((br_cons, cons_2))
    H = np.zeros((len(G), 1))
    B = np.ones((1, 1))
    return (C, G, H, A, B), partition
def arbitrarykorderempty(w, f, order, reduced=False):
    n = len(w)-1
    k = len(order)
    n_ac = k+1
    partition = list(product(product([1, 0], repeat=n_ac), repeat=n))[:-1]
    n_c = len(partition)
    C = np.zeros((n_c,), dtype='float')
    br_cons = np.zeros((n*n_ac*k, n_c), dtype='float')
    A = np.zeros((1, n_c), dtype='float')
    a_k = -2
    a_opt = -1
    for i, p in enumerate(partition):
        opt_i = [pl for pl in range(n) if p[pl][a_opt] == 1]
        kbest_i = [pl for pl in range(n) if p[pl][a_k] == 1]
        if reduced:
            C[i] = w[len(kbest_i)] + .0001*sum([sum(pi) for pi in p])
        else:
            C[i] = w[len(kbest_i)]
        A[0, i] = w[len(opt_i)]
        c = 0
        for b in range(k):
            order_k = order[b]
            for j in range(n):
                ind = order_k.index(j)
                prevac = [pl for pl in order_k[:ind] if p[pl][b] == 1]
                nextac = [pl for pl in order_k[ind+1:] if p[pl][b-1] == 1] if b > 0 else []
                other_ac = nextac + prevac
                for a_other in range(n_ac):
                    if a_other == b:
                        continue
                    if p[j][b] == 1 and p[j][a_other] == 0:
                        br_cons[c][i] = f[len(other_ac + [j])]
                    elif p[j][b] == 0 and p[j][a_other] == 1:
                        br_cons[c][i] = -f[len(other_ac + [j])]
                    c += 1
    cons_2 = np.identity(n_c)
    G = -np.vstack((br_cons, cons_2))
    H = np.zeros((len(G), 1))
    B = np.ones((1, 1))
    return (C, G, H, A, B), partition
def step1redLPmod(w, f, reduced=False):
    def makepart():
        partition = [[[0, 0] for _ in range(n)] for _ in range(n*(n+1)+2*n)]
        c = 0
        for a in range(1, n+1):
            for b in range(n+1):
                for i in range(a):
                    partition[c][i][0] = 1
                for j in range(n-b, n):
                    partition[c][j][1] = 1
                c += 1
        for i in range(n):
            partition[c][i][0] = 1
            c += 1
        for i in range(n):
            partition[c][i][1] = 1
            c += 1
        return partition

    n = len(w) - 1
    partition = makepart()
    # print(partition)
    C = np.zeros((n+1,), dtype='float')
    C[0] = 1
    if reduced:
        C[1:] = .0001
    G = np.zeros((len(partition), n+1))
    cons_2 = -np.hstack((np.zeros((n, 1)), np.identity(n)))
    G = np.vstack((G, cons_2))
    H = np.zeros((len(G), 1))
    for i, p in enumerate(partition):
        # print('p: ', p)
        brall = [e[0] for e in p]
        optall = [e[1] for e in p]
        # print(brall, optall)
        G[i, 0] = -w[sum(brall)]
        H[i, 0] = -w[sum(optall)]
        for j in range(n):
            brsubj = int(sum(brall[:j]))
            allj = p[j]
            if allj == [1, 0]:
                G[i, j+1] = f[brsubj+1]
            elif allj == [0, 1]:
                G[i, j+1] = -f[brsubj+1]
    cons_3 = np.zeros((2, n+1))
    cons_3[0, n-1] = -1
    cons_3[1, n] = -1
    H3 = np.zeros((2, 1))
    H3[0, 0] = -1.578
    H3[1, 0] = -1.578
    H = np.vstack((H, H3))
    G = np.vstack((G, cons_3))
    print(G, H)
    return C, G, H
if __name__ == '__main__':
    from LPsolver import lp
    w = [0, 1, 1, 1]
    f = [0, 1, 0, 0]
    args = arbitrarykorderempty(w, f, [[0, 1, 2], [2, 1, 0]])
    sol = lp('cvxopt', *args[0])
    if sol is not None:
        pob = round(sol['min'], 2)
        values = [round(e, 2) for e in sol['argmin']]
        print(pob)
        partition = list(product(product([1, 0], repeat=3), repeat=3))[:-1]
        for i, p in enumerate(partition):
            if values[i] > 0:
                print(p, values[i])

    # df = .05
    # for order in [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]]:
    #     for f2 in np.arange(0, 1+df, df):
    #         for f3 in np.arange(0, 1+df, df):
    #             f = [0, 1, f2, f3]
    #             args = arbitrarykorderempty(w, f, [[0, 1, 2], order])
    #             sol = lp('cvxopt', *args[0])
    #             if sol is not None:
    #                 pob = round(sol['min'], 2)
    #                 if pob > .74:
    #                     print([round(e, 2) for e in f], pob, order)
# Source: legal-api/tests/unit/services/test_document_meta.py (repo: jordiwes/lear, license: Apache-2.0)
# Copyright © 2019 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests to assure the Document Meta Service.

Test-Suite to ensure that the Document Meta Service is working as expected.
"""
import copy
from unittest.mock import patch
from registry_schemas.example_data import (
    CORRECTION_INCORPORATION,
    INCORPORATION_FILING_TEMPLATE,
    TRANSITION_FILING_TEMPLATE,
)
from legal_api.models import Business, Filing
from legal_api.services import DocumentMetaService
from tests.unit.models import factory_business, factory_filing
FILING_DATE = '2020-07-14T11:41:07.230473-07:00'
COA_TITLE = 'Address Change'
NOA_TITLE = 'Notice of Articles'
NOA_FILENAME = 'BC1234567 - Notice of Articles - 2020-07-14.pdf'
COD_TITLE = 'Director Change'
CON_TITLE = 'Legal Name Change'
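The tests below repeatedly assert filenames built from the business identifier, a human-readable title, and the filing date. A hypothetical helper (not part of `DocumentMetaService`; shown only to make the pattern explicit) would look like:

```python
# Hypothetical illustration of the filename convention asserted throughout
# these tests: "<identifier> - <title> - <YYYY-MM-DD>.pdf".
def expected_filename(identifier: str, title: str, filing_date_iso: str) -> str:
    # Keep only the YYYY-MM-DD prefix of the ISO-8601 filing date.
    return f'{identifier} - {title} - {filing_date_iso[:10]}.pdf'

print(expected_filename('BC1234567', 'Notice of Articles',
                        '2020-07-14T11:41:07.230473-07:00'))
# BC1234567 - Notice of Articles - 2020-07-14.pdf
```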
def test_business_not_found(session, app):
    """Assert that no documents are returned when the filing's business is not found."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'DONT_CARE',
                    'name': 'changeOfAddress',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC7654321'
                }
            }
        }

        assert len(document_meta.get_documents(filing)) == 0

        # also verify document class properties:
        assert document_meta._business_identifier == 'BC7654321'
        assert document_meta._legal_type is None
def test_wrong_filing_status(session, app):
    """Assert that no documents are returned for a non- PAID and COMPLETED filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'NOT_PAID_OR_COMPLETE',
                    'name': 'changeOfAddress',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        assert len(document_meta.get_documents(filing)) == 0

        # also verify document class properties:
        assert document_meta._business_identifier == 'BC1234567'
        assert document_meta._legal_type == Business.LegalTypes.BCOMP.value


def test_available_on_paper_only(session, app):
    """Assert that no documents are returned for a paper-only filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfAddress',
                    'availableOnPaperOnly': True,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        assert len(document_meta.get_documents(filing)) == 0
def test_coa_paid(session, app):
    """Assert that an Address Change document is returned for a PAID COA filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'changeOfAddress',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == COA_TITLE
        assert documents[0]['filename'] == 'BC1234567 - Address Change - 2020-07-14.pdf'


def test_coa_completed_bc(session, app):
    """Assert that Address Change + NOA documents are returned for a COMPLETED BCOMP COA filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfAddress',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 2

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == COA_TITLE
        assert documents[0]['filename'] == 'BC1234567 - Address Change - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'noa'
        assert documents[1]['filingId'] == 12356
        assert documents[1]['title'] == NOA_TITLE
        assert documents[1]['filename'] == NOA_FILENAME
def test_coa_completed_cp(session, app):
    """Assert that an Address Change document is returned for a COMPLETED COOP COA filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='CP1234567', entity_type='CP')
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfAddress',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'CP1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == COA_TITLE
        assert documents[0]['filename'] == 'CP1234567 - Address Change - 2020-07-14.pdf'


def test_ar(session, app):
    """Assert that an Annual Report document is returned for an AR filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'annualReport',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == 'Annual Report'
        assert documents[0]['filename'] == 'BC1234567 - Annual Report - 2020-07-14.pdf'
def test_cod_paid(session, app):
    """Assert that a Director Change document is returned for a PAID COD filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'changeOfDirectors',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == COD_TITLE
        assert documents[0]['filename'] == 'BC1234567 - Director Change - 2020-07-14.pdf'


def test_cod_completed_bc(session, app):
    """Assert that Director Change + NOA documents are returned for a COMPLETED BCOMP COD filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfDirectors',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 2

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == COD_TITLE
        assert documents[0]['filename'] == 'BC1234567 - Director Change - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'noa'
        assert documents[1]['filingId'] == 12356
        assert documents[1]['title'] == NOA_TITLE
        assert documents[1]['filename'] == NOA_FILENAME


def test_cod_completed_cp(session, app):
    """Assert that a Director Change document is returned for a COMPLETED COOP COD filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='CP1234567', entity_type='CP')
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfDirectors',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'CP1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == COD_TITLE
        assert documents[0]['filename'] == 'CP1234567 - Director Change - 2020-07-14.pdf'
def test_con_paid(session, app):
    """Assert that a Legal Name Change document is returned for a PAID CON filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'changeOfName',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == CON_TITLE
        assert documents[0]['filename'] == 'BC1234567 - Legal Name Change - 2020-07-14.pdf'


def test_con_completed_bc(session, app):
    """Assert that Legal Name Change + NOA documents are returned for a COMPLETED BCOMP CON filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfName',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 2

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == CON_TITLE
        assert documents[0]['filename'] == 'BC1234567 - Legal Name Change - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'noa'
        assert documents[1]['filingId'] == 12356
        assert documents[1]['title'] == NOA_TITLE
        assert documents[1]['filename'] == NOA_FILENAME


def test_con_completed_cp(session, app):
    """Assert that a Legal Name Change document is returned for a COMPLETED COOP CON filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='CP1234567', entity_type='CP')
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'changeOfName',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'CP1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == CON_TITLE
        assert documents[0]['filename'] == 'CP1234567 - Legal Name Change - 2020-07-14.pdf'
def test_special_resolution_paid(session, app):
    """Assert that no documents are returned for a PAID Special Resolution filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'specialResolution',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 0


def test_special_resolution_completed(session, app):
    """Assert that a Special Resolution document is returned for a COMPLETED Special Resolution filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'specialResolution',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == 'Special Resolution'
        assert documents[0]['filename'] == 'BC1234567 - Special Resolution - 2020-07-14.pdf'
def test_voluntary_dissolution_paid(session, app):
    """Assert that no documents are returned for a PAID Voluntary Dissolution filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'voluntaryDissolution',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 0


def test_voluntary_dissolution_completed(session, app):
    """Assert that a Voluntary Dissolution document is returned for a COMPLETED Voluntary Dissolution filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'voluntaryDissolution',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == 'Voluntary Dissolution'
        assert documents[0]['filename'] == 'BC1234567 - Voluntary Dissolution - 2020-07-14.pdf'
def test_correction(session, app):
    """Assert that no documents are returned for a Correction filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'correction',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 0


def test_alteration(session, app):
    """Assert that one document is returned for an Alteration filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'alteration',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1
def test_ia_fed(app):
    """Assert that an IA - FED document is returned for a future effective IA filing."""
    from legal_api.utils.legislation_datetime import LegislationDatetime
    document_meta = DocumentMetaService()
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'incorporationApplication',
                    'availableOnPaperOnly': False,
                    'effectiveDate': LegislationDatetime.tomorrow_midnight().isoformat(),
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'T12345678'
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value
                    }
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == 'Incorporation Application - Future Effective Incorporation'
        assert documents[0]['filename'] == 'T12345678 - Incorporation Application (Future Effective) - 2020-07-14.pdf'


def test_ia_paid(app):
    """Assert that an IA - Pending document is returned for a PAID IA filing."""
    document_meta = DocumentMetaService()
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'PAID',
                    'name': 'incorporationApplication',
                    'availableOnPaperOnly': False,
                    'effectiveDate': FILING_DATE,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'T12345678'
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value
                    }
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12356
        assert documents[0]['title'] == 'Incorporation Application - Pending'
        assert documents[0]['filename'] == 'T12345678 - Incorporation Application (Pending) - 2020-07-14.pdf'
def test_ia_completed(session, app):
    """Assert that IA + NOA + Certificate documents are returned for a COMPLETED IA filing."""
    document_meta = DocumentMetaService()
    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'incorporationApplication',
                    'availableOnPaperOnly': False,
                    'effectiveDate': FILING_DATE,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'T12345678'
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value
                    }
                }
            }
        }

        with patch.object(Filing, 'find_by_id', return_value=Filing()):
            documents = document_meta.get_documents(filing)
            assert len(documents) == 3

            assert documents[0]['type'] == 'REPORT'
            assert documents[0]['reportType'] is None
            assert documents[0]['filingId'] == 12356
            assert documents[0]['title'] == 'Incorporation Application'
            assert documents[0]['filename'] == 'T12345678 - Incorporation Application - 2020-07-14.pdf'

            assert documents[1]['type'] == 'REPORT'
            assert documents[1]['reportType'] == 'noa'
            assert documents[1]['filingId'] == 12356
            assert documents[1]['title'] == 'Notice of Articles'
            assert documents[1]['filename'] == 'T12345678 - Notice of Articles - 2020-07-14.pdf'

            assert documents[2]['type'] == 'REPORT'
            assert documents[2]['reportType'] == 'certificate'
            assert documents[2]['filingId'] == 12356
            assert documents[2]['title'] == 'Certificate'
            assert documents[2]['filename'] == 'T12345678 - Certificate - 2020-07-14.pdf'


def test_ia_completed_bcomp(session, app):
    """Assert that IA + NOA + Certificate documents are returned for a COMPLETED IA filing when business is a BCOMP."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)

    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12356,
                    'status': 'COMPLETED',
                    'name': 'incorporationApplication',
                    'availableOnPaperOnly': False,
                    'effectiveDate': FILING_DATE,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        with patch.object(Filing, 'find_by_id', return_value=Filing()):
            documents = document_meta.get_documents(filing)
            assert len(documents) == 3

            assert documents[0]['type'] == 'REPORT'
            assert documents[0]['reportType'] is None
            assert documents[0]['filingId'] == 12356
            assert documents[0]['title'] == 'Incorporation Application'
            assert documents[0]['filename'] == 'BC1234567 - Incorporation Application - 2020-07-14.pdf'

            assert documents[1]['type'] == 'REPORT'
            assert documents[1]['reportType'] == 'noa'
            assert documents[1]['filingId'] == 12356
            assert documents[1]['title'] == NOA_TITLE
            assert documents[1]['filename'] == NOA_FILENAME

            assert documents[2]['type'] == 'REPORT'
            assert documents[2]['reportType'] == 'certificate'
            assert documents[2]['filingId'] == 12356
            assert documents[2]['title'] == 'Certificate'
            assert documents[2]['filename'] == 'BC1234567 - Certificate - 2020-07-14.pdf'


def test_ia_completed_bcomp_original(session, app):
    """Assert that IA + Certificate documents with (Original) are returned for a COMPLETED IA."""
    document_meta = DocumentMetaService()
    b = factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)

    with app.app_context():
        original_filing = factory_filing(b, INCORPORATION_FILING_TEMPLATE)
        CORRECTION_INCORPORATION['filing']['correction']['correctedFilingId'] = original_filing.id
        corrected_filing = factory_filing(b, CORRECTION_INCORPORATION)

        filing = {
            'filing': {
                'header': {
                    'filingId': original_filing.id,
                    'status': 'COMPLETED',
                    'name': 'incorporationApplication',
                    'availableOnPaperOnly': False,
                    'effectiveDate': FILING_DATE,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                }
            }
        }

        original_filing.parent_filing_id = corrected_filing.id
        original_filing.save()

        documents = document_meta.get_documents(filing)
        assert len(documents) == 3

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == original_filing.id
        assert documents[0]['title'] == 'Incorporation Application (Original)'
        assert documents[0]['filename'] == 'BC1234567 - Incorporation Application (Original) - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'noa'
        assert documents[1]['filingId'] == original_filing.id
        assert documents[1]['title'] == NOA_TITLE
        assert documents[1]['filename'] == NOA_FILENAME

        assert documents[2]['type'] == 'REPORT'
        assert documents[2]['reportType'] == 'certificate'
        assert documents[2]['filingId'] == original_filing.id
        assert documents[2]['title'] == 'Certificate (Original)'
        assert documents[2]['filename'] == 'BC1234567 - Certificate (Original) - 2020-07-14.pdf'


def test_correction_ia(session, app):
    """Assert that IA + NOA documents are returned for a Correction filing without name change."""
    document_meta = DocumentMetaService()
    b = factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)

    with app.app_context():
        original_filing = factory_filing(b, INCORPORATION_FILING_TEMPLATE)

        filing = {
            'filing': {
                'header': {
                    'filingId': 12357,
                    'status': 'COMPLETED',
                    'name': 'correction',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                },
                'correction': {
                    'correctedFilingId': original_filing.id
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value
                    }
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 2

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12357
        assert documents[0]['title'] == 'Incorporation Application (Corrected)'
        assert documents[0]['filename'] == 'BC1234567 - Incorporation Application (Corrected) - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'noa'
        assert documents[1]['filingId'] == 12357
        assert documents[1]['title'] == NOA_TITLE
        assert documents[1]['filename'] == NOA_FILENAME


def test_correction_ia_with_cert_nr_change(session, app):
    """Assert that IA + NOA + Certificate documents are returned for a Correction filing with name change."""
    document_meta = DocumentMetaService()
    b = factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    INCORPORATION_FILING_TEMPLATE['filing']['incorporationApplication']['nameRequest']['nrNumber'] = 'NR 1234567'
    original_filing = factory_filing(b, INCORPORATION_FILING_TEMPLATE)

    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12357,
                    'status': 'COMPLETED',
                    'name': 'correction',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                },
                'correction': {
                    'correctedFilingId': original_filing.id
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value,
                        'nrNumber': 'NR 3456789'
                    }
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 3

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12357
        assert documents[0]['title'] == 'Incorporation Application (Corrected)'
        assert documents[0]['filename'] == 'BC1234567 - Incorporation Application (Corrected) - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'certificate'
        assert documents[1]['filingId'] == 12357
        assert documents[1]['title'] == 'Certificate (Corrected)'
        assert documents[1]['filename'] == 'BC1234567 - Certificate (Corrected) - 2020-07-14.pdf'

        assert documents[2]['type'] == 'REPORT'
        assert documents[2]['reportType'] == 'noa'
        assert documents[2]['filingId'] == 12357
        assert documents[2]['title'] == NOA_TITLE
        assert documents[2]['filename'] == NOA_FILENAME


def test_correction_ia_with_cert_name_correction(session, app):
    """Assert that IA + NOA + Certificate documents are returned for a Correction filing with name change."""
    document_meta = DocumentMetaService()
    b = factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    INCORPORATION_FILING_TEMPLATE['filing']['incorporationApplication']['nameRequest']['nrNumber'] = 'NR 1234567'
    INCORPORATION_FILING_TEMPLATE['filing']['incorporationApplication']['nameRequest']['legalName'] = 'abc'
    original_filing = factory_filing(b, INCORPORATION_FILING_TEMPLATE)

    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12357,
                    'status': 'COMPLETED',
                    'name': 'correction',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                },
                'correction': {
                    'correctedFilingId': original_filing.id
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value,
                        'nrNumber': 'NR 1234567',
                        'legalName': 'abc.'
                    }
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 3

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12357
        assert documents[0]['title'] == 'Incorporation Application (Corrected)'
        assert documents[0]['filename'] == 'BC1234567 - Incorporation Application (Corrected) - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'certificate'
        assert documents[1]['filingId'] == 12357
        assert documents[1]['title'] == 'Certificate (Corrected)'
        assert documents[1]['filename'] == 'BC1234567 - Certificate (Corrected) - 2020-07-14.pdf'

        assert documents[2]['type'] == 'REPORT'
        assert documents[2]['reportType'] == 'noa'
        assert documents[2]['filingId'] == 12357
        assert documents[2]['title'] == NOA_TITLE
        assert documents[2]['filename'] == NOA_FILENAME


def test_correction_ia_with_named_to_numbered(session, app):
    """Assert that IA + NOA + Certificate documents are returned for a Correction filing with name change."""
    document_meta = DocumentMetaService()
    b = factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)
    INCORPORATION_FILING_TEMPLATE['filing']['incorporationApplication']['nameRequest']['nrNumber'] = 'NR 1234567'
    INCORPORATION_FILING_TEMPLATE['filing']['incorporationApplication']['nameRequest']['legalName'] = 'abc'
    original_filing = factory_filing(b, INCORPORATION_FILING_TEMPLATE)

    with app.app_context():
        filing = {
            'filing': {
                'header': {
                    'filingId': 12357,
                    'status': 'COMPLETED',
                    'name': 'correction',
                    'availableOnPaperOnly': False,
                    'date': FILING_DATE
                },
                'business': {
                    'identifier': 'BC1234567'
                },
                'correction': {
                    'correctedFilingId': original_filing.id
                },
                'incorporationApplication': {
                    'nameRequest': {
                        'legalType': Business.LegalTypes.BCOMP.value
                    }
                }
            }
        }

        documents = document_meta.get_documents(filing)
        assert len(documents) == 3

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 12357
        assert documents[0]['title'] == 'Incorporation Application (Corrected)'
        assert documents[0]['filename'] == 'BC1234567 - Incorporation Application (Corrected) - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'certificate'
        assert documents[1]['filingId'] == 12357
        assert documents[1]['title'] == 'Certificate (Corrected)'
        assert documents[1]['filename'] == 'BC1234567 - Certificate (Corrected) - 2020-07-14.pdf'

        assert documents[2]['type'] == 'REPORT'
        assert documents[2]['reportType'] == 'noa'
        assert documents[2]['filingId'] == 12357
        assert documents[2]['title'] == NOA_TITLE
        assert documents[2]['filename'] == NOA_FILENAME


def test_transition_bcomp_paid(session, app):
    """Assert that Transition Application document is returned for a filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)

    with app.app_context():
        filing = copy.deepcopy(TRANSITION_FILING_TEMPLATE)
        filing['filing']['header']['date'] = FILING_DATE
        filing['filing']['header']['status'] = 'PAID'
        filing['filing']['header']['availableOnPaperOnly'] = False

        documents = document_meta.get_documents(filing)
        assert len(documents) == 1

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 1
        assert documents[0]['title'] == 'Transition Application - Pending'
        assert documents[0]['filename'] == 'BC1234567 - Transition Application (Pending) - 2020-07-14.pdf'


def test_transition_bcomp_completed(session, app):
    """Assert that Transition Application + NOA documents are returned for a filing."""
    document_meta = DocumentMetaService()
    factory_business(identifier='BC1234567', entity_type=Business.LegalTypes.BCOMP.value)

    with app.app_context():
        filing = copy.deepcopy(TRANSITION_FILING_TEMPLATE)
        filing['filing']['header']['date'] = FILING_DATE
        filing['filing']['header']['status'] = 'COMPLETED'
        filing['filing']['header']['availableOnPaperOnly'] = False

        documents = document_meta.get_documents(filing)
        assert len(documents) == 2

        assert documents[0]['type'] == 'REPORT'
        assert documents[0]['reportType'] is None
        assert documents[0]['filingId'] == 1
        assert documents[0]['title'] == 'Transition Application'
        assert documents[0]['filename'] == 'BC1234567 - Transition Application - 2020-07-14.pdf'

        assert documents[1]['type'] == 'REPORT'
        assert documents[1]['reportType'] == 'noa'
        assert documents[1]['filingId'] == 1
        assert documents[1]['title'] == NOA_TITLE
        assert documents[1]['filename'] == NOA_FILENAME
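The incorporation-application tests above hand-build the same nested filing dict a dozen times. A hedged sketch of how that scaffolding could be factored into one helper — `build_ia_filing` is a hypothetical name, not part of this suite, and `'BEN'` below is only a placeholder for the legal-type value:

```python
FILING_DATE = '2020-07-14'


def build_ia_filing(filing_id, status, identifier, legal_type=None):
    """Return a filing dict shaped like the ones written inline in each test."""
    filing = {
        'filing': {
            'header': {
                'filingId': filing_id,
                'status': status,
                'name': 'incorporationApplication',
                'availableOnPaperOnly': False,
                'effectiveDate': FILING_DATE,
                'date': FILING_DATE
            },
            'business': {
                'identifier': identifier
            }
        }
    }
    # The nameRequest section only appears in some tests, so it is optional here.
    if legal_type is not None:
        filing['filing']['incorporationApplication'] = {
            'nameRequest': {'legalType': legal_type}
        }
    return filing
```

Each test would then vary only the status, identifier, and expected documents, which is also a natural fit for `pytest.mark.parametrize`.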
e0d08a15f1618c8834bda37c4c1b7a5d99f4b0da | 75 | py | Python | Mundo 3/Ex111/Utilidadescev/__init__.py | legna7/Python | 52e0b642d1b7acc592ec82dd360c5697fb0765db | ["MIT"]

from Ex111.Utilidadescev import moeda
from Ex111.Utilidadescev import dado
46025c5389ccd4c6cc90d450b5da4c802c8f2885 | 1,461 | py | Python | src/icolos/utils/enums/compound_enums.py | jharrymoore/Icolos | c60cc00c34208ab7011d41d52a74651763673e7a | ["Apache-2.0"] | 11 | 2022-01-30T14:36:13.000Z | 2022-03-22T09:40:57.000Z

class CompoundTagsEnum:
    CONFORMER_ENERGY_TAG = "conformer_energy"
    FORMAL_CHARGE_TAG = "formal_charge"

    # try to find the internal value and return
    def __getattr__(self, name):
        if name in self:
            return name
        raise AttributeError

    # prohibit any attempt to set any values
    def __setattr__(self, key, value):
        raise ValueError("No changes allowed.")


class CompoundContainerEnum:
    # try to find the internal value and return
    def __getattr__(self, name):
        if name in self:
            return name
        raise AttributeError

    # prohibit any attempt to set any values
    def __setattr__(self, key, value):
        raise ValueError("No changes allowed.")


class EnumerationContainerEnum:
    # try to find the internal value and return
    def __getattr__(self, name):
        if name in self:
            return name
        raise AttributeError

    # prohibit any attempt to set any values
    def __setattr__(self, key, value):
        raise ValueError("No changes allowed.")


class ConformerContainerEnum:
    EXTRA_DATA_COSMOFILE = "cosmo_file"
    EXTRA_DATA_COORDFILE = "coord_file"

    # try to find the internal value and return
    def __getattr__(self, name):
        if name in self:
            return name
        raise AttributeError

    # prohibit any attempt to set any values
    def __setattr__(self, key, value):
        raise ValueError("No changes allowed.")
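All four enum classes above share the same write-protection idiom: a `__setattr__` override that rejects any assignment. A minimal standalone sketch of that behaviour — `DemoEnum` is illustrative and not part of Icolos:

```python
class DemoEnum:
    SOME_TAG = "some_tag"

    # prohibit any attempt to set any values, mirroring the classes above
    def __setattr__(self, key, value):
        raise ValueError("No changes allowed.")


enum = DemoEnum()
print(enum.SOME_TAG)  # class attributes still read normally: some_tag
try:
    enum.SOME_TAG = "other"
except ValueError as err:
    print(err)  # No changes allowed.
```

Note that the `__getattr__` bodies above rely on `name in self`, which needs a `__contains__` (or `__iter__`) to work at runtime; the sketch sticks to the `__setattr__` half of the pattern.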
460f01abbd0adc2f3322e2750e3f90871e0e81ea | 4,071 | py | Python | tests/models/battle_creature_test.py | DaveTCode/CreatureRogue | 74ce2bf731b52b014198a2376dfba0d9782cbf01 | ["MIT"] | 1 | 2020-06-12T00:10:32.000Z | 2020-06-12T00:10:32.000Z

import CreatureRogue.creature_creator as creature_creator
import CreatureRogue.settings as settings
from CreatureRogue.data_layer.data import ATTACK_STAT
from CreatureRogue.data_layer.db_layer import Loader
from CreatureRogue.models.creature import Creature
from CreatureRogue.models.battle_creature import BattleCreature
loader = Loader(settings.DB_FILE)
static_game_data = loader.load_static_data()


def test_string_blank_name():
    creature = Creature(species=static_game_data.species[1], level=1, nickname=None, trainer=None,
                        individual_values=creature_creator.random_stat_values(static_game_data.stats, 1, 15),
                        effort_values=creature_creator.zero_stat_values(static_game_data.stats),
                        current_xp=1, was_traded=False, moves=[])
    battle_creature = BattleCreature(creature=creature, static_game_data=static_game_data)

    assert "Wild Bulbasaur" == str(battle_creature)


def test_stat_value_function():
    creature = Creature(species=static_game_data.species[1], level=1, nickname=None, trainer=None,
                        individual_values=creature_creator.random_stat_values(static_game_data.stats, 1, 15),
                        effort_values=creature_creator.zero_stat_values(static_game_data.stats),
                        current_xp=1, was_traded=False, moves=[])
    battle_creature = BattleCreature(creature=creature, static_game_data=static_game_data)

    assert 6.0 == battle_creature.stat_value(static_game_data.stats[ATTACK_STAT])

    battle_creature.adjust_stat_adjusts(static_game_data.stats[ATTACK_STAT], 1)
    assert 6.0 * 1.5 == battle_creature.stat_value(static_game_data.stats[ATTACK_STAT])  # Multiply default by 1.5


def test_stat_value_positive_capping():
    creature = Creature(species=static_game_data.species[1], level=1, nickname=None, trainer=None,
                        individual_values=creature_creator.random_stat_values(static_game_data.stats, 1, 15),
                        effort_values=creature_creator.zero_stat_values(static_game_data.stats),
                        current_xp=1, was_traded=False, moves=[])
    battle_creature = BattleCreature(creature=creature, static_game_data=static_game_data)

    battle_creature.adjust_stat_adjusts(static_game_data.stats[ATTACK_STAT], 6)
    assert 6.0 * 4.0 == battle_creature.stat_value(static_game_data.stats[ATTACK_STAT])

    assert 0 == battle_creature.adjust_stat_adjusts(static_game_data.stats[ATTACK_STAT], 1)
    assert 6.0 * 4.0 == battle_creature.stat_value(static_game_data.stats[ATTACK_STAT])  # Should still be the same


def test_stat_value_negative_capping():
    creature = Creature(species=static_game_data.species[1], level=1, nickname=None, trainer=None,
                        individual_values=creature_creator.random_stat_values(static_game_data.stats, 1, 15),
                        effort_values=creature_creator.zero_stat_values(static_game_data.stats),
                        current_xp=1, was_traded=False, moves=[])
    battle_creature = BattleCreature(creature=creature, static_game_data=static_game_data)

    battle_creature.adjust_stat_adjusts(static_game_data.stats[ATTACK_STAT], -6)
    assert 6.0 / 4.0 == battle_creature.stat_value(static_game_data.stats[ATTACK_STAT])

    assert 0 == battle_creature.adjust_stat_adjusts(static_game_data.stats[ATTACK_STAT], -1)
    assert 6.0 / 4.0 == battle_creature.stat_value(static_game_data.stats[ATTACK_STAT])  # Should still be the same


def test_modified_catch_rate():
    creature = Creature(species=static_game_data.species[1], level=1, nickname=None, trainer=None,
                        individual_values=creature_creator.random_stat_values(static_game_data.stats, 1, 15),
                        effort_values=creature_creator.zero_stat_values(static_game_data.stats),
                        current_xp=1, was_traded=False, moves=[])
    battle_creature = BattleCreature(creature=creature, static_game_data=static_game_data)

    assert 15.0 == battle_creature.modified_catch_rate(static_game_data.pokeballs[1])
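The stat tests assert a specific stage-to-multiplier mapping: +1 gives ×1.5, stages cap at ±6, +6 gives ×4.0, and -6 gives ×0.25. A hedged, standalone sketch — not CreatureRogue's actual implementation — of a formula consistent with those values:

```python
def stage_multiplier(stage: int) -> float:
    """Classic stat-stage multiplier: max(2, 2+s) / max(2, 2-s), s in [-6, 6]."""
    stage = max(-6, min(6, stage))  # stages are clamped, matching the capping tests
    return max(2, 2 + stage) / max(2, 2 - stage)


print(stage_multiplier(1))   # 1.5
print(stage_multiplier(6))   # 4.0
print(stage_multiplier(-6))  # 0.25
```

Whatever `adjust_stat_adjusts` does internally, any implementation passing the asserts above must agree with this mapping at stages +1, +6, and -6.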