# ============================================================================
# File: sdk/python/pulumi_vault/ad/secret_library.py
# Repo: pulumi/pulumi-vault @ 1682875f4a5d7d508f36e166529ad2b8aec34090
# Licenses: ECL-2.0, Apache-2.0
# ============================================================================
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['SecretLibraryArgs', 'SecretLibrary']
@pulumi.input_type
class SecretLibraryArgs:
def __init__(__self__, *,
backend: pulumi.Input[str],
service_account_names: pulumi.Input[Sequence[pulumi.Input[str]]],
disable_check_in_enforcement: Optional[pulumi.Input[bool]] = None,
max_ttl: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
ttl: Optional[pulumi.Input[int]] = None):
"""
The set of arguments for constructing a SecretLibrary resource.
:param pulumi.Input[str] backend: The mount path for the AD backend.
:param pulumi.Input[Sequence[pulumi.Input[str]]] service_account_names: The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
:param pulumi.Input[bool] disable_check_in_enforcement: Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
        :param pulumi.Input[int] max_ttl: The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
:param pulumi.Input[str] name: The name of the set of service accounts.
:param pulumi.Input[int] ttl: The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
pulumi.set(__self__, "backend", backend)
pulumi.set(__self__, "service_account_names", service_account_names)
if disable_check_in_enforcement is not None:
pulumi.set(__self__, "disable_check_in_enforcement", disable_check_in_enforcement)
if max_ttl is not None:
pulumi.set(__self__, "max_ttl", max_ttl)
if name is not None:
pulumi.set(__self__, "name", name)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
@property
@pulumi.getter
def backend(self) -> pulumi.Input[str]:
"""
The mount path for the AD backend.
"""
return pulumi.get(self, "backend")
@backend.setter
def backend(self, value: pulumi.Input[str]):
pulumi.set(self, "backend", value)
@property
@pulumi.getter(name="serviceAccountNames")
def service_account_names(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
"""
return pulumi.get(self, "service_account_names")
@service_account_names.setter
def service_account_names(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "service_account_names", value)
@property
@pulumi.getter(name="disableCheckInEnforcement")
def disable_check_in_enforcement(self) -> Optional[pulumi.Input[bool]]:
"""
Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
"""
return pulumi.get(self, "disable_check_in_enforcement")
@disable_check_in_enforcement.setter
def disable_check_in_enforcement(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disable_check_in_enforcement", value)
@property
@pulumi.getter(name="maxTtl")
def max_ttl(self) -> Optional[pulumi.Input[int]]:
"""
        The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
"""
return pulumi.get(self, "max_ttl")
@max_ttl.setter
def max_ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_ttl", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the set of service accounts.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def ttl(self) -> Optional[pulumi.Input[int]]:
"""
The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
return pulumi.get(self, "ttl")
@ttl.setter
def ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "ttl", value)
@pulumi.input_type
class _SecretLibraryState:
def __init__(__self__, *,
backend: Optional[pulumi.Input[str]] = None,
disable_check_in_enforcement: Optional[pulumi.Input[bool]] = None,
max_ttl: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
service_account_names: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[int]] = None):
"""
Input properties used for looking up and filtering SecretLibrary resources.
:param pulumi.Input[str] backend: The mount path for the AD backend.
:param pulumi.Input[bool] disable_check_in_enforcement: Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
        :param pulumi.Input[int] max_ttl: The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
:param pulumi.Input[str] name: The name of the set of service accounts.
:param pulumi.Input[Sequence[pulumi.Input[str]]] service_account_names: The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
:param pulumi.Input[int] ttl: The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
if backend is not None:
pulumi.set(__self__, "backend", backend)
if disable_check_in_enforcement is not None:
pulumi.set(__self__, "disable_check_in_enforcement", disable_check_in_enforcement)
if max_ttl is not None:
pulumi.set(__self__, "max_ttl", max_ttl)
if name is not None:
pulumi.set(__self__, "name", name)
if service_account_names is not None:
pulumi.set(__self__, "service_account_names", service_account_names)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
@property
@pulumi.getter
def backend(self) -> Optional[pulumi.Input[str]]:
"""
The mount path for the AD backend.
"""
return pulumi.get(self, "backend")
@backend.setter
def backend(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "backend", value)
@property
@pulumi.getter(name="disableCheckInEnforcement")
def disable_check_in_enforcement(self) -> Optional[pulumi.Input[bool]]:
"""
Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
"""
return pulumi.get(self, "disable_check_in_enforcement")
@disable_check_in_enforcement.setter
def disable_check_in_enforcement(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disable_check_in_enforcement", value)
@property
@pulumi.getter(name="maxTtl")
def max_ttl(self) -> Optional[pulumi.Input[int]]:
"""
        The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
"""
return pulumi.get(self, "max_ttl")
@max_ttl.setter
def max_ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_ttl", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the set of service accounts.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="serviceAccountNames")
def service_account_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
"""
return pulumi.get(self, "service_account_names")
@service_account_names.setter
def service_account_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "service_account_names", value)
@property
@pulumi.getter
def ttl(self) -> Optional[pulumi.Input[int]]:
"""
The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
return pulumi.get(self, "ttl")
@ttl.setter
def ttl(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "ttl", value)
class SecretLibrary(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
backend: Optional[pulumi.Input[str]] = None,
disable_check_in_enforcement: Optional[pulumi.Input[bool]] = None,
max_ttl: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
service_account_names: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[int]] = None,
__props__=None):
"""
Create a SecretLibrary resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] backend: The mount path for the AD backend.
:param pulumi.Input[bool] disable_check_in_enforcement: Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
        :param pulumi.Input[int] max_ttl: The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
:param pulumi.Input[str] name: The name of the set of service accounts.
:param pulumi.Input[Sequence[pulumi.Input[str]]] service_account_names: The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
:param pulumi.Input[int] ttl: The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: SecretLibraryArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Create a SecretLibrary resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param SecretLibraryArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(SecretLibraryArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
backend: Optional[pulumi.Input[str]] = None,
disable_check_in_enforcement: Optional[pulumi.Input[bool]] = None,
max_ttl: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
service_account_names: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[int]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = SecretLibraryArgs.__new__(SecretLibraryArgs)
if backend is None and not opts.urn:
raise TypeError("Missing required property 'backend'")
__props__.__dict__["backend"] = backend
__props__.__dict__["disable_check_in_enforcement"] = disable_check_in_enforcement
__props__.__dict__["max_ttl"] = max_ttl
__props__.__dict__["name"] = name
if service_account_names is None and not opts.urn:
raise TypeError("Missing required property 'service_account_names'")
__props__.__dict__["service_account_names"] = service_account_names
__props__.__dict__["ttl"] = ttl
super(SecretLibrary, __self__).__init__(
'vault:ad/secretLibrary:SecretLibrary',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
backend: Optional[pulumi.Input[str]] = None,
disable_check_in_enforcement: Optional[pulumi.Input[bool]] = None,
max_ttl: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
service_account_names: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[int]] = None) -> 'SecretLibrary':
"""
Get an existing SecretLibrary resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] backend: The mount path for the AD backend.
:param pulumi.Input[bool] disable_check_in_enforcement: Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
        :param pulumi.Input[int] max_ttl: The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
:param pulumi.Input[str] name: The name of the set of service accounts.
:param pulumi.Input[Sequence[pulumi.Input[str]]] service_account_names: The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
:param pulumi.Input[int] ttl: The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _SecretLibraryState.__new__(_SecretLibraryState)
__props__.__dict__["backend"] = backend
__props__.__dict__["disable_check_in_enforcement"] = disable_check_in_enforcement
__props__.__dict__["max_ttl"] = max_ttl
__props__.__dict__["name"] = name
__props__.__dict__["service_account_names"] = service_account_names
__props__.__dict__["ttl"] = ttl
return SecretLibrary(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def backend(self) -> pulumi.Output[str]:
"""
The mount path for the AD backend.
"""
return pulumi.get(self, "backend")
@property
@pulumi.getter(name="disableCheckInEnforcement")
def disable_check_in_enforcement(self) -> pulumi.Output[Optional[bool]]:
"""
Disable enforcing that service accounts must be checked in by the entity or client token that checked them out.
"""
return pulumi.get(self, "disable_check_in_enforcement")
@property
@pulumi.getter(name="maxTtl")
def max_ttl(self) -> pulumi.Output[int]:
"""
        The maximum amount of time, in seconds, a check-out lasts with renewal before Vault automatically checks it back in.
"""
return pulumi.get(self, "max_ttl")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the set of service accounts.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="serviceAccountNames")
def service_account_names(self) -> pulumi.Output[Sequence[str]]:
"""
The names of all the service accounts that can be checked out from this set. These service accounts must already exist
in Active Directory.
"""
return pulumi.get(self, "service_account_names")
@property
@pulumi.getter
def ttl(self) -> pulumi.Output[int]:
"""
The amount of time, in seconds, a single check-out lasts before Vault automatically checks it back in.
"""
return pulumi.get(self, "ttl")


# ============================================================================
# File: sigpy/fft.py
# Repo: davidyzeng/sigpy @ 56f8eb9be57b5a80e53ae09f2ba0802586fe69bc
# License: BSD-3-Clause
# ============================================================================
# -*- coding: utf-8 -*-
"""FFT functions.
This module contains FFT functions that support centered operation.
"""
import numpy as np
from sigpy import config, util
if config.cupy_enabled:
import cupy as cp
def fft(input, oshape=None, axes=None, center=True, norm='ortho'):
"""FFT function that supports centering.
Args:
input (array): input array.
oshape (None or array of ints): output shape.
axes (None or array of ints): Axes over which to compute the FFT.
        norm (None or ``"ortho"``): Keyword to specify the normalization mode.
Returns:
array: FFT result of dimension oshape.
See Also:
:func:`numpy.fft.fftn`
"""
device = util.get_device(input)
xp = device.xp
with device:
if not np.issubdtype(input.dtype, np.complexfloating):
            input = input.astype(np.complex128)
if center:
output = _fftc(input, oshape=oshape, axes=axes, norm=norm)
else:
output = xp.fft.fftn(input, s=oshape, axes=axes, norm=norm)
if np.issubdtype(input.dtype, np.complexfloating) and input.dtype != output.dtype:
output = output.astype(input.dtype)
return output
def ifft(input, oshape=None, axes=None, center=True, norm='ortho'):
"""IFFT function that supports centering.
Args:
input (array): input array.
oshape (None or array of ints): output shape.
axes (None or array of ints): Axes over which to compute the inverse FFT.
norm (None or ``"ortho"``): Keyword to specify the normalization mode.
Returns:
array of dimension oshape.
See Also:
:func:`numpy.fft.ifftn`
"""
device = util.get_device(input)
xp = device.xp
with device:
if not np.issubdtype(input.dtype, np.complexfloating):
            input = input.astype(np.complex128)
if center:
output = _ifftc(input, oshape=oshape, axes=axes, norm=norm)
else:
output = xp.fft.ifftn(input, s=oshape, axes=axes, norm=norm)
if np.issubdtype(input.dtype, np.complexfloating) and input.dtype != output.dtype:
output = output.astype(input.dtype)
return output
def _fftc(input, oshape=None, axes=None, norm='ortho'):
ndim = input.ndim
axes = util._normalize_axes(axes, ndim)
device = util.get_device(input)
xp = device.xp
if oshape is None:
oshape = input.shape
with device:
tmp = input
tshape = list(input.shape)
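        # For each transform axis: swap it to the last position, center-crop
        # or zero-pad to the requested output length (util.resize), then
        # apply ifftshift -> fft -> fftshift along that axis so the result
        # stays centered in both the image and frequency domains.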
for a in axes:
i = oshape[a]
tshape[a] = i
tmp = tmp.swapaxes(a, -1)
tshape[a], tshape[-1] = tshape[-1], tshape[a]
tmp = util.resize(tmp, tshape)
tmp = xp.fft.ifftshift(tmp, axes=-1)
tmp = xp.fft.fft(tmp, axis=-1, norm=norm)
tmp = xp.fft.fftshift(tmp, axes=-1)
tmp = tmp.swapaxes(a, -1)
tshape[a], tshape[-1] = tshape[-1], tshape[a]
output = tmp
return output
def _ifftc(input, oshape=None, axes=None, norm='ortho'):
ndim = input.ndim
axes = util._normalize_axes(axes, ndim)
device = util.get_device(input)
xp = device.xp
if oshape is None:
oshape = input.shape
with device:
tmp = input
tshape = list(input.shape)
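        # Same per-axis scheme as _fftc, but using the inverse FFT.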
for a in axes:
i = oshape[a]
tshape[a] = i
tmp = tmp.swapaxes(a, -1)
tshape[a], tshape[-1] = tshape[-1], tshape[a]
tmp = util.resize(tmp, tshape)
tmp = xp.fft.ifftshift(tmp, axes=-1)
tmp = xp.fft.ifft(tmp, axis=-1, norm=norm)
tmp = xp.fft.fftshift(tmp, axes=-1)
tmp = tmp.swapaxes(a, -1)
tshape[a], tshape[-1] = tshape[-1], tshape[a]
output = tmp
return output
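
# ---------------------------------------------------------------------------
# Example usage (a minimal sketch; it assumes this module is importable as
# sigpy.fft, and the array contents are illustrative):
#
#     import numpy as np
#     from sigpy import fft as sp_fft
#
#     x = np.random.randn(8, 8).astype(np.complex128)
#     k = sp_fft.fft(x, axes=(-2, -1))    # centered, orthonormal FFT
#     y = sp_fft.ifft(k, axes=(-2, -1))   # inverse transform
#     assert np.allclose(x, y)            # round-trips back to x
# ---------------------------------------------------------------------------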


# ============================================================================
# File: nmigen_boards/tinyfpga_ax1.py
# Repo: lethalbit/nmigen-boards @ aaf18252e457ff95257137da2a629820c0ff2bfa
# License: BSD-2-Clause
# ============================================================================
from amaranth_boards.tinyfpga_ax1 import *
from amaranth_boards.tinyfpga_ax1 import __all__
import warnings
warnings.warn("instead of nmigen_boards.tinyfpga_ax1, use amaranth_boards.tinyfpga_ax1",
DeprecationWarning, stacklevel=2)
| 30.875 | 88 | 0.809717 | 31 | 247 | 6.064516 | 0.516129 | 0.297872 | 0.361702 | 0.398936 | 0.37234 | 0.37234 | 0 | 0 | 0 | 0 | 0 | 0.023364 | 0.133603 | 247 | 7 | 89 | 35.285714 | 0.85514 | 0 | 0 | 0 | 0 | 0 | 0.287449 | 0.222672 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
40e97528a432d45c92e827ad1f4cd5486c7c14c5 | 17,962 | py | Python | tests/assessment_authoring/test_managers.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 2 | 2018-02-23T12:16:11.000Z | 2020-10-08T17:54:24.000Z | tests/assessment_authoring/test_managers.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 87 | 2017-04-21T18:57:15.000Z | 2021-12-13T19:43:57.000Z | tests/assessment_authoring/test_managers.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 1 | 2018-03-01T16:44:25.000Z | 2018-03-01T16:44:25.000Z | """Unit tests of assessment.authoring managers."""
import pytest
from ..utilities.general import is_never_authz, is_no_authz, uses_cataloging, uses_filesystem_only
from dlkit.abstract_osid.osid import errors
from dlkit.abstract_osid.type.objects import TypeList as abc_type_list
from dlkit.primordium.id.primitives import Id
from dlkit.primordium.type.primitives import Type
from dlkit.runtime import PROXY_SESSION, proxy_example
from dlkit.runtime.managers import Runtime
REQUEST = proxy_example.SimpleRequest()
CONDITION = PROXY_SESSION.get_proxy_condition()
CONDITION.set_http_request(REQUEST)
PROXY = PROXY_SESSION.get_proxy(CONDITION)
DEFAULT_TYPE = Type(**{'identifier': 'DEFAULT', 'namespace': 'DEFAULT', 'authority': 'DEFAULT'})
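
# Each class fixture below is parametrized over several service
# configurations, so every test in its class runs once per implementation
# (plain, always-authz, never-authz, cataloging, filesystem, memcache).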
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_authoring_profile_class_fixture(request):
request.cls.service_config = request.param
request.cls.mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
@pytest.fixture(scope="function")
def assessment_authoring_profile_test_fixture(request):
pass
@pytest.mark.usefixtures("assessment_authoring_profile_class_fixture", "assessment_authoring_profile_test_fixture")
class TestAssessmentAuthoringProfile(object):
"""Tests for AssessmentAuthoringProfile"""
def test_supports_assessment_part_lookup(self):
"""Tests supports_assessment_part_lookup"""
assert isinstance(self.mgr.supports_assessment_part_lookup(), bool)
def test_supports_assessment_part_query(self):
"""Tests supports_assessment_part_query"""
assert isinstance(self.mgr.supports_assessment_part_query(), bool)
def test_supports_assessment_part_admin(self):
"""Tests supports_assessment_part_admin"""
assert isinstance(self.mgr.supports_assessment_part_admin(), bool)
def test_supports_assessment_part_bank(self):
"""Tests supports_assessment_part_bank"""
assert isinstance(self.mgr.supports_assessment_part_bank(), bool)
def test_supports_assessment_part_bank_assignment(self):
"""Tests supports_assessment_part_bank_assignment"""
assert isinstance(self.mgr.supports_assessment_part_bank_assignment(), bool)
def test_supports_assessment_part_item(self):
"""Tests supports_assessment_part_item"""
assert isinstance(self.mgr.supports_assessment_part_item(), bool)
def test_supports_assessment_part_item_design(self):
"""Tests supports_assessment_part_item_design"""
assert isinstance(self.mgr.supports_assessment_part_item_design(), bool)
def test_supports_sequence_rule_lookup(self):
"""Tests supports_sequence_rule_lookup"""
assert isinstance(self.mgr.supports_sequence_rule_lookup(), bool)
def test_supports_sequence_rule_admin(self):
"""Tests supports_sequence_rule_admin"""
assert isinstance(self.mgr.supports_sequence_rule_admin(), bool)
def test_get_assessment_part_record_types(self):
"""Tests get_assessment_part_record_types"""
assert isinstance(self.mgr.get_assessment_part_record_types(), abc_type_list)
def test_get_assessment_part_search_record_types(self):
"""Tests get_assessment_part_search_record_types"""
assert isinstance(self.mgr.get_assessment_part_search_record_types(), abc_type_list)
def test_get_sequence_rule_record_types(self):
"""Tests get_sequence_rule_record_types"""
assert isinstance(self.mgr.get_sequence_rule_record_types(), abc_type_list)
def test_get_sequence_rule_search_record_types(self):
"""Tests get_sequence_rule_search_record_types"""
assert isinstance(self.mgr.get_sequence_rule_search_record_types(), abc_type_list)
def test_get_sequence_rule_enabler_record_types(self):
"""Tests get_sequence_rule_enabler_record_types"""
assert isinstance(self.mgr.get_sequence_rule_enabler_record_types(), abc_type_list)
def test_get_sequence_rule_enabler_search_record_types(self):
"""Tests get_sequence_rule_enabler_search_record_types"""
assert isinstance(self.mgr.get_sequence_rule_enabler_search_record_types(), abc_type_list)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_authoring_manager_class_fixture(request):
request.cls.service_config = request.param
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for assessment.authoring manager tests'
catalog = request.cls.svc_mgr.create_bank(create_form)
request.cls.catalog_id = catalog.get_id()
request.cls.mgr = Runtime().get_manager('ASSESSMENT_AUTHORING', 'TEST_JSON_1', (3, 0, 0))
else:
request.cls.catalog_id = Id('resource.Resource%3A000000000000000000000000%40DLKIT.MIT.EDU')
def class_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.svc_mgr.delete_bank(request.cls.catalog_id)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_authoring_manager_test_fixture(request):
pass
@pytest.mark.usefixtures("assessment_authoring_manager_class_fixture", "assessment_authoring_manager_test_fixture")
class TestAssessmentAuthoringManager(object):
"""Tests for AssessmentAuthoringManager"""
def test_get_assessment_part_lookup_session(self):
"""Tests get_assessment_part_lookup_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_lookup_session_template
if self.svc_mgr.supports_assessment_part_lookup():
self.svc_mgr.get_assessment_part_lookup_session()
def test_get_assessment_part_lookup_session_for_bank(self):
"""Tests get_assessment_part_lookup_session_for_bank"""
# From tests_templates/resource.py::ResourceManager::get_resource_lookup_session_for_bin_template
if self.svc_mgr.supports_assessment_part_lookup():
self.svc_mgr.get_assessment_part_lookup_session_for_bank(self.catalog_id)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_lookup_session_for_bank()
def test_get_assessment_part_query_session(self):
"""Tests get_assessment_part_query_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_lookup_session_template
if self.svc_mgr.supports_assessment_part_query():
self.svc_mgr.get_assessment_part_query_session()
def test_get_assessment_part_query_session_for_bank(self):
"""Tests get_assessment_part_query_session_for_bank"""
# From tests_templates/resource.py::ResourceManager::get_resource_lookup_session_for_bin_template
if self.svc_mgr.supports_assessment_part_query():
self.svc_mgr.get_assessment_part_query_session_for_bank(self.catalog_id)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_query_session_for_bank()
def test_get_assessment_part_admin_session(self):
"""Tests get_assessment_part_admin_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_admin_session_template
if self.svc_mgr.supports_assessment_part_admin():
self.svc_mgr.get_assessment_part_admin_session()
def test_get_assessment_part_admin_session_for_bank(self):
"""Tests get_assessment_part_admin_session_for_bank"""
# From tests_templates/resource.py::ResourceManager::get_resource_admin_session_for_bin_template
if self.svc_mgr.supports_assessment_part_admin():
self.svc_mgr.get_assessment_part_admin_session_for_bank(self.catalog_id)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_admin_session_for_bank()
def test_get_assessment_part_bank_session(self):
"""Tests get_assessment_part_bank_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_admin_session_template
if self.svc_mgr.supports_assessment_part_bank():
self.svc_mgr.get_assessment_part_bank_session()
def test_get_assessment_part_bank_assignment_session(self):
"""Tests get_assessment_part_bank_assignment_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_admin_session_template
if self.svc_mgr.supports_assessment_part_bank_assignment():
self.svc_mgr.get_assessment_part_bank_assignment_session()
def test_get_sequence_rule_lookup_session(self):
"""Tests get_sequence_rule_lookup_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_lookup_session_template
if self.svc_mgr.supports_sequence_rule_lookup():
self.svc_mgr.get_sequence_rule_lookup_session()
def test_get_sequence_rule_lookup_session_for_bank(self):
"""Tests get_sequence_rule_lookup_session_for_bank"""
# From tests_templates/resource.py::ResourceManager::get_resource_lookup_session_for_bin_template
if self.svc_mgr.supports_sequence_rule_lookup():
self.svc_mgr.get_sequence_rule_lookup_session_for_bank(self.catalog_id)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_sequence_rule_lookup_session_for_bank()
def test_get_sequence_rule_admin_session(self):
"""Tests get_sequence_rule_admin_session"""
# From tests_templates/resource.py::ResourceManager::get_resource_admin_session_template
if self.svc_mgr.supports_sequence_rule_admin():
self.svc_mgr.get_sequence_rule_admin_session()
def test_get_sequence_rule_admin_session_for_bank(self):
"""Tests get_sequence_rule_admin_session_for_bank"""
# From tests_templates/resource.py::ResourceManager::get_resource_admin_session_for_bin_template
if self.svc_mgr.supports_sequence_rule_admin():
self.svc_mgr.get_sequence_rule_admin_session_for_bank(self.catalog_id)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_sequence_rule_admin_session_for_bank()
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_authoring_proxy_manager_class_fixture(request):
request.cls.service_config = request.param
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for assessment.authoring manager tests'
catalog = request.cls.svc_mgr.create_bank(create_form)
request.cls.catalog_id = catalog.get_id()
request.cls.mgr = Runtime().get_manager('ASSESSMENT_AUTHORING', 'TEST_JSON_1', (3, 0, 0))
else:
request.cls.catalog_id = Id('resource.Resource%3A000000000000000000000000%40DLKIT.MIT.EDU')
def class_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.svc_mgr.delete_bank(request.cls.catalog_id)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_authoring_proxy_manager_test_fixture(request):
pass
@pytest.mark.usefixtures("assessment_authoring_proxy_manager_class_fixture", "assessment_authoring_proxy_manager_test_fixture")
class TestAssessmentAuthoringProxyManager(object):
"""Tests for AssessmentAuthoringProxyManager"""
def test_get_assessment_part_lookup_session(self):
"""Tests get_assessment_part_lookup_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_lookup_session_template
if self.svc_mgr.supports_assessment_part_lookup():
self.svc_mgr.get_assessment_part_lookup_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_lookup_session()
def test_get_assessment_part_lookup_session_for_bank(self):
"""Tests get_assessment_part_lookup_session_for_bank"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_lookup_session_for_bin_template
if self.svc_mgr.supports_assessment_part_lookup():
self.svc_mgr.get_assessment_part_lookup_session_for_bank(self.catalog_id, PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_lookup_session_for_bank()
def test_get_assessment_part_query_session(self):
"""Tests get_assessment_part_query_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_lookup_session_template
if self.svc_mgr.supports_assessment_part_query():
self.svc_mgr.get_assessment_part_query_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_query_session()
def test_get_assessment_part_query_session_for_bank(self):
"""Tests get_assessment_part_query_session_for_bank"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_lookup_session_for_bin_template
if self.svc_mgr.supports_assessment_part_query():
self.svc_mgr.get_assessment_part_query_session_for_bank(self.catalog_id, PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_query_session_for_bank()
def test_get_assessment_part_admin_session(self):
"""Tests get_assessment_part_admin_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_admin_session_template
if self.svc_mgr.supports_assessment_part_admin():
self.svc_mgr.get_assessment_part_admin_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_admin_session()
def test_get_assessment_part_admin_session_for_bank(self):
"""Tests get_assessment_part_admin_session_for_bank"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_admin_session_for_bin_template
if self.svc_mgr.supports_assessment_part_admin():
self.svc_mgr.get_assessment_part_admin_session_for_bank(self.catalog_id, PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_admin_session_for_bank()
def test_get_assessment_part_bank_session(self):
"""Tests get_assessment_part_bank_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_admin_session_template
if self.svc_mgr.supports_assessment_part_bank():
self.svc_mgr.get_assessment_part_bank_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_bank_session()
def test_get_assessment_part_bank_assignment_session(self):
"""Tests get_assessment_part_bank_assignment_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_admin_session_template
if self.svc_mgr.supports_assessment_part_bank_assignment():
self.svc_mgr.get_assessment_part_bank_assignment_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_assessment_part_bank_assignment_session()
def test_get_sequence_rule_lookup_session(self):
"""Tests get_sequence_rule_lookup_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_lookup_session_template
if self.svc_mgr.supports_sequence_rule_lookup():
self.svc_mgr.get_sequence_rule_lookup_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_sequence_rule_lookup_session()
def test_get_sequence_rule_lookup_session_for_bank(self):
"""Tests get_sequence_rule_lookup_session_for_bank"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_lookup_session_for_bin_template
if self.svc_mgr.supports_sequence_rule_lookup():
self.svc_mgr.get_sequence_rule_lookup_session_for_bank(self.catalog_id, PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_sequence_rule_lookup_session_for_bank()
def test_get_sequence_rule_admin_session(self):
"""Tests get_sequence_rule_admin_session"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_admin_session_template
if self.svc_mgr.supports_sequence_rule_admin():
self.svc_mgr.get_sequence_rule_admin_session(PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_sequence_rule_admin_session()
def test_get_sequence_rule_admin_session_for_bank(self):
"""Tests get_sequence_rule_admin_session_for_bank"""
# From tests_templates/resource.py::ResourceProxyManager::get_resource_admin_session_for_bin_template
if self.svc_mgr.supports_sequence_rule_admin():
self.svc_mgr.get_sequence_rule_admin_session_for_bank(self.catalog_id, PROXY)
with pytest.raises(errors.NullArgument):
self.svc_mgr.get_sequence_rule_admin_session_for_bank()
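
# To run this module standalone (the exact invocation may vary with the
# dlkit test harness and runtime configuration):
#
#     pytest tests/assessment_authoring/test_managers.py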


# ============================================================================
# File: src/framework/ContainerLib.py
# Repo: securedataplane/mts @ 9ffe415ce586600e558e7a2855348c9cd1651f49
# License: MIT
# ============================================================================
import expLib as exp
from datetime import datetime
DpdkMem = "1024,0"
outDestMac = "00:00:00:00:30:56"
VmRam = "4G"
totalTenantVMs = 2
totalCpuCores = 16
l2fwdCpuCores = 8
InitMessage = "The scenario is getting prepared ..."
def phy2phy_SRIOV_Dynamic_Container(cnx_server, isDPDK, nbrCores, isIsolated=False, numVMs=1, numCont=1, numTenants=4):
# sanity checks
if numVMs > 4:
print("[ERROR] Currently, only 4 VM instances are available.")
return False
elif numCont > 64 // 3:
print("[ERROR] The maximum number of configurable containers is 21 as the PF's totalvfs is 64.") # not correct
return False
elif numCont % numVMs != 0:
print("[ERROR] Number of containers should be a multiple of number of VMs.")
return False
elif numTenants % numCont != 0:
print("[ERROR] Number of Flow Rules has to be a multiple of the number of Containers.")
return False
elif numVMs < 1 or numCont < 1:
return False
cpuArray = exp.cpuAllocation(numVMs, nbrCores, isIsolated, True, totalTenantVMs, 2, totalCpuCores)
exp.HOST_CPU = cpuArray["hostCpu"]
OvsCpu = cpuArray["ovsCpu"]
DpdkCpu = cpuArray["ovsDpdk"]
cpuDpdkPorts = cpuArray["cpuDpdkPorts"]
OvsVMsCpuArray = cpuArray["OvsVMsCpuArray"]
# upper bound derived from p2v, results in same offsets for in/out ports
num_vfs = str(numTenants * 2 + numCont)
    offset = int(num_vfs) // numCont
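    # offset = number of VFs reserved per container on each PF; container i
    # uses VF index i*offset on each port as its in/out port.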
exp.NicConfig = ["1", num_vfs]
exp.NicType = "mlx"
exp.isSRIOV = True
exp.Server_cnx = cnx_server
exp.pf_index = 0
exp.pfs = []
exp.vfs = []
exp.scsName = "phy2phy_SRIOV_Dynamic_Container" + "_IsDPDK=" + str(isDPDK) + "_IsIsolated=" + str(isIsolated) + "_numVMs=" + str(numVMs) + "_numCont=" + str(numCont)
logTimeStamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# exp.EmailNotify(InitMessage, "is beeing prepared", logTimeStamp)
exp.Logs("", logTimeStamp)
exp.isDPDK = isDPDK
if isDPDK:
# exp.OVS_PATH = exp.dpdk_path
print("Container mode currently does not support DPDK.")
return
exp.PhyPorts = [
("enp2s0f0", num_vfs),
("enp2s0f1", num_vfs)
]
exp.InitialConfig()
# set isContainer only now to make sure that OvS is also stopped on the host (container stopping/cleaning is done in ConfigOVS)
exp.isContainer = True
port_1_Vfs = exp.pfs[0][2]
port_2_Vfs = exp.pfs[1][2]
exp.MyVfs = []
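    # Containers are spread evenly across the vswitch VMs; container i is
    # placed on VM index int(float(i) / numCont * numVMs) + 1.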
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
print("[*] VM INDEX: " + str(vm_idx))
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
print(" => VM : " + vm)
exp.MyVfs.append((port_1_Vfs[i*offset], "0", "off", vm, i))
exp.MyVfs.append((port_2_Vfs[i*offset], "0", "off", vm, i))
exp.usedVms = [("vswitch-vm-container", OvsVMsCpuArray[0], VmRam)]
for i in range(1, numVMs):
exp.usedVms.append(("vswitch-vm-container-{}".format(i + 1), OvsVMsCpuArray[i], VmRam))
if isDPDK:
pass
else:
ovs_ports = []
for i in range(0, numCont):
ovs_ports.append([
(port_1_Vfs[i*offset], False),
(port_2_Vfs[i*offset], False)])
msg = exp.GetScenarioSummary([a for a in ovs_ports], OvsCpu, DpdkCpu, DpdkMem, isIsolated)
# exp.EmailNotify(msg, "is beeing prepared", logTimeStamp)
exp.Logs(msg, logTimeStamp)
exp.Vfsconfig()
if isDPDK:
pass
else:
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
exp.ConfigOVS(vm, "br0", ovs_ports[i], OvsCpu, ContNum=i, VMNum=vm_idx)
# add flow rules for each container
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
        for f in range(0, numTenants // numCont):
            match = "in_port={},ip,nw_dst=10.0.0.{}".format(exp.VfsMatch[port_1_Vfs[i*offset]], (i * (numTenants // numCont)) + 2 + f)
action = "mod_dl_dst:{},{}".format(outDestMac, exp.VfsMatch[port_2_Vfs[i*offset]])
exp.addFlowRule(vm, exp.OVS_PATH, "br0", match, action, ContNum=i)
exp.showFlowRules(vm, exp.OVS_PATH, "br0", ContNum=i)
print("[*] Offset: {}".format(offset))
print("[*] in/out MACs: ")
for i in range(0, numCont):
print(" Container ID: {}".format(i))
print(" IPs: 10.0.0.{}".format(range(i*(numTenants / numCont) + 2, (i+1)*(numTenants / numCont) + 2)))
print(" {}".format(exp.GetMacByVf(port_1_Vfs[i*offset])))
print(" {}".format(exp.GetMacByVf(port_2_Vfs[i*offset])))
# exp.EmailNotify(msg, "is ready", logTimeStamp)
return True
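
# Example invocation (a minimal sketch; `cnx` stands for the framework's
# server connection handle and is an assumption here):
#
#     ok = phy2phy_SRIOV_Dynamic_Container(cnx, isDPDK=False, nbrCores=2,
#                                          isIsolated=True, numVMs=2,
#                                          numCont=4, numTenants=8)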
def phy2vm2vm2phy_SRIOV_Dynamic_Container(cnx_server, isDPDK, nbrCores, isIsolated, numVMs=1, numCont=1, numTenants=4):
# sanity checks
if numVMs > 4:
print("[ERROR] Currently, only 4 VM instances are available.")
return False
elif numCont > 24:
print("[ERROR] The maximum number of configurable containers is 24 as the PF's totalvfs is 64.")
return False
elif numCont % numVMs != 0:
print("[ERROR] Number of containers should be a multiple of number of VMs.")
return False
elif numTenants % numCont != 0:
print("[ERROR] Number of Tenants has to be a multiple of the number of Containers.")
return False
elif numVMs < 1 or numCont < 1:
return False
# 2 cores per l2fwd session, all running on one tenant VM
exp.isContainer = True
cpuArray = exp.cpuAllocation(numVMs, nbrCores, isIsolated, True, 1, l2fwdCpuCores, totalCpuCores)
exp.isContainer = False
exp.HOST_CPU = cpuArray["hostCpu"]
OvsCpu = cpuArray["ovsCpu"]
DpdkCpu = cpuArray["ovsDpdk"]
cpuDpdkPorts = cpuArray["cpuDpdkPorts"]
TenantVMsCpuArray = cpuArray["TenantVMsCpuArray"]
OvsVMsCpuArray = cpuArray["OvsVMsCpuArray"]
# upper bound derived from p2v, results in same offsets for in/out ports
num_vfs = str(numTenants * 2 + numCont)
    offset = int(num_vfs) // numCont
exp.NicConfig = ["1", num_vfs]
exp.NicType = "mlx"
exp.isSRIOV = True
exp.Server_cnx = cnx_server
exp.pf_index = 0
exp.pfs = []
exp.vfs = []
    exp.scsName = "phy2vm2vm2phy_SRIOV_Dynamic_Container" + "_IsDPDK=" + str(isDPDK) + "_IsIsolated=" + str(isIsolated) + "_numVMs=" + str(numVMs) + "_numCont=" + str(numCont)
logTimeStamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# exp.EmailNotify(InitMessage, "is beeing prepared", logTimeStamp)
exp.Logs("", logTimeStamp)
exp.IsDPDK = isDPDK
if isDPDK:
# exp.OVS_PATH = exp.dpdk_path
print("Container mode currently does not support DPDK.")
return
    exp.PhyPorts = [
("enp2s0f0", num_vfs),
("enp2s0f1", num_vfs)
]
exp.InitialConfig()
# set isContainer only now to make sure that OvS is also stopped on the host (container stopping/cleaning is done in ConfigOVS)
exp.isContainer = True
port_1_Vfs = exp.pfs[0][2]
port_2_Vfs = exp.pfs[1][2]
exp.MyVfs = []
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
print("[*] VM INDEX: " + str(vm_idx))
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
tenant_vm = "tenant-green-1"
print(" => VM : " + vm)
# in/out VF per container
exp.MyVfs.append((port_1_Vfs[i*offset], "0", "off", vm, i))
exp.MyVfs.append((port_2_Vfs[i*offset], "0", "off", vm, i))
        for f in range(0, numTenants // numCont):
            vlan = str(((numTenants // numCont)*i + f + 1) * 10)
# tenant specific gateway VFs
exp.MyVfs.append((port_1_Vfs[i*offset + f*2 + 1], vlan, "off", vm, i))
exp.MyVfs.append((port_2_Vfs[i*offset + f*2 + 1], vlan, "off", vm, i))
exp.MyVfs.append((port_1_Vfs[i*offset + f*2 + 2], vlan, "off", tenant_vm))
exp.MyVfs.append((port_2_Vfs[i*offset + f*2 + 2], vlan, "off", tenant_vm))
exp.usedVms = [("vswitch-vm-container", OvsVMsCpuArray[0], VmRam)]
for i in range(1, numVMs):
exp.usedVms.append(("vswitch-vm-container-{}".format(i + 1), OvsVMsCpuArray[i], VmRam))
# allocate some base memory for the VM and 1GB + some overhead per tenant, as each l2fwd application needs at least one 1GB hugepage
exp.usedVms.append(("tenant-green-{}".format(1), TenantVMsCpuArray[0], str(488281 + 1074219 * numTenants)))
if isDPDK:
return
else:
ovs_ports = []
for i in range(0, numCont):
p = [(port_1_Vfs[i*offset], False),
(port_2_Vfs[i*offset], False)]
            for f in range(0, numTenants // numCont):
p.append((port_1_Vfs[i*offset + f*2 + 1], False))
p.append((port_2_Vfs[i*offset + f*2 + 1], False))
ovs_ports.append(p)
    msg = exp.GetScenarioSummary([a for a in ovs_ports], OvsCpu, DpdkCpu, DpdkMem, isIsolated)
# exp.EmailNotify(msg, "is beeing prepared", logTimeStamp)
exp.Logs(msg, logTimeStamp)
exp.Vfsconfig()
if isDPDK:
return
else:
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
exp.ConfigOVS(vm, "br0", ovs_ports[i], OvsCpu, ContNum=i, VMNum=vm_idx)
# add flow rules for each container
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
        for f in range(0, numTenants // numCont):
            match = "in_port={},ip,nw_dst=10.0.0.{}".format(exp.VfsMatch[port_1_Vfs[i*offset]], (numTenants // numCont)*i + f + 2)
action = "mod_dl_dst:{},{}".format(exp.GetMacByVf(port_1_Vfs[i*offset + f*2 + 2]), exp.VfsMatch[port_1_Vfs[i*offset + f*2 + 1]])
exp.addFlowRule(vm, exp.OVS_PATH, "br0", match, action, ContNum=i)
match = "in_port={}".format(exp.VfsMatch[port_2_Vfs[i*offset + f*2 + 1]])
action = "mod_dl_dst:{},{}".format(outDestMac, exp.VfsMatch[port_2_Vfs[i*offset]])
exp.addFlowRule(vm, exp.OVS_PATH, "br0", match, action, ContNum=i)
# adjust MAC address in l2fwd binary on tenant vm
tenant = "tenant-green-1"
mac = exp.GetMacByVf(port_2_Vfs[i*offset + f*2 + 1])
            exp.patchMAC(tenant, "./container/l2fwd-container-{}".format((numTenants // numCont)*i + 1), mac)
exp.showFlowRules(vm, exp.OVS_PATH, "br0", ContNum=i)
exp.startL2Frwd(numCont, vswitchMode="p2v_container", tenantCount=numCont)
print("[*] Offset: {}".format(offset))
print("[*] in/out MACs: ")
for i in range(0, numCont):
print(" Container ID: {}".format(i))
print(" IPs: 10.0.0.{}".format(range(i*(numTenants / numCont) + 2, (i+1)*(numTenants / numCont) + 2)))
print(" {}".format(exp.GetMacByVf(port_1_Vfs[i*offset])))
print(" {}".format(exp.GetMacByVf(port_2_Vfs[i*offset])))
# exp.EmailNotify(msg, "is ready", logTimeStamp)
return True
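
# Example invocation (a minimal sketch; `cnx` is the framework's server
# connection handle, assumed here):
#
#     ok = phy2vm2vm2phy_SRIOV_Dynamic_Container(cnx, isDPDK=False,
#                                                nbrCores=2, isIsolated=True,
#                                                numVMs=1, numCont=2,
#                                                numTenants=4)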
# untested
def vm2vm_SRIOV_Dynamic_Container(cnx_server, isDPDK, nbrCores, isIsolated, numVMs=1, numCont=1, numTenants=4):
# sanity checks
if numVMs > 4:
print("[ERROR] Currently, only 4 VM instances are available.")
return False
elif numCont > 2:
print("[ERROR] The number of possible OVS containers is limited by the number of available cores: 2 cores per l2fwd session, 2 sessions per OVS instance => limit=3")
return False
elif numCont % numVMs != 0:
print("[ERROR] Number of containers should be a multiple of number of VMs.")
return False
elif numTenants % numCont != 0:
print("[ERROR] Number of Flow Rules has to be a multiple of the number of Containers.")
return False
elif numVMs < 1 or numCont < 1:
return False
exp.isContainer = True
cpuArray = exp.cpuAllocation(2, nbrCores, isIsolated, True, 2, l2fwdCpuCores, totalCpuCores)
exp.isContainer = False
exp.HOST_CPU = cpuArray["hostCpu"]
OvsCpu = cpuArray["ovsCpu"]
DpdkCpu = cpuArray["ovsDpdk"]
cpuDpdkPorts = cpuArray["cpuDpdkPorts"]
TenantVMsCpuArray = cpuArray["TenantVMsCpuArray"]
OvsVMsCpuArray = cpuArray["OvsVMsCpuArray"]
    num_vfs = str(numCont * 5)
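    # Each container consumes five VFs per PF (indices i*5 .. i*5+4 below):
    # one in/out VF plus a gateway VF and a tenant VF for each of its two
    # l2fwd tenants.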
exp.NicConfig = ["1", num_vfs]
exp.NicType = "mlx"
exp.isSRIOV = True
exp.Server_cnx = cnx_server
exp.pf_index = 0
exp.pfs = []
exp.vfs = []
exp.scsName = "vm2vm_SRIOV_Dynamic_Container"+"_IsDPDK="+str(isDPDK)+"_IsIsolated="+str(isIsolated)
logTimeStamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# exp.EmailNotify(InitMessage, "is being prepared", logTimeStamp)
exp.Logs("", logTimeStamp)
exp.IsDPDK = isDPDK
if isDPDK:
exp.OVS_PATH = exp.dpdk_path
else:
exp.OVS_PATH = exp.nodpdk_path
exp.PhyPorts = [
("enp2s0f0", num_vfs),
("enp2s0f1", num_vfs)
]
exp.InitialConfig()
exp.isContainer = True
port_1_Vfs = exp.pfs[0][2]
port_2_Vfs = exp.pfs[1][2]
exp.MyVfs = []
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
print("[*] VM INDEX: " + str(vm_idx))
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
print(" => VM : " + vm)
exp.MyVfs.append((port_1_Vfs[i*5], "0", "off", vm, i))
exp.MyVfs.append((port_2_Vfs[i*5], "0", "off", vm, i))
for l in range(0, 2):
vlan = str((i*2 + l + 1)*10)
# tenant_vm = "tenant-green-{}".format(i*2 + l + 1)
tenant_vm = "tenant-green-{}".format(l + 1)
exp.MyVfs.append((port_1_Vfs[i*5 + l*2 + 1], vlan, "off", vm, i))
exp.MyVfs.append((port_2_Vfs[i*5 + l*2 + 1], vlan, "off", vm, i))
exp.MyVfs.append((port_1_Vfs[i*5 + l*2 + 2], vlan, "off", tenant_vm))
exp.MyVfs.append((port_2_Vfs[i*5 + l*2 + 2], vlan, "off", tenant_vm))
exp.usedVms = [("vswitch-vm-container", OvsVMsCpuArray[0], VmRam)]
for i in range(1, numVMs):
exp.usedVms.append(("vswitch-vm-container-{}".format(i + 1), OvsVMsCpuArray[i], VmRam))
for i in range(0, 2):
exp.usedVms.append(("tenant-green-{}".format(i + 1), TenantVMsCpuArray[i], str(2929688 * numCont)))
if isDPDK:
ovs_ports = []
for i in range(0, numCont):
ovs_ports.append([
(port_1_Vfs[i*5], True, cpuDpdkPorts),
(port_2_Vfs[i*5], True, cpuDpdkPorts),
(port_1_Vfs[i*5 + 1], True, cpuDpdkPorts),
(port_2_Vfs[i*5 + 1], True, cpuDpdkPorts),
(port_1_Vfs[i*5 + 3], True, cpuDpdkPorts),
(port_2_Vfs[i*5 + 3], True, cpuDpdkPorts)])
else:
ovs_ports = []
for i in range(0, numCont):
ovs_ports.append([
(port_1_Vfs[i*5], False),
(port_2_Vfs[i*5], False),
(port_1_Vfs[i*5 + 1], False),
(port_2_Vfs[i*5 + 1], False),
(port_1_Vfs[i*5 + 3], False),
(port_2_Vfs[i*5 + 3], False)])
msg = exp.GetScenarioSummary(list(ovs_ports), OvsCpu, DpdkCpu, DpdkMem, isIsolated)
# exp.EmailNotify(msg, "is being prepared", logTimeStamp)
exp.Logs(msg, logTimeStamp)
exp.Vfsconfig()
if isDPDK:
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
exp.ConfigOVS(vm, "br0", ovs_ports[i], OvsCpu, DpdkMem, DpdkCpu, ContNum=i, VMNum=vm_idx)
else:
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
exp.ConfigOVS(vm, "br0", ovs_ports[i], OvsCpu, ContNum=i, VMNum=vm_idx)
for i in range(0, numCont):
vm_idx = int(float(i) / numCont * numVMs) + 1
vm = "vswitch-vm-container-{}".format(vm_idx) if vm_idx != 1 else "vswitch-vm-container"
match="in_port={},ip,nw_dst=10.0.0.{}".format(exp.VfsMatch[port_1_Vfs[i*5]], (i+1)*2)
action="mod_dl_dst:{},{}".format(exp.GetMacByVf(port_1_Vfs[i*5 + 2]), exp.VfsMatch[port_1_Vfs[i*5 + 1]])
exp.addFlowRule(vm, exp.OVS_PATH, "br0", match, action)
match="in_port={}".format(exp.VfsMatch[port_2_Vfs[i*5 + 1]])
action="mod_dl_dst:{},{}".format(exp.GetMacByVf(port_1_Vfs[i*5 + 4]), exp.VfsMatch[port_1_Vfs[i*5 + 3]])
exp.addFlowRule(vm, exp.OVS_PATH, "br0", match, action)
match="in_port={}".format(exp.VfsMatch[port_2_Vfs[i*5 + 3]])
action="mod_dl_dst:{},{}".format(outDestMac, exp.VfsMatch[port_2_Vfs[i*5]])
exp.addFlowRule(vm, exp.OVS_PATH, "br0", match, action)
exp.showFlowRules(vm, exp.OVS_PATH, "br0")
tenant = "tenant-green-{}".format(1)
mac = exp.GetMacByVf(port_2_Vfs[i*5 + 1])
exp.patchMAC(tenant, "./container/l2fwd-FourOvsVm-container-{}".format(i + 1), mac)
tenant = "tenant-green-{}".format(2)
mac = exp.GetMacByVf(port_2_Vfs[i*5 + 3])
exp.patchMAC(tenant, "./container/l2fwd-FourOvsVm-container-{}".format(i + 1), mac)
exp.startL2Frwd(numCont, vswitchMode="v2v_container", tenantCount=numCont)
# exp.EmailNotify(msg, "is ready", logTimeStamp)
return True | 42.22069 | 174 | 0.59425 | 2,478 | 18,366 | 4.291768 | 0.095642 | 0.02031 | 0.022567 | 0.022849 | 0.887823 | 0.87607 | 0.851998 | 0.810531 | 0.789469 | 0.755618 | 0 | 0.030337 | 0.260536 | 18,366 | 435 | 175 | 42.22069 | 0.752743 | 0.07797 | 0 | 0.728916 | 0 | 0.003012 | 0.174672 | 0.035254 | 0.009036 | 0 | 0 | 0 | 0 | 1 | 0.009036 | false | 0.006024 | 0.006024 | 0 | 0.081325 | 0.096386 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
908adad600220bfa758e4c08f2b765c35975c807 | 140 | py | Python | durasftp/__init__.py | scholarsmate/durasftp | e7d4d5ed72278b88882c04688134c15efb7d38cb | [
"MIT"
] | 3 | 2019-09-11T20:58:15.000Z | 2021-12-20T06:46:43.000Z | durasftp/__init__.py | scholarsmate/durasftp | e7d4d5ed72278b88882c04688134c15efb7d38cb | [
"MIT"
] | 1 | 2021-06-02T00:02:11.000Z | 2021-06-02T00:02:11.000Z | durasftp/__init__.py | scholarsmate/durasftp | e7d4d5ed72278b88882c04688134c15efb7d38cb | [
"MIT"
] | 2 | 2019-09-14T18:19:53.000Z | 2021-09-10T21:00:39.000Z | from durasftp.common.sftp.connection import DurableSFTPConnection
from durasftp.common.sftp.mirrorer import Mirrorer
__version__ = "1.0.0"
| 28 | 65 | 0.835714 | 18 | 140 | 6.277778 | 0.611111 | 0.212389 | 0.318584 | 0.389381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.085714 | 140 | 4 | 66 | 35 | 0.859375 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9097041db68a3509dcacea3051c78e009ad218b4 | 16,618 | py | Python | tests/test_htmltag.py | humitos/sphinx-hoverxref | 2b7145537dc9fe62703f5dc2bcc32c0da3bdc767 | [
"MIT"
] | 3 | 2019-08-16T20:03:12.000Z | 2019-08-21T14:03:53.000Z | tests/test_htmltag.py | humitos/sphinx-hoverxref | 2b7145537dc9fe62703f5dc2bcc32c0da3bdc767 | [
"MIT"
] | 11 | 2019-06-04T07:37:59.000Z | 2019-08-21T10:25:05.000Z | tests/test_htmltag.py | humitos/sphinx-hoverxref | 2b7145537dc9fe62703f5dc2bcc32c0da3bdc767 | [
"MIT"
] | null | null | null | import re
import pytest
import sphinx
import textwrap
from hoverxref import __version__
from .utils import srcdir, prefixdocumentsrcdir, customobjectsrcdir, pythondomainsrcdir, intersphinxsrc, bibtexdomainsrcdir
@pytest.mark.sphinx(
srcdir=srcdir,
)
def test_default_settings(app, status, warning):
"""The extension should not change the output if not configured."""
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="reference internal" href="chapter-i.html#chapter-i"><span class="std std-ref">This a :ref: to Chapter I</span></a>',
'<a class="hoverxref tooltip reference internal" href="chapter-i.html#section-i"><span class="std std-ref">This a :hoverxref: to Chapter I, Section I</span></a>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=srcdir,
)
def test_js_render(app, status, warning):
app.build()
path = app.outdir / '_static' / 'js' / 'hoverxref.js'
assert path.exists() is True
content = open(path).read()
chunks = [
"theme: ['tooltipster-shadow', 'tooltipster-shadow-custom']",
"interactive: true",
"maxWidth: 450",
"animation: 'fade'",
"animationDuration: 0",
"contentAsHTML: true",
"content: 'Loading...'",
"var url = 'https://readthedocs.org' + '/api/v3/embed/?' + $.param(params);",
textwrap.indent(
textwrap.dedent("""
var params = {{
'doctool': 'sphinx',
'doctoolversion': '{}',
'url': url,
}}""".format(sphinx.__version__)),
' ',
).strip(),
"var sphinxtabs = false",
"var mathjax = false",
"var url = getEmbedURL(href);",
f"headers: {{'X-HoverXRef-Version': '{__version__}'}}",
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=prefixdocumentsrcdir,
)
def test_autosectionlabel_project_version_settings(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="reference internal" href="chapter-i.html#chapter-i"><span class="std std-ref">This a :ref: to Chapter I</span></a>.',
'<a class="hoverxref tooltip reference internal" href="chapter-i.html#chapter-i"><span class="std std-ref">This a :hoverxref: to Chapter I</span></a>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=customobjectsrcdir,
confoverrides={},
)
def test_custom_object(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="hoverxref tooltip reference internal" href="configuration.html#confval-conf-title"><code class="xref std std-confval docutils literal notranslate"><span class="pre">This</span> <span class="pre">is</span> <span class="pre">a</span> <span class="pre">:confval:</span> <span class="pre">to</span> <span class="pre">conf-title</span></code></a>',
'<a class="hoverxref tooltip reference internal" href="configuration.html#configuration"><span class="std std-ref">This is a :hoverxref: to Configuration document</span></a>',
'<a class="hoverxref tooltip reference internal" href="code.html#python-code-block"><span class="std std-numref">This is a :numref: to a Python code block (PyExample)</span></a>'
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=pythondomainsrcdir,
confoverrides={
'hoverxref_domains': ['py'],
},
)
def test_python_domain(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="hoverxref tooltip reference internal" href="api.html#hoverxref.extension.HoverXRefStandardDomainMixin" title="hoverxref.extension.HoverXRefStandardDomainMixin"><code class="xref py py-class docutils literal notranslate"><span class="pre">This</span> <span class="pre">is</span> <span class="pre">a</span> <span class="pre">:py:class:</span> <span class="pre">role</span> <span class="pre">to</span> <span class="pre">a</span> <span class="pre">Python</span> <span class="pre">object</span></code></a>',
'<a class="hoverxref tooltip reference internal" href="api.html#module-hoverxref.extension" title="hoverxref.extension"><code class="xref py py-mod docutils literal notranslate"><span class="pre">hoverxref.extension</span></code></a>',
'<a class="hoverxref tooltip reference internal" href="api.html#hoverxref.extension.setup" title="hoverxref.extension.setup"><code class="xref py py-func docutils literal notranslate"><span class="pre">hoverxref.extension.setup()</span></code></a>',
'<code class="xref py py-const docutils literal notranslate"><span class="pre">Constant</span></code>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=pythondomainsrcdir,
confoverrides={
'hoverxref_domains': ['py'],
'hoverxref_intersphinx': ['python'],
'hoverxref_auto_ref': True,
'extensions': [
'sphinx.ext.autodoc',
'sphinx.ext.autosectionlabel',
'sphinx.ext.intersphinx',
'hoverxref.extension',
],
},
)
def test_python_domain_intersphinx(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="hoverxref tooltip reference internal" href="api.html#hoverxref.extension.HoverXRefStandardDomainMixin" title="hoverxref.extension.HoverXRefStandardDomainMixin"><code class="xref py py-class docutils literal notranslate"><span class="pre">This</span> <span class="pre">is</span> <span class="pre">a</span> <span class="pre">:py:class:</span> <span class="pre">role</span> <span class="pre">to</span> <span class="pre">a</span> <span class="pre">Python</span> <span class="pre">object</span></code></a>',
'<a class="hoverxref tooltip reference internal" href="api.html#module-hoverxref.extension" title="hoverxref.extension"><code class="xref py py-mod docutils literal notranslate"><span class="pre">hoverxref.extension</span></code></a>',
'<a class="hoverxref tooltip reference internal" href="api.html#hoverxref.extension.setup" title="hoverxref.extension.setup"><code class="xref py py-func docutils literal notranslate"><span class="pre">hoverxref.extension.setup()</span></code></a>',
'<code class="xref py py-const docutils literal notranslate"><span class="pre">Constant</span></code>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.skipif(
sphinx.version_info < (2, 1, 0),
reason='sphinxcontrib-bibtex requires Sphinx>=2.1 to work',
)
@pytest.mark.sphinx(
srcdir=bibtexdomainsrcdir,
confoverrides={
'hoverxref_domains': ['cite'],
},
)
def test_bibtex_domain(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<p>See <span id="id1">Nelson [<a class="hoverxref tooltip reference internal" href="#id4" title="Edward Nelson. Radically Elementary Probability Theory. Princeton University Press, 1987.">Nel87</a>]</span> for an introduction to non-standard analysis.\nNon-standard analysis is fun <span id="id2">[<a class="hoverxref tooltip reference internal" href="#id4" title="Edward Nelson. Radically Elementary Probability Theory. Princeton University Press, 1987.">Nel87</a>]</span>.</p>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=srcdir,
confoverrides={
'hoverxref_roles': ['term'],
},
)
def test_glossary_term_domain(app, status, warning):
app.build()
path = app.outdir / 'glossary.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<p>See definition <a class="hoverxref tooltip reference internal" href="#term-builder"><span class="xref std std-term">builder</span></a> for more information.</p>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=srcdir,
confoverrides={
'hoverxref_default_type': 'modal',
},
)
def test_default_type(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="reference internal" href="chapter-i.html#chapter-i"><span class="std std-ref">This a :ref: to Chapter I</span></a>',
'<a class="hoverxref modal reference internal" href="chapter-i.html#section-i"><span class="std std-ref">This a :hoverxref: to Chapter I, Section I</span></a>',
]
for chunk in chunks:
assert chunk in content
@pytest.mark.sphinx(
srcdir=srcdir,
confoverrides={
'hoverxref_ignore_refs': [
'section i',
],
},
)
def test_ignore_refs(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks = [
'<a class="reference internal" href="chapter-i.html#chapter-i"><span class="std std-ref">This a :ref: to Chapter I</span></a>',
'<a class="reference internal" href="chapter-i.html#section-i"><span class="std std-ref">This a :hoverxref: to Chapter I, Section I</span></a>',
]
for chunk in chunks:
assert chunk in content
ignored_chunks = [
'<a class="hoverxref reference internal" href="chapter-i.html#section-i"><span class="std std-ref">This a :hoverxref: to Chapter I, Section I</span></a>',
]
for chunk in ignored_chunks:
assert chunk not in content
@pytest.mark.sphinx(
srcdir=intersphinxsrc,
)
def test_intersphinx_default_configs(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks_regex = [
r'<a class="reference external" href="https://docs.python.org/3/tutorial/index.html#tutorial-index" title="\(in Python v3.\d\d?\)"><span class="xref std std-ref">This a :ref: to The Python Tutorial using intersphinx</span></a>',
r'<a class="reference external" href="https://docs.python.org/3/library/datetime.html#datetime-datetime" title="\(in Python v3.\d\d?\)"><span class="xref std std-ref">This a :ref: to datetime.datetime Python’s function using intersphinx</span></a>',
r'<a class="reference external" href="https://docs.readthedocs.io/en/stable/config-file/v2.html#python" title="\(in Read the Docs user documentation v\d\d?.\d\d?.\d\d?\)"><span class="xref std std-ref">This a :ref: to Config File v2 Read the Docs’ page using intersphinx</span></a>',
r'<a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="\(in Python v3.\d\d?\)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a>',
]
chunks = [
'<a class="reference internal" href="#hoverxref.extension.setup" title="hoverxref.extension.setup"><code class="xref py py-func docutils literal notranslate"><span class="pre">hoverxref.extension.setup()</span></code></a>',
]
if sphinx.version_info >= (4, 0):
chunks.extend([
'<dt class="sig sig-object py" id="hoverxref.extension.setup">',
])
else:
chunks.extend([
'<dt id="hoverxref.extension.setup">',
])
for chunk in chunks:
assert chunk in content
for chunk in chunks_regex:
assert re.search(chunk, content)
@pytest.mark.sphinx(
srcdir=intersphinxsrc,
confoverrides={
'hoverxref_intersphinx': [
'python',
],
},
)
def test_intersphinx_python_mapping(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks_regex = [
# Python's links do have hoverxref enabled
r'<a class="hoverxref tooltip reference external" href="https://docs.python.org/3/tutorial/index.html#tutorial-index" title="\(in Python v3.\d\d?\)"><span class="xref std std-ref">This a :ref: to The Python Tutorial using intersphinx</span></a>',
r'<a class="hoverxref tooltip reference external" href="https://docs.python.org/3/library/datetime.html#datetime-datetime" title="\(in Python v3.\d\d?\)"><span class="xref std std-ref">This a :ref: to datetime.datetime Python’s function using intersphinx</span></a>',
r'<a class="hoverxref tooltip reference external" href="https://docs.python.org/3/library/functions.html#float" title="\(in Python v3.\d\d?\)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a>',
# Read the Docs' link does not have hoverxref enabled
r'<a class="reference external" href="https://docs.readthedocs.io/en/stable/config-file/v2.html#python" title="\(in Read the Docs user documentation v\d\d?.\d\d?.\d\d?\)"><span class="xref std std-ref">This a :ref: to Config File v2 Read the Docs’ page using intersphinx</span></a>',
]
chunks = [
# Python's domain does not have hoverxref enabled
'<a class="reference internal" href="#hoverxref.extension.setup" title="hoverxref.extension.setup"><code class="xref py py-func docutils literal notranslate"><span class="pre">hoverxref.extension.setup()</span></code></a>',
]
for chunk in chunks:
assert chunk in content
for chunk in chunks_regex:
assert re.search(chunk, content)
@pytest.mark.sphinx(
srcdir=intersphinxsrc,
confoverrides={
'hoverxref_intersphinx': [
'readthedocs',
'python',
],
'hoverxref_intersphinx_types': {
'readthedocs': 'modal',
'python': {
'class': 'modal',
}
},
'hoverxref_domains': ['py'],
'default_role': 'obj',
},
)
def test_intersphinx_all_mappings(app, status, warning):
app.build()
path = app.outdir / 'index.html'
assert path.exists() is True
content = open(path).read()
chunks_regex = [
# Python's links do have hoverxref enabled
r'<a class="hoverxref tooltip reference external" href="https://docs.python.org/3/tutorial/index.html#tutorial-index" title="\(in Python v3.\d\d?\)"><span class="xref std std-ref">This a :ref: to The Python Tutorial using intersphinx</span></a>',
r'<a class="hoverxref tooltip reference external" href="https://docs.python.org/3/library/datetime.html#datetime-datetime" title="\(in Python v3.\d\d?\)"><span class="xref std std-ref">This a :ref: to datetime.datetime Python’s function using intersphinx</span></a>',
r'<a class="hoverxref modal reference external" href="https://docs.python.org/3/library/functions.html#float" title="\(in Python v3.\d\d?\)"><code class="xref py py-class docutils literal notranslate"><span class="pre">float</span></code></a>',
# Read the Docs' link does have hoverxref enabled
r'<a class="hoverxref modal reference external" href="https://docs.readthedocs.io/en/stable/config-file/v2.html#python" title="\(in Read the Docs user documentation v\d\d?.\d\d?.\d+\)"><span class="xref std std-ref">This a :ref: to Config File v2 Read the Docs’ page using intersphinx</span></a>',
# Using `default_role = 'obj'`
# Note the difference with the same `float` line previously. Here it uses `py-obj` instead of `py-class`.
r'<a class="hoverxref tooltip reference external" href="https://docs.python.org/3/library/functions.html#float" title="\(in Python v3.\d\d?\)"><code class="xref py py-obj docutils literal notranslate"><span class="pre">float</span></code></a>',
]
chunks = [
# Python domain's link does have hoverxref enabled
'<a class="hoverxref tooltip reference internal" href="#hoverxref.extension.setup" title="hoverxref.extension.setup"><code class="xref py py-func docutils literal notranslate"><span class="pre">hoverxref.extension.setup()</span></code></a>',
]
for chunk in chunks:
assert chunk in content
for chunk in chunks_regex:
assert re.search(chunk, content)
| 44.433155 | 521 | 0.657961 | 2,189 | 16,618 | 4.961169 | 0.106898 | 0.048066 | 0.040884 | 0.042541 | 0.811234 | 0.803683 | 0.795672 | 0.783978 | 0.780571 | 0.76593 | 0 | 0.003983 | 0.184258 | 16,618 | 373 | 522 | 44.552279 | 0.797138 | 0.028463 | 0 | 0.548822 | 0 | 0.131313 | 0.61722 | 0.190491 | 0.003367 | 0 | 0 | 0 | 0.10101 | 1 | 0.043771 | false | 0 | 0.020202 | 0 | 0.063973 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2908dbf3c1c5846e30c5542d436220439b4300e8 | 129 | py | Python | paltas/Substructure/__init__.py | swagnercarena/paltas | 62495381e406dfb508a1ace4aa69cbe9a4207e38 | [
"MIT"
] | 5 | 2022-02-11T19:58:03.000Z | 2022-03-07T19:45:23.000Z | paltas/Substructure/__init__.py | swagnercarena/paltas | 62495381e406dfb508a1ace4aa69cbe9a4207e38 | [
"MIT"
] | 8 | 2022-02-01T00:42:34.000Z | 2022-03-31T17:42:55.000Z | paltas/Substructure/__init__.py | swagnercarena/paltas | 62495381e406dfb508a1ace4aa69cbe9a4207e38 | [
"MIT"
] | 1 | 2022-02-11T19:54:53.000Z | 2022-02-11T19:54:53.000Z | from . import nfw_functions
from . import subhalos_dg19
from . import subhalos_base
from . import los_base
from . import los_dg19 | 25.8 | 27 | 0.813953 | 20 | 129 | 5 | 0.4 | 0.5 | 0.36 | 0.34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.147287 | 129 | 5 | 28 | 25.8 | 0.872727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
291e5606f53afb2a14f1e239c5dc7c43b71959c0 | 1,936 | py | Python | USERS1.py | XQuickmathsX/library-management-oncemore | e4e1650ebf40c63f9ed0aa0893cec010aebb0e76 | [
"MIT"
] | null | null | null | USERS1.py | XQuickmathsX/library-management-oncemore | e4e1650ebf40c63f9ed0aa0893cec010aebb0e76 | [
"MIT"
] | null | null | null | USERS1.py | XQuickmathsX/library-management-oncemore | e4e1650ebf40c63f9ed0aa0893cec010aebb0e76 | [
"MIT"
] | null | null | null | import mysql.connector
mydb = mysql.connector.connect(host="127.0.0.1", user="root", passwd="2zkNKcz&EOZaRjc$", database="library_management_project")
def useradd():
staffIDp=int(input("ENTER YOUR STAFF_ID:- "))
#todo query to add this expression into the database
stffnamep=input("ENTER YOUR FIRST NAME:- ")
#todo query to add this expression into the database
stflnamep=input("ENTER YOUR LAST NAME")
#todo query to add this expression into the database
stfcontactnumberp=int(input("ENTER YOUR CONTACT NUMBER:- "))
#todo query to add this expression into the database
stfemailp=input("ENTER YOUR EMAIL_ID:- ")
#todo query to add this expression into the database
stfaddressp=input("ENTER YOUR ADDRESS:- ")
#todo query to add this expression into the database
stfpasswordp=input("ENTER YOUR PASSWORD:- ")
#todo query to add this expression into the database
stftypep=input("ENTER YOUR STAFF TYPE:- ")
#todo query to add this expression into the database
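# Minimal sketch of the pending insert, assuming a `staff` table whose columns
# mirror these variables (the table and column names are assumptions, not
# taken from an original schema):
cur = mydb.cursor()
cur.execute("INSERT INTO staff (staff_id, first_name, last_name, contact_number, email_id, address, password, staff_type) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
(staffIDp, stffnamep, stflnamep, stfcontactnumberp, stfemailp, stfaddressp, stfpasswordp, stftypep))
mydb.commit()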
def userupdate():
staffIDp=int(input("ENTER YOUR STAFF_ID:- "))
#todo query to update this expression into the database
stffnamep=input("ENTER YOUR FIRST NAME:- ")
#todo query to update this expression into the database
stflnamep=input("ENTER YOUR LAST NAME")
#todo query to update this expression into the database
stfcontactnumberp=int(input("ENTER YOUR CONTACT NUMBER:- "))
#todo query to update this expression into the database
stfemailp=input("ENTER YOUR EMAIL_ID:- ")
#todo query to update this expression into the database
stfaddressp=input("ENTER YOUR ADDRESS:- ")
#todo query to update this expression into the database
stfpasswordp=input("ENTER YOUR PASSWORD:- ")
#todo query to update this expression into the database
stftypep=input("ENTER YOUR STAFF TYPE:- ")
#todo query to update this expression into the database | 53.777778 | 118 | 0.716426 | 264 | 1,936 | 5.231061 | 0.19697 | 0.115858 | 0.162201 | 0.243302 | 0.905141 | 0.905141 | 0.905141 | 0.905141 | 0.905141 | 0.837075 | 0 | 0.004516 | 0.19938 | 1,936 | 36 | 119 | 53.777778 | 0.886452 | 0.433884 | 0 | 0.8 | 0 | 0 | 0.402486 | 0.024857 | 0 | 0 | 0 | 0.027778 | 0 | 1 | 0.1 | false | 0.15 | 0.05 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 10 |
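# Minimal sketch of the pending update, under the same assumed `staff` schema
# (keyed on staff_id; all table and column names here are assumptions):
cur = mydb.cursor()
cur.execute("UPDATE staff SET first_name=%s, last_name=%s, contact_number=%s, email_id=%s, address=%s, password=%s, staff_type=%s WHERE staff_id=%s",
(stffnamep, stflnamep, stfcontactnumberp, stfemailp, stfaddressp, stfpasswordp, stftypep, staffIDp))
mydb.commit()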
29331c362c6f1cdbe27a719b1db711950c083d91 | 140 | py | Python | keepaAPI/__init__.py | alexzhangwh/keepaAPI | 1c3d5ba07b07d4abf0b5b2c751ee05af2d9a1225 | [
"Apache-2.0"
] | null | null | null | keepaAPI/__init__.py | alexzhangwh/keepaAPI | 1c3d5ba07b07d4abf0b5b2c751ee05af2d9a1225 | [
"Apache-2.0"
] | null | null | null | keepaAPI/__init__.py | alexzhangwh/keepaAPI | 1c3d5ba07b07d4abf0b5b2c751ee05af2d9a1225 | [
"Apache-2.0"
] | null | null | null | from keepaAPI._version import __version__
from keepaAPI.interface import *
from keepaAPI.plotting import *
from keepaAPI.keepaTime import *
| 28 | 41 | 0.835714 | 17 | 140 | 6.588235 | 0.411765 | 0.428571 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 140 | 4 | 42 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2968a3a21618fef94e22896a90ee8ce0a0dae88b | 9,182 | py | Python | src/genie/libs/parser/iosxe/tests/ShowSpanningTreeDetail/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowSpanningTreeDetail/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowSpanningTreeDetail/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"rapid_pvst": {
"forwarding_delay": 15,
"hello_time": 2,
"hold_count": 6,
"max_age": 20,
"vlans": {
1: {
"aging_timer": 480,
"bridge_address": "000e.39ff.71a2",
"bridge_priority": 24576,
"bridge_sysid": 1,
"forwarding_delay": 15,
"hello_time": 2,
"hello_timer": 0,
"hold_count": 6,
"hold_time": 1,
"interfaces": {
"Port-channel220": {
"cost": 1,
"counters": {"bpdu_received": 0, "bpdu_sent": 20120147},
"designated_bridge_address": "000e.39ff.71a2",
"designated_bridge_priority": 24577,
"designated_path_cost": 0,
"designated_port_id": "128.1671",
"designated_root_address": "000e.39ff.71a2",
"designated_root_priority": 24577,
"forward_delay": 0,
"hold": 0,
"link_type": "point-to-point",
"message_age": 0,
"name": "Port-channel220",
"number_of_forward_transitions": 1,
"port_identifier": "128.1671.",
"port_num": 1671,
"port_priority": 128,
"status": "designated forwarding",
},
"Port-channel265": {
"cost": 3,
"counters": {"bpdu_received": 0, "bpdu_sent": 21320048},
"designated_bridge_address": "000e.39ff.71a2",
"designated_bridge_priority": 24577,
"designated_path_cost": 0,
"designated_port_id": "128.1673",
"designated_root_address": "000e.39ff.71a2",
"designated_root_priority": 24577,
"forward_delay": 0,
"hold": 0,
"link_type": "point-to-point",
"message_age": 0,
"name": "Port-channel265",
"number_of_forward_transitions": 1,
"port_identifier": "128.1673.",
"port_num": 1673,
"port_priority": 128,
"status": "designated forwarding",
},
},
"max_age": 20,
"notification_timer": 0,
"notification_times": 2,
"root_of_spanning_tree": True,
"time_since_topology_change": "38w1d",
"topology_change_flag": False,
"topology_change_timer": 0,
"topology_change_times": 35,
"topology_changes": 10,
"topology_detected_flag": False,
"topology_from_port": "GigabitEthernet8/10",
"vlan_id": 1,
},
115: {
"aging_timer": 480,
"bridge_address": "000e.39ff.71a2",
"bridge_priority": 24576,
"bridge_sysid": 115,
"forwarding_delay": 15,
"hello_time": 2,
"hello_timer": 0,
"hold_count": 6,
"hold_time": 1,
"interfaces": {
"Port-channel210": {
"cost": 2,
"counters": {"bpdu_received": 4, "bpdu_sent": 10172865},
"designated_bridge_address": "000e.39ff.71a2",
"designated_bridge_priority": 24691,
"designated_path_cost": 0,
"designated_port_id": "128.1670",
"designated_root_address": "000e.39ff.71a2",
"designated_root_priority": 24691,
"forward_delay": 0,
"hold": 0,
"link_type": "point-to-point",
"message_age": 0,
"name": "Port-channel210",
"number_of_forward_transitions": 1,
"port_identifier": "128.1670.",
"port_num": 1670,
"port_priority": 128,
"status": "designated forwarding",
}
},
"max_age": 20,
"notification_timer": 0,
"notification_times": 2,
"root_of_spanning_tree": True,
"time_since_topology_change": "33w6d",
"topology_change_flag": False,
"topology_change_timer": 0,
"topology_change_times": 35,
"topology_changes": 2,
"topology_detected_flag": False,
"topology_from_port": "Port-channel210",
"vlan_id": 115,
},
116: {
"aging_timer": 480,
"bridge_address": "000e.39ff.71a2",
"bridge_priority": 24576,
"bridge_sysid": 116,
"forwarding_delay": 15,
"hello_time": 2,
"hello_timer": 0,
"hold_count": 6,
"hold_time": 1,
"interfaces": {
"Port-channel210": {
"cost": 2,
"counters": {"bpdu_received": 4, "bpdu_sent": 10172829},
"designated_bridge_address": "000e.39ff.71a2",
"designated_bridge_priority": 24692,
"designated_path_cost": 0,
"designated_port_id": "128.1670",
"designated_root_address": "000e.39ff.71a2",
"designated_root_priority": 24692,
"forward_delay": 0,
"hold": 0,
"link_type": "point-to-point",
"message_age": 0,
"name": "Port-channel210",
"number_of_forward_transitions": 1,
"port_identifier": "128.1670.",
"port_num": 1670,
"port_priority": 128,
"status": "designated forwarding",
}
},
"max_age": 20,
"notification_timer": 0,
"notification_times": 2,
"root_of_spanning_tree": True,
"time_since_topology_change": "33w6d",
"topology_change_flag": False,
"topology_change_timer": 0,
"topology_change_times": 35,
"topology_changes": 2,
"topology_detected_flag": False,
"topology_from_port": "Port-channel210",
"vlan_id": 116,
},
118: {
"aging_timer": 480,
"bridge_address": "000e.39ff.71a2",
"bridge_priority": 24576,
"bridge_sysid": 118,
"forwarding_delay": 15,
"hello_time": 2,
"hello_timer": 0,
"hold_count": 6,
"hold_time": 1,
"interfaces": {
"Port-channel210": {
"cost": 2,
"counters": {"bpdu_received": 4, "bpdu_sent": 10172791},
"designated_bridge_address": "000e.39ff.71a2",
"designated_bridge_priority": 24694,
"designated_path_cost": 0,
"designated_port_id": "128.1670",
"designated_root_address": "000e.39ff.71a2",
"designated_root_priority": 24694,
"forward_delay": 0,
"hold": 0,
"link_type": "point-to-point",
"message_age": 0,
"name": "Port-channel210",
"number_of_forward_transitions": 1,
"port_identifier": "128.1670.",
"port_num": 1670,
"port_priority": 128,
"status": "designated forwarding",
}
},
"max_age": 20,
"notification_timer": 0,
"notification_times": 2,
"root_of_spanning_tree": True,
"time_since_topology_change": "33w6d",
"topology_change_flag": False,
"topology_change_timer": 0,
"topology_change_times": 35,
"topology_changes": 2,
"topology_detected_flag": False,
"topology_from_port": "Port-channel210",
"vlan_id": 118,
},
},
}
}
| 43.516588 | 80 | 0.411022 | 695 | 9,182 | 5.083453 | 0.135252 | 0.063402 | 0.05944 | 0.07529 | 0.923861 | 0.923861 | 0.888197 | 0.876592 | 0.851684 | 0.801585 | 0 | 0.109156 | 0.480179 | 9,182 | 210 | 81 | 43.72381 | 0.63105 | 0 | 0 | 0.728571 | 0 | 0 | 0.380091 | 0.117513 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4652b034f45e78826bda2126a1aa4ccddc00b00c | 21,418 | py | Python | sdk/python/pulumi_artifactory/distribution_webhook.py | pulumi/terraform-provider-artifactory | 4f217f2e6bc2f7e5395a148cd3b3b7b5aaa66372 | [
"ECL-2.0",
"Apache-2.0"
] | 4 | 2021-11-17T15:06:59.000Z | 2022-03-21T02:36:15.000Z | sdk/python/pulumi_artifactory/distribution_webhook.py | pulumi/terraform-provider-artifactory | 4f217f2e6bc2f7e5395a148cd3b3b7b5aaa66372 | [
"ECL-2.0",
"Apache-2.0"
] | 113 | 2021-11-09T14:14:50.000Z | 2022-03-31T23:18:29.000Z | sdk/python/pulumi_artifactory/distribution_webhook.py | pulumi/terraform-provider-artifactory | 4f217f2e6bc2f7e5395a148cd3b3b7b5aaa66372 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-11-22T11:19:48.000Z | 2021-12-17T01:39:20.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['DistributionWebhookArgs', 'DistributionWebhook']
@pulumi.input_type
class DistributionWebhookArgs:
def __init__(__self__, *,
criteria: pulumi.Input['DistributionWebhookCriteriaArgs'],
event_types: pulumi.Input[Sequence[pulumi.Input[str]]],
handlers: pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]],
key: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None):
"""
The set of arguments for constructing a DistributionWebhook resource.
:param pulumi.Input['DistributionWebhookCriteriaArgs'] criteria: Specifies where the webhook will be applied on which repositories.
:param pulumi.Input[Sequence[pulumi.Input[str]]] event_types: List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
:param pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]] handlers: At least one is required.
:param pulumi.Input[str] key: The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
:param pulumi.Input[str] description: Webhook description. Max length 1000 characters.
:param pulumi.Input[bool] enabled: Status of webhook. Default to 'true'.
"""
pulumi.set(__self__, "criteria", criteria)
pulumi.set(__self__, "event_types", event_types)
pulumi.set(__self__, "handlers", handlers)
pulumi.set(__self__, "key", key)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
@property
@pulumi.getter
def criteria(self) -> pulumi.Input['DistributionWebhookCriteriaArgs']:
"""
Specifies where the webhook will be applied on which repositories.
"""
return pulumi.get(self, "criteria")
@criteria.setter
def criteria(self, value: pulumi.Input['DistributionWebhookCriteriaArgs']):
pulumi.set(self, "criteria", value)
@property
@pulumi.getter(name="eventTypes")
def event_types(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
"""
return pulumi.get(self, "event_types")
@event_types.setter
def event_types(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "event_types", value)
@property
@pulumi.getter
def handlers(self) -> pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]]:
"""
At least one is required.
"""
return pulumi.get(self, "handlers")
@handlers.setter
def handlers(self, value: pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]]):
pulumi.set(self, "handlers", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Webhook description. Max length 1000 characters.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Status of webhook. Default to 'true'.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@pulumi.input_type
class _DistributionWebhookState:
def __init__(__self__, *,
criteria: Optional[pulumi.Input['DistributionWebhookCriteriaArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
event_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
handlers: Optional[pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]]] = None,
key: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering DistributionWebhook resources.
:param pulumi.Input['DistributionWebhookCriteriaArgs'] criteria: Specifies where the webhook will be applied on which repositories.
:param pulumi.Input[str] description: Webhook description. Max length 1000 characters.
:param pulumi.Input[bool] enabled: Status of webhook. Default to 'true'.
:param pulumi.Input[Sequence[pulumi.Input[str]]] event_types: List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
:param pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]] handlers: At least one is required.
:param pulumi.Input[str] key: The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
"""
if criteria is not None:
pulumi.set(__self__, "criteria", criteria)
if description is not None:
pulumi.set(__self__, "description", description)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if event_types is not None:
pulumi.set(__self__, "event_types", event_types)
if handlers is not None:
pulumi.set(__self__, "handlers", handlers)
if key is not None:
pulumi.set(__self__, "key", key)
@property
@pulumi.getter
def criteria(self) -> Optional[pulumi.Input['DistributionWebhookCriteriaArgs']]:
"""
Specifies where the webhook will be applied on which repositories.
"""
return pulumi.get(self, "criteria")
@criteria.setter
def criteria(self, value: Optional[pulumi.Input['DistributionWebhookCriteriaArgs']]):
pulumi.set(self, "criteria", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Webhook description. Max length 1000 characters.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Status of webhook. Default to 'true'.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter(name="eventTypes")
def event_types(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
"""
return pulumi.get(self, "event_types")
@event_types.setter
def event_types(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "event_types", value)
@property
@pulumi.getter
def handlers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]]]:
"""
At least one is required.
"""
return pulumi.get(self, "handlers")
@handlers.setter
def handlers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['DistributionWebhookHandlerArgs']]]]):
pulumi.set(self, "handlers", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
class DistributionWebhook(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
criteria: Optional[pulumi.Input[pulumi.InputType['DistributionWebhookCriteriaArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
event_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
handlers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DistributionWebhookHandlerArgs']]]]] = None,
key: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an Artifactory webhook resource. This can be used to register and manage Artifactory webhook subscription which enables you to be notified or notify other users when such events take place in Artifactory.
## Example Usage
.
```python
import pulumi
import pulumi_artifactory as artifactory
distribution_webhook = artifactory.DistributionWebhook("distribution-webhook",
criteria=artifactory.DistributionWebhookCriteriaArgs(
any_release_bundle=False,
exclude_patterns=["bar/**"],
include_patterns=["foo/**"],
registered_release_bundle_names=["bundle-name"],
),
event_types=[
"distribute_started",
"distribute_completed",
"distribute_aborted",
"distribute_failed",
"delete_started",
"delete_completed",
"delete_failed",
],
handlers=[artifactory.DistributionWebhookHandlerArgs(
custom_http_headers={
"header-1": "value-1",
"header-2": "value-2",
},
proxy="proxy-key",
secret="some-secret",
url="http://tempurl.org/webhook",
)],
key="distribution-webhook")
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['DistributionWebhookCriteriaArgs']] criteria: Specifies where the webhook will be applied on which repositories.
:param pulumi.Input[str] description: Webhook description. Max length 1000 characters.
:param pulumi.Input[bool] enabled: Status of webhook. Default to 'true'.
:param pulumi.Input[Sequence[pulumi.Input[str]]] event_types: List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DistributionWebhookHandlerArgs']]]] handlers: At least one is required.
:param pulumi.Input[str] key: The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: DistributionWebhookArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an Artifactory webhook resource. This can be used to register and manage Artifactory webhook subscription which enables you to be notified or notify other users when such events take place in Artifactory.
## Example Usage
.
```python
import pulumi
import pulumi_artifactory as artifactory
distribution_webhook = artifactory.DistributionWebhook("distribution-webhook",
criteria=artifactory.DistributionWebhookCriteriaArgs(
any_release_bundle=False,
exclude_patterns=["bar/**"],
include_patterns=["foo/**"],
registered_release_bundle_names=["bundle-name"],
),
event_types=[
"distribute_started",
"distribute_completed",
"distribute_aborted",
"distribute_failed",
"delete_started",
"delete_completed",
"delete_failed",
],
handlers=[artifactory.DistributionWebhookHandlerArgs(
custom_http_headers={
"header-1": "value-1",
"header-2": "value-2",
},
proxy="proxy-key",
secret="some-secret",
url="http://tempurl.org/webhook",
)],
key="distribution-webhook")
```
:param str resource_name: The name of the resource.
:param DistributionWebhookArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(DistributionWebhookArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
criteria: Optional[pulumi.Input[pulumi.InputType['DistributionWebhookCriteriaArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
event_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
handlers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DistributionWebhookHandlerArgs']]]]] = None,
key: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = DistributionWebhookArgs.__new__(DistributionWebhookArgs)
if criteria is None and not opts.urn:
raise TypeError("Missing required property 'criteria'")
__props__.__dict__["criteria"] = criteria
__props__.__dict__["description"] = description
__props__.__dict__["enabled"] = enabled
if event_types is None and not opts.urn:
raise TypeError("Missing required property 'event_types'")
__props__.__dict__["event_types"] = event_types
if handlers is None and not opts.urn:
raise TypeError("Missing required property 'handlers'")
__props__.__dict__["handlers"] = handlers
if key is None and not opts.urn:
raise TypeError("Missing required property 'key'")
__props__.__dict__["key"] = key
super(DistributionWebhook, __self__).__init__(
'artifactory:index/distributionWebhook:DistributionWebhook',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
criteria: Optional[pulumi.Input[pulumi.InputType['DistributionWebhookCriteriaArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
event_types: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
handlers: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DistributionWebhookHandlerArgs']]]]] = None,
key: Optional[pulumi.Input[str]] = None) -> 'DistributionWebhook':
"""
Get an existing DistributionWebhook resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['DistributionWebhookCriteriaArgs']] criteria: Specifies where the webhook will be applied on which repositories.
:param pulumi.Input[str] description: Webhook description. Max length 1000 characters.
:param pulumi.Input[bool] enabled: Status of webhook. Default to 'true'.
:param pulumi.Input[Sequence[pulumi.Input[str]]] event_types: List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DistributionWebhookHandlerArgs']]]] handlers: At least one is required.
:param pulumi.Input[str] key: The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _DistributionWebhookState.__new__(_DistributionWebhookState)
__props__.__dict__["criteria"] = criteria
__props__.__dict__["description"] = description
__props__.__dict__["enabled"] = enabled
__props__.__dict__["event_types"] = event_types
__props__.__dict__["handlers"] = handlers
__props__.__dict__["key"] = key
return DistributionWebhook(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def criteria(self) -> pulumi.Output['outputs.DistributionWebhookCriteria']:
"""
Specifies where the webhook will be applied on which repositories.
"""
return pulumi.get(self, "criteria")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
Webhook description. Max length 1000 characters.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Status of webhook. Default to 'true'.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter(name="eventTypes")
def event_types(self) -> pulumi.Output[Sequence[str]]:
"""
List of Events in Artifactory, Distribution, Release Bundle that function as the event trigger for the Webhook. Allow values: "distribute_started", "distribute_completed", "distribute_aborted", "distribute_failed", "delete_started", "delete_completed", "delete_failed"
"""
return pulumi.get(self, "event_types")
@property
@pulumi.getter
def handlers(self) -> pulumi.Output[Sequence['outputs.DistributionWebhookHandler']]:
"""
At least one is required.
"""
return pulumi.get(self, "handlers")
@property
@pulumi.getter
def key(self) -> pulumi.Output[str]:
"""
The identity key of the webhook. Must be between 2 and 200 characters. Cannot contain spaces.
"""
return pulumi.get(self, "key")
| 46.662309 | 338 | 0.649267 | 2,230 | 21,418 | 6.052466 | 0.098206 | 0.08802 | 0.059124 | 0.048159 | 0.838927 | 0.818997 | 0.794176 | 0.7678 | 0.755353 | 0.75276 | 0 | 0.004027 | 0.246428 | 21,418 | 458 | 339 | 46.764192 | 0.832218 | 0.394248 | 0 | 0.64557 | 1 | 0 | 0.128333 | 0.059837 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156118 | false | 0.004219 | 0.029536 | 0 | 0.278481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
465a63cb24b4317a4b67ccc45cfea9da7dcaf97b | 86 | py | Python | pruner/__init__.py | anang2602/dnn-compression | 6df76c72bd5403c962612b2b2be0909730243ceb | [
"MIT"
] | 1 | 2021-07-11T07:14:29.000Z | 2021-07-11T07:14:29.000Z | pruner/__init__.py | anang2602/dnn-compression | 6df76c72bd5403c962612b2b2be0909730243ceb | [
"MIT"
] | null | null | null | pruner/__init__.py | anang2602/dnn-compression | 6df76c72bd5403c962612b2b2be0909730243ceb | [
"MIT"
] | null | null | null | from pruner.levelpruner import level_prune
from pruner.levelpruner import LevelPruner
| 28.666667 | 42 | 0.883721 | 11 | 86 | 6.818182 | 0.545455 | 0.266667 | 0.56 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 86 | 2 | 43 | 43 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
46605eb4b035307157d224e733dc8c0055a05cea | 264 | py | Python | calendar_resource/models/__init__.py | woodbrettm/odoo | bd843b901f2c3852fe069eee705692fd21cffa15 | [
"BSD-Source-Code"
] | null | null | null | calendar_resource/models/__init__.py | woodbrettm/odoo | bd843b901f2c3852fe069eee705692fd21cffa15 | [
"BSD-Source-Code"
] | null | null | null | calendar_resource/models/__init__.py | woodbrettm/odoo | bd843b901f2c3852fe069eee705692fd21cffa15 | [
"BSD-Source-Code"
] | null | null | null | # -*- coding: utf-8 -*-
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
from . import calendar_event_type
from . import calendar_event
from . import resource_calendar_attendance
from . import resource_calendar
from . import resource_resource
| 29.333333 | 68 | 0.772727 | 38 | 264 | 5.184211 | 0.578947 | 0.253807 | 0.274112 | 0.233503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012931 | 0.121212 | 264 | 8 | 69 | 33 | 0.836207 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
469890b21137826da8a3d1b7edc310780751807b | 113 | py | Python | sllutils/ml/models.py | cwinsnes/sllutils | d615ddd99b28f7990b0a23cbc7d5230e991f4936 | [
"Apache-2.0"
] | null | null | null | sllutils/ml/models.py | cwinsnes/sllutils | d615ddd99b28f7990b0a23cbc7d5230e991f4936 | [
"Apache-2.0"
] | null | null | null | sllutils/ml/models.py | cwinsnes/sllutils | d615ddd99b28f7990b0a23cbc7d5230e991f4936 | [
"Apache-2.0"
] | null | null | null | # Base objects
from sllutils.ml.dnn import DNN
from sllutils.ml.data_processing import Binarizer, cutoff_tuning
| 22.6 | 64 | 0.831858 | 17 | 113 | 5.411765 | 0.705882 | 0.26087 | 0.304348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115044 | 113 | 4 | 65 | 28.25 | 0.92 | 0.106195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
310509a5430283b0126573f3bbbc926cc2d09f97 | 26,267 | py | Python | websecurityscanner/tests/unit/gapic/v1alpha/test_web_security_scanner_client_v1alpha.py | theacodes/google-cloud-python | 57dafcb78540e12c82f7ca0fc77d75edeb269390 | [
"Apache-2.0"
] | 1 | 2020-10-25T04:39:41.000Z | 2020-10-25T04:39:41.000Z | websecurityscanner/tests/unit/gapic/v1alpha/test_web_security_scanner_client_v1alpha.py | theacodes/google-cloud-python | 57dafcb78540e12c82f7ca0fc77d75edeb269390 | [
"Apache-2.0"
] | 4 | 2018-11-13T22:15:36.000Z | 2018-12-07T18:31:38.000Z | websecurityscanner/tests/unit/gapic/v1alpha/test_web_security_scanner_client_v1alpha.py | theacodes/google-cloud-python | 57dafcb78540e12c82f7ca0fc77d75edeb269390 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests."""
import mock
import pytest
from google.cloud import websecurityscanner_v1alpha
from google.cloud.websecurityscanner_v1alpha.proto import crawled_url_pb2
from google.cloud.websecurityscanner_v1alpha.proto import finding_pb2
from google.cloud.websecurityscanner_v1alpha.proto import scan_config_pb2
from google.cloud.websecurityscanner_v1alpha.proto import scan_run_pb2
from google.cloud.websecurityscanner_v1alpha.proto import web_security_scanner_pb2
from google.protobuf import empty_pb2
from google.protobuf import field_mask_pb2
class MultiCallableStub(object):
"""Stub for the grpc.UnaryUnaryMultiCallable interface."""
def __init__(self, method, channel_stub):
self.method = method
self.channel_stub = channel_stub
def __call__(self, request, timeout=None, metadata=None, credentials=None):
self.channel_stub.requests.append((self.method, request))
response = None
if self.channel_stub.responses:
response = self.channel_stub.responses.pop()
if isinstance(response, Exception):
raise response
if response:
return response
class ChannelStub(object):
"""Stub for the grpc.Channel interface."""
    def __init__(self, responses=None):
        # None default avoids sharing one mutable list across every
        # ChannelStub constructed without arguments.
        self.responses = responses if responses is not None else []
        self.requests = []
def unary_unary(self,
method,
request_serializer=None,
response_deserializer=None):
return MultiCallableStub(method, self)
class CustomException(Exception):
pass
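def _stub_pattern_demo():
    # Hedged sketch (not part of the generated tests): it exercises the stub
    # pattern defined above -- queue a canned response on ChannelStub, and the
    # MultiCallableStub returned by unary_unary records the request and
    # replays that response.
    channel = ChannelStub(responses=['pong'])
    call = channel.unary_unary('/Demo/Ping')
    assert call('ping') == 'pong'
    assert channel.requests == [('/Demo/Ping', 'ping')]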
class TestWebSecurityScannerClient(object):
def test_create_scan_config(self):
# Setup Expected Response
name = 'name3373707'
display_name = 'displayName1615086568'
max_qps = 844445913
expected_response = {
'name': name,
'display_name': display_name,
'max_qps': max_qps
}
expected_response = scan_config_pb2.ScanConfig(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
parent = client.project_path('[PROJECT]')
scan_config = {}
response = client.create_scan_config(parent, scan_config)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.CreateScanConfigRequest(
parent=parent, scan_config=scan_config)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_create_scan_config_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
parent = client.project_path('[PROJECT]')
scan_config = {}
with pytest.raises(CustomException):
client.create_scan_config(parent, scan_config)
def test_delete_scan_config(self):
channel = ChannelStub()
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
client.delete_scan_config(name)
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.DeleteScanConfigRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_delete_scan_config_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
with pytest.raises(CustomException):
client.delete_scan_config(name)
def test_get_scan_config(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
display_name = 'displayName1615086568'
max_qps = 844445913
expected_response = {
'name': name_2,
'display_name': display_name,
'max_qps': max_qps
}
expected_response = scan_config_pb2.ScanConfig(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
response = client.get_scan_config(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.GetScanConfigRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_get_scan_config_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
with pytest.raises(CustomException):
client.get_scan_config(name)
def test_list_scan_configs(self):
# Setup Expected Response
next_page_token = ''
scan_configs_element = {}
scan_configs = [scan_configs_element]
expected_response = {
'next_page_token': next_page_token,
'scan_configs': scan_configs
}
expected_response = web_security_scanner_pb2.ListScanConfigsResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
parent = client.project_path('[PROJECT]')
paged_list_response = client.list_scan_configs(parent)
resources = list(paged_list_response)
assert len(resources) == 1
assert expected_response.scan_configs[0] == resources[0]
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.ListScanConfigsRequest(
parent=parent)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_scan_configs_exception(self):
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
parent = client.project_path('[PROJECT]')
paged_list_response = client.list_scan_configs(parent)
with pytest.raises(CustomException):
list(paged_list_response)
def test_update_scan_config(self):
# Setup Expected Response
name = 'name3373707'
display_name = 'displayName1615086568'
max_qps = 844445913
expected_response = {
'name': name,
'display_name': display_name,
'max_qps': max_qps
}
expected_response = scan_config_pb2.ScanConfig(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
scan_config = {}
update_mask = {}
response = client.update_scan_config(scan_config, update_mask)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.UpdateScanConfigRequest(
scan_config=scan_config, update_mask=update_mask)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_update_scan_config_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
scan_config = {}
update_mask = {}
with pytest.raises(CustomException):
client.update_scan_config(scan_config, update_mask)
def test_start_scan_run(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
urls_crawled_count = 1749797253
urls_tested_count = 1498664068
has_vulnerabilities = False
progress_percent = 2137894861
expected_response = {
'name': name_2,
'urls_crawled_count': urls_crawled_count,
'urls_tested_count': urls_tested_count,
'has_vulnerabilities': has_vulnerabilities,
'progress_percent': progress_percent
}
expected_response = scan_run_pb2.ScanRun(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
response = client.start_scan_run(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.StartScanRunRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_start_scan_run_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
with pytest.raises(CustomException):
client.start_scan_run(name)
def test_get_scan_run(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
urls_crawled_count = 1749797253
urls_tested_count = 1498664068
has_vulnerabilities = False
progress_percent = 2137894861
expected_response = {
'name': name_2,
'urls_crawled_count': urls_crawled_count,
'urls_tested_count': urls_tested_count,
'has_vulnerabilities': has_vulnerabilities,
'progress_percent': progress_percent
}
expected_response = scan_run_pb2.ScanRun(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
name = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
response = client.get_scan_run(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.GetScanRunRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_get_scan_run_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
name = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
with pytest.raises(CustomException):
client.get_scan_run(name)
def test_list_scan_runs(self):
# Setup Expected Response
next_page_token = ''
scan_runs_element = {}
scan_runs = [scan_runs_element]
expected_response = {
'next_page_token': next_page_token,
'scan_runs': scan_runs
}
expected_response = web_security_scanner_pb2.ListScanRunsResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
parent = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
paged_list_response = client.list_scan_runs(parent)
resources = list(paged_list_response)
assert len(resources) == 1
assert expected_response.scan_runs[0] == resources[0]
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.ListScanRunsRequest(
parent=parent)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_scan_runs_exception(self):
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
parent = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
paged_list_response = client.list_scan_runs(parent)
with pytest.raises(CustomException):
list(paged_list_response)
def test_stop_scan_run(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
urls_crawled_count = 1749797253
urls_tested_count = 1498664068
has_vulnerabilities = False
progress_percent = 2137894861
expected_response = {
'name': name_2,
'urls_crawled_count': urls_crawled_count,
'urls_tested_count': urls_tested_count,
'has_vulnerabilities': has_vulnerabilities,
'progress_percent': progress_percent
}
expected_response = scan_run_pb2.ScanRun(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
name = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
response = client.stop_scan_run(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.StopScanRunRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_stop_scan_run_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
name = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
with pytest.raises(CustomException):
client.stop_scan_run(name)
def test_list_crawled_urls(self):
# Setup Expected Response
next_page_token = ''
crawled_urls_element = {}
crawled_urls = [crawled_urls_element]
expected_response = {
'next_page_token': next_page_token,
'crawled_urls': crawled_urls
}
expected_response = web_security_scanner_pb2.ListCrawledUrlsResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]',
'[SCAN_RUN]')
paged_list_response = client.list_crawled_urls(parent)
resources = list(paged_list_response)
assert len(resources) == 1
assert expected_response.crawled_urls[0] == resources[0]
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.ListCrawledUrlsRequest(
parent=parent)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_crawled_urls_exception(self):
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]',
'[SCAN_RUN]')
paged_list_response = client.list_crawled_urls(parent)
with pytest.raises(CustomException):
list(paged_list_response)
def test_get_finding(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
http_method = 'httpMethod820747384'
fuzzed_url = 'fuzzedUrl-2120677666'
body = 'body3029410'
description = 'description-1724546052'
reproduction_url = 'reproductionUrl-244934180'
frame_url = 'frameUrl545464221'
final_url = 'finalUrl355601190'
tracking_id = 'trackingId1878901667'
expected_response = {
'name': name_2,
'http_method': http_method,
'fuzzed_url': fuzzed_url,
'body': body,
'description': description,
'reproduction_url': reproduction_url,
'frame_url': frame_url,
'final_url': final_url,
'tracking_id': tracking_id
}
expected_response = finding_pb2.Finding(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
name = client.finding_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]',
'[FINDING]')
response = client.get_finding(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.GetFindingRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_get_finding_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
name = client.finding_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]',
'[FINDING]')
with pytest.raises(CustomException):
client.get_finding(name)
def test_list_findings(self):
# Setup Expected Response
next_page_token = ''
findings_element = {}
findings = [findings_element]
expected_response = {
'next_page_token': next_page_token,
'findings': findings
}
expected_response = web_security_scanner_pb2.ListFindingsResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]',
'[SCAN_RUN]')
filter_ = 'filter-1274492040'
paged_list_response = client.list_findings(parent, filter_)
resources = list(paged_list_response)
assert len(resources) == 1
assert expected_response.findings[0] == resources[0]
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.ListFindingsRequest(
parent=parent, filter=filter_)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_findings_exception(self):
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]',
'[SCAN_RUN]')
filter_ = 'filter-1274492040'
paged_list_response = client.list_findings(parent, filter_)
with pytest.raises(CustomException):
list(paged_list_response)
def test_list_finding_type_stats(self):
# Setup Expected Response
expected_response = {}
expected_response = web_security_scanner_pb2.ListFindingTypeStatsResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup Request
parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]',
'[SCAN_RUN]')
response = client.list_finding_type_stats(parent)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = web_security_scanner_pb2.ListFindingTypeStatsRequest(
parent=parent)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_finding_type_stats_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
patch = mock.patch('google.api_core.grpc_helpers.create_channel')
with patch as create_channel:
create_channel.return_value = channel
client = websecurityscanner_v1alpha.WebSecurityScannerClient()
# Setup request
parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]',
'[SCAN_RUN]')
with pytest.raises(CustomException):
client.list_finding_type_stats(parent)
| 38.290087 | 82 | 0.662162 | 2,690 | 26,267 | 6.161338 | 0.082528 | 0.06118 | 0.021962 | 0.031374 | 0.854954 | 0.826958 | 0.803367 | 0.790033 | 0.76795 | 0.752806 | 0 | 0.022688 | 0.256634 | 26,267 | 685 | 83 | 38.345985 | 0.826129 | 0.068032 | 0 | 0.703093 | 0 | 0 | 0.104916 | 0.050346 | 0 | 0 | 0 | 0 | 0.086598 | 1 | 0.061856 | false | 0.002062 | 0.020619 | 0.002062 | 0.094845 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
31a09decd303ce704a71dc1bdefcf676c6d41969 | 25,622 | py | Python | models/net.py | Hedlen/faceframe | 4f1c540b4d2e5eb72ace3f34e4754a876ed2724e | [
"MIT"
] | 1 | 2018-07-17T03:34:28.000Z | 2018-07-17T03:34:28.000Z | models/net.py | Hedlen/faceframe | 4f1c540b4d2e5eb72ace3f34e4754a876ed2724e | [
"MIT"
] | null | null | null | models/net.py | Hedlen/faceframe | 4f1c540b4d2e5eb72ace3f34e4754a876ed2724e | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
from metmfm import *  # expected to provide the mfm (max-feature-map) block used by the last class below
class sphere20(nn.Module):
def __init__(self):
super(sphere20, self).__init__()
# input = B*3*112*96
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv1_1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=2, padding=1) # =>B*64*56*48
self.relu1_1 = nn.PReLU(64)
self.conv1_2 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.relu1_2 = nn.PReLU(64)
self.conv1_3 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.relu1_3 = nn.PReLU(64)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv2_1 = nn.Conv2d(64, 128, 3, 2, 1) # =>B*128*28*24
self.relu2_1 = nn.PReLU(128)
self.conv2_2 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.relu2_2 = nn.PReLU(128)
self.conv2_3 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.relu2_3 = nn.PReLU(128)
self.conv2_4 = nn.Conv2d(128, 128, 3, 1, 1, bias=False) # =>B*128*28*24
self.relu2_4 = nn.PReLU(128)
self.conv2_5 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.relu2_5 = nn.PReLU(128)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv3_1 = nn.Conv2d(128, 256, 3, 2, 1) # =>B*256*14*12
self.relu3_1 = nn.PReLU(256)
self.conv3_2 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.relu3_2 = nn.PReLU(256)
self.conv3_3 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.relu3_3 = nn.PReLU(256)
self.conv3_4 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.relu3_4 = nn.PReLU(256)
self.conv3_5 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.relu3_5 = nn.PReLU(256)
self.conv3_6 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.relu3_6 = nn.PReLU(256)
self.conv3_7 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.relu3_7 = nn.PReLU(256)
self.conv3_8 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.relu3_8 = nn.PReLU(256)
self.conv3_9 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.relu3_9 = nn.PReLU(256)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv4_1 = nn.Conv2d(256, 512, 3, 2, 1) # =>B*512*7*6
self.relu4_1 = nn.PReLU(512)
self.conv4_2 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.relu4_2 = nn.PReLU(512)
self.conv4_3 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.relu4_3 = nn.PReLU(512)
self.fc5 = nn.Linear(512 * 7 * 6, 512)
# Weight initialization
# print('Initialization Network Parameter...')
for m in self.modules():
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
if m.bias is not None:
nn.init.xavier_uniform_(m.weight)
nn.init.constant_(m.bias, 0.0)
else:
m.weight.data.normal_(0, 0.01)
def forward(self, x):
x = self.relu1_1(self.conv1_1(x))
x = x + self.relu1_3(self.conv1_3(self.relu1_2(self.conv1_2(x))))
x = self.relu2_1(self.conv2_1(x))
x = x + self.relu2_3(self.conv2_3(self.relu2_2(self.conv2_2(x))))
x = x + self.relu2_5(self.conv2_5(self.relu2_4(self.conv2_4(x))))
x = self.relu3_1(self.conv3_1(x))
x = x + self.relu3_3(self.conv3_3(self.relu3_2(self.conv3_2(x))))
x = x + self.relu3_5(self.conv3_5(self.relu3_4(self.conv3_4(x))))
x = x + self.relu3_7(self.conv3_7(self.relu3_6(self.conv3_6(x))))
x = x + self.relu3_9(self.conv3_9(self.relu3_8(self.conv3_8(x))))
x = self.relu4_1(self.conv4_1(x))
x = x + self.relu4_3(self.conv4_3(self.relu4_2(self.conv4_2(x))))
x = x.view(x.size(0), -1)
x = self.fc5(x)
return x
def save(self, file_path):
with open(file_path, 'wb') as f:
torch.save(self.state_dict(), f)
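def _sphere20_smoke_test():
    # Hedged sketch, not from the original file: assuming the B*3*112*96 input
    # documented in the comments above, a dummy batch should map to 512-dim
    # embeddings (the output size of fc5).
    model = sphere20()
    dummy = torch.randn(2, 3, 112, 96)
    assert model(dummy).shape == (2, 512)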
class sphere20_bd(nn.Module):
def __init__(self, dropout_Probability=0.2):
super(sphere20_bd, self).__init__()
# input = B*3*112*96
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv1_1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=2, padding=1) # =>B*64*56*48
self.bn1_1 = nn.BatchNorm2d(64)
self.relu1_1 = nn.PReLU(64)
self.conv1_2 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.bn1_2 = nn.BatchNorm2d(64)
self.relu1_2 = nn.PReLU(64)
self.conv1_3 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.bn1_3 = nn.BatchNorm2d(64)
self.relu1_3 = nn.PReLU(64)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv2_1 = nn.Conv2d(64, 128, 3, 2, 1) # =>B*128*28*24
self.bn2_1 = nn.BatchNorm2d(128)
self.relu2_1 = nn.PReLU(128)
self.conv2_2 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_2 = nn.BatchNorm2d(128)
self.relu2_2 = nn.PReLU(128)
self.conv2_3 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_3 = nn.BatchNorm2d(128)
self.relu2_3 = nn.PReLU(128)
self.conv2_4 = nn.Conv2d(128, 128, 3, 1, 1, bias=False) # =>B*128*28*24
self.bn2_4 = nn.BatchNorm2d(128)
self.relu2_4 = nn.PReLU(128)
self.conv2_5 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_5 = nn.BatchNorm2d(128)
self.relu2_5 = nn.PReLU(128)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv3_1 = nn.Conv2d(128, 256, 3, 2, 1) # =>B*256*14*12
self.bn3_1 = nn.BatchNorm2d(256)
self.relu3_1 = nn.PReLU(256)
self.conv3_2 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_2 = nn.BatchNorm2d(256)
self.relu3_2 = nn.PReLU(256)
self.conv3_3 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_3 = nn.BatchNorm2d(256)
self.relu3_3 = nn.PReLU(256)
self.conv3_4 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_4 = nn.BatchNorm2d(256)
self.relu3_4 = nn.PReLU(256)
self.conv3_5 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_5 = nn.BatchNorm2d(256)
self.relu3_5 = nn.PReLU(256)
self.conv3_6 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_6 = nn.BatchNorm2d(256)
self.relu3_6 = nn.PReLU(256)
self.conv3_7 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_7 = nn.BatchNorm2d(256)
self.relu3_7 = nn.PReLU(256)
self.conv3_8 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_8 = nn.BatchNorm2d(256)
self.relu3_8 = nn.PReLU(256)
self.conv3_9 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_9 = nn.BatchNorm2d(256)
self.relu3_9 = nn.PReLU(256)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv4_1 = nn.Conv2d(256, 512, 3, 2, 1) # =>B*512*7*6
self.bn4_1 = nn.BatchNorm2d(512)
self.relu4_1 = nn.PReLU(512)
self.conv4_2 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.bn4_2 = nn.BatchNorm2d(512)
self.relu4_2 = nn.PReLU(512)
self.conv4_3 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.bn4_3 = nn.BatchNorm2d(512)
self.relu4_3 = nn.PReLU(512)
self.dropout = nn.Dropout(p=dropout_Probability)
self.fc5 = nn.Linear(512 * 7 * 6, 512)
#self.bn5 = nn.BatchNorm1d(512)
# Weight initialization
# print('Initialization Network Parameter...')
for m in self.modules():
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
if m.bias is not None:
nn.init.xavier_uniform_(m.weight)
nn.init.constant_(m.bias, 0.0)
else:
m.weight.data.normal_(0, 0.01)
def forward(self, x):
x = self.relu1_1(self.bn1_1(self.conv1_1(x)))
x = x + self.relu1_3(self.bn1_3(self.conv1_3(self.relu1_2(self.bn1_2(self.conv1_2(x))))))
x = self.relu2_1(self.bn2_1(self.conv2_1(x)))
x = x + self.relu2_3(self.bn2_3(self.conv2_3(self.relu2_2(self.bn2_2(self.conv2_2(x))))))
x = x + self.relu2_5(self.bn2_5(self.conv2_5(self.relu2_4(self.bn2_4(self.conv2_4(x))))))
x = self.relu3_1(self.bn3_1(self.conv3_1(x)))
x = x + self.relu3_3(self.bn3_3(self.conv3_3(self.relu3_2(self.bn3_2(self.conv3_2(x))))))
x = x + self.relu3_5(self.bn3_5(self.conv3_5(self.relu3_4(self.bn3_4(self.conv3_4(x))))))
x = x + self.relu3_7(self.bn3_7(self.conv3_7(self.relu3_6(self.bn3_6(self.conv3_6(x))))))
x = x + self.relu3_9(self.bn3_9(self.conv3_9(self.relu3_8(self.bn3_8(self.conv3_8(x))))))
x = self.relu4_1(self.bn4_1(self.conv4_1(x)))
x = x + self.relu4_3(self.bn4_3(self.conv4_3(self.relu4_2(self.bn4_2(self.conv4_2(x))))))
x = x.view(x.size(0), -1)
x = self.dropout(x)
x = self.fc5(x)
return x
def save(self, file_path):
with open(file_path, 'wb') as f:
torch.save(self.state_dict(), f)
class sphere36_bd(nn.Module):
def __init__(self, dropout_Probability=0.2):
super(sphere36_bd, self).__init__()
# input = B*3*112*96
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv1_1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=2, padding=1) # =>B*64*56*48
self.bn1_1 = nn.BatchNorm2d(64)
self.relu1_1 = nn.PReLU(64)
self.conv1_2 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.bn1_2 = nn.BatchNorm2d(64)
self.relu1_2 = nn.PReLU(64)
self.conv1_3 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.bn1_3 = nn.BatchNorm2d(64)
self.relu1_3 = nn.PReLU(64)
self.conv1_4 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.bn1_4 = nn.BatchNorm2d(64)
self.relu1_4 = nn.PReLU(64)
self.conv1_5 = nn.Conv2d(64, 64, 3, 1, 1, bias=False)
self.bn1_5 = nn.BatchNorm2d(64)
self.relu1_5 = nn.PReLU(64)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv2_1 = nn.Conv2d(64, 128, 3, 2, 1) # =>B*128*28*24
self.bn2_1 = nn.BatchNorm2d(128)
self.relu2_1 = nn.PReLU(128)
self.conv2_2 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_2 = nn.BatchNorm2d(128)
self.relu2_2 = nn.PReLU(128)
self.conv2_3 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_3 = nn.BatchNorm2d(128)
self.relu2_3 = nn.PReLU(128)
self.conv2_4 = nn.Conv2d(128, 128, 3, 1, 1, bias=False) # =>B*128*28*24
self.bn2_4 = nn.BatchNorm2d(128)
self.relu2_4 = nn.PReLU(128)
self.conv2_5 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_5 = nn.BatchNorm2d(128)
self.relu2_5 = nn.PReLU(128)
self.conv2_6 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_6 = nn.BatchNorm2d(128)
self.relu2_6 = nn.PReLU(128)
self.conv2_7 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_7 = nn.BatchNorm2d(128)
self.relu2_7 = nn.PReLU(128)
self.conv2_8 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_8 = nn.BatchNorm2d(128)
self.relu2_8 = nn.PReLU(128)
self.conv2_9 = nn.Conv2d(128, 128, 3, 1, 1, bias=False)
self.bn2_9 = nn.BatchNorm2d(128)
self.relu2_9 = nn.PReLU(128)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv3_1 = nn.Conv2d(128, 256, 3, 2, 1) # =>B*256*14*12
self.bn3_1 = nn.BatchNorm2d(256)
self.relu3_1 = nn.PReLU(256)
self.conv3_2 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_2 = nn.BatchNorm2d(256)
self.relu3_2 = nn.PReLU(256)
self.conv3_3 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_3 = nn.BatchNorm2d(256)
self.relu3_3 = nn.PReLU(256)
self.conv3_4 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_4 = nn.BatchNorm2d(256)
self.relu3_4 = nn.PReLU(256)
self.conv3_5 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_5 = nn.BatchNorm2d(256)
self.relu3_5 = nn.PReLU(256)
self.conv3_6 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_6 = nn.BatchNorm2d(256)
self.relu3_6 = nn.PReLU(256)
self.conv3_7 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_7 = nn.BatchNorm2d(256)
self.relu3_7 = nn.PReLU(256)
self.conv3_8 = nn.Conv2d(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_8 = nn.BatchNorm2d(256)
self.relu3_8 = nn.PReLU(256)
self.conv3_9 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_9 = nn.BatchNorm2d(256)
self.relu3_9 = nn.PReLU(256)
self.conv3_10 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_10 = nn.BatchNorm2d(256)
self.relu3_10 = nn.PReLU(256)
self.conv3_11 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_11 = nn.BatchNorm2d(256)
self.relu3_11 = nn.PReLU(256)
self.conv3_12 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_12 = nn.BatchNorm2d(256)
self.relu3_12 = nn.PReLU(256)
self.conv3_13 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_13 = nn.BatchNorm2d(256)
self.relu3_13 = nn.PReLU(256)
self.conv3_14 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_14 = nn.BatchNorm2d(256)
self.relu3_14 = nn.PReLU(256)
self.conv3_15 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_15 = nn.BatchNorm2d(256)
self.relu3_15 = nn.PReLU(256)
self.conv3_16 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_16 = nn.BatchNorm2d(256)
self.relu3_16 = nn.PReLU(256)
self.conv3_17 = nn.Conv2d(256, 256, 3, 1, 1, bias=False)
self.bn3_17 = nn.BatchNorm2d(256)
self.relu3_17 = nn.PReLU(256)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv4_1 = nn.Conv2d(256, 512, 3, 2, 1) # =>B*512*7*6
self.bn4_1 = nn.BatchNorm2d(512)
self.relu4_1 = nn.PReLU(512)
self.conv4_2 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.bn4_2 = nn.BatchNorm2d(512)
self.relu4_2 = nn.PReLU(512)
self.conv4_3 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.bn4_3 = nn.BatchNorm2d(512)
self.relu4_3 = nn.PReLU(512)
self.conv4_4 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.bn4_4 = nn.BatchNorm2d(512)
self.relu4_4 = nn.PReLU(512)
self.conv4_5 = nn.Conv2d(512, 512, 3, 1, 1, bias=False)
self.bn4_5 = nn.BatchNorm2d(512)
self.relu4_5 = nn.PReLU(512)
self.dropout = nn.Dropout(p=dropout_Probability)
self.fc5 = nn.Linear(512 * 7 * 6, 512)
self.bn5 = nn.BatchNorm1d(512)
#self.bn5 = nn.BatchNorm1d(512)
# Weight initialization
# print('Initialization Network Parameter...')
for m in self.modules():
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
if m.bias is not None:
nn.init.xavier_uniform_(m.weight)
nn.init.constant_(m.bias, 0.0)
else:
m.weight.data.normal_(0, 0.01)
def forward(self, x):
# Block one
x = self.relu1_1(self.bn1_1(self.conv1_1(x)))
x = x + self.relu1_3(self.bn1_3(self.conv1_3(self.relu1_2(self.bn1_2(self.conv1_2(x))))))
x = x + self.relu1_5(self.bn1_5(self.conv1_5(self.relu1_4(self.bn1_4(self.conv1_4(x))))))
# Block two
x = self.relu2_1(self.bn2_1(self.conv2_1(x)))
x = x + self.relu2_3(self.bn2_3(self.conv2_3(self.relu2_2(self.bn2_2(self.conv2_2(x))))))
x = x + self.relu2_5(self.bn2_5(self.conv2_5(self.relu2_4(self.bn2_4(self.conv2_4(x))))))
x = x + self.relu2_7(self.bn2_7(self.conv2_7(self.relu2_6(self.bn2_6(self.conv2_6(x))))))
x = x + self.relu2_9(self.bn2_9(self.conv2_9(self.relu2_8(self.bn2_8(self.conv2_8(x))))))
# Block three
x = self.relu3_1(self.bn3_1(self.conv3_1(x)))
x = x + self.relu3_3(self.bn3_3(self.conv3_3(self.relu3_2(self.bn3_2(self.conv3_2(x))))))
x = x + self.relu3_5(self.bn3_5(self.conv3_5(self.relu3_4(self.bn3_4(self.conv3_4(x))))))
x = x + self.relu3_7(self.bn3_7(self.conv3_7(self.relu3_6(self.bn3_6(self.conv3_6(x))))))
x = x + self.relu3_9(self.bn3_9(self.conv3_9(self.relu3_8(self.bn3_8(self.conv3_8(x))))))
x = x + self.relu3_11(self.bn3_11(self.conv3_11(self.relu3_10(self.bn3_10(self.conv3_10(x))))))
x = x + self.relu3_13(self.bn3_13(self.conv3_13(self.relu3_12(self.bn3_12(self.conv3_12(x))))))
x = x + self.relu3_15(self.bn3_15(self.conv3_15(self.relu3_14(self.bn3_14(self.conv3_14(x))))))
x = x + self.relu3_17(self.bn3_17(self.conv3_17(self.relu3_16(self.bn3_16(self.conv3_16(x))))))
# Block four
x = self.relu4_1(self.bn4_1(self.conv4_1(x)))
x = x + self.relu4_3(self.bn4_3(self.conv4_3(self.relu4_2(self.bn4_2(self.conv4_2(x))))))
x = x + self.relu4_5(self.bn4_5(self.conv4_5(self.relu4_4(self.bn4_4(self.conv4_4(x))))))
x = x.view(x.size(0), -1)
x = self.dropout(x)
x = self.fc5(x)
x = self.bn5(x)
return x
def save(self, file_path):
with open(file_path, 'wb') as f:
torch.save(self.state_dict(), f)
class sphere36_mfm_bd(nn.Module):
    # mfm-activation variant; given a distinct (assumed) name because the
    # original reused sphere36_bd, silently shadowing the class defined above.
    def __init__(self, dropout_Probability=0.2):
        super(sphere36_mfm_bd, self).__init__()
# input = B*3*112*96
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv1_1 = mfm(in_channels=3, out_channels=64, kernel_size=3, stride=2, padding=1) # =>B*64*56*48
self.bn1_1 = nn.BatchNorm2d(64)
#self.relu1_1 = nn.PReLU(64)
self.conv1_2 = mfm(64, 64, 3, 1, 1, bias=False)
self.bn1_2 = nn.BatchNorm2d(64)
#self.relu1_2 = nn.PReLU(64)
self.conv1_3 = mfm(64, 64, 3, 1, 1, bias=False)
self.bn1_3 = nn.BatchNorm2d(64)
#self.relu1_3 = nn.PReLU(64)
self.conv1_4 = mfm(64, 64, 3, 1, 1, bias=False)
self.bn1_4 = nn.BatchNorm2d(64)
#self.relu1_4 = nn.PReLU(64)
self.conv1_5 = mfm(64, 64, 3, 1, 1, bias=False)
self.bn1_5 = nn.BatchNorm2d(64)
#self.relu1_5 = nn.PReLU(64)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv2_1 = mfm(64, 128, 3, 2, 1) # =>B*128*28*24
self.bn2_1 = nn.BatchNorm2d(128)
#self.relu2_1 = nn.PReLU(128)
self.conv2_2 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_2 = nn.BatchNorm2d(128)
#self.relu2_2 = nn.PReLU(128)
self.conv2_3 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_3 = nn.BatchNorm2d(128)
#self.relu2_3 = nn.PReLU(128)
self.conv2_4 = mfm(128, 128, 3, 1, 1, bias=False) # =>B*128*28*24
self.bn2_4 = nn.BatchNorm2d(128)
#self.relu2_4 = nn.PReLU(128)
self.conv2_5 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_5 = nn.BatchNorm2d(128)
#self.relu2_5 = nn.PReLU(128)
self.conv2_6 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_6 = nn.BatchNorm2d(128)
#self.relu2_6 = nn.PReLU(128)
self.conv2_7 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_7 = nn.BatchNorm2d(128)
#self.relu2_7 = nn.PReLU(128)
self.conv2_8 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_8 = nn.BatchNorm2d(128)
#self.relu2_8 = nn.PReLU(128)
self.conv2_9 = mfm(128, 128, 3, 1, 1, bias=False)
self.bn2_9 = nn.BatchNorm2d(128)
#self.relu2_9 = nn.PReLU(128)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv3_1 = mfm(128, 256, 3, 2, 1) # =>B*256*14*12
self.bn3_1 = nn.BatchNorm2d(256)
#self.relu3_1 = nn.PReLU(256)
self.conv3_2 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_2 = nn.BatchNorm2d(256)
#self.relu3_2 = nn.PReLU(256)
self.conv3_3 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_3 = nn.BatchNorm2d(256)
#self.relu3_3 = nn.PReLU(256)
self.conv3_4 = mfm(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_4 = nn.BatchNorm2d(256)
#self.relu3_4 = nn.PReLU(256)
self.conv3_5 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_5 = nn.BatchNorm2d(256)
#self.relu3_5 = nn.PReLU(256)
self.conv3_6 = mfm(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_6 = nn.BatchNorm2d(256)
#self.relu3_6 = nn.PReLU(256)
self.conv3_7 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_7 = nn.BatchNorm2d(256)
#self.relu3_7 = nn.PReLU(256)
self.conv3_8 = mfm(256, 256, 3, 1, 1, bias=False) # =>B*256*14*12
self.bn3_8 = nn.BatchNorm2d(256)
#self.relu3_8 = nn.PReLU(256)
self.conv3_9 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_9 = nn.BatchNorm2d(256)
#self.relu3_9 = nn.PReLU(256)
self.conv3_10 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_10 = nn.BatchNorm2d(256)
#self.relu3_10 = nn.PReLU(256)
self.conv3_11 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_11 = nn.BatchNorm2d(256)
#self.relu3_11 = nn.PReLU(256)
self.conv3_12 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_12 = nn.BatchNorm2d(256)
#self.relu3_12 = nn.PReLU(256)
self.conv3_13 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_13 = nn.BatchNorm2d(256)
#self.relu3_13 = nn.PReLU(256)
self.conv3_14 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_14 = nn.BatchNorm2d(256)
#self.relu3_14 = nn.PReLU(256)
self.conv3_15 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_15 = nn.BatchNorm2d(256)
#self.relu3_15 = nn.PReLU(256)
self.conv3_16 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_16 = nn.BatchNorm2d(256)
#self.relu3_16 = nn.PReLU(256)
self.conv3_17 = mfm(256, 256, 3, 1, 1, bias=False)
self.bn3_17 = nn.BatchNorm2d(256)
#self.relu3_17 = nn.PReLU(256)
# First layer of each block have bias (lr_mult: 2 && decay_mult: 0)
self.conv4_1 = mfm(256, 512, 3, 2, 1) # =>B*512*7*6
self.bn4_1 = nn.BatchNorm2d(512)
#self.relu4_1 = nn.PReLU(512)
self.conv4_2 = mfm(512, 512, 3, 1, 1, bias=False)
self.bn4_2 = nn.BatchNorm2d(512)
#self.relu4_2 = nn.PReLU(512)
self.conv4_3 = mfm(512, 512, 3, 1, 1, bias=False)
self.bn4_3 = nn.BatchNorm2d(512)
#self.relu4_3 = nn.PReLU(512)
self.conv4_4 = mfm(512, 512, 3, 1, 1, bias=False)
self.bn4_4 = nn.BatchNorm2d(512)
#self.relu4_4 = nn.PReLU(512)
self.conv4_5 = mfm(512, 512, 3, 1, 1, bias=False)
self.bn4_5 = nn.BatchNorm2d(512)
#self.relu4_5 = nn.PReLU(512)
self.dropout = nn.Dropout(p=dropout_Probability)
self.fc5 = nn.Linear(512 * 7 * 6, 512)
self.bn5 = nn.BatchNorm1d(512)
#self.bn5 = nn.BatchNorm1d(512)
# Weight initialization
# print('Initialization Network Parameter...')
for m in self.modules():
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
if m.bias is not None:
nn.init.xavier_uniform_(m.weight)
nn.init.constant_(m.bias, 0.0)
else:
m.weight.data.normal_(0, 0.01)
def forward(self, x):
# Block one
x = self.bn1_1(self.conv1_1(x))
x = x + self.bn1_3(self.conv1_3(self.bn1_2(self.conv1_2(x))))
        x = x + self.bn1_5(self.conv1_5(self.bn1_4(self.conv1_4(x))))  # bn1_4, not the commented-out relu1_4
# Block two
x = self.bn2_1(self.conv2_1(x))
x = x + self.bn2_3(self.conv2_3(self.bn2_2(self.conv2_2(x))))
x = x + self.bn2_5(self.conv2_5(self.bn2_4(self.conv2_4(x))))
x = x + self.bn2_7(self.conv2_7(self.bn2_6(self.conv2_6(x))))
x = x + self.bn2_9(self.conv2_9(self.bn2_8(self.conv2_8(x))))
# Block three
x = self.bn3_1(self.conv3_1(x))
x = x + self.bn3_3(self.conv3_3(self.bn3_2(self.conv3_2(x))))
x = x + self.bn3_5(self.conv3_5(self.bn3_4(self.conv3_4(x))))
x = x + self.bn3_7(self.conv3_7(self.bn3_6(self.conv3_6(x))))
x = x + self.bn3_9(self.conv3_9(self.bn3_8(self.conv3_8(x))))
x = x + self.bn3_11(self.conv3_11(self.bn3_10(self.conv3_10(x))))
x = x + self.bn3_13(self.conv3_13(self.bn3_12(self.conv3_12(x))))
x = x + self.bn3_15(self.conv3_15(self.bn3_14(self.conv3_14(x))))
x = x + self.bn3_17(self.conv3_17(self.bn3_16(self.conv3_16(x))))
# Block four
x = self.bn4_1(self.conv4_1(x))
x = x + self.bn4_3(self.conv4_3(self.bn4_2(self.conv4_2(x))))
x = x + self.bn4_5(self.conv4_5(self.bn4_4(self.conv4_4(x))))
x = x.view(x.size(0), -1)
x = self.dropout(x)
x = self.fc5(x)
x = self.bn5(x)
return x
def save(self, file_path):
with open(file_path, 'wb') as f:
torch.save(self.state_dict(), f) | 42.280528 | 116 | 0.581531 | 4,422 | 25,622 | 3.208503 | 0.028042 | 0.016493 | 0.020299 | 0.047364 | 0.988793 | 0.984635 | 0.984635 | 0.957076 | 0.946997 | 0.94446 | 0 | 0.179499 | 0.26247 | 25,622 | 606 | 117 | 42.280528 | 0.571308 | 0.117946 | 0 | 0.729604 | 0 | 0 | 0.000356 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027972 | false | 0 | 0.006993 | 0 | 0.053613 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
31ea1a5fb91ee23a9c1216b1904ceb0607731fc2 | 553 | py | Python | makingHouses.py | mkm99/object_oriented_programming_concepts | b238ce5d8dd146de20d353c60b8a2e21bc2a9993 | [
"MIT"
] | null | null | null | makingHouses.py | mkm99/object_oriented_programming_concepts | b238ce5d8dd146de20d353c60b8a2e21bc2a9993 | [
"MIT"
] | null | null | null | makingHouses.py | mkm99/object_oriented_programming_concepts | b238ce5d8dd146de20d353c60b8a2e21bc2a9993 | [
"MIT"
] | null | null | null | from houseBlueprint import House
class SummerHouse(House):
def __init__(self, size, color, num_of_windows, num_of_doors, backyard):
super().__init__(size, color, num_of_windows, num_of_doors, backyard)
def do_We_Need_garage(self):
return "Yes, we need a garage!"
class WinterHouse(House):
def __init__(self, size, color, num_of_windows, num_of_doors, backyard):
super().__init__(size, color, num_of_windows, num_of_doors, backyard)
def do_We_Need_garage(self):
return "No, we do not need a garage!"
| 30.722222 | 77 | 0.714286 | 82 | 553 | 4.353659 | 0.329268 | 0.112045 | 0.134454 | 0.156863 | 0.728291 | 0.728291 | 0.728291 | 0.728291 | 0.728291 | 0.728291 | 0 | 0 | 0.188065 | 553 | 17 | 78 | 32.529412 | 0.7951 | 0 | 0 | 0.545455 | 0 | 0 | 0.090416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0.090909 | 0.181818 | 0.818182 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
9ecfc50276e800f41d4285f6a646b6b4fa35fed2 | 5,247 | py | Python | simulation-ros/src/turtlebot2i/turtlebot2i_safety/tools/Labeling_tools/2_2ManuallyCreatRules/create_rule_table.py | EricssonResearch/scott-eu | aad7fd2f767a3c5e7d89223a593fd979ad596db3 | [
"Apache-2.0"
] | 19 | 2017-06-29T07:41:26.000Z | 2021-11-03T18:48:48.000Z | simulation-ros/src/turtlebot2i/turtlebot2i_safety/tools/Labeling_tools/2_2ManuallyCreatRules/create_rule_table.py | EricssonResearch/scott-eu | aad7fd2f767a3c5e7d89223a593fd979ad596db3 | [
"Apache-2.0"
] | 175 | 2017-06-29T09:37:43.000Z | 2021-07-09T12:55:28.000Z | simulation-ros/src/turtlebot2i/turtlebot2i_safety/tools/Labeling_tools/2_2ManuallyCreatRules/create_rule_table.py | EricssonResearch/scott-eu | aad7fd2f767a3c5e7d89223a593fd979ad596db3 | [
"Apache-2.0"
] | 8 | 2017-10-31T08:53:12.000Z | 2021-07-21T06:14:43.000Z | #!/usr/bin/env python
import os
import csv
def init_var():
print("Init Var")
global rules_folder
    # All files will be saved under ~/labels (the user's home directory)
rules_folder = os.path.join(os.path.expanduser('~'),'labels','rules_csv_format')
if not os.path.exists(rules_folder):
os.makedirs(rules_folder)
print "Label folder doesn't exist, create one"
else:
print "Label folder exists"
global risk_dict
risk_dict = {'0':'VeryLow','1':'Low','2':'Medium','3':'High','4':'VeryHigh','Missed':"TODO"}
global rule_counter
rule_counter = 595
def static_obj(): # open the 'labels' folder and create a result file
print("static_obj")
global rule_counter
    # Create a CSV file for the rules
with open(rules_folder+'/'+'static_obj_rules'+'.csv','wb') as myFile:
myWriter=csv.writer(myFile)
myWriter.writerow(["Rule Number","Obj Type","Obj Distance","Obj Direction","Obj Risk"])#,"Obj Speed","Obj Orientation","Obj Risk"])
for object_distance in ['Near','Medium','Far']:
for object_direction in ['Front','FrontLeft','Left','FrontRight','Right','BigRear']:
#for object_speed in ['Slow','Medium','Fast']:
#for object_orientation in ['Front','FrontLeft','Left','RearLeft','Rear','RearRight','Right','FrontRight']:
rule_counter = rule_counter+1
print("Rule Number in 18","Obj Type","Obj Distance","Obj Direction")#,"Obj Speed","Obj Orientation")
print(object_distance,object_direction)#,object_speed,object_orientation)
                number = raw_input('Number for the risk (0-4):')
                # Guard against invalid input, like the other two rule tables do,
                # instead of raising a KeyError on risk_dict.
                if number.isdigit():
                    object_risk = risk_dict[number]
                else:
                    object_risk = risk_dict["Missed"]
                    print("Invalid input. Edit it later")
                print "risk is :", object_risk
myWriter.writerow([rule_counter,"StaObj",object_distance,object_direction,object_risk])#,object_speed,object_orientation,object_risk])
    # 18 static-object rules (3 distances x 6 directions)
def dynamic_obj(): # open the 'labels' folder and create a result file
print("dynamic_obj")
global rule_counter
    # Create a CSV file for the rules
with open(rules_folder+'/'+'dynamic_obj_rules'+'.csv','a') as myFile: #open("test.txt", "a") #'wb' write
myWriter=csv.writer(myFile)
myWriter.writerow(["Rule Number","Obj Type","Obj Distance","Obj Direction","Obj Speed","Obj Orientation","Obj Risk"])
for object_distance in ['Near','Medium','Far']:
for object_direction in ['Front','FrontLeft','Left','FrontRight','Right','BigRear']:
for object_speed in ['Slow','Medium','Fast']:
for object_orientation in ['Front','FrontLeft','Left','RearLeft','Rear','RearRight','Right','FrontRight']:
rule_counter = rule_counter+1
print("Rule Number in 432","Obj Type","Obj Distance","Obj Direction","Obj Speed","Obj Orientation")
print(object_distance,object_direction,object_speed,object_orientation)
number = raw_input('Number for the risk (0-4):')
if number.isdigit():
object_risk = risk_dict[number]
else:
object_risk = risk_dict["Missed"]
print("Invalid input. Edit it later")
print "risk is :",object_risk
myWriter.writerow([rule_counter,"DynObj",object_distance,object_direction,object_speed,object_orientation,object_risk])
    # 432 dynamic-object rules (3 distances x 6 directions x 3 speeds x 8 orientations)
def human_obj(): # open the 'labels' folder and create a result file
print("human_obj")
global rule_counter
    # Create a CSV file for the rules
with open(rules_folder+'/'+'human_obj_rules'+'.csv','a') as myFile: #open("test.txt", "a") #'wb' write
myWriter=csv.writer(myFile)
myWriter.writerow(["Rule Number","Obj Type","Obj Distance","Obj Direction","Obj Speed","Obj Orientation","Obj Risk"])
for object_distance in ['Medium','Far']:#'Near','Medium','Far']:
for object_direction in ['Front','FrontLeft','Left','BigRear','Right','FrontRight']:
for object_speed in ['Slow','Medium','Fast']:
for object_orientation in ['Front','FrontLeft','Left','RearLeft','Rear','RearRight','Right','FrontRight']:
rule_counter = rule_counter+1
print("Rule Number in 432","Obj Type","Obj Distance","Obj Direction","Obj Speed","Obj Orientation")
print(object_distance,object_direction,object_speed,object_orientation)
number = raw_input('Number for the risk (0-4):')
if number.isdigit():
object_risk = risk_dict[number]
else:
object_risk = risk_dict["Missed"]
print("Invalid input. Edit it later")
print "risk is :",object_risk
myWriter.writerow([rule_counter,"Human",object_distance,object_direction,object_speed,object_orientation,object_risk])
""" Main program """
if __name__ == "__main__":
init_var()
static_obj()
dynamic_obj()
human_obj()
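    # Hedged note (not in the original script): run as a script, this walks
    # every rule combination in order (static, then dynamic, then human),
    # prompts for a 0-4 risk level per rule, and appends the answers as CSV
    # rows under ~/labels/rules_csv_format/.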
| 49.037383 | 150 | 0.591004 | 610 | 5,247 | 4.909836 | 0.188525 | 0.051419 | 0.040067 | 0.03606 | 0.806344 | 0.786644 | 0.773957 | 0.773957 | 0.773957 | 0.75793 | 0 | 0.008877 | 0.270059 | 5,247 | 106 | 151 | 49.5 | 0.773107 | 0.131313 | 0 | 0.545455 | 0 | 0 | 0.248174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.025974 | null | null | 0.220779 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9efedfb2ab13bb4ecc82fd87661f2181cf4cd629 | 15,078 | py | Python | lino_book/projects/lydia/tests/dumps/18.12.0/courses_course.py | lino-framework/lino_book | 4eab916832cd8f48ff1b9fc8c2789f0b437da0f8 | [
"BSD-2-Clause"
] | 3 | 2016-08-25T05:58:09.000Z | 2019-12-05T11:13:45.000Z | lino_book/projects/lydia/tests/dumps/18.12.0/courses_course.py | lino-framework/lino_book | 4eab916832cd8f48ff1b9fc8c2789f0b437da0f8 | [
"BSD-2-Clause"
] | 18 | 2016-11-12T21:38:58.000Z | 2019-12-03T17:54:38.000Z | lino_book/projects/lydia/tests/dumps/18.12.0/courses_course.py | lino-framework/lino_book | 4eab916832cd8f48ff1b9fc8c2789f0b437da0f8 | [
"BSD-2-Clause"
] | 9 | 2016-10-15T11:12:33.000Z | 2021-09-22T04:37:37.000Z | # -*- coding: UTF-8 -*-
logger.info("Loading 52 objects to table courses_course...")
# fields: id, modified, ref, start_date, start_time, end_date, end_time, healthcare_plan, user, every_unit, every, monday, tuesday, wednesday, thursday, friday, saturday, sunday, max_events, room, max_date, line, teacher, slot, description, remark, state, max_places, name, enrolments_until, tariff, payment_term, procurer, mandatory, ending_reason, partner_tariff, translator_type, therapy_domain, team, partner, paper_type
loader.save(create_courses_course(1,dt(2018,12,22,12,24,58),None,date(2014,11,4),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Arens Andreas',None,1,None,None,False,None,'16',None,None,None,113,None))
loader.save(create_courses_course(2,dt(2018,12,22,12,24,58),None,date(2014,11,5),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Arens Annette',None,2,None,None,False,None,'16',None,None,None,114,None))
loader.save(create_courses_course(3,dt(2018,12,22,12,24,58),None,date(2014,11,6),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Ausdemwald Alfons',None,1,None,None,False,None,'16',None,None,None,116,None))
loader.save(create_courses_course(4,dt(2018,12,22,12,24,59),None,date(2014,11,7),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Bastiaensen Laurent',None,2,None,None,False,None,'16',None,None,None,117,None))
loader.save(create_courses_course(5,dt(2018,12,22,12,24,59),None,date(2014,11,8),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Chantraine Marc',None,1,None,None,False,None,'16',None,None,None,120,None))
loader.save(create_courses_course(6,dt(2018,12,22,12,24,59),None,date(2014,11,9),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Collard Charlotte',None,2,None,None,False,None,'16',None,None,None,118,None))
loader.save(create_courses_course(7,dt(2018,12,22,12,24,59),None,date(2014,11,10),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Demeulenaere Doroth\xe9e',None,1,None,None,False,None,'16',None,None,None,122,None))
loader.save(create_courses_course(8,dt(2018,12,22,12,24,59),None,date(2014,11,11),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Denon Denis',None,2,None,None,False,None,'16',None,None,None,180,None))
loader.save(create_courses_course(9,dt(2018,12,22,12,24,59),None,date(2014,11,12),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Dericum Daniel',None,1,None,None,False,None,'16',None,None,None,121,None))
loader.save(create_courses_course(10,dt(2018,12,22,12,24,59),None,date(2014,11,13),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Dobbelstein-Demeulenaere Doroth\xe9e',None,2,None,None,False,None,'16',None,None,None,123,None))
loader.save(create_courses_course(11,dt(2018,12,22,12,24,59),None,date(2014,11,14),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Eierschal Emil',None,1,None,None,False,None,'16',None,None,None,175,None))
loader.save(create_courses_course(12,dt(2018,12,22,12,24,59),None,date(2014,11,15),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Emonts Daniel',None,2,None,None,False,None,'16',None,None,None,128,None))
loader.save(create_courses_course(13,dt(2018,12,22,12,25,0),None,date(2014,11,16),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Emonts Erich',None,1,None,None,False,None,'16',None,None,None,150,None))
loader.save(create_courses_course(14,dt(2018,12,22,12,25,0),None,date(2014,11,17),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Emontspool Erwin',None,2,None,None,False,None,'16',None,None,None,151,None))
loader.save(create_courses_course(15,dt(2018,12,22,12,25,0),None,date(2014,11,18),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Engels Edgar',None,1,None,None,False,None,'16',None,None,None,129,None))
loader.save(create_courses_course(16,dt(2018,12,22,12,25,0),None,date(2014,11,19),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Evers Eberhart',None,2,None,None,False,None,'16',None,None,None,127,None))
loader.save(create_courses_course(17,dt(2018,12,22,12,25,0),None,date(2014,11,20),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Evertz Bernd',None,1,None,None,False,None,'16',None,None,None,126,None))
loader.save(create_courses_course(18,dt(2018,12,22,12,25,0),None,date(2014,11,21),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Faymonville Luc',None,2,None,None,False,None,'16',None,None,None,130,None))
loader.save(create_courses_course(19,dt(2018,12,22,12,25,0),None,date(2014,11,22),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Groteclaes Gregory',None,1,None,None,False,None,'16',None,None,None,132,None))
loader.save(create_courses_course(20,dt(2018,12,22,12,25,1),None,date(2014,11,23),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Hilgers Henri',None,2,None,None,False,None,'16',None,None,None,134,None))
loader.save(create_courses_course(21,dt(2018,12,22,12,25,1),None,date(2014,11,24),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Ingels Irene',None,1,None,None,False,None,'16',None,None,None,135,None))
loader.save(create_courses_course(22,dt(2018,12,22,12,25,1),None,date(2014,11,25),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Jacobs Jacqueline',None,2,None,None,False,None,'16',None,None,None,137,None))
loader.save(create_courses_course(23,dt(2018,12,22,12,25,1),None,date(2014,11,26),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Jansen J\xe9r\xe9my',None,1,None,None,False,None,'16',None,None,None,136,None))
loader.save(create_courses_course(24,dt(2018,12,22,12,25,1),None,date(2014,11,27),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Johnen Johann',None,2,None,None,False,None,'16',None,None,None,138,None))
loader.save(create_courses_course(25,dt(2018,12,22,12,25,1),None,date(2014,11,28),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Jonas Josef',None,1,None,None,False,None,'16',None,None,None,139,None))
loader.save(create_courses_course(26,dt(2018,12,22,12,25,2),None,date(2014,11,29),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Kaivers Karl',None,2,None,None,False,None,'16',None,None,None,141,None))
loader.save(create_courses_course(27,dt(2018,12,22,12,25,2),None,date(2014,11,30),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Keller Karl',None,1,None,None,False,None,'16',None,None,None,178,None))
loader.save(create_courses_course(28,dt(2018,12,22,12,25,2),None,date(2014,12,1),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Lahm Lisa',None,2,None,None,False,None,'16',None,None,None,176,None))
loader.save(create_courses_course(29,dt(2018,12,22,12,25,2),None,date(2014,12,2),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Laschet Laura',None,1,None,None,False,None,'16',None,None,None,143,None))
loader.save(create_courses_course(30,dt(2018,12,22,12,25,2),None,date(2014,12,3),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Lazarus Line',None,2,None,None,False,None,'16',None,None,None,144,None))
loader.save(create_courses_course(31,dt(2018,12,22,12,25,2),None,date(2014,12,4),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Malmendier Marc',None,1,None,None,False,None,'16',None,None,None,146,None))
loader.save(create_courses_course(32,dt(2018,12,22,12,25,3),None,date(2014,12,5),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Martelaer Mark',None,2,None,None,False,None,'16',None,None,None,172,None))
loader.save(create_courses_course(33,dt(2018,12,22,12,25,3),None,date(2014,12,6),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Meessen Melissa',None,1,None,None,False,None,'16',None,None,None,147,None))
loader.save(create_courses_course(34,dt(2018,12,22,12,25,3),None,date(2014,12,7),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Mie\xdfen Michael',None,2,None,None,False,None,'16',None,None,None,148,None))
loader.save(create_courses_course(35,dt(2018,12,22,12,25,3),None,date(2014,12,8),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Radermacher Alfons',None,1,None,None,False,None,'16',None,None,None,153,None))
loader.save(create_courses_course(36,dt(2018,12,22,12,25,3),None,date(2014,12,9),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Radermacher Christian',None,2,None,None,False,None,'16',None,None,None,155,None))
loader.save(create_courses_course(37,dt(2018,12,22,12,25,4),None,date(2014,12,10),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Radermacher Daniela',None,1,None,None,False,None,'16',None,None,None,156,None))
loader.save(create_courses_course(38,dt(2018,12,22,12,25,4),None,date(2014,12,11),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'Radermacher Edgard',None,2,None,None,False,None,'16',None,None,None,157,None))
loader.save(create_courses_course(39,dt(2018,12,22,12,25,4),None,date(2014,12,12),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'Radermacher Guido',None,1,None,None,False,None,'16',None,None,None,159,None))
loader.save(create_courses_course(40,dt(2018,12,22,12,25,4),None,date(2014,12,13),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'Radermacher Hans',None,2,None,None,False,None,'16',None,None,None,160,None))
loader.save(create_courses_course(41,dt(2018,12,22,12,25,4),None,date(2014,12,14),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'Radermacher Inge',None,1,None,None,False,None,'16',None,None,None,162,None))
loader.save(create_courses_course(42,dt(2018,12,22,12,25,4),None,date(2014,12,15),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'Radermacher Jean',None,2,None,None,False,None,'16',None,None,None,163,None))
loader.save(create_courses_course(43,dt(2018,12,22,12,25,5),None,date(2014,12,16),None,None,None,None,4,u'W',2,False,False,False,False,False,False,False,10,None,None,1,4,None,['', '', ''],u'','01',None,u'Radermecker Rik',None,1,None,None,False,None,'16',None,None,None,173,None))
loader.save(create_courses_course(44,dt(2018,12,22,12,25,5),None,date(2014,12,17),None,None,None,None,5,u'W',2,False,False,False,False,False,False,False,10,None,None,1,5,None,['', '', ''],u'','01',None,u'da Vinci David',None,2,None,None,False,None,'16',None,None,None,165,None))
loader.save(create_courses_course(45,dt(2018,12,22,12,25,5),None,date(2014,12,18),None,None,None,None,6,u'W',2,False,False,False,False,False,False,False,10,None,None,1,6,None,['', '', ''],u'','01',None,u'di Rupo Didier',None,1,None,None,False,None,'16',None,None,None,164,None))
loader.save(create_courses_course(46,dt(2018,12,22,12,25,5),None,date(2014,12,19),None,None,None,None,3,u'W',2,False,False,False,False,False,False,False,10,None,None,1,3,None,['', '', ''],u'','01',None,u'\xc4rgerlich Erna',None,2,None,None,False,None,'16',None,None,None,169,None))
loader.save(create_courses_course(47,dt(2018,12,22,12,25,5),None,date(2014,12,20),None,None,None,None,2,u'W',2,False,False,False,False,False,False,False,10,None,None,1,2,None,['', '', ''],u'','01',None,u'\xd5unapuu \xd5ie',None,1,None,None,False,None,'16',None,None,None,167,None))
loader.save(create_courses_course(48,dt(2018,12,22,12,25,6),None,date(2014,12,21),None,None,None,None,1,u'W',2,False,False,False,False,False,False,False,10,None,None,1,1,None,['', '', ''],u'','01',None,u'\xd6stges Otto',None,2,None,None,False,None,'16',None,None,None,168,None))
loader.save(create_courses_course(49,dt(2018,12,22,12,25,6),None,date(2014,11,4),None,None,None,None,4,u'W',1,False,False,False,False,False,False,False,10,None,None,3,4,None,['', '', ''],u'','01',None,u'Alcohol',None,1,None,None,False,None,'16',None,None,None,None,None))
loader.save(create_courses_course(50,dt(2018,12,22,12,25,7),None,date(2014,11,5),None,None,None,None,5,u'W',1,False,False,False,False,False,False,False,10,None,None,3,5,None,['', '', ''],u'','01',None,u'Burnout',None,2,None,None,False,None,'16',None,None,None,None,None))
loader.save(create_courses_course(51,dt(2018,12,22,12,25,8),None,date(2014,11,6),None,None,None,None,6,u'W',1,False,False,False,False,False,False,False,10,None,None,3,6,None,['', '', ''],u'','01',None,u'Women',None,1,None,None,False,None,'16',None,None,None,None,None))
loader.save(create_courses_course(52,dt(2018,12,22,12,25,8),None,date(2014,11,7),None,None,None,None,3,u'W',1,False,False,False,False,False,False,False,10,None,None,3,3,None,['', '', ''],u'','01',None,u'Children',None,2,None,None,False,None,'16',None,None,None,None,None))
loader.flush_deferred_objects()
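# A minimal sketch (hypothetical; not part of the generated fixture dump above) of
# the deferred-save pattern that loader.save()/flush_deferred_objects() implies:
# rows whose foreign-key targets have not been loaded yet are queued and retried
# once the rest of the dump is in. The DeferredLoader name is an assumption, not
# the actual loader API.
class DeferredLoader:
    def __init__(self):
        self.deferred = []

    def save(self, obj):
        try:
            obj.save()
        except Exception:  # e.g. an FK target that has not been dumped yet
            self.deferred.append(obj)

    def flush_deferred_objects(self):
        # Retry until the queue stops shrinking; a stall means a real error.
        while self.deferred:
            remaining = [o for o in self.deferred if not self._try_save(o)]
            if len(remaining) == len(self.deferred):
                raise RuntimeError('unresolvable deferred objects')
            self.deferred = remaining

    def _try_save(self, obj):
        try:
            obj.save()
            return True
        except Exception:
            return False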
| 259.965517 | 424 | 0.715745 | 3,042 | 15,078 | 3.507232 | 0.086785 | 0.278939 | 0.365545 | 0.389915 | 0.883775 | 0.881057 | 0.720217 | 0.720217 | 0.720217 | 0.720217 | 0 | 0.128712 | 0.017376 | 15,078 | 57 | 425 | 264.526316 | 0.591388 | 0.029447 | 0 | 0 | 0 | 0 | 0.073269 | 0.00164 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
735333609b555063d6a1238743df9dbc470a2138 | 31,267 | py | Python | Apps/polls/migrations/0004_auto_20210317_2327.py | shadowofgost/WebEngineering | 693af827e3458806cdace959262cf393d29f6504 | [
"Apache-2.0"
] | 1 | 2021-04-05T05:40:17.000Z | 2021-04-05T05:40:17.000Z | Apps/polls/migrations/0004_auto_20210317_2327.py | shadowofgost/WebEngineering | 693af827e3458806cdace959262cf393d29f6504 | [
"Apache-2.0"
] | null | null | null | Apps/polls/migrations/0004_auto_20210317_2327.py | shadowofgost/WebEngineering | 693af827e3458806cdace959262cf393d29f6504 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.0 on 2021-03-17 23:27
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('polls', '0003_auto_20210317_2300'),
]
operations = [
migrations.RemoveField(
model_name='tcyrunningaccount',
name='id_plan',
),
migrations.AlterField(
model_name='tcycurricula',
name='aboutspeaker',
field=models.CharField(blank=True, db_column='AboutSpeaker', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='attr',
field=models.IntegerField(blank=True, db_column='Attr', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='charge',
field=models.IntegerField(blank=True, db_column='Charge', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='dooropen',
field=models.IntegerField(blank=True, db_column='DoorOpen', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='id',
field=models.IntegerField(db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcycurricula',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='listdepts',
field=models.CharField(blank=True, db_column='ListDepts', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='listplaces',
field=models.CharField(blank=True, db_column='ListPlaces', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='mapuser2equ',
field=models.CharField(blank=True, db_column='MapUser2Equ', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='name',
field=models.CharField(blank=True, db_column='Name', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='pwaccess',
field=models.IntegerField(blank=True, db_column='PwAccess', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='pwcontinuous',
field=models.IntegerField(blank=True, db_column='PwContinuous', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='pwdirection',
field=models.IntegerField(blank=True, db_column='PwDirection', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='rangeequs',
field=models.CharField(blank=True, db_column='RangeEqus', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='rangeusers',
field=models.CharField(blank=True, db_column='RangeUsers', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='rem',
field=models.CharField(blank=True, db_column='Rem', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timebegin',
field=models.IntegerField(blank=True, db_column='TimeBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timebegincheckbegin',
field=models.IntegerField(blank=True, db_column='TimeBeginCheckBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timebegincheckend',
field=models.IntegerField(blank=True, db_column='TimeBeginCheckEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timeend',
field=models.IntegerField(blank=True, db_column='TimeEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timeendcheckbegin',
field=models.IntegerField(blank=True, db_column='TimeEndCheckBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timeendcheckend',
field=models.IntegerField(blank=True, db_column='TimeEndCheckEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcycurricula',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcydept',
name='id',
field=models.IntegerField(db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcydept',
name='id_parent',
field=models.IntegerField(db_column='ID_Parent', default=1, null=True),
),
migrations.AlterField(
model_name='tcydept',
name='idmanager',
field=models.IntegerField(db_column='IdManager', default=1, null=True),
),
migrations.AlterField(
model_name='tcydept',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcydept',
name='name',
field=models.CharField(db_column='Name', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcydept',
name='timeupdate',
field=models.IntegerField(db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='class_field',
field=models.IntegerField(blank=True, db_column='Class', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='dx',
field=models.IntegerField(blank=True, db_column='Dx', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='dy',
field=models.IntegerField(blank=True, db_column='Dy', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='id',
field=models.IntegerField(blank=True, db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcyequipment',
name='id_ip',
field=models.CharField(blank=True, db_column='ID_IP', default='1', max_length=16, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='id_location_sn',
field=models.IntegerField(blank=True, db_column='ID_Locatio_SN', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='itimebegin',
field=models.IntegerField(blank=True, db_column='iTimeBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='itimelogin',
field=models.IntegerField(blank=True, db_column='iTimeLogin', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='key0k',
field=models.IntegerField(blank=True, db_column='Key0k', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='keycancel',
field=models.IntegerField(blank=True, db_column='KeyCancel', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='keydel',
field=models.IntegerField(blank=True, db_column='KeyDel', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='keyf1',
field=models.IntegerField(blank=True, db_column='KeyF1', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='link',
field=models.IntegerField(blank=True, db_column='Link', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='listplaces',
field=models.CharField(blank=True, db_column='ListPlaces', default='1', max_length=64, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='login',
field=models.IntegerField(blank=True, db_column='Login', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='mac',
field=models.CharField(blank=True, db_column='MAC', default='1', max_length=24, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='name',
field=models.CharField(blank=True, db_column='Name', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='onall',
field=models.IntegerField(blank=True, db_column='OnAll', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='portlisten',
field=models.IntegerField(blank=True, db_column='PortListen', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='rangeequs',
field=models.CharField(blank=True, db_column='RangeEqus', default='1', max_length=64, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='rem',
field=models.CharField(blank=True, db_column='Rem', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='state',
field=models.IntegerField(blank=True, db_column='State', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='timedelay',
field=models.IntegerField(blank=True, db_column='TimeDelay', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='type_field',
field=models.IntegerField(blank=True, db_column='Type', default=1, null=True),
),
migrations.AlterField(
model_name='tcyequipment',
name='whitelist',
field=models.CharField(blank=True, db_column='WhiteList', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcylocation',
name='id',
field=models.IntegerField(blank=True, db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcylocation',
name='id_parent',
field=models.IntegerField(blank=True, db_column='ID_Parent', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocation',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocation',
name='name',
field=models.CharField(blank=True, db_column='Name', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcylocation',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='attr',
field=models.IntegerField(blank=True, db_column='Attr', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='datebegin',
field=models.IntegerField(blank=True, db_column='DateBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='dateend',
field=models.IntegerField(blank=True, db_column='DateEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='delaycharged',
field=models.IntegerField(blank=True, db_column='DelayCharged', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enabledelaycharged',
field=models.IntegerField(blank=True, db_column='EnableDelayCharged', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enablelimittime_xj',
field=models.IntegerField(blank=True, db_column='EnableLimitTime_XJ', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enablelimityue_sj',
field=models.IntegerField(blank=True, db_column='EnableLimitYuE_SJ', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enablelimityue_xj',
field=models.IntegerField(blank=True, db_column='EnableLimitYuE_XJ', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enablemincost',
field=models.IntegerField(blank=True, db_column='EnableMinCost', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enablewarntime',
field=models.IntegerField(blank=True, db_column='EnableWarnTime', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='enablewarnyue',
field=models.IntegerField(blank=True, db_column='EnableWarnYuE', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='getequip',
field=models.IntegerField(blank=True, db_column='GetEquIp', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='getequmac',
field=models.IntegerField(blank=True, db_column='GetEquMac', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='getequname',
field=models.IntegerField(blank=True, db_column='GetEquName', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='limittime_xj',
field=models.IntegerField(blank=True, db_column='LimitTime_XJ', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='limityue_sj',
field=models.IntegerField(blank=True, db_column='LimitYuE_SJ', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='limityue_xj',
field=models.IntegerField(blank=True, db_column='LimitYuE_XJ', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='mincost',
field=models.IntegerField(blank=True, db_column='MinCost', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='moderun',
field=models.IntegerField(blank=True, db_column='ModeRun', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='modeshangji',
field=models.IntegerField(blank=True, db_column='ModeShangJi', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='price',
field=models.IntegerField(blank=True, db_column='Price', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='priceminute',
field=models.IntegerField(blank=True, db_column='PriceMinute', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='warntime',
field=models.IntegerField(blank=True, db_column='WarnTime', default=1, null=True),
),
migrations.AlterField(
model_name='tcylocationex',
name='warnyue',
field=models.IntegerField(blank=True, db_column='WarnYuE', default=1, null=True),
),
migrations.AlterField(
model_name='tcymmx',
name='id',
field=models.IntegerField(blank=True, db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcymmx',
name='id_data',
field=models.IntegerField(blank=True, db_column='ID_Data', default=1, null=True),
),
migrations.AlterField(
model_name='tcymmx',
name='id_type',
field=models.IntegerField(blank=True, db_column='ID_Type', default=1, null=True),
),
migrations.AlterField(
model_name='tcymmx',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcymmx',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcymmxdata',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcymmxdata',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='aboutspeaker',
field=models.CharField(blank=True, db_column='AboutSpeaker', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='attr',
field=models.IntegerField(blank=True, db_column='Attr', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='charge',
field=models.IntegerField(blank=True, db_column='Charge', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='dooropen',
field=models.IntegerField(blank=True, db_column='DoorOpen', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='id',
field=models.IntegerField(blank=True, db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcyplan',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='listdepts',
field=models.CharField(blank=True, db_column='ListDepts', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='listplaces',
field=models.CharField(blank=True, db_column='ListPlaces', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='mapuser2equ',
field=models.CharField(blank=True, db_column='MapUser2Equ', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='pwaccess',
field=models.IntegerField(blank=True, db_column='PwAccess', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='pwcontinuous',
field=models.IntegerField(blank=True, db_column='PwContinuous', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='pwdirection',
field=models.IntegerField(blank=True, db_column='PwDirection', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='rangeequs',
field=models.CharField(blank=True, db_column='RangeEqus', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='rangeusers',
field=models.CharField(blank=True, db_column='RangeUsers', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='rem',
field=models.CharField(blank=True, db_column='Rem', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timebegin',
field=models.IntegerField(blank=True, db_column='TimeBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timebegincheckbegin',
field=models.IntegerField(blank=True, db_column='TimeBeginCheckBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timebegincheckend',
field=models.IntegerField(blank=True, db_column='TimeBeginCheckEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timeend',
field=models.IntegerField(blank=True, db_column='TimeEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timeendcheckbegin',
field=models.IntegerField(blank=True, db_column='TimeEndCheckBegin', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timeendcheckend',
field=models.IntegerField(blank=True, db_column='TimeEndCheckEnd', default=1, null=True),
),
migrations.AlterField(
model_name='tcyplan',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='id',
field=models.IntegerField(blank=True, db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='money',
field=models.IntegerField(blank=True, db_column='Money', default=1, null=True),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='param1',
field=models.IntegerField(blank=True, db_column='Param1', default=1, null=True),
),
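# Note: 'param2' below is redefined as a OneToOneField on TCyplan (db_column
# 'ID_Plan'), effectively replacing the 'id_plan' field removed by the
# RemoveField operation at the top of this migration.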
migrations.AlterField(
model_name='tcyrunningaccount',
name='param2',
field=models.OneToOneField(db_column='ID_Plan', on_delete=django.db.models.deletion.CASCADE, related_name='runningaccount_related_to_plan', to='polls.TCyplan'),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='time',
field=models.IntegerField(blank=True, db_column='Time', default=1, null=True),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyrunningaccount',
name='type',
field=models.IntegerField(blank=True, db_column='Type', default=1, null=True),
),
migrations.AlterField(
model_name='tcytableinfo',
name='id',
field=models.IntegerField(db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcytableinfo',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcytableinfo',
name='name',
field=models.CharField(db_column='Name', default='1', max_length=50, null=True),
),
migrations.AlterField(
model_name='tcytableinfo',
name='nametable',
field=models.CharField(db_column='NameTable', default='1', max_length=50, null=True),
),
migrations.AlterField(
model_name='tcytableinfo',
name='timeupdate',
field=models.IntegerField(db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcytypera',
name='id_parent',
field=models.IntegerField(blank=True, db_column='ID_Parent', default=1, null=True),
),
migrations.AlterField(
model_name='tcytypera',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcytypera',
name='name',
field=models.CharField(blank=True, db_column='Name', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcytypera',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='attr',
field=models.IntegerField(db_column='Attr', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='attrjf',
field=models.IntegerField(db_column='AttrJf', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='id',
field=models.IntegerField(db_column='ID', default=1, primary_key=True, serialize=False),
),
migrations.AlterField(
model_name='tcyuser',
name='idmanager',
field=models.IntegerField(blank=True, db_column='IdManager', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='localid',
field=models.CharField(db_column='LocalID', default='1', max_length=1024, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='name',
field=models.CharField(db_column='Name', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='nocard',
field=models.CharField(db_column='Nocard', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='nouser',
field=models.CharField(db_column='NoUser', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='psw',
field=models.CharField(db_column='Psw', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='sex',
field=models.IntegerField(db_column='Sex', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='timeupdate',
field=models.IntegerField(db_column='TimeUpdate', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='yue',
field=models.IntegerField(db_column='Yue', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuser',
name='yue2',
field=models.IntegerField(db_column='Yue2', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuserex',
name='imark',
field=models.IntegerField(db_column='IMark', default=1, null=True),
),
migrations.AlterField(
model_name='tcyuserex',
name='rem',
field=models.CharField(blank=True, db_column='Rem', default='1', max_length=32, null=True),
),
migrations.AlterField(
model_name='tcyuserex',
name='timeupdate',
field=models.IntegerField(blank=True, db_column='TimeUpdate', default=1, null=True),
),
]
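# Hedged usage note (standard Django tooling, not specific to this repo): a
# migration like the one above can be previewed and applied with the built-in
# management commands:
#   python manage.py sqlmigrate polls 0004_auto_20210317_2327   # show the SQL
#   python manage.py migrate polls                              # apply it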
| 40.659298 | 172 | 0.579877 | 2,988 | 31,267 | 5.939759 | 0.051539 | 0.076572 | 0.211291 | 0.245098 | 0.93481 | 0.914188 | 0.904102 | 0.816374 | 0.781046 | 0.771298 | 0 | 0.013177 | 0.291266 | 31,267 | 768 | 173 | 40.71224 | 0.787726 | 0.001375 | 0 | 0.821522 | 1 | 0 | 0.131222 | 0.001698 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.002625 | 0 | 0.006562 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
73573e3555afe9545e845e6552db4195de27c951 | 339 | py | Python | impl/test.py | findvid/main | dd9bd14255af8c642b39b08d59e64cbfa314d908 | [
"MIT"
] | null | null | null | impl/test.py | findvid/main | dd9bd14255af8c642b39b08d59e64cbfa314d908 | [
"MIT"
] | null | null | null | impl/test.py | findvid/main | dd9bd14255af8c642b39b08d59e64cbfa314d908 | [
"MIT"
] | null | null | null | import FindVid as fv
print (fv.getFeatures("testfiles/hardcuts.mp4", "0xFUCKU", 50, [25, 50, 175, 250, 350, 456, 516, 675, 701], "/home/kanonenfutter/tmp"))
#fv.getFeatures("testfiles/hardcuts.mp4", "0xFUCKU", 50, [25, 50, 175, 250, 350, 456, 516, 675, 701], "/home/kanonenfutter/tmp")
#print (fv.getFramerate("testfiles/hardcuts.mp4"))
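# Hedged sketch: the FindVid calls above assume local test assets exist; a guard
# like this fails fast instead of crashing inside the native extension (the fv.*
# signature is taken verbatim from the calls above, its semantics assumed).
import os
if not os.path.exists("testfiles/hardcuts.mp4"):
    raise SystemExit("missing test asset: testfiles/hardcuts.mp4")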
| 48.428571 | 135 | 0.693215 | 49 | 339 | 4.795918 | 0.469388 | 0.217021 | 0.255319 | 0.255319 | 0.740426 | 0.740426 | 0.740426 | 0.740426 | 0.740426 | 0.740426 | 0 | 0.194079 | 0.103245 | 339 | 6 | 136 | 56.5 | 0.578947 | 0.519174 | 0 | 0 | 0 | 0 | 0.322981 | 0.279503 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 12 |
b43e81836d41489162fb23d3a18a1bb18109cd4b | 101,029 | py | Python | tests/test_cli.py | MasonAmerica/mason-cl | ac9ac5041b12090b94958ec41d2b5a294c4b61e6 | [
"Apache-2.0"
] | 9 | 2019-12-05T23:50:46.000Z | 2021-09-16T17:49:27.000Z | tests/test_cli.py | MasonAmerica/mason-cl | ac9ac5041b12090b94958ec41d2b5a294c4b61e6 | [
"Apache-2.0"
] | 5 | 2018-03-19T21:00:15.000Z | 2021-07-24T08:35:13.000Z | tests/test_cli.py | MasonAmerica/mason-cl | ac9ac5041b12090b94958ec41d2b5a294c4b61e6 | [
"Apache-2.0"
] | 3 | 2018-07-08T14:51:03.000Z | 2021-11-24T17:32:39.000Z | import contextlib
import inspect
import os
import shutil
import time
import unittest
from concurrent.futures.thread import ThreadPoolExecutor
from click.testing import CliRunner
from mock import MagicMock
from cli.config import _manual_atexit_callbacks
from cli.internal.utils.constants import ENDPOINTS
from cli.internal.utils.constants import UPDATE_CHECKER_CACHE
from cli.internal.utils.remote import ApiError
from cli.internal.utils.store import Store
from cli.mason import Config
from cli.mason import cli
from cli.version import __version__
from tests import __tests_root__
class CliTest(unittest.TestCase):
def setUp(self):
self.maxDiff = None
self.runner = CliRunner()
_manual_atexit_callbacks.clear()
os.environ['_MASON_CLI_TEST_MODE'] = 'TRUE'
os.environ.pop('CI', None) # Guarantee test stability
UPDATE_CHECKER_CACHE['last_update_check_timestamp'] = int(time.time())
UPDATE_CHECKER_CACHE['current_version'] = __version__
UPDATE_CHECKER_CACHE.save()
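# The two UPDATE_CHECKER_CACHE writes above pin the cache to "already checked,
# already current" so the update notice stays silent in most tests;
# test__update_check__displays_notice_when_available clears the cache again to
# exercise the notice path deliberately.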
def test__version__command_prints_info(self):
result = self.runner.invoke(cli, ['version'])
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
Mason CLI v{}
Copyright (C) 2019 Mason America (https://bymason.com)
License Apache 2.0 <https://www.apache.org/licenses/LICENSE-2.0>
""".format(__version__)))
def test__version__V_flag_prints_info(self):
result = self.runner.invoke(cli, ['-V'])
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
Mason CLI v{}
Copyright (C) 2019 Mason America (https://bymason.com)
License Apache 2.0 <https://www.apache.org/licenses/LICENSE-2.0>
""".format(__version__)))
def test__version__version_option_prints_info(self):
result = self.runner.invoke(cli, ['--version'])
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
Mason CLI v{}
Copyright (C) 2019 Mason America (https://bymason.com)
License Apache 2.0 <https://www.apache.org/licenses/LICENSE-2.0>
""".format(__version__)))
def test__update_check__displays_notice_when_available(self):
api = MagicMock()
api.get_latest_cli_version = MagicMock(return_value='1.1.0')
config = Config(api=api)
UPDATE_CHECKER_CACHE.clear()
UPDATE_CHECKER_CACHE['runtime_version'] = '1.0'
UPDATE_CHECKER_CACHE.save()
result = self.runner.invoke(cli, ['version'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
Mason CLI v{}
Copyright (C) 2019 Mason America (https://bymason.com)
License Apache 2.0 <https://www.apache.org/licenses/LICENSE-2.0>
==================== NOTICE ====================
A newer version (v1.1.0) of the Mason CLI is available.
Download the latest version:
https://github.com/MasonAmerica/mason-cli/releases/latest
And check out our installation guide:
http://docs.bymason.com/mason-cli/#install
==================== NOTICE ====================
""".format(__version__)))
def test__logging__starts_at_info_level_by_default(self):
result = self.runner.invoke(cli, ['version'])
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertNotIn('Lowest logging level activated.', result.output)
self.assertNotIn('debug: Debug logging activated.', result.output)
def test__logging__switching_to_debug_level_logs_debug_messages(self):
result = self.runner.invoke(cli, ['-v', 'debug', 'version'])
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertNotIn('Lowest logging level activated.', result.output)
self.assertIn('debug: Debug logging activated.', result.output)
def test__logging__switching_to_custom_level_logs_custom_messages(self):
result = self.runner.invoke(cli, ['-v', '1', 'version'])
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertIn('Lowest logging level activated.', result.output)
self.assertIn('debug: Debug logging activated.', result.output)
def test__logging__switching_to_debug_level_through_env_var_logs_debug_messages(self):
os.environ['LOGLEVEL'] = 'DEBUG'
result = self.runner.invoke(cli, ['version'])
del os.environ['LOGLEVEL']
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertNotIn('Lowest logging level activated.', result.output)
self.assertIn('debug: Debug logging activated.', result.output)
def test__logging__switching_to_custom_level_through_env_var_logs_custom_messages(self):
os.environ['LOGLEVEL'] = '1'
result = self.runner.invoke(cli, ['version'])
del os.environ['LOGLEVEL']
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertIn('Lowest logging level activated.', result.output)
self.assertIn('debug: Debug logging activated.', result.output)
@unittest.skipIf(os.name == 'nt', 'Windows doesn\'t support colors')
def test__logging__colors_are_enabled_by_default(self):
result = self.runner.invoke(cli, ['-v', 'debug', 'version'], color=True)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertIn(b'\x1b[34mdebug: \x1b[0mDebug logging activated.', result.stdout_bytes)
@unittest.skipIf(os.name == 'nt', 'Windows doesn\'t support colors')
def test__logging__colors_can_be_disabled(self):
result = self.runner.invoke(cli, ['-v', 'debug', '--no-color', 'version'], color=True)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertNotIn(b'\x1b[34mdebug: \x1b[0mDebug logging activated.', result.stdout_bytes)
def test__cli__default_creds_are_retrieved_from_disk(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
auth_store['api_key'] = 'Foobar'
config = Config(auth_store=auth_store)
result = self.runner.invoke(cli, ['version'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertDictEqual(auth_store._fields, {'api_key': 'Foobar'})
def test__cli__api_key_option_updates_creds(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
auth_store['api_key'] = 'Foobar'
config = Config(auth_store=auth_store)
result = self.runner.invoke(cli, ['--token', 'New foobar', 'version'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertDictEqual(auth_store._fields, {'api_key': 'New foobar'})
auth_store.clear()
auth_store.restore()
self.assertDictEqual(auth_store._fields, {})
def test__cli__api_key_envar_updates_creds(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
auth_store['api_key'] = 'Foobar'
config = Config(auth_store=auth_store)
os.environ['MASON_API_KEY'] = 'New foobar'
result = self.runner.invoke(cli, ['version'], obj=config)
del os.environ['MASON_API_KEY']
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertDictEqual(auth_store._fields, {'api_key': 'New foobar'})
auth_store.clear()
auth_store.restore()
self.assertDictEqual(auth_store._fields, {})
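# Taken together, the three tests above pin down the credential precedence this
# CLI applies per invocation: the MASON_API_KEY environment variable and the
# --token flag each overwrite the on-disk Store value, while a plain invocation
# falls back to whatever the Store already holds.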
def test__init__no_creds_fails(self):
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['init'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
@unittest.skipIf(os.name == 'nt', 'The Windows tmp dir is inside the home dir')
def test__init__outside_home_dir_shows_warning(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self.runner.isolated_filesystem():
current_dir = os.path.abspath('.')
result = self.runner.invoke(cli, ['init'], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
## ## ### ###### ####### ## ##
### ### ## ## ## ## ## ## ### ##
#### #### ## ## ## ## ## #### ##
## ### ## ## ## ###### ## ## ## ## ##
## ## ######### ## ## ## ## ####
## ## ## ## ## ## ## ## ## ###
## ## ## ## ###### ####### ## ##
You're about to initialize a Mason project in this directory:
{}
warning: You are currently outside your home directory.
Are you ready to proceed? [Y/n]: n
Aborted!
""".format(current_dir)))
def test__init__in_home_dir_shows_warning(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(os.path.expanduser('~')):
current_dir = os.path.abspath('.')
result = self.runner.invoke(cli, ['init'], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
## ## ### ###### ####### ## ##
### ### ## ## ## ## ## ## ### ##
#### #### ## ## ## ## ## #### ##
## ### ## ## ## ###### ## ## ## ## ##
## ## ######### ## ## ## ## ####
## ## ## ## ## ## ## ## ## ###
## ## ## ## ###### ####### ## ##
You're about to initialize a Mason project in this directory:
{}
warning: You are initializing your home directory as a Mason project.
Are you ready to proceed? [Y/n]: n
Aborted!
""".format(current_dir)))
def test__init__in_existing_mason_project_dir_shows_warning(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._home_dir_isolated_filesystem():
current_dir = os.path.abspath('.')
open(os.path.join(current_dir, '.masonrc'), "w").close()
result = self.runner.invoke(cli, ['init'], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
## ## ### ###### ####### ## ##
### ### ## ## ## ## ## ## ### ##
#### #### ## ## ## ## ## #### ##
## ### ## ## ## ###### ## ## ## ## ##
## ## ######### ## ## ## ## ####
## ## ## ## ## ## ## ## ## ###
## ## ## ## ###### ####### ## ##
You're about to initialize a Mason project in this directory:
{}
warning: You are initializing in an existing Mason project directory.
Are you ready to proceed? [Y/n]: n
Aborted!
""".format(current_dir)))
def test__init__selecting_new_project_succeeds(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value=None)
interactivity = MagicMock()
interactivity.pick = MagicMock(return_value=('project-id', 0))
config = Config(
auth_store=self._initialized_auth_store(),
api=api,
interactivity=interactivity
)
with self._home_dir_isolated_filesystem():
current_dir = os.path.abspath('.')
result = self.runner.invoke(cli, ['init'], obj=config, input='y\nproject-id')
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
## ## ### ###### ####### ## ##
### ### ## ## ## ## ## ## ### ##
#### #### ## ## ## ## ## #### ##
## ### ## ## ## ###### ## ## ## ## ##
## ## ######### ## ## ## ## ####
## ## ## ## ## ## ## ## ## ###
## ## ## ## ###### ####### ## ##
You're about to initialize a Mason project in this directory:
{}
Are you ready to proceed? [Y/n]: y
Enter your new project ID: project-id
Where should Mason look for apps? (Enter multiple paths separated by a comma, or leave blank if none.):
Writing configuration file to mason.yml...
Writing project information to .masonrc...
Mason initialization complete!
""".format(current_dir)))
def test__init__selecting_existing_project_succeeds(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value=None)
interactivity = MagicMock()
interactivity.pick = MagicMock(return_value=('project-id', 1))
config = Config(
auth_store=self._initialized_auth_store(),
api=api,
interactivity=interactivity
)
with self._home_dir_isolated_filesystem():
current_dir = os.path.abspath('.')
result = self.runner.invoke(cli, ['init'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
## ## ### ###### ####### ## ##
### ### ## ## ## ## ## ## ### ##
#### #### ## ## ## ## ## #### ##
## ### ## ## ## ###### ## ## ## ## ##
## ## ######### ## ## ## ## ####
## ## ## ## ## ## ## ## ## ###
## ## ## ## ###### ####### ## ##
You're about to initialize a Mason project in this directory:
{}
Are you ready to proceed? [Y/n]:
Where should Mason look for apps? (Enter multiple paths separated by a comma, or leave blank if none.):
Writing configuration file to mason.yml...
Writing project information to .masonrc...
Mason initialization complete!
""".format(current_dir)))
def test__init__selecting_apps_succeeds(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value=None)
interactivity = MagicMock()
interactivity.pick = MagicMock(return_value=('project-id', 1))
config = Config(
auth_store=self._initialized_auth_store(),
api=api,
interactivity=interactivity
)
with self._home_dir_isolated_filesystem():
current_dir = os.path.abspath('.')
os.makedirs(os.path.join(current_dir, 'apps'))
os.makedirs(os.path.join(current_dir, 'apps2'))
apk_file = os.path.join(__tests_root__, 'res', 'v1.apk')
shutil.copyfile(apk_file, os.path.join(current_dir, 'apps', 'app.apk'))
result = self.runner.invoke(cli, ['init'], obj=config, input='y\nfake-dir\napps, apps2')
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
## ## ### ###### ####### ## ##
### ### ## ## ## ## ## ## ### ##
#### #### ## ## ## ## ## #### ##
## ### ## ## ## ###### ## ## ## ## ##
## ## ######### ## ## ## ## ####
## ## ## ## ## ## ## ## ## ###
## ## ## ## ###### ####### ## ##
You're about to initialize a Mason project in this directory:
{}
Are you ready to proceed? [Y/n]: y
App directories found: apps
Where should Mason look for apps? (Enter multiple paths separated by a comma, or leave blank if none.): fake-dir
error: Path does not exist: {}
Where should Mason look for apps? (Enter multiple paths separated by a comma, or leave blank if none.): apps, apps2
Writing configuration file to mason.yml...
Writing project information to .masonrc...
Mason initialization complete!
""".format(current_dir, os.path.join(current_dir, 'fake-dir'))))
def test__register_config__no_files_fails(self):
result = self.runner.invoke(cli, ['register', 'config'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_config__non_existent_file_fails(self):
result = self.runner.invoke(cli, ['register', 'config', 'foobar'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_config__no_creds_fails(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__register_config__existing_artifact_fails(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
api.upload_artifact = MagicMock(side_effect=ApiError(
'Artifact already exists and cannot be overwritten'))
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
error: OS Config 'project-id' at version 1 has already been registered and cannot be overwritten.
Aborted!
""".format(config_file)))
def test__register_config__latest_non_existent_apk_fails(self):
config_file = os.path.join(__tests_root__, 'res', 'complex-project', 'config3.yml')
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value=None)
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Apk 'com.example.app2' not found, register it first.
error: Media 'splash-1' not found, register it first.
Aborted!
"""))
def test__register_config__latest_non_existent_boot_animation_fails(self):
# noinspection PyUnusedLocal
def version_finder(name, type):
if type == 'apk':
return {'version': '12'}
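# Falls through to an implicit None for non-apk types, so the media lookup
# below reports the boot animation as unregistered.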
config_file = os.path.join(__tests_root__, 'res', 'config4.yml')
api = MagicMock()
api.get_latest_artifact = MagicMock(side_effect=version_finder)
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Media 'anim' not found, register it first.
Aborted!
"""))
def test__register_config__negative_confirmation_aborts(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]: n
Aborted!
""".format(config_file)))
def test__register_config__dry_run_exits_cleanly(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', '--dry-run', 'config', config_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
""".format(config_file)))
def test__register_config__file_is_registered(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
""".format(config_file)))
def test__register_config__folder_is_registered(self):
project_dir = os.path.join(__tests_root__, 'res', 'no-app-project')
config_file = os.path.join(project_dir, 'mason.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'config', project_dir], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
""".format(config_file)))
def test__register_config__rewritten_file_is_registered(self):
config_file = os.path.join(__tests_root__, 'res', 'config4.yml')
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value={'version': '41'})
api.get_highest_artifact = MagicMock(return_value={'version': '41'})
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'config', config_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id4
Version: 42
App(s):
- 'com.example.app1' at version 1
- 'com.example.app2' at version 41
Boot animation: 'anim' at version 41
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id4' registered.
Build queued for OS Config 'project-id4'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id4
""".format(config_file)))
def test__register_config__files_are_registered(self):
config_file1 = os.path.join(__tests_root__, 'res', 'config.yml')
config_file2 = os.path.join(__tests_root__, 'res', 'config2.yml')
api = MagicMock()
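        # A single-worker executor serializes the uploads so the output below is deterministic.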
config = Config(
auth_store=self._initialized_auth_store(),
api=api,
executor=ThreadPoolExecutor(max_workers=1)
)
result = self.runner.invoke(cli, [
'register', 'config', config_file1, config_file2
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version None
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
OS Config 'project-id2' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
Build queued for OS Config 'project-id2'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id2
""".format(config_file1, config_file2)))
def test__register_config__config_is_registered_and_awaits_build_completion(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['register', 'config', '--await', config_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
Build completed for OS Config 'project-id'.
""".format(config_file)))
def test__register_config__multiple_configs_are_registered_and_await_build_completion(self):
config_file1 = os.path.join(__tests_root__, 'res', 'config.yml')
config_file2 = os.path.join(__tests_root__, 'res', 'config2.yml')
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api,
executor=ThreadPoolExecutor(max_workers=1)
)
result = self.runner.invoke(cli, [
'register', 'config', '--await', config_file1, config_file2
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version None
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
OS Config 'project-id2' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
Build completed for OS Config 'project-id'.
Build queued for OS Config 'project-id2'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id2
Build completed for OS Config 'project-id2'.
""".format(config_file1, config_file2)))
def test__register_apk__no_files_fails(self):
result = self.runner.invoke(cli, ['register', 'apk'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_apk__non_existent_file_fails(self):
result = self.runner.invoke(cli, ['register', 'apk', 'foobar'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_apk__no_creds_fails(self):
apk_file = os.path.join(__tests_root__, 'res', 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'apk', apk_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__register_apk__invalid_file_fails_cleanly(self):
apk_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'apk', apk_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not a valid APK.
Aborted!
"""))
def test__register_apk__existing_artifact_fails(self):
apk_file = os.path.join(__tests_root__, 'res', 'v1.apk')
api = MagicMock()
api.upload_artifact = MagicMock(side_effect=ApiError(
'Artifact already exists and cannot be overwritten'))
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'apk', apk_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
Continue registration? [Y/n]:
error: App 'com.supercilex.test' at version 384866 has already been registered and cannot be overwritten.
Aborted!
""".format(apk_file)))
def test__register_apk__negative_confirmation_aborts(self):
apk_file = os.path.join(__tests_root__, 'res', 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'apk', apk_file], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
Continue registration? [Y/n]: n
Aborted!
""".format(apk_file)))
def test__register_apk__dry_run_exits_cleanly(self):
apk_file = os.path.join(__tests_root__, 'res', 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', '--dry-run', 'apk', apk_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
""".format(apk_file)))
def test__register_apk__file_is_registered(self):
apk_file = os.path.join(__tests_root__, 'res', 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'apk', apk_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' registered.
""".format(apk_file)))
def test__register_apk__folder_is_registered(self):
project_dir = os.path.join(__tests_root__, 'res', 'simple-project')
apk_file = os.path.join(project_dir, 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'apk', project_dir], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' registered.
""".format(apk_file)))
def test__register_apk__files_are_registered(self):
apk_file1 = os.path.join(__tests_root__, 'res', 'v1.apk')
apk_file2 = os.path.join(__tests_root__, 'res', 'v1and2.apk')
api = MagicMock()
config = Config(
auth_store=self._initialized_auth_store(),
api=api,
executor=ThreadPoolExecutor(max_workers=1)
)
result = self.runner.invoke(cli, ['register', 'apk', apk_file1, apk_file2], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' registered.
App 'com.supercilex.test' registered.
""".format(apk_file1, apk_file2)))
def test__register_boot_animation__no_files_fails(self):
result = self.runner.invoke(cli, ['register', 'media', 'bootanimation', 'Anim name', '1'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_boot_animation__non_existent_file_fails(self):
result = self.runner.invoke(
cli, ['register', 'media', 'bootanimation', 'Anim name', '1', 'foobar'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_boot_animation__invalid_version_fails(self):
media_file = os.path.join(__tests_root__, 'res', 'bootanimation.zip')
result = self.runner.invoke(
cli, ['register', 'media', 'bootanimation', 'Anim name', 'invalid', media_file])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_boot_animation__no_creds_fails(self):
media_file = os.path.join(__tests_root__, 'res', 'bootanimation.zip')
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'bootanimation', 'Anim name', '1', media_file
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__register_boot_animation__negative_confirmation_aborts(self):
media_file = os.path.join(__tests_root__, 'res', 'bootanimation.zip')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'bootanimation', 'Anim name', '1', media_file
], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Boot animation ------------
File path: {}
Name: Anim name
Version: 1
----------------------------------------
Continue registration? [Y/n]: n
Aborted!
""".format(media_file)))
def test__register_boot_animation__dry_run_exits_cleanly(self):
media_file = os.path.join(__tests_root__, 'res', 'bootanimation.zip')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', '--dry-run', 'media',
'bootanimation', 'Anim name', '1', media_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Boot animation ------------
File path: {}
Name: Anim name
Version: 1
----------------------------------------
""".format(media_file)))
def test__register_boot_animation__file_is_registered(self):
media_file = os.path.join(__tests_root__, 'res', 'bootanimation.zip')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'bootanimation', 'Anim name', '1', media_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Boot animation ------------
File path: {}
Name: Anim name
Version: 1
----------------------------------------
Continue registration? [Y/n]:
Boot animation 'Anim name' registered.
""".format(media_file)))
def test__register_boot_animation__latest_file_is_registered(self):
media_file = os.path.join(__tests_root__, 'res', 'bootanimation.zip')
api = MagicMock()
api.get_highest_artifact = MagicMock(return_value={'version': '41'})
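        # With 'latest', registration bumps past the highest registered version (41 -> 42).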
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'bootanimation', 'Anim name', 'latest', media_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Boot animation ------------
File path: {}
Name: Anim name
Version: 42
----------------------------------------
Continue registration? [Y/n]:
Boot animation 'Anim name' registered.
""".format(media_file)))
def test__register_splash__no_files_fails(self):
result = self.runner.invoke(cli, ['register', 'media', 'splash', 'Splash name', '1'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_splash__non_existent_file_fails(self):
result = self.runner.invoke(
cli, ['register', 'media', 'splash', 'Splash name', '1', 'foobar'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_splash__invalid_version_fails(self):
media_file = os.path.join(__tests_root__, 'res', 'splash.png')
result = self.runner.invoke(
cli, ['register', 'media', 'splash', 'Splash name', 'invalid', media_file])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__register_splash__no_creds_fails(self):
media_file = os.path.join(__tests_root__, 'res', 'splash.png')
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'splash', 'Splash name', '1', media_file
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__register_splash__invalid_image_fails(self):
media_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'splash', 'Splash name', '1', media_file
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Invalid splash screen: only PNGs are supported
Aborted!
"""))
def test__register_splash__negative_confirmation_aborts(self):
media_file = os.path.join(__tests_root__, 'res', 'splash.png')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'splash', 'Splash name', '1', media_file
], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Splash screen ------------
File path: {}
Name: Splash name
Version: 1
---------------------------------------
Continue registration? [Y/n]: n
Aborted!
""".format(media_file)))
def test__register_splash__dry_run_exits_cleanly(self):
media_file = os.path.join(__tests_root__, 'res', 'splash.png')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', '--dry-run', 'media',
'splash', 'Splash name', '1', media_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Splash screen ------------
File path: {}
Name: Splash name
Version: 1
---------------------------------------
""".format(media_file)))
def test__register_splash__file_is_registered(self):
media_file = os.path.join(__tests_root__, 'res', 'splash.png')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'splash', 'Splash name', '1', media_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Splash screen ------------
File path: {}
Name: Splash name
Version: 1
---------------------------------------
Continue registration? [Y/n]:
Splash screen 'Splash name' registered.
""".format(media_file)))
def test__register_splash__latest_file_is_registered(self):
media_file = os.path.join(__tests_root__, 'res', 'splash.png')
api = MagicMock()
api.get_highest_artifact = MagicMock(return_value={'version': '41'})
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'register', 'media',
'splash', 'Splash name', 'latest', media_file
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Splash screen ------------
File path: {}
Name: Splash name
Version: 42
---------------------------------------
Continue registration? [Y/n]:
Splash screen 'Splash name' registered.
""".format(media_file)))
def test__register_project__no_context_fails(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: .masonrc file not found. Please run 'mason init' to create the project context.
Aborted!
"""))
def test__register_project__non_existent_resource_fails(self):
invalid_project = os.path.join(__tests_root__, 'res', 'invalid-project')
config_file = os.path.join(__tests_root__, 'res', 'invalid-project', 'mason.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(invalid_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Project resource does not exist: {}
Aborted!
""".format(config_file)))
def test__register_project__no_creds_fails(self):
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__register_project__negative_confirmation_aborts(self):
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
config_file = os.path.join(simple_project, 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'simple-project', 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
Continue registration? [Y/n]: n
Aborted!
""".format(apk_file, config_file)))
def test__register_project__dry_run_exits_cleanly(self):
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
config_file = os.path.join(simple_project, 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'simple-project', 'v1.apk')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', '--dry-run', 'project'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
""".format(apk_file, config_file)))
def test__register_project__app_not_present_is_ignored(self):
no_app_project = os.path.join(__tests_root__, 'res', 'no-app-project')
config_file = os.path.join(__tests_root__, 'res', 'no-app-project', 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'no-app-project', 'v1.apk')
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(no_app_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' registered.
OS Config 'project-id' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
Build completed for OS Config 'project-id'.
""".format(apk_file, config_file)))
def test__register_project__config_already_present_fails(self):
# noinspection PyUnusedLocal
def artifact_response(type, name, version):
if type == 'apk':
return {
'version': '384866',
'checksum': {'sha1': '891544e59702a6138962a9a2728cb2527fb77554'}
}
# noinspection PyUnusedLocal
def upload_response(binary, artifact):
if artifact.get_type() == 'config':
raise ApiError('Artifact already exists and cannot be overwritten')
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
config_file = os.path.join(__tests_root__, 'res', 'simple-project', 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'simple-project', 'v1.apk')
api = MagicMock()
api.get_artifact = MagicMock(side_effect=artifact_response)
api.upload_artifact = MagicMock(side_effect=upload_response)
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' already registered, ignoring.
error: OS Config 'project-id2' at version 2 has already been registered and cannot be overwritten.
Aborted!
""".format(apk_file, config_file)))
def test__register_project__matching_apk_already_present_is_ignored(self):
# noinspection PyUnusedLocal
def api_response(type, name, version):
if type == 'apk':
return {
'version': '384866',
'checksum': {'sha1': '891544e59702a6138962a9a2728cb2527fb77554'}
}
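        # The mocked artifact's sha1 matches the local v1.apk, so the CLI treats the app
        # as already registered and skips the upload (see the expected output below).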
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
config_file = os.path.join(__tests_root__, 'res', 'simple-project', 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'simple-project', 'v1.apk')
api = MagicMock()
api.get_artifact = MagicMock(side_effect=api_response)
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' already registered, ignoring.
OS Config 'project-id2' registered.
Build queued for OS Config 'project-id2'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id2
Build completed for OS Config 'project-id2'.
""".format(apk_file, config_file)))
def test__register_project__non_matching_apk_already_present_fails(self):
# noinspection PyUnusedLocal
def api_response(binary, artifact):
if artifact.get_type() == 'apk':
raise ApiError('Artifact already exists and cannot be overwritten')
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
config_file = os.path.join(__tests_root__, 'res', 'simple-project', 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'simple-project', 'v1.apk')
api = MagicMock()
api.upload_artifact = MagicMock(side_effect=api_response)
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(auth_store=self._initialized_auth_store(), api=api)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
Continue registration? [Y/n]:
error: App 'com.supercilex.test' at version 384866 has already been registered and cannot be overwritten.
Aborted!
""".format(apk_file, config_file)))
def test__register_project__simple_project_is_registered_and_built(self):
simple_project = os.path.join(__tests_root__, 'res', 'simple-project')
config_file = os.path.join(__tests_root__, 'res', 'simple-project', 'mason.yml')
apk_file = os.path.join(__tests_root__, 'res', 'simple-project', 'v1.apk')
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
with self._cd(simple_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' registered.
OS Config 'project-id2' registered.
Build queued for OS Config 'project-id2'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id2
Build completed for OS Config 'project-id2'.
""".format(apk_file, config_file)))
def test__register_project__complex_project_is_registered_and_built(self):
complex_project = os.path.join(__tests_root__, 'res', 'complex-project')
config_file1 = os.path.join(
__tests_root__, 'res', 'complex-project', '.mason', 'config2.yml')
config_file2 = os.path.join(__tests_root__, 'res', 'complex-project', 'config3.yml')
apk_file1 = os.path.join(__tests_root__, 'res', 'complex-project', 'test-path', 'v1.apk')
apk_file2 = os.path.join(
__tests_root__, 'res', 'complex-project', 'built-apks', 'built.apk')
boot_animation1 = os.path.join(
__tests_root__, 'res', 'complex-project', 'anims', 'bootanimation.zip')
boot_animation2 = os.path.join(
__tests_root__, 'res', 'complex-project', 'anims', 'bootanimation2.zip')
splash = os.path.join(__tests_root__, 'res', 'splash.png')
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
api.get_latest_artifact = MagicMock(return_value={'version': '41'})
api.get_highest_artifact = MagicMock(return_value={
'version': '41',
'checksum': {'sha1': 'e7788dca3a3797fd152825e600047e7cef870d98'}
})
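        # anim-1's file matches the mocked checksum, so it is reused at version 41;
        # anim-2 and the splash screen differ and are registered at version 42.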
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api,
executor=ThreadPoolExecutor(max_workers=1)
)
with self._cd(complex_project):
result = self.runner.invoke(cli, ['register', 'project'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ App ------------
File path: {}
Package name: com.supercilex.test
Version name: 0.1.0-4-g0f30bf8-dirty
Version code: 384866
-----------------------------
------------ Boot animation ------------
File path: {}
Name: anim-1
Version: 41
----------------------------------------
------------ Boot animation ------------
File path: {}
Name: anim-2
Version: 42
----------------------------------------
------------ Splash screen ------------
File path: {}
Name: splash-1
Version: 42
---------------------------------------
------------ OS Config ------------
File path: {}
Name: project-id2
Version: 2
App(s):
- 'com.supercilex.test' at version 384866
-----------------------------------
------------ OS Config ------------
File path: {}
Name: project-id3
Version: 42
App(s):
- 'com.example.app1' at version 1
- 'com.supercilex.test' at version 384866
- 'com.example.app2' at version 41
Boot animation: 'anim-1' at version 41
Splash screen: 'splash-1' at version 42
-----------------------------------
Continue registration? [Y/n]:
App 'com.supercilex.test' registered.
App 'com.supercilex.test' registered.
Boot animation 'anim-1' already registered, ignoring.
Boot animation 'anim-2' registered.
Splash screen 'splash-1' registered.
OS Config 'project-id2' registered.
OS Config 'project-id3' registered.
Build queued for OS Config 'project-id2'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id2
Build completed for OS Config 'project-id2'.
Build queued for OS Config 'project-id3'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id3
Build completed for OS Config 'project-id3'.
""".format(apk_file1, apk_file2,
boot_animation1, boot_animation2, splash,
config_file1, config_file2)))
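    # --- 'build' (deprecated): 'register config' now queues a build by default ---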
def test__build__invalid_version_fails(self):
result = self.runner.invoke(cli, ['build', 'project-id', 'invalid'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__build__no_creds_fails(self):
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['build', 'project-id', '1'], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason build` is deprecated as `mason register config` now starts a build by default.
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__build__build_is_started(self):
api = MagicMock()
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['build', 'project-id', '1'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason build` is deprecated as `mason register config` now starts a build by default.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
"""))
def test__build__build_is_started_and_awaited_for_completion(self):
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['build', '--await', 'project-id', '1'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason build` is deprecated as `mason register config` now starts a build by default.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
Build completed for OS Config 'project-id'.
"""))
def test__stage__no_files_fails(self):
result = self.runner.invoke(cli, ['stage'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__stage__non_existent_file_fails(self):
result = self.runner.invoke(cli, ['stage', 'foobar'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__stage__no_creds_fails(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['stage', config_file], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason stage` is deprecated, use `mason register config` instead.
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__stage__negative_confirmation_aborts(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, ['stage', config_file], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason stage` is deprecated, use `mason register config` instead.
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]: n
Aborted!
""".format(config_file)))
def test__stage__file_is_registered(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['stage', config_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason stage` is deprecated, use `mason register config` instead.
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
""".format(config_file)))
def test__stage__config_is_registered_and_awaits_build_completion(self):
config_file = os.path.join(__tests_root__, 'res', 'config.yml')
api = MagicMock()
api.get_build = MagicMock(return_value={'data': {'status': 'COMPLETED'}})
config = Config(
auth_store=self._initialized_auth_store(),
endpoints_store=self._initialized_endpoints_store(),
api=api
)
result = self.runner.invoke(cli, ['stage', '--await', config_file], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: `mason stage` is deprecated, use `mason register config` instead.
------------ OS Config ------------
File path: {}
Name: project-id
Version: 1
App(s):
- 'com.example.app' at version 1
-----------------------------------
Continue registration? [Y/n]:
OS Config 'project-id' registered.
Build queued for OS Config 'project-id'.
You can see the status of your build at
https://platform.bymason.com/controller/projects/project-id
Build completed for OS Config 'project-id'.
""".format(config_file)))
def test__deploy_config__invalid_name_fails(self):
result = self.runner.invoke(cli, ['deploy', 'config', 'project-id', 'invalid', 'group'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__deploy_config__no_group_fails(self):
result = self.runner.invoke(cli, ['deploy', 'config', 'project-id', '1'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__deploy_config__no_creds_fails(self):
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'config',
'project-id', '1', 'group'
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__deploy_config__non_existent_latest_config_fails(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value=None)
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'config',
'project-id', 'latest', 'group'
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Config 'project-id' not found, register it first.
Aborted!
"""))
def test__deploy_config__negative_confirmation_aborts(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'config',
'project-id', '1', 'group'
], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: project-id
Type: config
Version: 1
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]: n
Aborted!
"""))
def test__deploy_config__dry_run_exits_cleanly(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--dry-run', 'config',
'project-id', '1', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: project-id
Type: config
Version: 1
Group: group
Push: False
------------------------------------
"""))
def test__deploy_config__config_is_deployed(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'config',
'project-id', '1', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: project-id
Type: config
Version: 1
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]:
Config 'project-id' deployed.
"""))
def test__deploy_config__config_is_deployed_to_multiple_groups(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'config',
'project-id', '1', 'group1', 'group2', 'group3'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: project-id
Type: config
Version: 1
Groups: group1, group2, group3
Push: False
------------------------------------
Continue deployment? [Y/n]:
Config 'project-id' deployed.
"""))
def test__deploy_config__latest_config_is_deployed(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value={'version': '42'})
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'config',
'project-id', 'latest', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: project-id
Type: config
Version: 42
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]:
Config 'project-id' deployed.
"""))
def test__deploy_config__warning_is_logged_when_no_https_flag_is_used(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--no-https', 'config',
'project-id', '1', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: project-id
Type: config
Version: 1
Group: group
Push: False
***WARNING***
--no-https enabled: this deployment will be delivered to devices over HTTP.
***WARNING***
------------------------------------
Continue deployment? [Y/n]:
Config 'project-id' deployed.
"""))
def test__deploy_apk__invalid_name_fails(self):
result = self.runner.invoke(cli, ['deploy', 'apk', 'com.example.app', 'invalid', 'group'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__deploy_apk__no_group_fails(self):
result = self.runner.invoke(cli, ['deploy', 'apk', 'com.example.app', '1'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__deploy_apk__no_creds_fails(self):
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'apk',
'com.example.app', '1', 'group'
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__deploy_apk__non_existent_latest_apk_fails(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value=None)
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'apk',
'com.example.app', 'latest', 'group'
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Apk 'com.example.app' not found, register it first.
Aborted!
"""))
def test__deploy_apk__negative_confirmation_aborts(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'apk',
'com.example.app', '1', 'group'
], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: com.example.app
Type: apk
Version: 1
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]: n
Aborted!
"""))
def test__deploy_apk__dry_run_exits_cleanly(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--dry-run', 'apk',
'com.example.app', '1', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: com.example.app
Type: apk
Version: 1
Group: group
Push: False
------------------------------------
"""))
def test__deploy_apk__apk_is_deployed(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'apk',
'com.example.app', '1', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: com.example.app
Type: apk
Version: 1
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]:
Apk 'com.example.app' deployed.
"""))
def test__deploy_apk__apk_is_deployed_to_multiple_groups(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'apk',
'com.example.app', '1', 'group1', 'group2', 'group3'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: com.example.app
Type: apk
Version: 1
Groups: group1, group2, group3
Push: False
------------------------------------
Continue deployment? [Y/n]:
Apk 'com.example.app' deployed.
"""))
def test__deploy_apk__latest_apk_is_deployed(self):
api = MagicMock()
api.get_latest_artifact = MagicMock(return_value={'version': '42'})
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'apk',
'com.example.app', 'latest', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: com.example.app
Type: apk
Version: 42
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]:
Apk 'com.example.app' deployed.
"""))
def test__deploy_apk__warning_is_logged_when_no_https_flag_is_used(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--no-https', 'apk',
'com.example.app', '1', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: com.example.app
Type: apk
Version: 1
Group: group
Push: False
***WARNING***
--no-https enabled: this deployment will be delivered to devices over HTTP.
***WARNING***
------------------------------------
Continue deployment? [Y/n]:
Apk 'com.example.app' deployed.
"""))
def test__deploy_ota__no_group_fails(self):
result = self.runner.invoke(cli, ['deploy', 'ota', 'mason-os', '1'])
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 2)
def test__deploy_ota__no_creds_fails(self):
api = MagicMock()
config = Config(auth_store=self._uninitialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'ota',
'mason-os', '2.0.0', 'group'
], obj=config)
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
error: Not authenticated. Run 'mason login' to sign in.
Aborted!
"""))
def test__deploy_ota__negative_confirmation_aborts(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'ota',
'mason-os', '2.0.0', 'group'
], obj=config, input='n')
self.assertIsInstance(result.exception, SystemExit)
self.assertEqual(result.exit_code, 1)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: mason-os
Type: ota
Version: 2.0.0
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]: n
Aborted!
"""))
def test__deploy_ota__dry_run_exits_cleanly(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--dry-run', 'ota',
'mason-os', '2.0.0', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: mason-os
Type: ota
Version: 2.0.0
Group: group
Push: False
------------------------------------
"""))
def test__deploy_ota__ota_is_deployed(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'ota',
'mason-os', '2.0.0', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: mason-os
Type: ota
Version: 2.0.0
Group: group
Push: False
------------------------------------
Continue deployment? [Y/n]:
Ota 'mason-os' deployed.
"""))
def test__deploy_ota__ota_is_deployed_to_multiple_groups(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', 'ota',
'mason-os', '2.0.0', 'group1', 'group2', 'group3'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: mason-os
Type: ota
Version: 2.0.0
Groups: group1, group2, group3
Push: False
------------------------------------
Continue deployment? [Y/n]:
Ota 'mason-os' deployed.
"""))
def test__deploy_ota__warning_is_logged_when_no_https_flag_is_used(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--no-https', 'ota',
'mason-os', '2.0.0', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
------------ Deployment ------------
Name: mason-os
Type: ota
Version: 2.0.0
Group: group
Push: False
***WARNING***
--no-https enabled: this deployment will be delivered to devices over HTTP.
***WARNING***
------------------------------------
Continue deployment? [Y/n]:
Ota 'mason-os' deployed.
"""))
def test__deploy_ota__warning_is_logged_when_invalid_name(self):
api = MagicMock()
config = Config(auth_store=self._initialized_auth_store(), api=api)
result = self.runner.invoke(cli, [
'deploy', '--no-https', 'ota',
'invalid', '2.0.0', 'group'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
warning: Unknown name 'invalid' for 'ota' deployments. Forcing it to 'mason-os'
------------ Deployment ------------
Name: mason-os
Type: ota
Version: 2.0.0
Group: group
Push: False
***WARNING***
--no-https enabled: this deployment will be delivered to devices over HTTP.
***WARNING***
------------------------------------
Continue deployment? [Y/n]:
Ota 'mason-os' deployed.
"""))
def test__login__saves_creds(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
api = MagicMock()
api.login = MagicMock(return_value={'id_token': 'id', 'access_token': 'access'})
config = Config(auth_store=auth_store, api=api)
result = self.runner.invoke(cli, [
'login',
'--token', 'Foobar',
'--username', 'name',
'--password', 'pass'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
Successfully logged in.
""".format(__version__)))
auth_store.clear()
auth_store.restore()
self.assertDictEqual(auth_store._fields, {
'api_key': 'Foobar',
'id_token': 'id',
'access_token': 'access'
})
def test__login__empty_api_key_is_ignored(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
auth_store['api_key'] = 'Foobar'
auth_store.save()
api = MagicMock()
api.login = MagicMock(return_value={'id_token': 'id', 'access_token': 'access'})
config = Config(auth_store=auth_store, api=api)
result = self.runner.invoke(cli, [
'login',
'--username', 'name',
'--password', 'pass'
], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
auth_store.clear()
auth_store.restore()
self.assertDictEqual(auth_store._fields, {
'api_key': 'Foobar',
'id_token': 'id',
'access_token': 'access'
})
def test__logout__clears_creds(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
            auth_store['api_key'] = 'Foobar'
auth_store.save()
config = Config(auth_store=auth_store)
result = self.runner.invoke(cli, ['logout'], obj=config)
self.assertIsNone(result.exception, result.output)
self.assertEqual(result.exit_code, 0)
self.assertEqual(inspect.cleandoc(result.output), inspect.cleandoc("""
Successfully logged out.
""".format(__version__)))
auth_store.restore()
            self.assertIsNone(auth_store['api_key'])
def _uninitialized_auth_store(self):
with self.runner.isolated_filesystem():
return Store('fake-auth', {}, os.path.abspath(''), False)
def _initialized_auth_store(self):
with self.runner.isolated_filesystem():
auth_store = Store('fake-auth', {}, os.path.abspath(''), False)
auth_store['api_key'] = 'key'
auth_store['id_token'] = 'id'
auth_store['access_token'] = 'access'
return auth_store
def _initialized_endpoints_store(self):
endpoints_store = ENDPOINTS
endpoints_store.clear()
return endpoints_store
@contextlib.contextmanager
    def _cd(self, path):
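        # Temporarily switch to the given directory, restoring the original working directory on exit.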
cwd = os.getcwd()
        os.chdir(path)
try:
            yield path
finally:
os.chdir(cwd)
@contextlib.contextmanager
def _home_dir_isolated_filesystem(self):
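        # Run inside a scratch directory under the user's home (~/.cache/tmp-mason-tests), removing it afterwards.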
cwd = os.getcwd()
t = os.path.join(os.path.expanduser('~'), '.cache', 'tmp-mason-tests')
os.makedirs(t, exist_ok=True)
os.chdir(t)
try:
yield t
finally:
os.chdir(cwd)
try:
shutil.rmtree(t)
except (OSError, IOError):
pass
| 39.964003 | 127 | 0.565531 | 10,441 | 101,029 | 5.257925 | 0.040418 | 0.033936 | 0.033225 | 0.045685 | 0.933185 | 0.921527 | 0.908867 | 0.896754 | 0.886717 | 0.873219 | 0 | 0.011775 | 0.276247 | 101,029 | 2,527 | 128 | 39.979818 | 0.739018 | 0.001574 | 0 | 0.828057 | 0 | 0.007793 | 0.395437 | 0.032847 | 0 | 0 | 0 | 0 | 0.162202 | 1 | 0.060887 | false | 0.001461 | 0.008768 | 0 | 0.073064 | 0.001461 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4521454aa3319402e8aee86bff029c4e831ae55 | 417 | py | Python | tests/test_utils.py | huihuilong/lyrebird | 732ad57850c64a1a1800d82027ebca1c9e18f5bf | [
"MIT"
] | 737 | 2019-02-20T06:51:50.000Z | 2022-03-31T09:00:32.000Z | tests/test_utils.py | huihuilong/lyrebird | 732ad57850c64a1a1800d82027ebca1c9e18f5bf | [
"MIT"
] | 203 | 2019-02-19T02:57:29.000Z | 2022-03-30T11:11:32.000Z | tests/test_utils.py | huihuilong/lyrebird | 732ad57850c64a1a1800d82027ebca1c9e18f5bf | [
"MIT"
] | 140 | 2019-02-18T03:32:50.000Z | 2022-03-18T03:37:39.000Z | from lyrebird import utils
def test_case_insensitive_dict():
    test_dict = utils.CaseInsensitiveDict({'Content-Type': 'LBTests'})
    assert test_dict.get('Content-Type') == 'LBTests'
    assert test_dict.get('content-type') == 'LBTests'
    test_dict = utils.CaseInsensitiveDict({'content-type': 'LBTests'})
assert test_dict.get('Content-Type') == 'LBTests'
assert test_dict.get('content-type') == 'LBTests'
| 41.7 | 69 | 0.71223 | 51 | 417 | 5.647059 | 0.294118 | 0.166667 | 0.375 | 0.333333 | 0.833333 | 0.833333 | 0.833333 | 0.833333 | 0.833333 | 0.833333 | 0 | 0 | 0.127098 | 417 | 9 | 70 | 46.333333 | 0.791209 | 0 | 0 | 0.5 | 0 | 0 | 0.273381 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
c32cdb03528844c8fb106b5b5e996b59b001fd9a | 35,081 | py | Python | app/test/unittest/test_hub.py | michalkoziara/IoT-RESTful-Webservice | ecb0f3e09cded3190f3646e5cd6c913056d94981 | [
"bzip2-1.0.6"
] | 2 | 2021-09-24T02:45:32.000Z | 2021-11-15T09:44:44.000Z | app/test/unittest/test_hub.py | PKramek/IoT-RESTful-Webservice-1 | ecb0f3e09cded3190f3646e5cd6c913056d94981 | [
"bzip2-1.0.6"
] | null | null | null | app/test/unittest/test_hub.py | PKramek/IoT-RESTful-Webservice-1 | ecb0f3e09cded3190f3646e5cd6c913056d94981 | [
"bzip2-1.0.6"
] | 1 | 2021-09-11T11:47:32.000Z | 2021-09-11T11:47:32.000Z | import base64
import hashlib
from unittest.mock import patch, Mock
import pytest
from app.main.model import DeviceGroup
from app.main.repository.base_repository import BaseRepository
from app.main.repository.deleted_device_repository import DeletedDeviceRepository
from app.main.repository.device_group_repository import DeviceGroupRepository
from app.main.repository.executive_device_repository import ExecutiveDeviceRepository
from app.main.repository.executive_type_repository import ExecutiveTypeRepository
from app.main.repository.formula_repository import FormulaRepository
from app.main.repository.sensor_repository import SensorRepository
from app.main.repository.sensor_type_repository import SensorTypeRepository
from app.main.repository.unconfigured_device_repository import UnconfiguredDeviceRepository
from app.main.service.executive_device_service import ExecutiveDeviceService
from app.main.service.hub_service import HubService
from app.main.service.log_service import LogService
from app.main.service.sensor_service import SensorService
from app.main.util.constants import Constants
def test_get_changed_devices_for_device_group_should_return_device_keys_when_valid_product_key(
get_device_group_default_values,
create_device_group,
get_executive_device_default_values,
create_executive_devices,
get_sensor_default_values,
create_sensors,
get_deleted_device_default_values,
create_deleted_devices):
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
test_sensor_key = 'test sensor key'
test_executive_device_key = 'test executive device key'
device_group_values = get_device_group_default_values()
device_group_values['product_key'] = test_product_key
device_group = create_device_group(device_group_values)
sensor_values = get_sensor_default_values()
sensor_values['device_group_id'] = device_group.id
sensor_values['device_key'] = test_sensor_key
sensors = create_sensors([sensor_values])
executive_device_values = get_executive_device_default_values()
executive_device_values['device_group_id'] = device_group.id
executive_device_values['device_key'] = test_executive_device_key
executive_devices = create_executive_devices([executive_device_values])
deleted_device_values = get_deleted_device_default_values()
deleted_devices = create_deleted_devices([deleted_device_values])
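    # The layered patches below stub every repository call and the authorization check, so only the hub service logic under test executes.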
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
ExecutiveDeviceRepository,
'get_updated_executive_devices_by_device_group_id'
) as get_updated_executive_devices_by_device_group_id_mock:
get_updated_executive_devices_by_device_group_id_mock.return_value = executive_devices
with patch.object(
SensorRepository,
'get_sensors_by_device_group_id_and_update_status'
) as get_sensors_by_device_group_id_and_update_status_mock:
get_sensors_by_device_group_id_and_update_status_mock.return_value = sensors
with patch.object(
DeletedDeviceRepository,
'get_deleted_devices_by_device_group_id'
) as get_deleted_devices_by_device_group_id_mock:
get_deleted_devices_by_device_group_id_mock.return_value = deleted_devices
with patch.object(BaseRepository, 'delete_but_do_not_commit'):
with patch.object(BaseRepository,
'update_database'
) as update_database_mock:
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
update_database_mock.return_value = True
result, result_values = hub_service_instance.get_changed_devices_for_device_group(
test_product_key,
"test_password"
)
assert result == Constants.RESPONSE_MESSAGE_OK
assert result_values
assert result_values['isUpdated']
assert result_values['changedDevices']
assert len(result_values['changedDevices']) == 2
assert result_values['changedDevices'][0] == test_executive_device_key
assert result_values['changedDevices'][1] == test_sensor_key
assert result_values['isDeleted']
assert result_values['deletedDevices']
assert len(result_values['deletedDevices']) == 1
assert result_values['deletedDevices'][0] == deleted_devices[0].device_key
def test_get_changed_devices_for_device_group_should_not_return_device_keys_when_no_product_key():
hub_service_instance = HubService.get_instance()
result, result_values = hub_service_instance.get_changed_devices_for_device_group(None, "test_password")
assert result == Constants.RESPONSE_MESSAGE_PRODUCT_KEY_NOT_FOUND
assert result_values is None
def test_get_changed_devices_for_device_group_should_not_return_device_keys_when_no_device_group():
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = None
result, result_values = hub_service_instance.get_changed_devices_for_device_group(
test_product_key,
"test_password")
assert result == Constants.RESPONSE_MESSAGE_PRODUCT_KEY_NOT_FOUND
assert result_values is None
def test_get_changed_devices_for_device_group_should_not_return_device_keys_when_no_updated_devices(
get_device_group_default_values,
create_device_group):
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
device_group_values = get_device_group_default_values()
device_group_values['product_key'] = test_product_key
device_group = create_device_group(device_group_values)
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
ExecutiveDeviceRepository,
'get_updated_executive_devices_by_device_group_id'
) as get_updated_executive_devices_by_device_group_id_mock:
get_updated_executive_devices_by_device_group_id_mock.return_value = []
with patch.object(
SensorRepository,
'get_sensors_by_device_group_id_and_update_status'
) as get_sensors_by_device_group_id_and_update_status_mock:
get_sensors_by_device_group_id_and_update_status_mock.return_value = []
with patch.object(
DeletedDeviceRepository,
'get_deleted_devices_by_device_group_id'
) as get_deleted_devices_by_device_group_id_mock:
get_deleted_devices_by_device_group_id_mock.return_value = []
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result, result_values = hub_service_instance.get_changed_devices_for_device_group(
test_product_key,
"test_password")
assert result == Constants.RESPONSE_MESSAGE_OK
assert result_values
assert not result_values['isUpdated']
assert not result_values['changedDevices']
assert not result_values['isDeleted']
assert not result_values['deletedDevices']
def test_add_multiple_devices_to_device_group_should_return_positive_response_when_called_with_right_parameters(
create_device_group
):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
device_keys = ['test', 'test2']
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
HubService,
'add_device_to_device_group'
) as add_device_to_device_group_mock:
add_device_to_device_group_mock.return_value = Constants.RESPONSE_MESSAGE_CREATED
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.add_multiple_devices_to_device_group(device_group.product_key,
"test_password",
device_keys)
assert result == Constants.RESPONSE_MESSAGE_DEVICES_ADDED_TO_DEVICE_GROUP
def test_add_multiple_devices_to_device_group_should_return_negative_response_when_called_with_wrong_parameters(
create_device_group
):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
device_keys = ['test', 'test2']
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
HubService,
'add_device_to_device_group'
) as add_device_to_device_group_mock:
add_device_to_device_group_mock.return_value = 'test'
with patch.object(
LogService,
'log_exception'
):
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.add_multiple_devices_to_device_group(device_group.product_key,
device_group.password,
device_keys)
assert result == Constants.RESPONSE_MESSAGE_PARTIALLY_WRONG_DATA
def test_add_device_to_device_group_should_result_true_when_given_valid_keys(
get_device_group_default_values,
create_device_group,
create_unconfigured_device):
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
test_device_key = 'test device key'
device_group_values = get_device_group_default_values()
device_group_values['product_key'] = test_product_key
device_group = create_device_group(device_group_values)
unconfigured_device = create_unconfigured_device()
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
UnconfiguredDeviceRepository,
'get_unconfigured_device_by_device_key'
) as get_unconfigured_device_by_device_key_mock:
get_unconfigured_device_by_device_key_mock.return_value = unconfigured_device
with patch.object(UnconfiguredDeviceRepository, 'update_database') as update_database_mock:
update_database_mock.return_value = True
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.add_device_to_device_group(
test_product_key,
"test_password",
test_device_key
)
assert result == Constants.RESPONSE_MESSAGE_CREATED
@pytest.mark.parametrize("test_product_key, test_device_key", [
(None, 'test device key'),
('test product key', None)])
def test_add_device_to_device_group_should_result_false_when_given_invalid_keys(
test_product_key,
test_device_key):
hub_service_instance = HubService.get_instance()
result = hub_service_instance.add_device_to_device_group(
test_product_key,
"test_password",
test_device_key
)
assert result == Constants.RESPONSE_MESSAGE_BAD_REQUEST
def test_add_device_to_device_group_should_result_false_when_no_device_group():
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
test_device_key = 'test device key'
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = None
result = hub_service_instance.add_device_to_device_group(
test_product_key,
"test_password",
test_device_key
)
assert result == Constants.RESPONSE_MESSAGE_PRODUCT_KEY_NOT_FOUND
def test_add_device_to_device_group_should_result_false_when_no_unconfigured_device(
get_device_group_default_values,
create_device_group):
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
test_device_key = 'test device key'
device_group_values = get_device_group_default_values()
device_group_values['product_key'] = test_product_key
device_group = create_device_group(device_group_values)
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
UnconfiguredDeviceRepository,
'get_unconfigured_device_by_device_key'
) as get_unconfigured_device_by_device_key_mock:
get_unconfigured_device_by_device_key_mock.return_value = None
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.add_device_to_device_group(
test_product_key,
"test_password",
test_device_key
)
assert result == Constants.RESPONSE_MESSAGE_DEVICE_KEY_NOT_FOUND
def test_set_sensors_readings_should_return_update_info_when_called_with_right_parameters(
create_device_group):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
sensors_readings = [{
"deviceKey": "2",
"readingValue": 0.9,
"isActive": False
}]
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
SensorService,
'set_sensor_reading'
) as _set_sensor_reading_mock:
_set_sensor_reading_mock.return_value = True
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.set_sensors_readings(
device_group.product_key,
device_group.password,
sensors_readings)
assert result == Constants.RESPONSE_MESSAGE_UPDATED_SENSORS_AND_DEVICES
def test_set_sensors_readings_should_return_partial_success_message_when_called_with_right_parameters(
create_device_group):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
sensors_readings = [{
"deviceKey": "2",
"readingValue": 0.9,
"isActive": False
}]
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
SensorService,
'set_sensor_reading'
) as _set_sensor_reading_mock:
_set_sensor_reading_mock.return_value = False
with patch.object(
LogService,
'log_exception'
) as log_exception_mock:
log_exception_mock.side_effects = None
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.set_sensors_readings(
device_group.product_key,
device_group.password,
sensors_readings)
assert result == Constants.RESPONSE_MESSAGE_PARTIALLY_WRONG_DATA
def test_set_sensors_readings_should_return_product_key_error_when_called_with_wrong_product_key(
create_device_group):
hub_service_instance = HubService.get_instance()
test_product_key = "Test"
sensors_readings = [{
"deviceKey": "2",
"readingValue": 0.9,
"isActive": False
}]
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = None
result = hub_service_instance.set_sensors_readings(
test_product_key,
"test_password",
sensors_readings)
assert result == Constants.RESPONSE_MESSAGE_PRODUCT_KEY_NOT_FOUND
def test_set_devices_states_and_sensors_readings_should_return_sensors_readings_error_when_called_with_wrong_parameter(
create_device_group):
hub_service_instance = HubService.get_instance()
test_product_key = "Test"
sensors_readings = "test"
result = hub_service_instance.set_sensors_readings(
test_product_key,
"test_password",
sensors_readings)
assert result == Constants.RESPONSE_MESSAGE_SENSORS_READINGS_NOT_LIST
def test_add_device_to_device_group_should_result_error_message_when_save_failed(
get_device_group_default_values,
create_device_group,
create_unconfigured_device):
hub_service_instance = HubService.get_instance()
test_product_key = 'test product key'
test_device_key = 'test device key'
device_group_values = get_device_group_default_values()
device_group_values['product_key'] = test_product_key
device_group = create_device_group(device_group_values)
unconfigured_device = create_unconfigured_device()
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
UnconfiguredDeviceRepository,
'get_unconfigured_device_by_device_key'
) as get_unconfigured_device_by_device_key_mock:
get_unconfigured_device_by_device_key_mock.return_value = unconfigured_device
with patch.object(UnconfiguredDeviceRepository, 'update_database') as update_database_mock:
update_database_mock.return_value = False
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.add_device_to_device_group(
test_product_key,
device_group.password,
test_device_key
)
assert result == Constants.RESPONSE_MESSAGE_ERROR
def test_get_devices_informations_should_return_device_informations_when_valid_product_key_and_device_keys(
get_executive_device_default_values,
create_executive_devices,
get_sensor_default_values,
create_sensors,
get_sensor_type_default_values,
create_sensor_types,
get_executive_type_default_values,
create_executive_types,
get_state_enumerator_default_values,
create_state_enumerators,
get_sensor_reading_enumerator_default_values,
create_sensor_reading_enumerators,
get_formula_default_values,
create_formulas):
hub_service_instance = HubService.get_instance()
sensor_values = get_sensor_default_values()
sensors = create_sensors([sensor_values])
executive_device_values = get_executive_device_default_values()
executive_device_values['is_formula_used'] = True
executive_devices = create_executive_devices([executive_device_values])
sensor_type_values = get_sensor_type_default_values()
sensor_type_values['reading_enumerators'] = create_sensor_reading_enumerators(
[get_sensor_reading_enumerator_default_values()]
)
sensor_types = create_sensor_types([sensor_type_values])
executive_type_values = get_executive_type_default_values()
executive_type_values['state_enumerators'] = create_state_enumerators(
[get_state_enumerator_default_values()]
)
executive_type_values['state_type'] = 'Enum'
executive_types = create_executive_types([executive_type_values])
formula_values = get_formula_default_values()
formulas = create_formulas([formula_values])
with patch.object(
SensorRepository,
'get_sensors_by_product_key_and_device_keys'
) as get_sensors_by_product_key_and_device_keys_mock:
get_sensors_by_product_key_and_device_keys_mock.return_value = sensors
with patch.object(
SensorTypeRepository,
'get_sensor_types_by_ids'
) as get_sensor_types_by_ids_mock:
get_sensor_types_by_ids_mock.return_value = sensor_types
with patch.object(
ExecutiveDeviceRepository,
'get_executive_devices_by_product_key_and_device_keys'
) as get_executive_devices_by_product_key_and_device_keys_mock:
get_executive_devices_by_product_key_and_device_keys_mock.return_value = executive_devices
with patch.object(
ExecutiveTypeRepository,
'get_executive_types_by_ids'
) as get_executive_types_by_ids_mock:
get_executive_types_by_ids_mock.return_value = executive_types
with patch.object(
FormulaRepository,
'get_formulas_by_ids'
) as get_formulas_by_ids_mock:
get_formulas_by_ids_mock.return_value = formulas
with patch.object(
BaseRepository,
'update_database'
) as update_database_mock:
update_database_mock.return_value = True
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
with patch.object(DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = Mock(spec=DeviceGroup)
result, result_values = hub_service_instance.get_devices_informations(
"test_product_key",
"test_password",
[
sensors[0].device_key,
executive_devices[0].device_key
]
)
assert result == Constants.RESPONSE_MESSAGE_OK
assert result_values
assert result_values['sensors']
assert result_values['devices']
assert len(result_values['sensors']) == 1
assert len(result_values['devices']) == 1
assert result_values['sensors'][0]['deviceKey'] == sensors[0].device_key
assert result_values['devices'][0]['deviceKey'] == executive_devices[0].device_key
assert result_values['devices'][0]['rule']
assert result_values['sensors'][0]['readingType'] == sensor_type_values['reading_type']
assert result_values['devices'][0]['stateType'] == executive_type_values['state_type']
assert result_values['devices'][0]['defaultState'] == executive_types[0].default_state
assert result_values['devices'][0]['isFormulaUsed'] == executive_devices[0].is_formula_used
assert result_values['devices'][0]['enumerator']
assert len(result_values['devices'][0]['enumerator']) == 1
state_enumerator = result_values['devices'][0]['enumerator'][0]
assert state_enumerator['number'] == get_state_enumerator_default_values()['number']
assert state_enumerator['text'] == get_state_enumerator_default_values()['text']
assert result_values['sensors'][0]['enumerator']
assert len(result_values['sensors'][0]['enumerator']) == 1
reading_enumerator = result_values['sensors'][0]['enumerator'][0]
assert reading_enumerator['number'] == get_sensor_reading_enumerator_default_values()['number']
assert reading_enumerator['text'] == get_sensor_reading_enumerator_default_values()['text']
def test_get_devices_informations_should_return_empty_lists_when_valid_product_key_and_device_keys_but_no_data():
hub_service_instance = HubService.get_instance()
with patch.object(
SensorRepository,
'get_sensors_by_product_key_and_device_keys'
) as get_sensors_by_product_key_and_device_keys_mock:
get_sensors_by_product_key_and_device_keys_mock.return_value = None
with patch.object(
ExecutiveDeviceRepository,
'get_executive_devices_by_product_key_and_device_keys'
) as get_executive_devices_by_product_key_and_device_keys_mock:
get_executive_devices_by_product_key_and_device_keys_mock.return_value = None
with patch.object(
BaseRepository,
'update_database'
) as update_database_mock:
update_database_mock.return_value = True
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
with patch.object(DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = Mock(spec=DeviceGroup)
result, result_values = hub_service_instance.get_devices_informations(
'product_key',
"test_password",
['test']
)
assert result == Constants.RESPONSE_MESSAGE_OK
assert result_values
assert len(result_values['sensors']) == 0
assert len(result_values['devices']) == 0
@pytest.mark.parametrize("product_key, devices, error_message", [
("test product key", None, Constants.RESPONSE_MESSAGE_DEVICE_KEYS_NOT_LIST),
("test product key", [], Constants.RESPONSE_MESSAGE_DEVICE_KEYS_NOT_LIST),
(None, [], Constants.RESPONSE_MESSAGE_PRODUCT_KEY_NOT_FOUND)])
def test_get_devices_informations_should_return_error_message_when_no_parameter(product_key, devices, error_message):
hub_service_instance = HubService.get_instance()
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
with patch.object(DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = Mock(spec=DeviceGroup)
result, result_values = hub_service_instance.get_devices_informations(
product_key,
"test_password",
devices
)
assert result == error_message
assert not result_values
def test_set_devices_states_should_return_update_info_when_called_with_right_parameters(create_device_group):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
devices_states = [
{
"deviceKey": "1",
"state": 1,
"isActive": False
}
]
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
ExecutiveDeviceService,
'set_device_state'
) as _set_device_state_mock:
_set_device_state_mock.return_value = True
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.set_devices_states(
device_group.product_key,
device_group.password,
devices_states)
assert result == Constants.RESPONSE_MESSAGE_UPDATED_SENSORS_AND_DEVICES
def test_set_devices_states_should_return_partial_success_message_when_called_with_right_parameters(
create_device_group):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
devices_states = [
{
"deviceKey": "",
"test": 1,
"isActive": False
}
]
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = device_group
with patch.object(
ExecutiveDeviceService,
'set_device_state'
) as _set_device_state_mock:
_set_device_state_mock.return_value = False
with patch.object(
LogService,
'log_exception'
) as log_exception_mock:
log_exception_mock.side_effects = None
with patch.object(HubService,
'is_authorization_correct'
) as is_authorization_correct_mock:
is_authorization_correct_mock.return_value = True
result = hub_service_instance.set_devices_states(
device_group.product_key,
device_group.password,
devices_states)
assert result == Constants.RESPONSE_MESSAGE_PARTIALLY_WRONG_DATA
def test_set_devices_states_should_return_product_key_error_when_called_with_wrong_product_key(
create_device_group):
hub_service_instance = HubService.get_instance()
test_product_key = "Test"
devices_states = [
{
"deviceKey": "",
"test": 1,
"isActive": False
}
]
with patch.object(
DeviceGroupRepository,
'get_device_group_by_product_key'
) as get_device_group_by_product_key_mock:
get_device_group_by_product_key_mock.return_value = None
result = hub_service_instance.set_devices_states(
test_product_key,
"test_password",
devices_states)
assert result == Constants.RESPONSE_MESSAGE_PRODUCT_KEY_NOT_FOUND
def test_set_devices_states_should_return_device_states_error_when_called_with_wrong_parameter(
create_device_group):
hub_service_instance = HubService.get_instance()
test_product_key = "Test"
devices_states = [
"test"
]
result = hub_service_instance.set_devices_states(
test_product_key,
"test_password",
devices_states)
assert result == Constants.RESPONSE_MESSAGE_DEVICE_STATES_NOT_LIST
def test_is_authorization_correct_should_return_true_if_right_authentication_is_passed(
create_device_group
):
hub_service_instance = HubService.get_instance()
device_group = create_device_group()
password = "password"
device_group.password = hashlib.sha224((password + Constants.SECRET_KEY).encode()).hexdigest()
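    # Build the HTTP Basic Authorization header: 'Basic ' + base64(product_key:password).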
authorization_bytes = (device_group.product_key + ":" + password).encode()
authorization = "Basic " + base64.b64encode(authorization_bytes).decode()
assert hub_service_instance.is_authorization_correct(device_group, authorization)
if __name__ == '__main__':
    pytest.main([__file__])
| 40.79186 | 119 | 0.668253 | 3,778 | 35,081 | 5.638962 | 0.044733 | 0.102751 | 0.037176 | 0.040556 | 0.844724 | 0.798723 | 0.761641 | 0.732398 | 0.711885 | 0.697662 | 0 | 0.002247 | 0.276788 | 35,081 | 859 | 120 | 40.839348 | 0.837452 | 0 | 0 | 0.702941 | 0 | 0 | 0.09307 | 0.04521 | 0 | 0 | 0 | 0 | 0.094118 | 1 | 0.033824 | false | 0.038235 | 0.027941 | 0 | 0.061765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c362317fb1a90ffd4ec214a749eb9194a1170957 | 128 | py | Python | gpnn/__init__.py | jiayixu64/graph-partition-neural-network-samples | 7a8bc3f51e7aedad10b072ba91424a7d621d19a9 | [
"MIT"
] | 53 | 2017-12-04T09:51:31.000Z | 2019-04-21T09:25:47.000Z | gpnn/__init__.py | jiayixu64/graph-partition-neural-network-samples | 7a8bc3f51e7aedad10b072ba91424a7d621d19a9 | [
"MIT"
] | 1 | 2019-02-05T12:03:51.000Z | 2019-02-05T12:03:51.000Z | gpnn/__init__.py | jiayixu64/graph-partition-neural-network-samples | 7a8bc3f51e7aedad10b072ba91424a7d621d19a9 | [
"MIT"
] | 27 | 2019-05-21T22:33:37.000Z | 2022-02-22T03:26:28.000Z | import gpnn.runner.planetoid_runner
import gpnn.reader.gpnn_reader
import gpnn.reader.gpnn_reader_custom
import gpnn.model.gpnn
| 25.6 | 37 | 0.875 | 20 | 128 | 5.4 | 0.35 | 0.37037 | 0.296296 | 0.37037 | 0.481481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 128 | 4 | 38 | 32 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
c37f108e6297515f69b74f321335c0124e2d1078 | 15,685 | py | Python | stress_test/questions_stress_test.py | rit1200/kairon | 674a491f6deeae4800825ca93e0726e4fb6e0866 | [
"Apache-2.0"
] | 97 | 2020-08-18T10:07:48.000Z | 2022-03-26T18:33:37.000Z | stress_test/questions_stress_test.py | rit1200/kairon | 674a491f6deeae4800825ca93e0726e4fb6e0866 | [
"Apache-2.0"
] | 276 | 2020-08-27T23:24:35.000Z | 2022-03-31T09:43:30.000Z | stress_test/questions_stress_test.py | rit1200/kairon | 674a491f6deeae4800825ca93e0726e4fb6e0866 | [
"Apache-2.0"
] | 46 | 2020-09-11T13:29:41.000Z | 2022-03-08T12:27:17.000Z | import inspect
import logging
from locust import HttpUser, SequentialTaskSet, between, task
from locust.exception import StopUser
class ExecuteTask(SequentialTaskSet):
"""
Load test for questions service.
locust -f stress_test/questions_stress_test.py --headless -u 1000 -r 100 --host=http://localhost:8000
u: number of users
r: rate at which users are spawned
host: base url where requests are hit
headless: run with CLI only
To run from UI:
locust -f stress_test/questions_stress_test.py -u 1000 -r 100 --host=http://localhost:8000
"""
wait_time = between(1, 2)
@task
def get_questions_1(self):
request_body = {"data": "where is digite located?"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_2(self):
request_body = {"data": "Can i get a glass of water?"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_3(self):
request_body = {"data": "This bag is full of apples. What about peaches?"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_4(self):
request_body = {"data": "I dont feel like doing any work because I am too lazy!!"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_5(self):
request_body = {
"data": "The weather is very humid today. Let's get to some place cooler. Can you tell me if its snowing?"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_6(self):
request_body = {"data": "I wake up at 3:00am. What time to you wake up daily?"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_7(self):
request_body = {
"data": "Today is 21st Aug 2020. I think he celebrates his birthday on 11th Dec. Maybe his father's anniversory is on 5-11-1995"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_8(self):
request_body = {
"data": "Give me $5 and i will give you ten notes of Rs. 500, rupees 2000, 100 rupees and ten dollars."}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_9(self):
request_body = {"data": "cannot believe my internet speed went from 200kb/s to 0.2MBps!!"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_10(self):
request_body = {
"data": "jan is my favorite month because it snows heavily and thats the reason i named my daughter jan"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_11(self):
request_body = {
"data": "How far away is SUN from earth ? 149,600,000 kilometers (km) or 92,900,000 miles !! whoaaa!!"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_12(self):
request_body = {"data": "Planning to go to uranous coz it has rains diamonds there."}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_13(self):
request_body = {
"data": "astrophotography is beautiful to see but not a child's play when actually doing it. correct?"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_14(self):
request_body = {
"data": "If two pieces of the same type of metal touch in space, they will bond and be permanently stuck together"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_15(self):
request_body = {"data": "water and mercury can exist in all the three states of matter"}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
@task
def get_questions_16(self):
request_body = {"data": "Going to work is more dangerous than going to war."}
with self.client.post("/questions",
json=request_body,
catch_response=True) as response:
if response.text is None or not response.text.strip():
logging.error(inspect.stack()[0][3] + " Failed: response is None")
response.failure(inspect.stack()[0][3] + " Failed: response is None")
else:
logging.info(inspect.stack()[0][3] + ": " + response.text)
response_data = response.json()
if not response_data["success"]:
logging.error(inspect.stack()[0][3] + " Failed: " + response_data['message'])
response.failure(inspect.stack()[0][3] + " Failed: " + response_data['message'])
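        # Every task in the sequence has run; stop this simulated user.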
raise StopUser()
class QuestioningUser(HttpUser):
tasks = [ExecuteTask]
| 53.715753 | 141 | 0.545553 | 1,762 | 15,685 | 4.770715 | 0.136776 | 0.114204 | 0.123721 | 0.133238 | 0.816203 | 0.811444 | 0.811444 | 0.811444 | 0.795265 | 0.795265 | 0 | 0.024317 | 0.323558 | 15,685 | 291 | 142 | 53.900344 | 0.767955 | 0.023143 | 0 | 0.839844 | 0 | 0.019531 | 0.183663 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.015625 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f438a925578a2cf6df1ee76f7737624eda48fc0 | 2,069 | py | Python | tests/test_cli.py | dtgoitia/abot | 9fe0721a9a40ed3f7f8a7f6b9db28d810e0f6794 | [
"MIT"
] | 2 | 2018-03-14T22:47:00.000Z | 2018-05-22T20:40:10.000Z | tests/test_cli.py | dtgoitia/abot | 9fe0721a9a40ed3f7f8a7f6b9db28d810e0f6794 | [
"MIT"
] | 14 | 2018-03-10T10:19:58.000Z | 2021-06-01T21:43:18.000Z | tests/test_cli.py | dtgoitia/abot | 9fe0721a9a40ed3f7f8a7f6b9db28d810e0f6794 | [
"MIT"
] | 1 | 2018-03-16T07:14:35.000Z | 2018-03-16T07:14:35.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import, print_function, unicode_literals
import unittest.mock as mock
import pytest
from abot import cli
# Integration tests
@pytest.mark.asyncio
async def test_simple_bot_async_command():
"""Test command through bot interface"""
message_mock = mock.MagicMock()
message_mock.text = 'bot ping'
@cli.group()
def acmds(): pass
ping_mock = mock.MagicMock()
@acmds.command()
async def ping(*args, **kwargs):
ping_mock(*args, **kwargs)
cmd_collection = cli.CommandCollection(sources=[acmds])
await cmd_collection.async_message(message_mock)
ping_mock.assert_called_once_with()
def test_simple_cli_async_command():
"""Test command through cli interface"""
@cli.group()
def acmds(): pass
ping_mock = mock.MagicMock()
@acmds.command()
async def ping(*args, **kwargs):
ping_mock(*args, **kwargs)
cmd_collection = cli.CommandCollection(sources=[acmds])
cmd_collection.main(['ping'], standalone_mode=False)
ping_mock.assert_called_once_with()
@pytest.mark.skip('Not supported: Sync command in cli interface')
def test_simple_cli_sync_command():
@cli.group()
def acmds(): pass
ping_mock = mock.MagicMock()
@acmds.command()
def ping(*args, **kwargs):
ping_mock(*args, **kwargs)
cmd_collection = cli.CommandCollection(sources=[acmds])
cmd_collection.main(['ping'], standalone_mode=False)
ping_mock.assert_called_once_with()
@pytest.mark.skip('Not supported: Sync command in bot interface')
def test_simple_bot_sync_command():
"""Test sync command through bot interface"""
message_mock = mock.MagicMock()
message_mock.text = 'bot ping'
@cli.group()
def acmds(): pass
ping_mock = mock.MagicMock()
@acmds.command()
async def ping(*args, **kwargs):
ping_mock(*args, **kwargs)
cmd_collection = cli.CommandCollection(sources=[acmds])
cmd_collection.async_message(message_mock)
ping_mock.assert_called_once_with()
| 22.988889 | 72 | 0.695022 | 261 | 2,069 | 5.260536 | 0.214559 | 0.06992 | 0.07429 | 0.046613 | 0.78587 | 0.752367 | 0.752367 | 0.752367 | 0.752367 | 0.752367 | 0 | 0.000591 | 0.182214 | 2,069 | 89 | 73 | 23.247191 | 0.810875 | 0.055582 | 0 | 0.72549 | 0 | 0 | 0.058885 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 1 | 0.156863 | false | 0.078431 | 0.078431 | 0 | 0.235294 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
6f4bec0d9fc5813b0e900e7f695749790eb29771 | 53,555 | py | Python | treasury/monthly_treasury_statement.py | areed1192/us-federal-treasury-python-api | 59969bfd865528072ffcedfd861aab2e0f9764ba | [
"MIT"
] | 4 | 2021-05-27T01:43:00.000Z | 2021-11-02T12:16:50.000Z | treasury/monthly_treasury_statement.py | areed1192/us-federal-treasury-python-api | 59969bfd865528072ffcedfd861aab2e0f9764ba | [
"MIT"
] | null | null | null | treasury/monthly_treasury_statement.py | areed1192/us-federal-treasury-python-api | 59969bfd865528072ffcedfd861aab2e0f9764ba | [
"MIT"
] | 1 | 2022-01-16T14:59:32.000Z | 2022-01-16T14:59:32.000Z | from typing import Dict
from typing import List
from treasury.session import FederalTreasurySession
class MonthlyTreasuryStatements():
"""
## Overview:
----
The Monthly Treasury Statement (MTS) dataset provides information on
the flow of money into and out of the U.S. Department of the Treasury.
It includes how deficits are funded, such as borrowing from the public
or reducing operating cash, and how surpluses are distributed. Further
tables categorize spending (outlays) by department and agency, revenue
(receipts) by specific taxes or other government sources of income, and
transactions with trust funds such as Social Security or Medicare. All
values are reported in millions of U.S. dollars.
"""
def __init__(self, session: FederalTreasurySession) -> None:
"""Initializes the `MonthlyTreasuryStatements` object.
### Parameters
----
session : `TreasurySession`
An initialized session of the `TreasurySession`.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
"""
# Set the session.
self.treasury_session: FederalTreasurySession = session
def __repr__(self) -> str:
"""String representation of the `FederalTreasuryClient.MonthlyTreasuryStatements` object."""
# define the string representation
str_representation = '<FederalTreasuryClient.MonthlyTreasuryStatements (active=True, connected=True)>'
return str_representation
def receipts_outlays_and_deficits(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Summary of Receipts, Outlays, and the Deficit/Surplus of
the U.S. Government.
### Overview
----
This summary table shows the total amount of receipts and outlays and the
amount of the budget surplus/deficit by month for the current and prior fiscal
years. This table includes total and subtotal rows that should be excluded
when aggregating data. Some rows represent elements of the dataset's hierarchy,
but are not assigned values. The classification_id for each of these elements
can be used as the parent_id for underlying data elements to calculate their
implied values. Subtotal rows are available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
returned from an API request
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.receipts_outlays_and_deficits()
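
        A filtered, sorted call is sketched below. The `field:operator:value`
        filter strings and the `-` prefix for descending sort follow the
        Fiscal Data API conventions; the `record_date` field name is an
        assumption used here for illustration.

        >>> monthly_treasury_service.receipts_outlays_and_deficits(
        ...     filters=['record_date:gte:2020-10-01'],
        ...     sort=['-record_date'],
        ...     page_size=50)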
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_1',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def budgets_and_financing(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Summary of Budget and Off-Budget Results and Financing
of the U.S. Government.
### Overview
----
This summary table shows the on-budget and off-budget receipts and outlays, the
on-budget and off-budget surplus/deficit, and the means of financing the budget
surplus/deficit. The table also shows the budgeted amounts estimated in the President's
Budget for the current fiscal year and next fiscal year for each item on the table.
This table includes total and subtotal rows that should be excluded when aggregating
data. Some rows represent elements of the dataset's hierarchy, but are not assigned
values. The classification_id for each of these elements can be used as the parent_id
for underlying data elements to calculate their implied values. Subtotal rows are
available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
returned from an API request
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.budgets_and_financing()
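
        To trim the payload, specific columns can be requested. The column
        names below are illustrative assumptions rather than a confirmed
        schema for this table.

        >>> monthly_treasury_service.budgets_and_financing(
        ...     fields=['record_date', 'classification_desc'],
        ...     page_size=25)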
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_2',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def receipts_and_outlays(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Summary of Receipts and Outlays of the U.S. Government.
### Overview
----
This summary table shows, for Budget Receipts, the total amount of activity
for the current month, the current fiscal year-to-date, the comparable prior
period year-to-date and the budgeted amount estimated for the current fiscal
        year for various types of receipts (e.g., individual income tax, corporate
        income tax). The Budget Outlays section of the table shows the total amount of
activity for the current month, the current fiscal year-to-date, the comparable
prior period year-to-date and the budgeted amount estimated for the current fiscal
        year for agencies of the federal government. The table also shows the amounts for
        the budget surplus/deficit categorized as listed above. This table includes total
and subtotal rows that should be excluded when aggregating data. Some rows represent
elements of the dataset's hierarchy, but are not assigned values. The
classification_id for each of these elements can be used as the parent_id for underlying
data elements to calculate their implied values. Subtotal rows are available to access
this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.receipts_and_outlays()
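        To sort newest-first, prefix the field with ``-`` (field name assumed):

        >>> # NB: 'record_date' is an illustrative field name
        >>> monthly_treasury_service.receipts_and_outlays(
        ...     sort=['-record_date']
        ... )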
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_3',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def receipts(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Receipts of the U.S. Government.
### Overview
----
This table shows the gross receipts, refunds and net receipts for the current month,
the current fiscal year-to-date and the prior fiscal year-to-date for the various
receipts of the federal government. This table includes total and subtotal rows
that should be excluded when aggregating data. Some rows represent elements of the
dataset's hierarchy, but are not assigned values. The classification_id for each of
these elements can be used as the parent_id for underlying data elements to calculate
their implied values. Subtotal rows are available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.receipts()
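        Paging through a larger result set, 500 rows at a time:

        >>> monthly_treasury_service.receipts(page_number=2, page_size=500)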
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_4',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def outlays(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Outlays of the U.S. Government.
### Overview
----
This table shows the gross outlays, applicable receipts and net outlays
for the current month, current fiscal year-to-date and prior fiscal
year-to-date by various agency programs accounted for in the budget of the
federal government. This table includes total and subtotal rows that should
be excluded when aggregating data. Some rows represent elements of the
dataset's hierarchy, but are not assigned values. The classification_id for
each of these elements can be used as the parent_id for underlying data elements
to calculate their implied values. Subtotal rows are available to access
this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.outlays()
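        A sketch combining a field selection with a date filter (field names
        assumed):

        >>> # NB: field names are illustrative assumptions
        >>> monthly_treasury_service.outlays(
        ...     fields=['record_date', 'classification_desc'],
        ...     filters=['record_date:gte:2021-01-01']
        ... )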
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_5',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def means_of_financing(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Means of Financing the Deficit or Disposition of
Surplus by the U.S. Government.
### Overview
----
This table shows the net transactions for the current month, and the current
and prior fiscal year-to-date, as well as account balances for the beginning
of the current fiscal year and current accounting month and the close of the
current accounting month. This activity is related to the means used to finance
the budget deficit or to dispose of a budget surplus. An asset account would
        represent an asset to the United States Government, for example, United States
        Treasury Operating Cash. A liability account would represent a liability to
        the United States Government, for example, Borrowing from the Public. This table
includes total and subtotal rows that should be excluded when aggregating data.
Some rows represent elements of the dataset's hierarchy, but are not assigned
values. The classification_id for each of these elements can be used as the
parent_id for underlying data elements to calculate their implied values.
Subtotal rows are available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.means_of_financing()
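        Restricting to a single calendar year with the ``eq`` operator
        (field name assumed):

        >>> # NB: 'record_calendar_year' is an illustrative field name
        >>> monthly_treasury_service.means_of_financing(
        ...     filters=['record_calendar_year:eq:2021']
        ... )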
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_6',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def analysis_change_in_liabilities(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Analysis of Change in Excess of Liabilities of
the U.S. Government.
### Overview
----
This table is a subsidiary table for Means of Financing the Deficit or
Disposition of Surplus by the U.S. Government providing a detailed view
of the Change in Excess of Liabilities. This table includes total and subtotal
rows that should be excluded when aggregating data. Some rows represent
elements of the dataset's hierarchy, but are not assigned values. The
classification_id for each of these elements can be used as the parent_id for
underlying data elements to calculate their implied values. Subtotal rows are
available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.analysis_change_in_liabilities()
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_6a',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def securities_issued_special_financing(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Securities Issued by Federal Agencies Under Special
Financing Authorities.
### Overview
----
This table is a subsidiary table for Means of Financing the Deficit or
Disposition of Surplus by the U.S. Government providing a detailed view
        of the transactions labelled Agency Securities, Issued Under Special
        Financing Authorities. Special financing authorities include financing that
is established by legislation under special or unique circumstances and for
a specific purpose. This table includes total and subtotal rows that should
be excluded when aggregating data. Some rows represent elements of the dataset's
hierarchy, but are not assigned values. The classification_id for each of these
elements can be used as the parent_id for underlying data elements to calculate
their implied values. Subtotal rows are available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.securities_issued_special_financing()
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_6b',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def borrowing_financed_treasury_securities(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Federal Agency Borrowing Financed Through the Issue
of Treasury Securities.
### Overview
----
This table is a subsidiary table for Means of Financing the Deficit or Disposition
of Surplus by the U.S. Government, providing a detailed view of transactions and
account balances for agency programs that borrow from the United States Treasury
or from the Federal Financing Bank. This table includes total and subtotal rows
that should be excluded when aggregating data. Some rows represent elements of the
dataset's hierarchy, but are not assigned values. The classification_id for each of
these elements can be used as the parent_id for underlying data elements to calculate
their implied values. Subtotal rows are available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.borrowing_financed_treasury_securities()
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_6c',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def investments_federal_securities(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Investments of Federal Government Accounts in Federal
Securities.
### Overview
----
This table is a subsidiary table for Means of Financing the Deficit or Disposition
of Surplus by the U.S. Government providing a detailed view of federal funds and
trust funds that are invested in Government Account Series (GAS) securities. Federal
funds include general funds, special funds, and revolving funds (public enterprise
revolving funds, intragovernmental revolving funds, and credit financing accounts).
A trust fund is a type of account, designated by law, for receipts or offsetting
receipts dedicated to specific purposes and the expenditure of these receipts.
This table includes total and subtotal rows that should be excluded when aggregating
data. Some rows represent elements of the dataset's hierarchy, but are not assigned
values. The classification_id for each of these elements can be used as the parent_id
for underlying data elements to calculate their implied values. Subtotal rows are
available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.investments_federal_securities()
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_6d',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def direct_loan_financing(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Guaranteed and Direct Loan Financing, Net Activity.
### Overview
----
This table is a subsidiary table for Means of Financing the Deficit or Disposition
of Surplus by the U.S. Government providing a detailed view of all direct and
guaranteed loan financing for federal credit programs under the Credit Reform Act
of 1990. Guaranteed loan financing is issuing any debt obligation with a guarantee,
insurance, or other pledge that payment of all or a part of the principal or interest
will be made to the lender. This table applies to lending to non-federal borrowers by
non-federal lenders that carries some form of guarantee by the federal government.
Exceptions include the insurance of deposits, shares, or other withdrawable accounts in
financial institutions. This table includes total and subtotal rows that should be excluded
when aggregating data. Some rows represent elements of the dataset's hierarchy, but are not
assigned values. The classification_id for each of these elements can be used as the parent_id
for underlying data elements to calculate their implied values. Subtotal rows are available
to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.direct_loan_financing()
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_6e',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def receipts_and_outlays_by_month(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Receipts and Outlays of the U.S. Government by Month.
### Overview
----
This table shows the receipts and outlays of the United States Government
by month for the current fiscal year, up to and including the current
accounting month. The table also shows the total receipts and outlays for the
current fiscal year-to-date and the comparable prior fiscal year-to-date. This
table includes total and subtotal rows that should be excluded when aggregating data.
Some rows represent elements of the dataset's hierarchy, but are not assigned values.
The classification_id for each of these elements can be used as the parent_id for
underlying data elements to calculate their implied values. Subtotal rows are available
to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.receipts_and_outlays_by_month()
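        A sketch combining a fiscal-year filter with an ascending date sort
        (field names assumed):

        >>> # NB: field names are illustrative assumptions
        >>> monthly_treasury_service.receipts_and_outlays_by_month(
        ...     filters=['record_fiscal_year:eq:2021'],
        ...     sort=['record_date']
        ... )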
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_7',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def trust_fund_impact(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Trust Fund Impact on Budget Results and Investment Holdings.
### Overview
----
This table shows the total receipts and outlays and the resulting surplus
or deficit (shown on the table as excess) for the current month and the current
fiscal year-to-date for all federal trust funds. The table also shows the totals
for securities held as investments by the federal trust funds for the beginning
of the fiscal year and the beginning and ending of the current accounting month.
A trust fund is a type of account, designated by law, for receipts or offsetting
receipts dedicated to specific purposes and the expenditure of these receipts.
This table includes total and subtotal rows that should be excluded when aggregating
data. Some rows represent elements of the dataset's hierarchy, but are not assigned
values. The classification_id for each of these elements can be used as the parent_id
for underlying data elements to calculate their implied values. Subtotal rows are
available to access this same information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.trust_fund_impact()
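        The ``in`` operator accepts a parenthesized list of values (field name
        assumed):

        >>> # NB: 'record_calendar_month' is an illustrative field name
        >>> monthly_treasury_service.trust_fund_impact(
        ...     filters=['record_calendar_month:in:(06,07,08)']
        ... )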
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_8',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
def receipts_by_source_outlay_by_function(
self,
fields: List[str] = None,
sort: List[str] = None,
filters: List[str] = None,
page_number: int = 1,
page_size: int = 100
) -> Dict:
"""Summary of Receipts by Source, and Outlays by Function of the
U.S. Government.
### Overview
----
This summary table shows, for Budget Receipts, the total amount of activity
for the current month, the current fiscal year-to-date, the comparable prior
period year-to-date and the budgeted amount estimated for the current fiscal
        year for various types of receipts (e.g., individual income tax, corporate
        income tax). The Budget Outlays section of the table shows the total amount of
activity for the current month, the current fiscal year-to-date, the comparable
prior period year-to-date and the budgeted amount estimated for the current fiscal
        year for functions of the federal government. The table also shows the amounts for
        the budget surplus/deficit categorized as listed above. This table includes total
and subtotal rows that should be excluded when aggregating data. Some rows represent
elements of the dataset's hierarchy, but are not assigned values. The classification_id
for each of these elements can be used as the parent_id for underlying data elements
to calculate their implied values. Subtotal rows are available to access this same
information.
### Parameters
----
fields : List[str] (optional, Default=None)
The fields parameter allows you to select which field(s) should be
included in the response. If desired fields are not specified, all
fields will be returned.
sort : List[str] (optional, Default=None)
The sort parameter allows a user to sort a field in ascending (least
to greatest) or descending (greatest to least) order. When no sort parameter
is specified, the default is to sort by the first column listed. Most API
endpoints are thus sorted by date in ascending order (historical to most
current).
filters : List[str] (optional, Default=None)
Filters are used to view a subset of the data based on specific
criteria. For example, you may want to find data that falls within
a certain date range, or only show records which contain a value
larger than a certain threshold. When no filters are provided,
the default response will return all fields and all data.
page_number : int (optional, Default=1)
The page number will set the index for the pagination, starting
at 1. This allows the user to paginate through the records
            returned from an API request.
page_size : int (optional, Default=100)
The page size will set the number of rows that are returned
on a request.
### Returns
----
Dict
A collection of `Records` resources.
### Usage
----
>>> treasury_client = FederalTreasuryClient()
>>> monthly_treasury_service = treasury_client.monthly_treasury_statement()
>>> monthly_treasury_service.receipts_by_source_outlay_by_function()
"""
if fields:
fields = ','.join(fields)
if filters:
filters = ','.join(filters)
if sort:
sort = ','.join(sort)
content = self.treasury_session.make_request(
method='get',
endpoint='/v1/accounting/mts/mts_table_9',
params={
'format': 'json',
'page[number]': page_number,
'page[size]': page_size,
'fields': fields,
'sort': sort,
'filters': filters
}
)
return content
| 41.038314 | 110 | 0.609803 | 6,482 | 53,555 | 4.982259 | 0.058007 | 0.018207 | 0.014306 | 0.028611 | 0.888559 | 0.874005 | 0.866171 | 0.86196 | 0.858895 | 0.85812 | 0 | 0.004391 | 0.328186 | 53,555 | 1,304 | 111 | 41.069785 | 0.893215 | 0.669909 | 0 | 0.832041 | 0 | 0 | 0.105376 | 0.039123 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041344 | false | 0 | 0.007752 | 0 | 0.090439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f63163ae5e641054f46f6f1c62b1250e7a95d40 | 136 | py | Python | psm/aux_functions/__init__.py | amoodie/PRYSM | 99cf02145e3df0aad8b4d9103ab6c081d97bff1f | [
"MIT"
] | 21 | 2015-07-20T21:37:14.000Z | 2022-02-14T04:10:11.000Z | psm/aux_functions/__init__.py | amoodie/PRYSM | 99cf02145e3df0aad8b4d9103ab6c081d97bff1f | [
"MIT"
] | 8 | 2015-10-08T16:37:04.000Z | 2019-01-04T21:02:09.000Z | psm/aux_functions/__init__.py | amoodie/PRYSM | 99cf02145e3df0aad8b4d9103ab6c081d97bff1f | [
"MIT"
] | 13 | 2015-01-30T10:05:14.000Z | 2022-02-05T21:56:21.000Z | import psm.aux_functions.analytical_error
import psm.aux_functions.butter_lowpass_filter
import psm.aux_functions.analytical_err_simple
| 34 | 46 | 0.911765 | 20 | 136 | 5.8 | 0.55 | 0.232759 | 0.310345 | 0.543103 | 0.534483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044118 | 136 | 3 | 47 | 45.333333 | 0.892308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 9 |
48d69ba4db21e1c48d68e468694ae6cff9be746f | 12,734 | py | Python | playnetmano_rm/tests/unit/api/test_quota_manager.py | rickyhai11/playnetmano_rm | 50c94ab3babaeb5a441aa24f129784e3f140a349 | [
"Apache-2.0"
] | null | null | null | playnetmano_rm/tests/unit/api/test_quota_manager.py | rickyhai11/playnetmano_rm | 50c94ab3babaeb5a441aa24f129784e3f140a349 | [
"Apache-2.0"
] | null | null | null | playnetmano_rm/tests/unit/api/test_quota_manager.py | rickyhai11/playnetmano_rm | 50c94ab3babaeb5a441aa24f129784e3f140a349 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import webtest
from oslo_config import cfg
from playnetmano_rm.api.controllers import quota_manager
from playnetmano_rm.common import config
from playnetmano_rm.rpc import client as rpc_client
from playnetmano_rm.tests.unit.api.testroot import KBApiTest
config.register_options()
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
class Result(object):
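    """Lightweight stand-in for a quota DB record returned by the mocked db_api."""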
def __init__(self, project_id, resource, hard_limit):
self.project_id = project_id
self.resource = resource
self.hard_limit = hard_limit
class TestQuotaManager(KBApiTest):
def setUp(self):
super(TestQuotaManager, self).setUp()
cfg.CONF.set_override('admin_tenant', 'fake_tenant_id',
group='cache')
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_get_all_admin(self, mock_db_api):
Res = Result('tenant_1', 'ram', 100)
mock_db_api.quota_get_all_by_project.return_value = \
{"project_id": Res.project_id,
Res.resource: Res.hard_limit}
response = self.app.get(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
headers={'X_ROLE': 'admin'})
self.assertEqual(response.status_int, 200)
self.assertEqual({'quota_set': {'project_id': 'tenant_1', 'ram': 100}},
eval(response.text))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_get_default_admin(self, mock_db_api):
mock_db_api.quota_class_get_default.return_value = \
{'class_name': 'default'}
response = self.app.get(
'/v1.0/fake_tenant_id/os-quota-sets/defaults',
headers={'X_ROLE': 'admin'})
self.assertEqual(response.status_int, 200)
result = eval(response.text)
for resource in result['quota_set']:
self.assertEqual(
cfg.CONF.kingbird_global_limit['quota_' + resource],
result['quota_set'][resource])
@mock.patch.object(rpc_client, 'EngineClient')
def test_get_usages_admin(self, mock_rpc_client):
expected_usage = {"ram": 10}
mock_rpc_client().get_total_usage_for_tenant.return_value = \
expected_usage
response = self.app.get(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1/detail',
headers={'X_ROLE': 'admin'})
self.assertEqual(response.status_int, 200)
self.assertEqual(eval(response.body), {"quota_set": expected_usage})
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_put_admin(self, mock_db_api):
Res = Result('tenant_1', 'cores', 10)
mock_db_api.quota_update.return_value = Res
data = {"quota_set": {Res.resource: Res.hard_limit}}
response = self.app.put_json(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
headers={'X-Tenant-Id': 'fake_tenant', 'X_ROLE': 'admin'},
params=data)
self.assertEqual(response.status_int, 200)
self.assertEqual({Res.project_id: {Res.resource: Res.hard_limit}},
eval(response.text))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_delete_admin(self, mock_db_api):
Res = Result('tenant_1', 'cores', 10)
mock_db_api.quota_destroy.return_value = Res
data = {"quota_set": [Res.resource]}
response = self.app.delete_json(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
headers={'X-Tenant-Id': 'fake_tenant', 'X_ROLE': 'admin'},
params=data)
self.assertEqual(response.status_int, 200)
self.assertEqual({'Deleted quota limits': [Res.resource]},
eval(response.text))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_delete_all_admin(self, mock_db_api):
Res = Result('tenant_1', 'cores', 10)
mock_db_api.quota_destroy_all.return_value = Res
response = self.app.delete_json(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
headers={'X-Tenant-Id': 'fake_tenant', 'X_ROLE': 'admin'})
self.assertEqual(response.status_int, 200)
self.assertEqual('Deleted all quota limits for the given project',
eval(response.text))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_quota_sync_admin(self):
response = self.app.put_json(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1/sync',
headers={'X-Tenant-Id': 'fake_tenant',
'X_ROLE': 'admin'})
self.assertEqual(response.status_int, 200)
self.assertEqual("triggered quota sync for tenant_1",
eval(response.text))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_put_nonadmin(self):
Res = Result('tenant_1', 'cores', 10)
data = {"quota_set": {Res.resource: Res.hard_limit}}
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.put_json('/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
                              headers={'X-Tenant-Id': 'fake_tenant'},
                              params=data)
        self.assertIn('Admin required', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_delete_all_nonadmin(self):
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.delete_json('/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
                                 headers={'X-Tenant-Id': 'fake_tenant'})
        self.assertIn('Admin required', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_delete_nonadmin(self):
Res = Result('tenant_1', 'cores', 10)
data = {"quota_set": {Res.resource: Res.hard_limit}}
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.delete_json('/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
                                 headers={'X-Tenant-Id': 'fake_tenant'},
                                 params=data)
        self.assertIn('Admin required', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_quota_sync_nonadmin(self):
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.put_json(
                '/v1.0/fake_tenant_id/os-quota-sets/tenant_1/sync',
                headers={'X-Tenant-Id': 'fake_tenant'})
        self.assertIn('Admin required', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_get_all_nonadmin(self, mock_db_api):
Res = Result('tenant_1', 'ram', 100)
mock_db_api.quota_get_all_by_project.return_value = \
{"project_id": Res.project_id,
Res.resource: Res.hard_limit}
response = self.app.get(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
headers={'X_TENANT_ID': 'fake_tenant', 'X_USER_ID': 'nonadmin'})
self.assertEqual(response.status_int, 200)
self.assertEqual({'quota_set': {'project_id': 'tenant_1', 'ram': 100}},
eval(response.text))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_get_default_nonadmin(self, mock_db_api):
mock_db_api.quota_class_get_default.return_value = \
{'class_name': 'default'}
response = self.app.get(
'/v1.0/fake_tenant_id/os-quota-sets/defaults',
headers={'X_TENANT_ID': 'fake_tenant', 'X_USER_ID': 'nonadmin'})
self.assertEqual(response.status_int, 200)
result = eval(response.text)
for resource in result['quota_set']:
self.assertEqual(
cfg.CONF.kingbird_global_limit['quota_' + resource],
result['quota_set'][resource])
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_quota_sync_bad_request(self):
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.post_json(
                '/v1.0/fake_tenant_id/os-quota-sets/tenant_1/sync',
                headers={'X-Tenant-Id': 'fake_tenant',
                         'X_ROLE': 'admin'})
        self.assertIn('Bad response: 404 Not Found', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_put_invalid_payload(self, mock_db_api):
Res = Result('tenant_1', 'cores', 10)
mock_db_api.quota_update.return_value = Res
data = {'quota': {Res.resource: Res.hard_limit}}
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.put_json(
                '/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
                headers={'X-Tenant-Id': 'fake_tenant', 'X_ROLE': 'admin'},
                params=data)
        self.assertIn('400 Bad Request', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_put_invalid_input(self, mock_db_api):
Res = Result('tenant_1', 'cores', -10)
mock_db_api.quota_update.return_value = Res
data = {"quota_set": {Res.resource: Res.hard_limit}}
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.put_json(
                '/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
                headers={'X-Tenant-Id': 'fake_tenant', 'X_ROLE': 'admin'},
                params=data)
        self.assertIn('400 Bad Request', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
@mock.patch.object(quota_manager, 'db_api')
def test_delete_invalid_quota(self, mock_db_api):
Res = Result('tenant_1', 'invalid_quota', 10)
mock_db_api.quota_destroy.return_value = Res
data = {"quota_set": [Res.resource]}
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.delete_json(
                '/v1.0/fake_tenant_id/os-quota-sets/tenant_1',
                headers={'X-Tenant-Id': 'fake_tenant', 'X_ROLE': 'admin'},
                params=data)
        self.assertIn('The resource could not be found', str(cm.exception))
@mock.patch.object(rpc_client, 'EngineClient')
def test_get_usages_nonadmin(self, mock_rpc_client):
expected_usage = {"ram": 10}
mock_rpc_client().get_total_usage_for_tenant.return_value = \
expected_usage
response = self.app.get(
'/v1.0/fake_tenant_id/os-quota-sets/tenant_1/detail',
headers={'X_TENANT_ID': 'fake_tenant', 'X_USER_ID': 'nonadmin'})
self.assertEqual(response.status_int, 200)
self.assertEqual(eval(response.body), {"quota_set": expected_usage})
@mock.patch.object(rpc_client, 'EngineClient', new=mock.Mock())
def test_quota_sync_bad_action(self):
        with self.assertRaises(webtest.app.AppError) as cm:
            self.app.put_json(
                '/v1.0/fake_tenant_id/os-quota-sets/tenant_1/syncing',
                headers={'X-Tenant-Id': 'fake_tenant',
                         'X_ROLE': 'admin'})
        self.assertIn('Invalid action, only sync is allowed', str(cm.exception))
| 45.805755 | 79 | 0.628711 | 1,628 | 12,734 | 4.662776 | 0.11855 | 0.047425 | 0.057305 | 0.045053 | 0.8132 | 0.807799 | 0.807799 | 0.796996 | 0.786458 | 0.784613 | 0 | 0.015277 | 0.244385 | 12,734 | 277 | 80 | 45.971119 | 0.773644 | 0.04861 | 0 | 0.742616 | 0 | 0 | 0.199421 | 0.072975 | 0 | 0 | 0 | 0 | 0.122363 | 1 | 0.088608 | false | 0 | 0.033755 | 0 | 0.130802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
48f21bb402bcee2575d8f46d32d3f3b6f67eba47 | 4,912 | py | Python | djangophysics/units/filters.py | fmeurou/django-physics | 0f67efa1b6bd547e0b80191e7a2624c2c971bdc0 | [
"MIT"
] | 1 | 2021-06-15T20:51:45.000Z | 2021-06-15T20:51:45.000Z | djangophysics/units/filters.py | fmeurou/django-physics | 0f67efa1b6bd547e0b80191e7a2624c2c971bdc0 | [
"MIT"
] | null | null | null | djangophysics/units/filters.py | fmeurou/django-physics | 0f67efa1b6bd547e0b80191e7a2624c2c971bdc0 | [
"MIT"
] | 1 | 2021-12-01T00:01:29.000Z | 2021-12-01T00:01:29.000Z | """
Units module API filters
"""
from django.db.models import QuerySet
from django_filters import rest_framework as filters
from .models import CustomUnit, CustomDimension
class CustomUnitFilter(filters.FilterSet):
"""
Filter on custom units
"""
    user = filters.BooleanFilter(
        label="filter units associated to connected user",
        method='user_filter')
    key = filters.CharFilter(
        label="filter units with key",
        method='key_filter')
unit_system = filters.CharFilter(
label="filter by unit system",
field_name='unit_system',
lookup_expr='iexact')
code = filters.CharFilter(
label="filter by code",
field_name='code',
lookup_expr='iexact')
name = filters.CharFilter(
label="filter by name",
field_name='name',
lookup_expr='icontains')
relation = filters.CharFilter(
label="filter by relation",
field_name='relation',
lookup_expr='icontains')
symbol = filters.CharFilter(
label="filter by symbol",
field_name='symbol',
lookup_expr='iexact')
    alias = filters.CharFilter(
        label="filter by alias",
        field_name='alias',
        lookup_expr='icontains')
ordering = filters.OrderingFilter(
# tuple-mapping retains order
fields=(
('key', 'key'),
('unit_system', 'unit_system'),
('code', 'code'),
('name', 'name'),
('relation', 'relation'),
('symbol', 'symbol'),
('alias', 'alias'),
),
)
class Meta:
"""
Meta
"""
model = CustomUnit
fields = [
'user', 'key',
'unit_system', 'code', 'name',
'relation', 'symbol', 'alias'
]
def user_filter(self, queryset: QuerySet,
name: str, value: str) -> QuerySet:
"""
Filter on request user
"""
if self.request and self.request.user and \
self.request.user.is_authenticated:
return queryset.filter(**{
'user': self.request.user,
})
return queryset.filter(user__isnull=True)
def key_filter(self, queryset: QuerySet,
name: str, value: str) -> QuerySet:
"""
Filter on key if request.user is set and authenticated
"""
if self.request and self.request.user and \
self.request.user.is_authenticated:
return queryset.filter(**{
'user': self.request.user,
'key': value
})
return queryset.filter(user__isnull=True)
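# A minimal wiring sketch (assumed DRF viewset, not part of this module): these
# FilterSets are typically attached to a view via django-filter's DRF backend.
#
#     from django_filters.rest_framework import DjangoFilterBackend
#     from rest_framework import viewsets
#
#     class CustomUnitViewSet(viewsets.ModelViewSet):
#         queryset = CustomUnit.objects.all()
#         filter_backends = [DjangoFilterBackend]
#         filterset_class = CustomUnitFilter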
class CustomDimensionFilter(filters.FilterSet):
"""
    Filter on custom dimensions
"""
    user = filters.BooleanFilter(
        label="filter dimensions associated to connected user",
        method='user_filter')
    key = filters.CharFilter(
        label="filter dimensions with key",
        method='key_filter')
unit_system = filters.CharFilter(
label="filter by unit system",
field_name='unit_system',
lookup_expr='iexact')
code = filters.CharFilter(
label="filter by code",
field_name='code',
lookup_expr='iexact')
name = filters.CharFilter(
label="filter by name",
field_name='name',
lookup_expr='icontains')
relation = filters.CharFilter(
label="filter by relation",
field_name='relation',
lookup_expr='icontains')
ordering = filters.OrderingFilter(
# tuple-mapping retains order
fields=(
('key', 'key'),
('unit_system', 'unit_system'),
('code', 'code'),
('name', 'name'),
('relation', 'relation'),
),
)
class Meta:
"""
Meta
"""
model = CustomDimension
fields = [
'user', 'key',
'unit_system', 'code', 'name',
'relation'
]
def user_filter(self, queryset: QuerySet,
name: str, value: str) -> QuerySet:
"""
Filter on request user
"""
if self.request and self.request.user and \
self.request.user.is_authenticated:
return queryset.filter(**{
'user': self.request.user,
})
return queryset.filter(user__isnull=True)
def key_filter(self, queryset: QuerySet,
name: str, value: str) -> QuerySet:
"""
Filter on key if request.user is set and authenticated
"""
if self.request and self.request.user and \
self.request.user.is_authenticated:
return queryset.filter(**{
'user': self.request.user,
'key': value
})
return queryset.filter(user__isnull=True) | 28.55814 | 62 | 0.541735 | 478 | 4,912 | 5.460251 | 0.156904 | 0.067433 | 0.068966 | 0.118008 | 0.846743 | 0.835249 | 0.835249 | 0.835249 | 0.805364 | 0.805364 | 0 | 0.000618 | 0.341002 | 4,912 | 172 | 63 | 28.55814 | 0.805684 | 0.05965 | 0 | 0.811024 | 0 | 0 | 0.15485 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031496 | false | 0 | 0.031496 | 0 | 0.283465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5b17eed8e72510620cbd034ff1e9de2553760fca | 12,006 | py | Python | tests/test_resolver.py | lumikanta/connexion | b6530d32aaee92ebbdfef501540d642a26185174 | [
"Apache-2.0"
] | 1 | 2019-01-12T19:03:29.000Z | 2019-01-12T19:03:29.000Z | tests/test_resolver.py | lumikanta/connexion | b6530d32aaee92ebbdfef501540d642a26185174 | [
"Apache-2.0"
] | 1 | 2019-01-14T22:55:44.000Z | 2019-01-14T22:55:44.000Z | tests/test_resolver.py | lumikanta/connexion | b6530d32aaee92ebbdfef501540d642a26185174 | [
"Apache-2.0"
] | 1 | 2019-01-14T19:55:06.000Z | 2019-01-14T19:55:06.000Z | import connexion.apps
import pytest
from connexion.exceptions import ResolverError
from connexion.operations import Swagger2Operation
from connexion.resolver import Resolver, RestyResolver
PARAMETER_DEFINITIONS = {'myparam': {'in': 'path', 'type': 'integer'}}
def test_standard_get_function():
function = Resolver().resolve_function_from_operation_id('connexion.FlaskApp.common_error_handler')
assert function == connexion.FlaskApp.common_error_handler
def test_resty_get_function():
function = RestyResolver('connexion').resolve_function_from_operation_id('connexion.FlaskApp.common_error_handler')
assert function == connexion.FlaskApp.common_error_handler
def test_missing_operation_id():
# Missing operationIDs should result in a well-defined error that can
# be handled upstream.
with pytest.raises(ResolverError):
Resolver().resolve_function_from_operation_id(None)
with pytest.raises(ResolverError):
RestyResolver('connexion').resolve_function_from_operation_id(None)
def test_bad_operation_id():
# Unresolvable operationIDs should result in a well-defined error that can
# be handled upstream.
with pytest.raises(ResolverError):
Resolver().resolve_function_from_operation_id('ohai.I.do.not.exist')
with pytest.raises(ResolverError):
RestyResolver('connexion').resolve_function_from_operation_id('ohai.I.do.not.exist')
def test_standard_resolve_x_router_controller():
operation = Swagger2Operation(api=None,
method='GET',
path='endpoint',
path_parameters=[],
operation={
'x-swagger-router-controller': 'fakeapi.hello',
'operationId': 'post_greeting',
},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=Resolver())
assert operation.operation_id == 'fakeapi.hello.post_greeting'
def test_resty_resolve_operation_id():
operation = Swagger2Operation(api=None,
method='GET',
path='endpoint',
path_parameters=[],
operation={
'operationId': 'fakeapi.hello.post_greeting',
},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.post_greeting'
def test_resty_resolve_x_router_controller_with_operation_id():
operation = Swagger2Operation(api=None,
method='GET',
path='endpoint',
path_parameters=[],
operation={
'x-swagger-router-controller': 'fakeapi.hello',
'operationId': 'post_greeting',
},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.post_greeting'
def test_resty_resolve_x_router_controller_without_operation_id():
operation = Swagger2Operation(api=None,
method='GET',
path='/hello/{id}',
path_parameters=[],
operation={'x-swagger-router-controller': 'fakeapi.hello'},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.get'
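# When no operationId is supplied, RestyResolver derives one from the path and
# HTTP method: GET on a collection path ('/hello') resolves to '<module>.search',
# GET with a path parameter ('/hello/{id}') to '<module>.get', and POST to
# '<module>.post'; the collection default is configurable (see 'api_list' below).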
def test_resty_resolve_with_default_module_name():
operation = Swagger2Operation(api=None,
method='GET',
path='/hello/{id}',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.get'
def test_resty_resolve_with_default_module_name_lowercase_verb():
operation = Swagger2Operation(api=None,
method='get',
path='/hello/{id}',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.get'
def test_resty_resolve_with_default_module_name_will_translate_dashes_in_resource_name():
operation = Swagger2Operation(api=None,
method='GET',
path='/foo-bar',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.foo_bar.search'
def test_resty_resolve_with_default_module_name_can_resolve_api_root():
operation = Swagger2Operation(api=None,
method='GET',
path='/',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.get'
def test_resty_resolve_with_default_module_name_will_resolve_resource_root_get_as_search():
operation = Swagger2Operation(api=None,
method='GET',
path='/hello',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.search'
def test_resty_resolve_with_default_module_name_and_x_router_controller_will_resolve_resource_root_get_as_search():
operation = Swagger2Operation(api=None,
method='GET',
path='/hello',
path_parameters=[],
operation={
'x-swagger-router-controller': 'fakeapi.hello',
},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.search'
def test_resty_resolve_with_default_module_name_will_resolve_resource_root_as_configured():
operation = Swagger2Operation(api=None,
method='GET',
path='/hello',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi', 'api_list'))
assert operation.operation_id == 'fakeapi.hello.api_list'
def test_resty_resolve_with_default_module_name_will_resolve_resource_root_post_as_post():
operation = Swagger2Operation(api=None,
method='POST',
path='/hello',
path_parameters=[],
operation={},
app_produces=['application/json'],
app_consumes=['application/json'],
app_security=[],
security_definitions={},
definitions={},
parameter_definitions=PARAMETER_DEFINITIONS,
resolver=RestyResolver('fakeapi'))
assert operation.operation_id == 'fakeapi.hello.post'
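
# A summary sketch (not part of the original test module) of the RestyResolver
# naming convention that the assertions above exercise; the mapping below is
# inferred from the tests themselves:
#
#   GET  /hello/{id}  ->  fakeapi.hello.get      (item-level GET)
#   GET  /hello       ->  fakeapi.hello.search   (collection GET, default 'search')
#   POST /hello       ->  fakeapi.hello.post
#   GET  /            ->  fakeapi.get            (API root uses the bare module)
#   GET  /foo-bar     ->  fakeapi.foo_bar.search (dashes become underscores)
#
# RestyResolver('fakeapi', 'api_list') swaps the collection-GET suffix, giving
# fakeapi.hello.api_list instead of fakeapi.hello.search.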
| 50.445378 | 119 | 0.468682 | 808 | 12,006 | 6.634901 | 0.112624 | 0.093266 | 0.080582 | 0.073867 | 0.900205 | 0.89293 | 0.882858 | 0.870173 | 0.860847 | 0.850215 | 0 | 0.00197 | 0.450358 | 12,006 | 237 | 120 | 50.658228 | 0.810426 | 0.015159 | 0 | 0.78 | 0 | 0 | 0.1061 | 0.028598 | 0 | 0 | 0 | 0 | 0.07 | 1 | 0.08 | false | 0 | 0.025 | 0 | 0.105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8267dbcfc3efd92ac79e6a4ee0a092a52cf6c9dd | 9,030 | py | Python | applications/office_panel/tests/test_views.py | szypkiwonsz/Physiotherapy-Management-System | 36decab47890e2f4be259c8796f47324ffad28fe | [
"MIT"
] | null | null | null | applications/office_panel/tests/test_views.py | szypkiwonsz/Physiotherapy-Management-System | 36decab47890e2f4be259c8796f47324ffad28fe | [
"MIT"
] | 8 | 2020-08-17T14:36:02.000Z | 2022-03-12T00:33:50.000Z | applications/office_panel/tests/test_views.py | szypkiwonsz/Physiotherapy-Management-System | 36decab47890e2f4be259c8796f47324ffad28fe | [
"MIT"
] | null | null | null | from django.test import TestCase, Client
from django.urls import reverse
from applications.office_panel.models import Patient
from applications.users.models import User, UserOffice
class TestHomeViews(TestCase):

    def setUp(self):
        self.client = Client()
        self.office_home_url = reverse('office_panel:home')
        self.patient1 = User.objects.create_user(
            'patient', 'patient@gmail.com', 'patientpassword', is_patient=True
        )
        self.office_user1 = User.objects.create_user(
            'office', 'office@gmail.com', 'officepassword', is_office=True
        )
        self.office1 = UserOffice.objects.create(
            user=self.office_user1,
            name='name',
            address='address',
            city='City',
            phone_number='000000000',
            website='www.website.com'
        )

    def test_office_home_GET_not_logged_in(self):
        response = self.client.get(self.office_home_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/home.html')

    def test_office_home_GET_logged_as_patient(self):
        self.client.login(username='patient@gmail.com', password='patientpassword')
        response = self.client.get(self.office_home_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/home.html')

    def test_office_home_GET_logged_as_office(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.get(self.office_home_url)
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'office_panel/home.html')


class TestPatientViews(TestCase):

    def setUp(self):
        self.patient_url = reverse('office_panel:patients')
        self.create_patient_url = reverse('office_panel:patient_add')
        self.detail_patient_url = reverse('office_panel:patient_detail', args=[1])
        self.update_patient_url = reverse('office_panel:patient_update', args=[1])
        self.delete_patient_url = reverse('office_panel:patient_delete', args=[1])
        self.timetable_url = reverse('office_panel:appointments:timetable')
        self.patient1 = User.objects.create_user(
            'patient', 'patient@gmail.com', 'patientpassword', is_patient=True
        )
        self.office1 = User.objects.create_user(
            'office', 'office@gmail.com', 'officepassword', is_office=True
        )
        self.office_patient1 = Patient.objects.create(
            owner=self.office1,
            first_name='firstname',
            last_name='lastname',
            email='patient@gmail.com',
        )

    def test_patient_list_GET_not_logged_in(self):
        response = self.client.get(self.patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patients.html')

    def test_patient_list_GET_logged_as_patient(self):
        self.client.login(username='patient@gmail.com', password='patientpassword')
        response = self.client.get(self.patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patients.html')

    def test_patient_list_GET_logged_as_office(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.get(self.patient_url)
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'office_panel/patient/patients.html')

    def test_patient_create_GET_not_logged_in(self):
        response = self.client.get(self.create_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_add_form.html')

    def test_patient_create_GET_logged_as_patient(self):
        self.client.login(username='patient@gmail.com', password='patientpassword')
        response = self.client.get(self.create_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_add_form.html')

    def test_patient_create_GET_logged_as_office(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.get(self.create_patient_url)
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'office_panel/patient/patient_add_form.html')

    def test_patient_create_POST(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.post(self.create_patient_url, {
            'first_name': 'firstname',
            'last_name': 'lastname',
            'email': 'patient2@gmail.com',
            'phone_number': '000000000'
        })
        office_patient2 = Patient.objects.get(id=2)
        self.assertEquals(response.status_code, 302)
        self.assertEquals(office_patient2.first_name, 'Firstname')

    def test_patient_detail_GET_not_logged_in(self):
        response = self.client.get(self.detail_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_detail_form.html')

    def test_patient_detail_GET_logged_as_patient(self):
        self.client.login(username='patient@gmail.com', password='patientpassword')
        response = self.client.get(self.detail_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_detail_form.html')

    def test_patient_detail_GET_logged_as_office(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.get(self.detail_patient_url)
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'office_panel/patient/patient_detail_form.html')

    def test_patient_update_GET_not_logged_in(self):
        response = self.client.get(self.update_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_update_form.html')

    def test_patient_update_GET_logged_as_patient(self):
        self.client.login(username='patient@gmail.com', password='patientpassword')
        response = self.client.get(self.update_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_update_form.html')

    def test_patient_update_GET_logged_as_office(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.get(self.update_patient_url)
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'office_panel/patient/patient_update_form.html')

    def test_patient_update_POST(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.post(self.update_patient_url, {
            'first_name': self.office_patient1.first_name,
            'last_name': self.office_patient1.last_name,
            'email': 'newpatientemail@gmail.com',
            'phone_number': '000000000'
        })
        office_patient_update = Patient.objects.get(id=1)
        self.assertEquals(response.status_code, 302)
        self.assertEquals(office_patient_update.email, 'newpatientemail@gmail.com')

    def test_patient_delete_GET_not_logged_in(self):
        response = self.client.get(self.delete_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_delete_confirm.html')

    def test_patient_delete_GET_logged_as_patient(self):
        self.client.login(username='patient@gmail.com', password='patientpassword')
        response = self.client.get(self.delete_patient_url)
        self.assertEquals(response.status_code, 302)
        self.assertTemplateNotUsed(response, 'office_panel/patient/patient_delete_confirm.html')

    def test_patient_delete_GET_logged_as_office(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response = self.client.get(self.delete_patient_url)
        self.assertEquals(response.status_code, 200)
        self.assertTemplateUsed(response, 'office_panel/patient/patient_delete_confirm.html')

    def test_patient_delete_POST(self):
        self.client.login(username='office@gmail.com', password='officepassword')
        response_with_post = self.client.get(self.delete_patient_url)
        self.assertEquals(response_with_post.status_code, 200)
        response = self.client.post(self.delete_patient_url)
        response_with_deleted_post = self.client.get(self.delete_patient_url)
        self.assertEquals(response_with_deleted_post.status_code, 404)
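
# A hypothetical invocation sketch (not part of the original module): Django
# tests like these are normally run through the project's manage.py, e.g.
#   python manage.py test applications.office_panel.tests.test_views
# The exact dotted path depends on how the project registers its apps.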
| 44.70297 | 96 | 0.717829 | 1,071 | 9,030 | 5.782446 | 0.081232 | 0.062974 | 0.085258 | 0.054901 | 0.853544 | 0.821088 | 0.786533 | 0.772969 | 0.772969 | 0.749717 | 0 | 0.014915 | 0.175858 | 9,030 | 201 | 97 | 44.925373 | 0.817253 | 0 | 0 | 0.496774 | 0 | 0 | 0.194352 | 0.101772 | 0 | 0 | 0 | 0 | 0.270968 | 1 | 0.148387 | false | 0.122581 | 0.025806 | 0 | 0.187097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
7dc6237b1fcb9f861af556f4dcbcbd1a6c49be96 | 39,587 | py | Python | noisy_skyline_implementation.py | Saffr0n1/Iterative-Noisy-Skyline-Comparisons | eb2d3a7d68641be1cf7487560282bf0457e67cfa | [
"MIT"
] | null | null | null | noisy_skyline_implementation.py | Saffr0n1/Iterative-Noisy-Skyline-Comparisons | eb2d3a7d68641be1cf7487560282bf0457e67cfa | [
"MIT"
] | null | null | null | noisy_skyline_implementation.py | Saffr0n1/Iterative-Noisy-Skyline-Comparisons | eb2d3a7d68641be1cf7487560282bf0457e67cfa | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Noisy Skyline Implementation
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1WF8UXH8EeQFBPR5nhNmrQ0IBtdNSAmhZ
"""
import random
import math
import time         # used by brute_force() below
import numpy as np  # used by brute_force() before the later re-import
def oracle(v1, v2, dim, deltaMain):
    # return v1[dim] >= v2[dim] with error probability deltaMain
    v1_comp = v1[dim]
    v2_comp = v2[dim]
    truthful = random.random()
    if v1_comp >= v2_comp:
        return True if truthful > deltaMain else False
    else:
        return False if truthful > deltaMain else True
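
# Quick empirical sanity check for the noisy oracle (a sketch, not part of the
# original notebook): with error rate 0.2 the observed flip frequency should
# be close to 0.2.
# flips = sum(oracle((1,), (2,), 0, 0.2) for _ in range(10000))
# print(flips / 10000)  # roughly 0.2, since (1,) < (2,) makes True an error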
def BoostProb(command, p, q, i, deltaMain, delta1, delta2):
    num_true = 0
    num_false = 0
    num_calls = 0
    while num_true - num_false < math.log(1/delta1) and num_false - num_true < math.log(1/delta2):
        if command == "dominates":
            query, calls = Dominates(p, q, deltaMain)
            num_calls += calls
        elif command == "oracle":
            query = not oracle(q, p, i, deltaMain)
            num_calls += 1
        else:
            return False, num_calls
        if query:
            num_true += 1
        else:
            num_false += 1
    if num_true - num_false >= math.log(1/delta1):
        return True, num_calls
    else:
        return False, num_calls
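
# BoostProb is a random-walk style sequential test: it keeps querying the noisy
# primitive until the vote count drifts log(1/delta1) above zero (answer True)
# or log(1/delta2) below zero (answer False), trading extra oracle calls for a
# smaller failure probability. A sketch of the thresholds involved (assumed
# interpretation of the stopping rule):
# d1, d2 = 0.01, 0.1
# print(math.log(1 / d1), math.log(1 / d2))  # ~4.61 net votes up vs ~2.30 down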
def brute_force(s, delta, error):
    start = time.time()
    n = len(s)
    dims = len(s[0])
    sorted_i = []
    optimal = []
    num_calls = 0
    for i in range(dims):
        s_i, calls = msort2(s, i, error)
        num_calls += calls
        sorted_i.append(s_i)
    changed = True
    while changed:
        changed = False
        for i in range(dims):
            compl = False
            curr = -1
            while not compl and len(sorted_i[i]) > 0:
                dominated, calls = SetDominates(optimal, sorted_i[i][curr], delta/2, delta/2, error)
                num_calls += calls
                if not np.any(dominated):
                    optimal.append(sorted_i[i][curr])
                    changed = True
                    sorted_i[i].pop(curr)
                else:
                    compl = True
    # check internally:
    new_optimal = []
    for i in range(len(optimal)):
        sublist = optimal[:i] + optimal[i + 1:]
        dominated, calls = SetDominates(sublist, optimal[i], delta/2, delta/2, error)
        num_calls += calls
        if not dominated:
            new_optimal.append(optimal[i])
    end = time.time()
    return new_optimal, end - start, num_calls
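
# brute_force is the reference implementation: sort the points on every
# coordinate with the noisy merge sort, then repeatedly promote any point that
# the current optimal set fails to dominate, and finally prune the candidates
# against each other. It returns (skyline, wall-clock seconds, oracle calls).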
def msort2(x, dim, error):
    if len(x) < 2:
        return x, 0
    num_calls = 0
    result = []
    mid = int(len(x) / 2)
    y, calls1 = msort2(x[:mid], dim, error)
    z, calls2 = msort2(x[mid:], dim, error)
    num_calls += calls1
    num_calls += calls2
    while (len(y) > 0) and (len(z) > 0):
        comp, calls = BoostProb("oracle", z[0], y[0], dim, error, 1/(16*len(y)), 1/16)
        num_calls += calls
        if not comp:
            result.append(z[0])
            z.pop(0)
        else:
            result.append(y[0])
            y.pop(0)
    result += y
    result += z
    return result, num_calls
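
# Usage sketch (assumed, mirroring the trial blocks below): noisy merge sort on
# one coordinate. With error=0 this reduces to an ordinary ascending sort.
# ordered, n_calls = msort2([(3, 1), (1, 2), (2, 0)], dim=0, error=0.0)
# print(ordered)  # [(1, 2), (2, 0), (3, 1)] when the oracle never lies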
v1 = (1, 2)
v2 = (2, 1)
trials = 1000
num_correct = 0
p = 0.3
d1 = 0.01
d2 = 0.1
dim = 1
for _ in range(trials):
    output, calls = BoostProb("dominates", v1, v2, dim, p, d1, d2)
    if output == False:
        num_correct += 1
# print([Dominates(v1, v2, p)[0] for _ in range(5)])
print(num_correct/trials, 1 - d2 if (v1[dim] >= v2[dim]) else 1 - d1)

v1 = (1, 2)
v2 = (2, 1)
trials = 10000
num_correct = 0
p = 0.3
d1 = 0.01
d2 = 0.1
dim = 1
for _ in range(trials):
    # unpack the (bool, calls) tuple; comparing the tuple itself to a bool
    # would always be False
    cond, calls = BoostProb("oracle", v1, v2, dim, p, d1, d2)
    if cond == (v1[dim] >= v2[dim]):
        num_correct += 1
print(num_correct/trials, 1 - d2 if (v1[dim] >= v2[dim]) else 1 - d1)

v1 = (1, 2)
v2 = (2, 1)
trials = 10000
num_correct = 0
p = 0.1
d1 = 0.01
d2 = 0.1
dim = 0
for _ in range(trials):
    cond, calls = BoostProb("oracle", v1, v2, dim, p, d1, d2)
    if cond == (v1[dim] >= v2[dim]):
        num_correct += 1
print(num_correct/trials, 1 - d2 if (v1[dim] >= v2[dim]) else 1 - d1)

v1 = (2, 1)
v2 = (1, 2)
trials = 10000
num_correct = 0
p = 0.1
d1 = 0.01
d2 = 0.1
dim = 0
for _ in range(trials):
    cond, calls = BoostProb("oracle", v1, v2, dim, p, d1, d2)
    if cond == (v1[dim] >= v2[dim]):
        num_correct += 1
print(num_correct/trials, 1 - d2 if (v1[dim] >= v2[dim]) else 1 - d1)
def Dominates(p, q, deltaMain):
    num_calls = 0
    for i in range(0, len(p)):
        cond, calls = BoostProb("oracle", q, p, i, deltaMain, 1/(16*len(p)), 1/16)
        num_calls += calls
        if cond:
            # print(q, p, i)
            return False, num_calls
    return True, num_calls
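
# Dominates(p, q) tests "p dominates q": it returns False as soon as the
# boosted oracle believes q[i] > p[i] in some coordinate. A tiny sketch:
# cond, n = Dominates((2, 2), (1, 1), 0.1)
# print(cond)  # True with high probability, since (2, 2) >= (1, 1) in every dim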
import numpy as np

v1 = (1, 1)
v2 = (1, 1)
print(np.all([v1[i] >= v2[i] for i in range(len(v2))]))
trials = 10000
num_correct = 0
p = 0.3
for _ in range(trials):
    output, calls = Dominates(v1, v2, p)
    # print(output)
    if output == np.all([v1[i] >= v2[i] for i in range(len(v2))]):
        num_correct += 1
print(1 - num_correct/trials, 1/16)

v1 = (2, 1)
v2 = (3, 3)
print(np.all([v1[i] >= v2[i] for i in range(len(v2))]))
trials = 10000
num_correct = 0
p = 0.3
for _ in range(trials):
    output, calls = Dominates(v1, v2, p)
    # print(output)
    if output == np.all([v1[i] >= v2[i] for i in range(len(v2))]):
        num_correct += 1
print(1 - num_correct/trials, 1/16)

v1 = (2, 2)
v2 = (1.5, 1.5)
print(np.all([v1[i] >= v2[i] for i in range(len(v2))]))
trials = 10
num_correct = 0
p = 0.3
for _ in range(trials):
    output, calls = Dominates(v1, v2, p)
    # print(output)
    if output == np.all([v1[i] >= v2[i] for i in range(len(v2))]):
        num_correct += 1
print(1 - num_correct/trials, 1/16)

v1 = (1, 1)
v2 = (5, 3)
print(np.all([v1[i] >= v2[i] for i in range(len(v2))]))
trials = 100
num_correct = 0
p = 0.3
d1 = 0.1
d2 = 0.1
for _ in range(trials):
    output, calls = BoostProb("dominates", v1, v2, 0, p, d1, d2)
    # print(output)
    if output == np.all([v1[i] >= v2[i] for i in range(len(v2))]):
        num_correct += 1
print(1 - num_correct/trials, 1/16)
def SetDominates(S, q, delta1, delta2, deltaMain):
    num_calls = 0
    for i in range(len(S)):
        cond, calls = BoostProb("dominates", S[i], q, 0, deltaMain, delta1/len(S), delta2)
        num_calls += calls
        if cond:
            print(i, S[i], q)
            return True, num_calls
    return False, num_calls
s1 = [(1, 1)]
v2 = (5, 3)
trials = 1000
num_correct = 0
p = 0.1
d1 = 0.01
d2 = 0.1
for _ in range(trials):
    cond, calls = SetDominates(s1, v2, d1, d2, p)  # was s2, which is undefined
    if not cond:
        num_correct += 1
print(num_correct/trials)

s1 = [(1, 1), (3, 5)]
v2 = (5, 3)
trials = 10
num_correct = 0
p = 0.8
d1 = 0.01
d2 = 0.1
for _ in range(trials):
    # unpack: SetDominates returns (bool, calls) and the tuple is always truthy
    cond, calls = SetDominates(s1, v2, d1, d2, p)
    if not cond:
        num_correct += 1
print(num_correct/trials)

s1 = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (99, 1), (97, 5)]
v2 = (1, 99)
trials = 10
num_correct = 0
p = 0.8
d1 = 0.01
d2 = 0.1
for _ in range(trials):
    cond, calls = SetDominates(s1, v2, d1, d2, p)  # was s2, which is undefined
    if not cond:
        num_correct += 1
print(num_correct/trials)
def Lex(p, q, deltaMain):
    num_calls = 0
    for i in range(0, len(p)):
        cond1, calls1 = BoostProb("oracle", p, q, i, deltaMain, 1/(32*len(p)), 1/32)
        num_calls += calls1
        if cond1:
            return True, num_calls
        else:
            cond2, calls2 = BoostProb("oracle", q, p, i, deltaMain, 1/(32*len(p)), 1/32)
            num_calls += calls2
            if cond2:
                return False, num_calls
    return True, num_calls
v1 = (1, 2)
v2 = (2, 1)
trials = 10000
num_correct = 0
p = 0.7
d1 = 0.4
d2 = 0.4
for _ in range(trials):
    # unpack: Lex returns (bool, calls); testing the tuple is always truthy
    cond, calls = Lex(v2, v1, p)
    if cond:
        num_correct += 1
print(num_correct/trials)
def argmax_lex(a):
    return max(enumerate(a), key=lambda pair: pair[1])[0]

def argmax_rand(a):
    b = np.array(a)
    return np.random.choice(np.flatnonzero(b == b.max()))
def MaxLex(p, S, delta, deltaMain, use_argmax_lex=True, use_update=(1, 0.5, 1, -2), expected=None, use_cond=False):
    if len(S) == 1:
        return S[0], 0
    c = []
    for i in range(0, len(S)):
        c.append(math.log(1/delta))
    compl = False
    num_calls = 0
    if use_argmax_lex:
        argmax = lambda x: argmax_lex(x)
    else:
        argmax = lambda x: argmax_rand(x)
    rounds = 0
    if expected:
        ind = S.index(expected)
        prev = c[ind]
        num_increased = 0
    while not compl:
        q1Star = argmax(c)
        q1 = S[q1Star]
        cStar = c[:q1Star] + c[q1Star + 1:]
        q2Star = argmax(cStar)
        q2Star = q2Star + 1 if q2Star >= q1Star else q2Star
        q2 = S[q2Star]
        cond1, calls1 = Lex(q1, q2, delta)
        num_calls += calls1
        if cond1:
            x = q1
            xStar = q1Star
            y = q2Star
        else:
            x = q2
            xStar = q2Star
            y = q1Star
        c[y] = c[y] - use_update[0]
        cond2, calls2 = Dominates(x, p, deltaMain)
        num_calls += calls2
        if cond2:
            c[xStar] = c[xStar] + use_update[1]
        else:
            c[xStar] = c[xStar] - use_update[2]
        cond = (c[q2Star] <= use_update[3])
        if len(c) > 2 and use_cond:
            remaining = c[:min(q1Star, q2Star)] + c[min(q1Star, q2Star)+1:max(q1Star, q2Star)] + c[max(q1Star, q2Star)+1:]
            cond = cond and np.all([v <= -2 for v in remaining])
        if cond:
            compl = True
        rounds += 1
        if expected:
            curr = c[ind]
            if curr == prev + use_update[1]:
                num_increased += 1
            prev = curr
    if expected:
        # diagnostics are only meaningful when tracking an expected winner
        print(c, S)
        print(num_increased/rounds)
    return S[argmax(c)], num_calls
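
# MaxLex keeps a confidence counter per candidate: each round the two current
# leaders are compared lexicographically, the loser loses use_update[0], and
# the winner gains use_update[1] only if it also dominates the query point p,
# otherwise it loses use_update[2]; the loop stops once the runner-up falls to
# use_update[3]. A usage sketch (assumed, mirroring the blocks below):
# best, n_calls = MaxLex((3, 5), [(5, 3), (3, 3), (5, 7), (6, 1)], 0.05, 0.1)
# print(best)  # (5, 7) with high probability: it dominates (3, 5)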
x = [(5,3), (3, 3), (5, 7), (6, 1)]
v1 = (3, 5)
expected = (5, 7)
lexexpected = (9,1)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(5,3), (3, 3), (5, 7), (6, 1)]
v1 = (3, 5)
expected = (5, 7)
lexexpected = (9,1)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected, use_cond = False)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(7,3), (3, 3), (5, 7), (6, 1)]
v1 = (3, 5)
expected = (5, 7)
lexexpected = (6,1)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected, use_cond = False)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(8,3), (3, 5), (5, 7), (9, 1)]
v1 = (3, 5)
expected = (5, 7)
lexexpected = (9,1)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(8,3), (3, 5), (5, 7), (9, 1)]
v1 = (3, 5)
expected = (5, 7)
lexexpected = (9,1)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected, use_update = (1, 1, 1.1, -2))
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(8,3), (6,6), (3, 5), (5, 7)]
v1 = (3, 5)
expected = (6, 6)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set((8,3)):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(8,3), (3, 5), (1, 5), (5, 1)]
v1 = (3, 5)
expected = (3, 5)
lexcorrect = (8, 3)
trials = 1000
p = 0.
expected_p = 0.01
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected, use_update = (1, 4, 1.1, -2))
print(output)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexcorrect):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(8,3), (3, 5), (1, 5), (5, 1)]
v1 = (3, 5)
expected = (3, 5)
lexcorrect = (8, 3)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected, use_update=(1, 1, 1, -2))
print(output)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexcorrect):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(8,3), (7,7), (3, 5)]
v1 = (3, 5)
expected = (7, 7)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set((8,3)):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (1, 99)
expected = (1, 99)
lexexpected = (99, 1)
trials = 1000
p = 0.3
expected_p = 0.1
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected, use_update = (1, 4, 3, -2), use_cond = True)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (1, 99)
expected = (1, 99)
lexexpected = (99, 1)
trials = 1000
p = 0.1
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected, use_update = (1, 4, 1.1, -2), use_cond = True)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (97, 5)
expected = (97, 5)
lexexpected = (99, 1)
trials = 1000
p = 0.1
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected, use_update = (1, 1, 1, -2), use_cond = True)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (97, 5)
expected = (97, 5)
lexexpected = (99, 1)
trials = 1000
p = 0.3
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, expected=expected, use_cond = True)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set(lexexpected):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (1, 99)
expected = (1, 99)
trials = 1000
p = 0.
expected_p = 0.05
wrong_answers = set([])
num_correct = 0
num_lexcorrect = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, expected=expected, use_cond = False)
if set(output) == set(expected):
num_correct += 1
elif set(output) == set((99, 1)):
num_lexcorrect += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
print(1 - num_lexcorrect/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (1, 99)
expected = (1, 99)
trials = 1000
p = 0.2
expected_p = 0.1
wrong_answers = set([])
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, use_update = (1, 1, 1, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, use_update = (1, 1, 1, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
x = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (1, 99)
expected = (1, 99)
trials = 500
values = []
for one in range(1, 11):
for two in range(1, 21):
for three in range(1, 31):
p = 0.2
expected_p = 0.1
wrong_answers = set([])
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, use_update = (one, two, three, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print("args;", one, two, three)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
values.append((1 - num_correct/trials, expected_p, wrong_answers, one, two, three))
x = [(1, 1), (0.5, 0.5), (0.25, 0.25), (0.1, 0.1), (0.01, 0.01), (2, 2), (1, 99), (99, 1)]
v1 = (1, 99)
trials = 100
expected = (1, 99)
num_correct = 0
p = 0.1
expected_p = 0.1
wrong_answers = set([])
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, use_update = (1, 1, 1, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, use_update = (1, 1, 1, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
x = [(1, 1), (2, 2), (1, 99), (99, 1)]  # was assigned to s, but the loops below copy from x
v1 = (1, 99)
expected = (1, 99)
trials = 1000
p = 0.2
expected_p = 0.1
wrong_answers = set([])
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True, use_update = (1, 1, 1, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
num_correct = 0
for _ in range(trials):
s = x.copy()
random.shuffle(s)
output, calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = False, use_update = (1, 1, 1, -2))
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
s = [(1, 1), (3, 3), (5, 3), (3, 5)]
v1 = (3, 3)
trials = 1000
expected = (5,3)
num_correct = 0
p = 0.3
expected_p = 0.1
wrong_answers = set([])
for _ in range(trials):
output, num_calls = MaxLex(v1, s, expected_p, p, use_argmax_lex = True)
if set(output) == set(expected):
num_correct += 1
else:
wrong_answers.add(output)
print(wrong_answers)
print(1 - num_correct/trials, expected_p)
s = [(1, 1), (2, 2), (3, 5), (5, 3), (7, 7), (1, 99), (99, 1), (97, 5)]
v1 = (1, 99)
trials = 10
num_correct = 0
p = 0.1
d1 = 0.4
d2 = 0.4
for _ in range(trials):
output, calls = MaxLex(v1, s, 0.01, p, use_argmax_lex = False, )
if output == (1, 99):
num_correct += 1
else:
print(output)
print(num_correct/trials)
def SkylineHighDim(k, X, delta, deltaMain, use_argmax_lex=True, use_update=(1, 0.5, 1, -2)):
    S = []
    C = X.copy()
    num_calls = 0
    for i in range(1, k+1):
        # Finding a point p not dominated by current skyline points
        found = False
        while len(C) > 0 and not found:
            p = C[random.randint(0, len(C) - 1)]
            cond1, calls1 = SetDominates(S, p, delta/(4*k), delta/(4*k*len(X)), deltaMain)
            num_calls += calls1
            if not cond1:
                print(p, S, "not dominated")
                found = True
            else:
                print(p, S, "dominated")
                C.remove(p)
        # print(C)
        if len(C) == 0:
            return S, num_calls
        else:
            # Finding a skyline point that dominates p
            pStar, calls2 = MaxLex(p, C, delta/(2*k), deltaMain, use_argmax_lex=use_argmax_lex, use_update=use_update)
            num_calls += calls2
            C.remove(pStar)
            print(pStar, C)
            S.append(pStar)
    return S, num_calls
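
# A usage sketch (assumed): ask for up to k skyline points of a small set.
# sky, n_calls = SkylineHighDim(4, [(1, 1), (3, 5), (5, 3)], 0.1, 0.1)
# print(sky)  # [(3, 5), (5, 3)] in some order, with high probability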
s = [(1, 1), (3, 5), (5, 3)]
expected = s[-2:]
k = 4
delta = 0.1
deltaMain = 0.1
trials = 10
num_correct = 0
total_calls = []
wrong_answers = set()
for i in range(trials):
X = s.copy()
random.shuffle(X)
# print("trial", i)
output, num_calls = SkylineHighDim(k,X,delta, deltaMain, use_argmax_lex = True, use_update = (1, 0.5, 1, -2))
total_calls.append(num_calls)
# print(output)
if set(expected) == set(output):
num_correct += 1
else:
wrong_answers.add(tuple(output))
print(num_correct / (trials + 1), 1 - deltaMain, np.mean(total_calls), np.max(total_calls))
print(wrong_answers)
num_correct = 0
total_calls = []
wrong_answers = set()
for i in range(trials):
X = s.copy()
random.shuffle(X)
# print("trial", i)
output, num_calls = SkylineHighDim(k,X,delta, deltaMain, use_argmax_lex = False, use_update = (1, 0.5, 1, -2))
total_calls.append(num_calls)
# print(output)
if set(expected) == set(output):
num_correct += 1
else:
wrong_answers.add(tuple(output))
print(num_correct / (trials + 1), 1 - deltaMain, np.mean(total_calls), np.max(total_calls))
print(wrong_answers)
num_correct = 0
total_calls = []
wrong_answers = set()
for i in range(trials):
X = s.copy()
random.shuffle(X)
# print("trial", i)
output, num_calls = SkylineHighDim(k,X,delta, deltaMain, use_argmax_lex = True, use_update = (1, 1, 1, -2))
total_calls.append(num_calls)
# print(output)
if set(expected) == set(output):
num_correct += 1
else:
wrong_answers.add(tuple(output))
print(num_correct / (trials + 1), 1 - deltaMain, np.mean(total_calls), np.max(total_calls))
print(wrong_answers)
num_correct = 0
total_calls = []
wrong_answers = set()
for i in range(trials):
X = s.copy()
random.shuffle(X)
# print("trial", i)
output, num_calls = SkylineHighDim(k,X,delta, deltaMain, use_argmax_lex = False, use_update = (1, 0.5, 1, -2))
total_calls.append(num_calls)
# print(output)
if set(expected) == set(output):
num_correct += 1
else:
wrong_answers.add(tuple(output))
print(num_correct / (trials + 1), 1 - deltaMain, np.mean(total_calls), np.max(total_calls))
print(wrong_answers)
def is_dominated_noiseless(a, b):
    # a is dominated by b
    return np.all([a[i] <= b[i] for i in range(len(b))])

def brute_force_noiseless(s):
    n = len(s)
    dims = len(s[0])
    sorted_i = []
    optimal = []
    for i in range(dims):
        s_i = msort2_noiseless(s, i)
        sorted_i.append(s_i)
        max_i = s_i[-1][i]
        optima_i = []
        compl = False
        curr = -1
        while not compl and len(s_i) > 0:
            if s_i[curr][i] == max_i:
                optima_i.append(s_i[curr])
                s_i.pop(-1)
            else:
                compl = True
        if i == 0:
            optimal = optima_i
        else:
            for marg in optima_i:
                if marg not in optimal:
                    optimal.append(marg)
    changed = True
    while changed:
        changed = False
        for i in range(dims):
            compl = False
            curr = -1
            # scan the per-dimension sorted list (was the stale s_i left over
            # from the loop above, matching the noisy brute_force variant)
            while not compl and len(sorted_i[i]) > 0:
                dominated = [is_dominated_noiseless(sorted_i[i][curr], x) for x in optimal]
                if not np.any(dominated):
                    optimal.append(sorted_i[i][curr])
                    changed = True
                    sorted_i[i].pop(-1)
                else:
                    compl = True
    # check internally:
    new_optimal = []
    for i in range(len(optimal)):
        sublist = optimal[:i] + optimal[i + 1:]
        if not np.any([is_dominated_noiseless(optimal[i], x) for x in sublist]):
            new_optimal.append(optimal[i])
    return new_optimal
def msort2_noiseless(x, dim):
    if len(x) < 2:
        return x
    result = []
    mid = int(len(x) / 2)
    y = msort2_noiseless(x[:mid], dim)
    z = msort2_noiseless(x[mid:], dim)
    while (len(y) > 0) and (len(z) > 0):
        if y[0][dim] > z[0][dim]:
            result.append(z[0])
            z.pop(0)
        else:
            result.append(y[0])
            y.pop(0)
    result += y
    result += z
    return result
X = [tuple(x) for x in [[1, 1], [2, 2], [3, 5], [5, 3], [7, 7], [1, 99], [99, 1], [97, 5]]]
expected = brute_force_noiseless(X)
print(expected)

X = [tuple(x) for x in [[1, 1], [2, 2], [3, 5], [5, 3], [7, 7], [1, 99], [99, 1], [97, 5]]]
expected = X[-4:]
# print(expected)
k = 8
delta = 0.1
deltaMain = 0.1
trials = 100
num_correct = 0
total_calls = []
hamming_dist = []
for i in range(trials):
    X = [tuple(x) for x in [[1, 1], [2, 2], [3, 5], [5, 3], [7, 7], [1, 99], [99, 1], [97, 5]]]
    expected = X[-4:]
    # print(X)
    # print(expected)
    # print("trial", i)
    output, num_calls = SkylineHighDim(k, X, delta, deltaMain)
    total_calls.append(num_calls)
    hamming_dist.append(len(set(output) ^ set(expected)))
    # print(output)
    if set(expected) == set(output):
        num_correct += 1
    else:
        print(output)
print(num_correct / (trials + 1), 1 - deltaMain, np.mean(total_calls), np.max(total_calls), np.mean(hamming_dist), np.max(hamming_dist))
import numpy as np
import random
import statistics as stats  # was "import stats"; statistics.mode supplies the majority vote used below
def oracle_max(tup, dim, error):
    v1, v2 = tup
    v1_comp = v1[dim]
    v2_comp = v2[dim]
    truthful = random.random()
    if v1_comp <= v2_comp:
        return 0 if truthful < error else 1
    else:
        return 1 if truthful < error else 0
def Lex(p, q, deltaMain):
    num_calls = 0
    for i in range(0, len(p)):
        cond1, calls1 = BoostProb("oracle", p, q, i, deltaMain, 1/(32*len(p)), 1/32)
        num_calls += calls1
        if cond1:
            return True, num_calls
        else:
            cond2, calls2 = BoostProb("oracle", q, p, i, deltaMain, 1/(32*len(p)), 1/32)
            num_calls += calls2
            if cond2:
                return False, num_calls
    return True, num_calls
def max_4(s, dim, delta, error):
    num_checks = int(2 * len(s) - (1/6)/(error) + 1)
    num_calls = 0
    if num_checks % 2 == 0:
        num_checks += 1
    if len(s) == 0:
        return None, 0
    if len(s) == 1:
        return s[0], 0
    else:
        curr = s[0]
        for i in range(1, len(s)):
            temp, calls = Lex(curr, s[i], delta / 2)
            num_calls += calls
            # Lex(curr, s[i]) is False when s[i] wins lexicographically; the
            # original kept the oracle_max convention (nonzero = second wins)
            if not temp:
                curr = s[i]
        return curr, num_calls

def find_max(s, dim, delta, error):
    s = list(s)
    size = len(s)
    if size == 0:
        return None, 0
    if size == 1:
        return s[0], 0
    # partition s into groups of at most 4
    s2 = []
    start = 0
    num_calls = 0
    while start + 4 < size:
        subset = s[start: start + 4]
        max_picked, calls = max_4(subset, dim, delta, error)
        s2.append(max_picked)
        num_calls += calls
        start += 4
    subset = s[start: size]
    max_picked, calls = max_4(subset, dim, delta, error)
    num_calls += calls
    s2.append(max_picked)
    mx, calls = find_max(s2, dim, delta / 2, error)
    return mx, num_calls + calls
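
# find_max is a tournament: blocks of at most four candidates are reduced with
# max_4, then the block winners recurse with a halved failure budget. A sketch
# (assumed; a small nonzero error avoids the division by zero in num_checks):
# mx, n = find_max([(1, 2), (4, 0), (3, 9)], dim=0, delta=0.05, error=0.01)
# print(mx)  # (4, 0), the lexicographic maximum, with high probability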
def is_dominated(v, C, delta, error):
    dims = len(v)
    num_checks = int(np.log(1/delta) * 2)
    num_calls = 0
    if num_checks % 2 == 0:
        num_checks += 1
    for c in C:
        dominated = np.zeros(dims)
        comp = (v, c)
        for dim in range(dims):
            max_i = stats.mode([oracle_max(comp, dim, error) for _ in range(num_checks)])
            num_calls += num_checks
            dominated[dim] = max_i
        if np.all(dominated == 1):
            return True, num_calls
    return False, num_calls
def skysample(khat, s, delta, error, use_argmax_lex=None, use_update=None):
    assert len(s) > 0
    sky = []
    dims = len(s[0])
    remaining = set(s)
    num_calls = 0
    for i in range(khat):
        # find non-dominated points
        to_remove = []
        for r in remaining:
            comp, calls = is_dominated(r, sky, delta, error)
            num_calls += calls
            if comp:
                to_remove.append(r)
        for r in to_remove:
            remaining.remove(r)
        if len(remaining) > 0:
            remaining = list(remaining)
            z, calls = MaxLex(remaining[0], remaining, delta/2, error)
            num_calls += calls
            sky.append(z)
            remaining = set(remaining)
            remaining.remove(z)
    return sky, num_calls
def skyline_computation(s, delta, error, alg, use_argmax_lex=None, use_update=None):
    i = 1
    k = 4
    compl = False
    num_calls = 0
    while not compl:
        r, calls = alg(k, s, delta/(2**i), error, use_argmax_lex=use_argmax_lex, use_update=use_update)
        num_calls += calls
        if len(r) < k:
            compl = True
        else:
            i += 1
            k = k**2
    return r, num_calls
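
# Guess-and-double strategy: run the sampler with budget k, and if it returns
# a full k points the true skyline may be larger, so square k, tighten delta,
# and retry. A usage sketch (assumed):
# sky, n = skyline_computation([(1, 1), (2, 3), (3, 2)], 0.05, 0.1, skysample)
# print(sky)  # [(2, 3), (3, 2)] in some order, with high probability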
trials = 100
dims = 6
data_num = 6
X = [tuple(x) for x in [[1, 1], [2, 2], [3, 5], [1, 99]]]
expected = brute_force_noiseless(X)
print(expected)
# num_vec = 2
# len_vec = 10
# s = [tuple([1 for _ in range(num_vec)]) for _ in range(len_vec)] + [tuple([5 for _ in range(num_vec)])]
# expected = [tuple([5 for _ in range(num_vec)])]
calls = []
for p in [1/6]:
    num_correct = 0
    for i in range(trials):
        random.shuffle(X)
        # was SkyLineHighDim(X, 0.01, p), an undefined name with the wrong
        # signature; a direct call with k=4 is assumed here
        output, num_calls = SkylineHighDim(4, X, 0.01, p)
        calls.append(num_calls)
        # print(output)
        if set(output) == set(expected):
            num_correct += 1
    print(1 - num_correct/trials, p)
    print(np.mean(calls), np.max(calls))  # was num_calls, a scalar from the last trial
trials = 100
dims = 6
data_num = 6
X = [tuple(x) for x in [[1, 1], [2, 2], [3, 5], [1, 99]]]
expected = brute_force_noiseless(X)
print(expected)
# num_vec = 2
# len_vec = 10
# s = [tuple([1 for _ in range(num_vec)]) for _ in range(len_vec)] + [tuple([5 for _ in range(num_vec)])]
# expected = [tuple([5 for _ in range(num_vec)])]
calls = []
for p in [1/6]:
    num_correct = 0
    for i in range(trials):
        random.shuffle(X)
        output, num_calls = skyline_computation(X, 0.01, p, skysample)
        calls.append(num_calls)
        # print(output)
        if set(output) == set(expected):
            num_correct += 1
    print(1 - num_correct/trials, p)
    print(np.mean(calls), np.max(calls))  # was num_calls, a scalar from the last trial
trials = 100
dims = 6
data_num = 6
X = [tuple(x) for x in [[1, 1], [2, 2], [3, 5], [1, 99]]]
expected = brute_force_noiseless(X)
print(expected)
# num_vec = 2
# len_vec = 10
# s = [tuple([1 for _ in range(num_vec)]) for _ in range(len_vec)] + [tuple([5 for _ in range(num_vec)])]
# expected = [tuple([5 for _ in range(num_vec)])]
calls = []
for p in [1/6]:
    num_correct = 0
    for i in range(trials):
        random.shuffle(X)
        output, num_calls = skyline_computation(X, 0.01, p, SkylineHighDim, use_argmax_lex=True, use_update=(1, 1, 1, -2))
        calls.append(num_calls)
        if set(output) == set(expected):
            num_correct += 1
    print(1 - num_correct/trials, p)
    print(np.mean(calls), np.max(calls))  # was num_calls, a scalar from the last trial
trials = 100
dims = 6
data_num = 6
X = [tuple(x) for x in [[1,1],[2,2],[3,5],[5,3],[7,7], [1,99], [99,1],[97,5]]]
expected = brute_force_noiseless(X)
print(expected)
# num_vec = 2
# len_vec = 10
# s = [tuple([1 for _ in range(num_vec)]) for _ in range(len_vec)] + [tuple([5 for _ in range(num_vec)])]
# expected = [tuple([5 for _ in range(num_vec)])]
for p in [1/9]:
    num_correct = 0
    for i in range(trials):
        random.shuffle(X)  # was shuffle(s), a stale variable from an earlier cell
        # skyline_computation needs an algorithm argument and returns a tuple;
        # skysample is assumed here, matching the preceding cells
        output, num_calls = skyline_computation(X, 0.1, p, skysample)
        # print(output)
        if set(output) == set(expected):
            num_correct += 1
    print(1 - num_correct/trials, p)
def is_dominated(v, C, delta, error):
    dims = len(v)
    num_checks = int(np.log(1/delta) * 3)
    if num_checks % 2 == 0:
        num_checks += 1
    for c in C:
        dominated = np.zeros(dims)
        comp = (v, c)
        for dim in range(dims):
            max_i = stats.mode([oracle_max(comp, dim, error) for _ in range(num_checks)])
            dominated[dim] = max_i
        if np.all(dominated == 1):
            return True
        # print(c, dominated)
    return False
s1 = [(1, 1), (3, 5), (7, 7)]
v2 = (5, 3)
trials = 1000
num_correct = 0
p = 0.05
d1 = 0.05
for _ in range(trials):
    if is_dominated(v2, s1, d1, p):
        num_correct += 1
print(1 - num_correct/trials, d1)
def max_4(s, dim, delta, error):
    num_checks = int(2 * len(s) - (1/6)/(error) + 1)
    if num_checks % 2 == 0:
        num_checks += 1
    if len(s) == 0:
        return None
    if len(s) == 1:
        return s[0]
    else:
        curr = s[0]
        for i in range(1, len(s)):
            comp = (curr, s[i])
            temp = stats.mode([oracle_max(comp, dim, error) for _ in range(num_checks)])
            if temp != 0:
                curr = s[i]
        return curr
for num_vec in range(1, 4):
    s = [(1, 1) for _ in range(num_vec)] + [(5, 5)]
    random.shuffle(s)
    dim = 1
    expected = max(s)
    trials = 10000
    for p in [1/6, 1/9, 1/12, 1/18]:
        num_correct = 0
        for i in range(trials):
            random.shuffle(s)
            if max_4(s, dim, p/2, p) == expected:
                num_correct += 1
        # p/2 is the delta passed above; the bare name `delta` was undefined here
        print((1 - num_correct / trials)/p, 1 - num_correct / trials, p/2, p, len(s))
| 28.155761 | 138 | 0.576427 | 6,169 | 39,587 | 3.562814 | 0.036797 | 0.071887 | 0.028664 | 0.030575 | 0.808044 | 0.784158 | 0.766868 | 0.750444 | 0.733018 | 0.726648 | 0 | 0.062374 | 0.273448 | 39,587 | 1,405 | 139 | 28.175801 | 0.701794 | 0.039836 | 0 | 0.792587 | 1 | 0 | 0.003241 | 0 | 0 | 0 | 0 | 0 | 0.000789 | 1 | 0.018139 | false | 0 | 0.007098 | 0.001577 | 0.060726 | 0.095426 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7de49f07f901a129c29046911eb8aa64978de62b | 245 | py | Python | tests/test_version.py | alphatwirl/mantichora | f1d15090b5e5a55c3553cd1692d7097178e10599 | [
"BSD-3-Clause"
] | 13 | 2019-07-30T08:30:44.000Z | 2021-02-11T22:25:29.000Z | tests/test_version.py | alphatwirl/mantichora | f1d15090b5e5a55c3553cd1692d7097178e10599 | [
"BSD-3-Clause"
] | 5 | 2019-03-13T11:11:32.000Z | 2021-05-31T21:56:10.000Z | tests/test_version.py | alphatwirl/mantichora | f1d15090b5e5a55c3553cd1692d7097178e10599 | [
"BSD-3-Clause"
] | 1 | 2021-04-20T11:09:07.000Z | 2021-04-20T11:09:07.000Z | # Tai Sakuma <tai.sakuma@gmail.com>
import mantichora
##__________________________________________________________________||
def test_version():
    mantichora.__version__
##__________________________________________________________________||
| 27.222222 | 70 | 0.857143 | 13 | 245 | 5.615385 | 0.692308 | 0.246575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069388 | 245 | 8 | 71 | 30.625 | 0.320175 | 0.689796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
7de7c59840fe85ab3227ba5e16bae93b07154655 | 19,270 | py | Python | lib/xds_writer.py | jorgediazjr/fast_dp | 972fe7f09fb28b07053de595faa6857692320cbe | [
"Apache-2.0"
] | null | null | null | lib/xds_writer.py | jorgediazjr/fast_dp | 972fe7f09fb28b07053de595faa6857692320cbe | [
"Apache-2.0"
] | null | null | null | lib/xds_writer.py | jorgediazjr/fast_dp | 972fe7f09fb28b07053de595faa6857692320cbe | [
"Apache-2.0"
] | null | null | null | import os
if 'FAST_DP_ROOT' not in os.environ:
    raise RuntimeError('FAST_DP_ROOT undefined')
from run_job import get_number_cpus
# XDS.INP writer functions - two (three) of these, to write out commands
# for autoindexing, integration then postrefinement and scaling. Split
# up thus because XDS will frequently stop after autoindexing complaining
# that your data are not perfect, and then you probably want to run post-
# refinement and scaling a couple of times. With the latter need to be able
# to control the scale factors applied. N.B. these calculate the image
# ranges to use from the input metadata.
def write_xds_inp_autoindex(metadata, xds_inp):
    fout = open(xds_inp, 'w')

    template = os.path.join(os.environ['FAST_DP_ROOT'],
                            'lib', 'templates',
                            '{}_INDEX.INP'.format(metadata['detector']))
    if not os.path.exists(template):
        raise RuntimeError('template for {} not found at {}'.format(
            metadata['detector'], template))
    template_str = open(template, 'r').read().strip()

    # should somehow hang this from an anomalous flag
    friedels_law = 'FALSE'

    fout.write('{}\n'.format(template_str.format(
        extra_text=metadata.get('extra_text', '!PARAMETER=VALUE'),
        no_processors=get_number_cpus(),
        nx=metadata['size'][0],
        ny=metadata['size'][1],
        qx=metadata['pixel'][0],
        qy=metadata['pixel'][1],
        orgx=metadata['beam'][0] / metadata['pixel'][0],
        orgy=metadata['beam'][1] / metadata['pixel'][1],
        distance=metadata['distance'],
        sensor=metadata.get('sensor', None),
        wavelength=metadata['wavelength'],
        oscillation=metadata['oscillation'][1],
        friedels_law=friedels_law,
        template=os.path.join(metadata['directory'],
                              metadata['template'].replace('#', '?')),
        starting_angle=metadata['oscillation'][0],
        starting_image=metadata['start'])))

    # then we get the non-template stuff
    fout.write('DATA_RANGE={} {}\n'.format(
        metadata['start'], metadata['end']))

    # compute the background range as min(all, 5) #TODO maybe 5 degrees?
    if metadata['end'] - metadata['start'] > 5:
        fout.write('BACKGROUND_RANGE={} {}\n'.format(
            metadata['start'], metadata['start'] + 5))
    else:
        fout.write('BACKGROUND_RANGE={} {}\n'.format(
            metadata['start'], metadata['end']))

    # REFINE(IDXREF)=
    fout.write('REFINE(IDXREF)=CELL AXIS ORIENTATION POSITION BEAM\n')

    # by default autoindex off all images - can make this better later on.
    # Ok: I think it is too slow already. Three wedges, as per xia2...
    # that would be 5 images per wedge, then. Erk. Should be *degrees*
    images = range(metadata['start'], metadata['end'] + 1)
    wedge_size = int(round(5.0 / metadata['oscillation'][1])) - 1
    wedge = (images[0], images[0] + wedge_size)
    fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))

    # if we have more than 90 degrees of data, use wedges at the start,
    # 45 degrees in and 90 degrees in, else use a wedge at the start,
    # one in the middle and one at the end.
    # if less than 15 degrees of data, use all of the images
    if (metadata['end'] - metadata['start']) * metadata['oscillation'][1] < 15:
        fout.write('SPOT_RANGE={} {}\n'.format(
            metadata['start'], metadata['end']))
    elif int(90.0 / metadata['oscillation'][1]) + wedge_size in images:
        wedge = (int(45.0 / metadata['oscillation'][1]),
                 int(45.0 / metadata['oscillation'][1]) + wedge_size)
        # was '{0[1]} {0[1]}', which wrote the wedge end twice
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
        wedge = (int(90.0 / metadata['oscillation'][1]),
                 int(90.0 / metadata['oscillation'][1]) + wedge_size)
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
    else:
        mid = (len(images) // 2) - wedge_size + images[0] - 1
        wedge = (mid, mid + wedge_size)
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
        wedge = (images[-wedge_size], images[-1])
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))

    fout.close()

    return
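
# A usage sketch (assumed, not part of fast_dp itself): the metadata dictionary
# needs at least the keys consumed above. All values here are illustrative.
# metadata = {
#     'detector': 'PILATUS', 'size': (2463, 2527), 'pixel': (0.172, 0.172),
#     'beam': (212.8, 219.5), 'distance': 265.3, 'wavelength': 0.9795,
#     'oscillation': (0.0, 0.1), 'sensor': 0.32,
#     'directory': '/data/images', 'template': 'thaum_1_####.cbf',
#     'start': 1, 'end': 900,
# }
# write_xds_inp_autoindex(metadata, 'XDS.INP')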
def write_xds_inp_autoindex_p1_cell(metadata, xds_inp, cell):
    fout = open(xds_inp, 'w')

    template = os.path.join(os.environ['FAST_DP_ROOT'],
                            'lib', 'templates',
                            '{}_INDEX.INP'.format(metadata['detector']))
    if not os.path.exists(template):
        raise RuntimeError('template for {} not found at {}'.format(
            metadata['detector'], template))
    template_str = open(template, 'r').read().strip()

    # should somehow hang this from an anomalous flag
    friedels_law = 'FALSE'

    fout.write('{}\n'.format(template_str.format(
        extra_text=metadata.get('extra_text', '!PARAMETER=VALUE'),
        no_processors=get_number_cpus(),
        nx=metadata['size'][0],
        ny=metadata['size'][1],
        qx=metadata['pixel'][0],
        qy=metadata['pixel'][1],
        orgx=metadata['beam'][0] / metadata['pixel'][0],
        orgy=metadata['beam'][1] / metadata['pixel'][1],
        distance=metadata['distance'],
        sensor=metadata.get('sensor', None),
        wavelength=metadata['wavelength'],
        oscillation=metadata['oscillation'][1],
        friedels_law=friedels_law,
        template=os.path.join(metadata['directory'],
                              metadata['template'].replace('#', '?')),
        starting_angle=metadata['oscillation'][0],
        starting_image=metadata['start'])))

    # cell, spacegroup
    fout.write('SPACE_GROUP_NUMBER=1\n')
    fout.write('UNIT_CELL_CONSTANTS={0[0]} {0[1]} {0[2]} {0[3]} {0[4]} {0[5]}\n'.format(tuple(cell)))

    # then we get the non-template stuff
    fout.write('DATA_RANGE={} {}\n'.format(
        metadata['start'], metadata['end']))

    # compute the background range as min(all, 5)
    if metadata['end'] - metadata['start'] > 5:
        fout.write('BACKGROUND_RANGE={} {}\n'.format(
            metadata['start'], metadata['start'] + 5))
    else:
        fout.write('BACKGROUND_RANGE={} {}\n'.format(
            metadata['start'], metadata['end']))

    # by default autoindex off all images - can make this better later on.
    # Ok: I think it is too slow already. Three wedges, as per xia2...
    # that would be 5 images per wedge, then. Erk. Should be *degrees*
    images = range(metadata['start'], metadata['end'] + 1)
    wedge_size = int(round(5.0 / metadata['oscillation'][1])) - 1
    wedge = (images[0], images[0] + wedge_size)
    fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))

    # if we have more than 90 degrees of data, use wedges at the start,
    # 45 degrees in and 90 degrees in, else use a wedge at the start,
    # one in the middle and one at the end.
    # if less than 15 degrees of data, use all of the images
    if (metadata['end'] - metadata['start']) * metadata['oscillation'][1] < 15:
        fout.write('SPOT_RANGE={} {}\n'.format(
            metadata['start'], metadata['end']))
    elif int(90.0 / metadata['oscillation'][1]) + wedge_size in images:
        wedge = (int(45.0 / metadata['oscillation'][1]),
                 int(45.0 / metadata['oscillation'][1]) + wedge_size)
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
        wedge = (int(90.0 / metadata['oscillation'][1]),
                 int(90.0 / metadata['oscillation'][1]) + wedge_size)
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
    else:
        mid = (len(images) // 2) - wedge_size + images[0] - 1
        wedge = (mid, mid + wedge_size)
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
        wedge = (images[-5], images[-1])
        fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))

    fout.close()

    return
def write_xds_inp_integrate(metadata, xds_inp, resolution_low, no_jobs=1, no_processors=0):
    # FIXME in here calculate the maximum number of jobs to correspond at the
    # least to 5 degree wedges / job.
    fout = open(xds_inp, 'w')

    template = os.path.join(os.environ['FAST_DP_ROOT'],
                            'lib', 'templates',
                            '{}_INTEGRATE.INP'.format(metadata['detector']))
    if not os.path.exists(template):
        raise RuntimeError('template for {} not found at {}'.format(
            metadata['detector'], template))
    template_fin = open(template, 'r')
    template_str = template_fin.read().strip()

    # should somehow hang this from an anomalous flag
    friedels_law = 'FALSE'

    if no_processors == 0:
        no_processors = get_number_cpus()

    fout.write('{}\n'.format(template_str.format(
        extra_text=metadata.get('extra_text', '!PARAMETER=VALUE'),
        no_processors=no_processors,
        no_jobs=no_jobs,
        resolution_low=resolution_low,
        resolution_high=0.0,
        nx=metadata['size'][0],
        ny=metadata['size'][1],
        qx=metadata['pixel'][0],
        qy=metadata['pixel'][1],
        orgx=metadata['beam'][0] / metadata['pixel'][0],
        orgy=metadata['beam'][1] / metadata['pixel'][1],
        distance=metadata['distance'],
        sensor=metadata.get('sensor', None),
        wavelength=metadata['wavelength'],
        oscillation=metadata['oscillation'][1],
        friedels_law=friedels_law,
        template=os.path.join(metadata['directory'],
                              metadata['template'].replace('#', '?')),
        starting_angle=metadata['oscillation'][0],
        starting_image=metadata['start'])))

    # then we get the non-template stuff
    fout.write('DATA_RANGE={} {}\n'.format(
        metadata['start'], metadata['end']))

    fout.close()

    return
def write_xds_inp_redo(metadata, unit_cell, space_group_number,
xds_inp, resolution_low=30.0, resolution_high=0.0,
no_jobs=1, no_processors=0):
# FIXME in here calculate the maximum number of jobs to correspond at the
# least to 5 degree wedges / job.
fout = open(xds_inp, 'w')
template = os.path.join(os.environ['FAST_DP_ROOT'],
'lib', 'templates',
'{}_REDO.INP'.format(metadata['detector']))
if not os.path.exists(template):
raise RuntimeError('template for {} not found at {}'.format(
metadata['detector'], template))
template_fin = open(template, 'r')
template_str = template_fin.read().strip()
# should somehow hang this from an anomalous flag
if 'atom' in metadata:
friedels_law = 'FALSE'
else:
friedels_law = 'TRUE'
corrections = 'ALL'
if no_processors == 0:
no_processors = get_number_cpus()
fout.write('{}\n'.format(template_str.format(
extra_text = metadata.get('extra_text', '!PARAMETER=VALUE'),
no_processors = no_processors,
no_jobs = no_jobs,
resolution_low = resolution_low,
        resolution_high = resolution_high,
unit_cell_a = unit_cell[0],
unit_cell_b = unit_cell[1],
unit_cell_c = unit_cell[2],
unit_cell_alpha = unit_cell[3],
unit_cell_beta = unit_cell[4],
unit_cell_gamma = unit_cell[5],
space_group_number = space_group_number,
nx = metadata['size'][0],
ny = metadata['size'][1],
qx = metadata['pixel'][0],
qy = metadata['pixel'][1],
orgx = metadata['beam'][0] / metadata['pixel'][0],
orgy = metadata['beam'][1] / metadata['pixel'][1],
distance = metadata['distance'],
sensor = metadata.get('sensor', None),
wavelength = metadata['wavelength'],
oscillation = metadata['oscillation'][1],
friedels_law = friedels_law,
corrections = corrections,
template = os.path.join(metadata['directory'],
metadata['template'].replace('#', '?')),
starting_angle = metadata['oscillation'][0],
starting_image = metadata['start'])))
# then we get the non-template stuff
fout.write('DATA_RANGE={} {}\n'.format(
metadata['start'], metadata['end']))
# compute the background range as min(all, 5)
if metadata['end'] - metadata['start'] > 5:
fout.write('BACKGROUND_RANGE={} {}\n'.format(
metadata['start'], metadata['start'] + 5))
else:
fout.write('BACKGROUND_RANGE={} {}\n'.format(
metadata['start'], metadata['end']))
# by default autoindex off all images - can make this better later on.
# Ok: I think it is too slow already. Three wedges, as per xia2...
# that would be 5 images per wedge, then. Erk. Should be *degrees*
images = range(metadata['start'], metadata['end'] + 1)
wedge_size = int(round(5.0 / metadata['oscillation'][1])) - 1
wedge = (images[0], images[0] + wedge_size)
fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
# if we have more than 90 degrees of data, use wedges at the start,
# 45 degrees in and 90 degrees in, else use a wedge at the start,
# one in the middle and one at the end.
# if less than 15 degrees of data, use all of the images
if (metadata['end'] - metadata['start']) * metadata['oscillation'][1] < 15:
fout.write('SPOT_RANGE={} {}\n'.format(
metadata['start'], metadata['end']))
elif int(90.0 / metadata['oscillation'][1]) + wedge_size in images:
wedge = (int(45.0 / metadata['oscillation'][1]),
int(45.0 / metadata['oscillation'][1]) + wedge_size)
fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
wedge = (int(90.0 / metadata['oscillation'][1]),
int(90.0 / metadata['oscillation'][1]) + wedge_size)
fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
else:
        mid = (len(images) // 2) - wedge_size + images[0] - 1
wedge = (mid, mid + wedge_size)
fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
wedge = (images[-5], images[-1])
fout.write('SPOT_RANGE={0[0]} {0[1]}\n'.format(wedge))
fout.close()
return
# N.B. this one is a little different to the others as the inclusion of
# the cell constants and symmetry are *mandatory*. N.B. default may be
# to use the triclinic solution in the first pass.
def write_xds_inp_correct(metadata, unit_cell, space_group_number,
xds_inp, scale=True,
resolution_low=30, resolution_high=0.0,
turn_subset=False):
fout = open(xds_inp, 'w')
template = os.path.join(os.environ['FAST_DP_ROOT'],
'lib', 'templates',
'{}_CORRECT.INP'.format(metadata['detector']))
if not os.path.exists(template):
raise RuntimeError('template for {} not found at {}'.format(
metadata['detector'], template))
template_str = open(template, 'r').read().strip()
# should somehow hang this from an anomalous flag
if 'atom' in metadata:
friedels_law = 'FALSE'
else:
friedels_law = 'TRUE'
if scale:
corrections = 'ALL'
else:
corrections = '!'
fout.write('{}\n'.format(template_str.format(
extra_text = metadata.get('extra_text', '!PARAMETER=VALUE'),
no_processors = get_number_cpus(),
resolution_low = resolution_low,
resolution_high = resolution_high,
unit_cell_a = unit_cell[0],
unit_cell_b = unit_cell[1],
unit_cell_c = unit_cell[2],
unit_cell_alpha = unit_cell[3],
unit_cell_beta = unit_cell[4],
unit_cell_gamma = unit_cell[5],
space_group_number = space_group_number,
nx = metadata['size'][0],
ny = metadata['size'][1],
qx = metadata['pixel'][0],
qy = metadata['pixel'][1],
orgx = metadata['beam'][0] / metadata['pixel'][0],
orgy = metadata['beam'][1] / metadata['pixel'][1],
distance = metadata['distance'],
sensor = metadata.get('sensor', None),
wavelength = metadata['wavelength'],
oscillation = metadata['oscillation'][1],
friedels_law = friedels_law,
corrections = corrections,
template = os.path.join(metadata['directory'],
metadata['template'].replace('#', '?')),
starting_angle = metadata['oscillation'][0],
starting_image = metadata['start'])))
# then we get the non-template stuff
if turn_subset:
# limit to 360 degrees...
width = metadata['oscillation'][1]
start, end = metadata['start'], metadata['end']
if (end - start + 1) * width > 360:
            end = start + int(round(360. / width)) - 1
fout.write('DATA_RANGE={} {}\n'.format(start, end))
else:
fout.write('DATA_RANGE={} {}\n'.format(metadata['start'], metadata['end']))
fout.close()
return
def write_xds_inp_correct_no_cell(metadata,
xds_inp, scale=True,
resolution_low=30, resolution_high=0.0):
fout = open(xds_inp, 'w')
    template = os.path.join(os.environ['FAST_DP_ROOT'],
                            'lib', 'templates',
                            '{}_CORRECT_NO_CELL.INP'.format(metadata['detector']))
if not os.path.exists(template):
raise RuntimeError('template for {} not found at {}'.format(
metadata['detector'], template))
template_str = open(template, 'r').read().strip()
# should somehow hang this from an anomalous flag
friedels_law = 'FALSE'
if scale:
corrections = 'ALL'
else:
corrections = '!'
fout.write('{}\n'.format(template_str.format(
extra_text = metadata.get('extra_text', '!PARAMETER=VALUE'),
no_processors = get_number_cpus(),
resolution_low = resolution_low,
resolution_high = resolution_high,
nx = metadata['size'][0],
ny = metadata['size'][1],
qx = metadata['pixel'][0],
qy = metadata['pixel'][1],
orgx = metadata['beam'][0] / metadata['pixel'][0],
orgy = metadata['beam'][1] / metadata['pixel'][1],
distance = metadata['distance'],
        sensor = metadata.get('sensor', None),
wavelength = metadata['wavelength'],
oscillation = metadata['oscillation'][1],
friedels_law = friedels_law,
corrections = corrections,
template = os.path.join(metadata['directory'],
metadata['template'].replace('#', '?')),
starting_angle = metadata['oscillation'][0],
starting_image = metadata['start'])))
# then we get the non-template stuff
fout.write('DATA_RANGE={} {}\n'.format(
metadata['start'], metadata['end']))
fout.close()
return
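# Illustrative usage sketch (an addition, not part of fast_dp itself): the
# metadata keys below are exactly those read by the writers above, but every
# value here is hypothetical, as are the unit cell and space group.
def _example_write_correct():
    metadata = {
        'detector': 'PILATUS',          # selects the *_CORRECT.INP template
        'size': (2463, 2527),           # nx, ny in pixels
        'pixel': (0.172, 0.172),        # qx, qy in mm
        'beam': (212.8, 219.9),         # beam centre in mm
        'distance': 190.0,              # crystal-to-detector distance in mm
        'wavelength': 0.9795,
        'oscillation': (0.0, 0.1),      # starting angle, width per image
        'template': 'example_####.cbf',
        'directory': '/tmp/images',
        'start': 1,
        'end': 900,
    }
    write_xds_inp_correct(metadata, [78.0, 78.0, 37.0, 90.0, 90.0, 90.0],
                          96, 'XDS.INP')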
| 37.129094 | 101 | 0.58246 | 2,385 | 19,270 | 4.586164 | 0.101887 | 0.032913 | 0.051198 | 0.034558 | 0.897605 | 0.892119 | 0.889834 | 0.889834 | 0.883434 | 0.883434 | 0 | 0.023031 | 0.263207 | 19,270 | 518 | 102 | 37.200772 | 0.747359 | 0.147172 | 0 | 0.909091 | 0 | 0.002933 | 0.169546 | 0.004274 | 0 | 0 | 0 | 0.001931 | 0 | 1 | 0.017595 | false | 0 | 0.005865 | 0 | 0.041056 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
816ac6389d2cbefd57bd59da25730fa57498ec7f | 5,715 | py | Python | data_converter.py | dovletov/studious-engine | 601aa6309cd7dc08ea4b51f3c92e01342cac042b | [
"MIT"
] | null | null | null | data_converter.py | dovletov/studious-engine | 601aa6309cd7dc08ea4b51f3c92e01342cac042b | [
"MIT"
] | null | null | null | data_converter.py | dovletov/studious-engine | 601aa6309cd7dc08ea4b51f3c92e01342cac042b | [
"MIT"
] | null | null | null | from config import *
from utils.helper import *
import os
import SimpleITK as sitk
import glob
"""
Convert original '*.nii.gz' data files into numpy arrays and save them as hdf5.
For each of training case there are four files (two images and two GT images):
E.g. for patient001:
patient001_frame01.nii.gz
patient001_frame01_gt.nii.gz
patient001_frame12.nii.gz
patient001_frame12_gt.nii.gz
Files with lower 'frame_id' are saved with '_ED' ending.
Files with higher 'frame_id' are saved with '_ES' ending.
"""
tr_input_path = TRAIN_DIR
tr_output_path = NP_TRAIN_DIR
tr_range = (1,101)
ev_input_path = TEST_DIR
ev_output_path = NP_TEST_DIR
ev_range = (101,151)
dataset_names = ['train', 'train_gt', 'test']
os.mkdir(tr_output_path)
os.mkdir(ev_output_path)
# Convert training dataset
for p_id in range(*tr_range):
folder_name = 'patient' + str(p_id).zfill(3)
folder_path = os.path.join(tr_input_path, folder_name)
pattern = os.path.join(folder_path, folder_name) + '_frame*'
files = glob.glob(pattern)
print(pattern)
if len(files) == 4:
pass
# for i in range(len(files)):
# print("\t"+files[i])
else:
raise ValueError("Wrong number of input files.")
exit(1)
ed_found = False
idx = 0
while not ed_found:
filename_ed_X = os.path.join(folder_path, folder_name + \
'_frame'+str(idx).zfill(2)+'.nii.gz')
filename_ed_y = os.path.join(folder_path, folder_name + \
'_frame'+str(idx).zfill(2)+'_gt.nii.gz')
if os.path.isfile(filename_ed_X) and os.path.isfile(filename_ed_y):
ed_found=True
print("ED files are found")
print('\t' + filename_ed_X)
print('\t' + filename_ed_y)
idx += 1
es_found = False
while not es_found:
filename_es_X = os.path.join(folder_path, folder_name + \
'_frame'+str(idx).zfill(2)+'.nii.gz')
filename_es_y = os.path.join(folder_path, folder_name + \
'_frame'+str(idx).zfill(2)+'_gt.nii.gz')
if os.path.isfile(filename_es_X) and os.path.isfile(filename_es_y):
es_found=True
print("ES files are found")
print('\t' + filename_es_X)
print('\t' + filename_es_y)
idx += 1
reader = sitk.ImageFileReader()
reader.SetFileName(filename_ed_X)
image = reader.Execute()
nda = sitk.GetArrayFromImage(image)
# print(nda.shape)
# for k in reader.GetMetaDataKeys():
# v = reader.GetMetaData(k)
# print("({0}) = = \"{1}\"".format(k,v))
filename = folder_name + '_ED.hdf5'
saveToHdf5(nda, dataset_names[0], tr_output_path, filename)
reader.SetFileName(filename_ed_y)
image = reader.Execute()
nda = sitk.GetArrayFromImage(image)
# print(nda.shape)
# print(np.unique(nda, return_counts=True))
if (nda.max()>4):
raise ValueError("Something wrong with gt image")
exit(1)
filename = folder_name + '_ED_gt.hdf5'
saveToHdf5(nda, dataset_names[1], tr_output_path, filename)
reader.SetFileName(filename_es_X)
image = reader.Execute()
nda = sitk.GetArrayFromImage(image)
# print(nda.shape)
# for k in reader.GetMetaDataKeys():
# v = reader.GetMetaData(k)
# print("({0}) = = \"{1}\"".format(k,v))
filename = folder_name + '_ES.hdf5'
saveToHdf5(nda, dataset_names[0], tr_output_path, filename)
reader.SetFileName(filename_es_y)
image = reader.Execute()
nda = sitk.GetArrayFromImage(image)
# print(nda.shape)
# print(np.unique(nda, return_counts=True))
if (nda.max()>4):
raise ValueError("Something wrong with gt image")
exit(1)
filename = folder_name + '_ES_gt.hdf5'
saveToHdf5(nda, dataset_names[1], tr_output_path, filename)
# Convert testing dataset
for p_id in range(*ev_range):
folder_name = 'patient' + str(p_id).zfill(3)
folder_path = os.path.join(ev_input_path, folder_name)
pattern = os.path.join(folder_path, folder_name) + '_frame*'
files = glob.glob(pattern)
print(pattern)
if len(files) == 2:
pass
# for i in range(len(files)):
# print("\t"+files[i])
else:
raise ValueError("Wrong number of input files.")
exit(1)
ed_found = False
idx = 0
while not ed_found:
filename_ed_X = os.path.join(folder_path, folder_name + \
'_frame'+str(idx).zfill(2)+'.nii.gz')
if os.path.isfile(filename_ed_X):
ed_found=True
print("ED Found")
print('\t' + filename_ed_X)
idx += 1
es_found = False
while not es_found:
filename_es_X = os.path.join(folder_path, folder_name + \
'_frame'+str(idx).zfill(2)+'.nii.gz')
if os.path.isfile(filename_es_X):
es_found=True
print("ES Found")
print('\t' + filename_es_X)
idx += 1
reader = sitk.ImageFileReader()
reader.SetFileName(filename_ed_X)
image = reader.Execute()
nda = sitk.GetArrayFromImage(image)
# print(nda.shape)
# for k in reader.GetMetaDataKeys():
# v = reader.GetMetaData(k)
# print("({0}) = = \"{1}\"".format(k,v))
filename = folder_name + '_ED.hdf5'
saveToHdf5(nda, dataset_names[2], ev_output_path, filename)
reader.SetFileName(filename_es_X)
image = reader.Execute()
nda = sitk.GetArrayFromImage(image)
# print(nda.shape)
# for k in reader.GetMetaDataKeys():
# v = reader.GetMetaData(k)
# print("({0}) = = \"{1}\"".format(k,v))
filename = folder_name + '_ES.hdf5'
saveToHdf5(nda, dataset_names[2], ev_output_path, filename)
| 30.238095 | 79 | 0.624497 | 792 | 5,715 | 4.294192 | 0.150253 | 0.052926 | 0.029403 | 0.037636 | 0.824463 | 0.790944 | 0.735078 | 0.735078 | 0.73449 | 0.730961 | 0 | 0.018942 | 0.24252 | 5,715 | 188 | 80 | 30.398936 | 0.76669 | 0.132283 | 0 | 0.700855 | 0 | 0 | 0.080151 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.017094 | 0.034188 | 0 | 0.034188 | 0.102564 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
81dfab3eccce1d1b132001ce3df053bdec26f8bd | 19,401 | py | Python | src/sadie/numbering/scheme_numbering.py | jwillis0720/pybody | 2d7c68650ac1ef5f3003ccb67171898eac1f63eb | [
"MIT"
] | 9 | 2020-12-22T19:14:01.000Z | 2022-03-17T04:34:06.000Z | src/sadie/numbering/scheme_numbering.py | jwillis0720/pybody | 2d7c68650ac1ef5f3003ccb67171898eac1f63eb | [
"MIT"
] | 32 | 2020-12-28T07:46:44.000Z | 2022-03-31T01:25:01.000Z | src/sadie/numbering/scheme_numbering.py | jwillis0720/pybody | 2d7c68650ac1ef5f3003ccb67171898eac1f63eb | [
"MIT"
] | 2 | 2021-07-30T16:44:46.000Z | 2022-01-12T20:15:17.000Z | # define numbering scheme, then chain, then region def
scheme_numbering = {
"imgt": {
"heavy": {
"imgt": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 38,
"fwr2_aa_start": 39,
"fwr2_aa_end": 55,
"cdr2_aa_start": 56,
"cdr2_aa_end": 65,
"fwr3_aa_start": 66,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"chothia": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 37,
"fwr2_aa_start": 38,
"fwr2_aa_end": 56,
"cdr2_aa_start": 57,
"cdr2_aa_end": 64,
"fwr3_aa_start": 65,
"fwr3_aa_end": 106,
"cdr3_aa_start": 107,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"abm": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 54,
"cdr2_aa_start": 55,
"cdr2_aa_end": 66,
"fwr3_aa_start": 67,
"fwr3_aa_end": 106,
"cdr3_aa_start": 107,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"kabat": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 35,
"cdr1_aa_start": 36,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 54,
"cdr2_aa_start": 55,
"cdr2_aa_end": 74,
"fwr3_aa_start": 75,
"fwr3_aa_end": 106,
"cdr3_aa_start": 107,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"contact": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 30,
"cdr1_aa_start": 35,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 51,
"cdr2_aa_start": 52,
"cdr2_aa_end": 66,
"fwr3_aa_start": 67,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 116,
"fwr4_aa_start": 117,
"fwr4_aa_end": 128,
},
"scdr": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 27,
"cdr1_aa_start": 28,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 52,
"cdr2_aa_start": 53,
"cdr2_aa_end": 68,
"fwr3_aa_start": 69,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
},
"light": {
"imgt": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 38,
"fwr2_aa_start": 39,
"fwr2_aa_end": 55,
"cdr2_aa_start": 56,
"cdr2_aa_end": 65,
"fwr3_aa_start": 66,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"scdr": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 55,
"cdr2_aa_start": 56,
"cdr2_aa_end": 69,
"fwr3_aa_start": 70,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"chothia": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 55,
"cdr2_aa_start": 56,
"cdr2_aa_end": 70,
"fwr3_aa_start": 71,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"abm": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 55,
"cdr2_aa_start": 56,
"cdr2_aa_end": 70,
"fwr3_aa_start": 71,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"kabat": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 40,
"fwr2_aa_start": 41,
"fwr2_aa_end": 55,
"cdr2_aa_start": 56,
"cdr2_aa_end": 70,
"fwr3_aa_start": 71,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 117,
"fwr4_aa_start": 118,
"fwr4_aa_end": 128,
},
"contact": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 29,
"cdr1_aa_start": 30,
"cdr1_aa_end": 42,
"fwr2_aa_start": 43,
"fwr2_aa_end": 51,
"cdr2_aa_start": 52,
"cdr2_aa_end": 69,
"fwr3_aa_start": 70,
"fwr3_aa_end": 104,
"cdr3_aa_start": 105,
"cdr3_aa_end": 116,
"fwr4_aa_start": 117,
"fwr4_aa_end": 128,
},
},
},
"kabat": {
"heavy": {
"imgt": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 25,
"cdr1_aa_start": 26,
"cdr1_aa_end": 33,
"fwr2_aa_start": 34,
"fwr2_aa_end": 50,
"cdr2_aa_start": 51,
"cdr2_aa_end": 57,
"fwr3_aa_start": 58,
"fwr3_aa_end": 92,
"cdr3_aa_start": 93,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"chothia": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 25,
"cdr1_aa_start": 26,
"cdr1_aa_end": 32,
"fwr2_aa_start": 33,
"fwr2_aa_end": 51,
"cdr2_aa_start": 52,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 94,
"cdr3_aa_start": 95,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"abm": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 25,
"cdr1_aa_start": 26,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 58,
"fwr3_aa_start": 59,
"fwr3_aa_end": 94,
"cdr3_aa_start": 95,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"kabat": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 30,
"cdr1_aa_start": 31,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 65,
"fwr3_aa_start": 66,
"fwr3_aa_end": 94,
"cdr3_aa_start": 95,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"contact": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 29,
"cdr1_aa_start": 30,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 46,
"cdr2_aa_start": 47,
"cdr2_aa_end": 58,
"fwr3_aa_start": 69,
"fwr3_aa_end": 92,
"cdr3_aa_start": 93,
"cdr3_aa_end": 101,
"fwr4_aa_start": 102,
"fwr4_aa_end": 113,
},
"scdr": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 47,
"cdr2_aa_start": 48,
"cdr2_aa_end": 60,
"fwr3_aa_start": 61,
"fwr3_aa_end": 92,
"cdr3_aa_start": 93,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
},
"light": {
"imgt": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 32,
"fwr2_aa_start": 33,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 51,
"fwr3_aa_start": 52,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"scdr": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"chothia": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"abm": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"kabat": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
# "contact": {
# "fwr1_aa_start": 1,
# "fwr1_aa_end": 27,
# "cdr1_aa_start": 28,
# "cdr1_aa_end": 42,
# "fwr2_aa_start": 43,
# "fwr2_aa_end": 51,
# "cdr2_aa_start": 52,
# "cdr2_aa_end": 69,
# "fwr3_aa_start": 70,
# "fwr3_aa_end": 104,
# "cdr3_aa_start": 105,
# "cdr3_aa_end": 116,
# "fwr4_aa_start": 117,
# "fwr4_aa_end": 128,
# },
},
},
"chothia": {
"heavy": {
"imgt": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 25,
"cdr1_aa_start": 26,
"cdr1_aa_end": 33,
"fwr2_aa_start": 34,
"fwr2_aa_end": 50,
"cdr2_aa_start": 51,
"cdr2_aa_end": 57,
"fwr3_aa_start": 58,
"fwr3_aa_end": 92,
"cdr3_aa_start": 93,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"chothia": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 25,
"cdr1_aa_start": 26,
"cdr1_aa_end": 32,
"fwr2_aa_start": 33,
"fwr2_aa_end": 51,
"cdr2_aa_start": 52,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 94,
"cdr3_aa_start": 95,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"abm": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 25,
"cdr1_aa_start": 26,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 58,
"fwr3_aa_start": 59,
"fwr3_aa_end": 94,
"cdr3_aa_start": 95,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"kabat": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 30,
"cdr1_aa_start": 31,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 65,
"fwr3_aa_start": 66,
"fwr3_aa_end": 94,
"cdr3_aa_start": 95,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
"contact": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 29,
"cdr1_aa_start": 30,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 46,
"cdr2_aa_start": 47,
"cdr2_aa_end": 58,
"fwr3_aa_start": 69,
"fwr3_aa_end": 92,
"cdr3_aa_start": 93,
"cdr3_aa_end": 101,
"fwr4_aa_start": 102,
"fwr4_aa_end": 113,
},
"scdr": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 35,
"fwr2_aa_start": 36,
"fwr2_aa_end": 47,
"cdr2_aa_start": 48,
"cdr2_aa_end": 60,
"fwr3_aa_start": 61,
"fwr3_aa_end": 92,
"cdr3_aa_start": 93,
"cdr3_aa_end": 102,
"fwr4_aa_start": 103,
"fwr4_aa_end": 113,
},
},
"light": {
"imgt": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 26,
"cdr1_aa_start": 27,
"cdr1_aa_end": 32,
"fwr2_aa_start": 33,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 51,
"fwr3_aa_start": 52,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"scdr": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"chothia": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"abm": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
"kabat": {
"fwr1_aa_start": 1,
"fwr1_aa_end": 23,
"cdr1_aa_start": 24,
"cdr1_aa_end": 34,
"fwr2_aa_start": 35,
"fwr2_aa_end": 49,
"cdr2_aa_start": 50,
"cdr2_aa_end": 56,
"fwr3_aa_start": 57,
"fwr3_aa_end": 88,
"cdr3_aa_start": 89,
"cdr3_aa_end": 97,
"fwr4_aa_start": 98,
"fwr4_aa_end": 107,
},
},
},
}
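# Illustrative lookup sketch (an addition, not part of the original module):
# fetch the inclusive start/end positions of a region, e.g.
# region_positions('imgt', 'heavy', 'kabat', 'cdr2') -> (55, 74).
def region_positions(numbering, chain, definition, region):
    limits = scheme_numbering[numbering][chain][definition]
    return limits[region + '_aa_start'], limits[region + '_aa_end']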
| 33.335052 | 54 | 0.371837 | 2,014 | 19,401 | 3.094836 | 0.046177 | 0.275148 | 0.061768 | 0.067383 | 0.97064 | 0.968715 | 0.968715 | 0.966629 | 0.965506 | 0.964704 | 0 | 0.163425 | 0.512396 | 19,401 | 581 | 55 | 33.392427 | 0.495455 | 0.02103 | 0 | 0.888298 | 0 | 0 | 0.3122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
f204fe20a7d96f1301ed8bd9a20fb41332b65282 | 4,737 | py | Python | code/table_to_latex.py | MaryumSayeed/TheSwan | 3e186e15acb41faec7dd508d8a8cd250659eba9c | [
"MIT"
] | null | null | null | code/table_to_latex.py | MaryumSayeed/TheSwan | 3e186e15acb41faec7dd508d8a8cd250659eba9c | [
"MIT"
] | null | null | null | code/table_to_latex.py | MaryumSayeed/TheSwan | 3e186e15acb41faec7dd508d8a8cd250659eba9c | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import csv
import matplotlib.pyplot as plt
filename='LLR_gaia/Gaia_Sample_v5.csv'
# kics = np.loadtxt(filename,skiprows=1,usecols=[0],dtype=str)
# dat = np.loadtxt(filename,skiprows=1,dtype=float)
w=[]
header = ['KICID','Kp', 'Teff', 'Radius','Radp','Radn','True_Logg','Loggp','Loggn','Inferred_Logg','ILoggp','ILoggn','True_Mass','TMassp','TMassn','Inferred_Mass','IMassp','IMassn','SNR','Outlier']
df = pd.read_csv(filename,names=header,skiprows=1)
print('Unsorted:',df.head(10))
df.sort_values(by=['KICID'], inplace=True)
# Check the values make sense:
# plt.plot(df['True_Logg'],df['True_Logg'])
# plt.scatter(df['True_Logg'],df['Inferred_Logg'],s=5)
# plt.show(False)
df_short=df.head(10)
# dat=np.array(df_short)
print('Sorted:', df_short)
w=[]
for i in df.index[0:10]:
d= ' & '
kic=str(int(df['KICID'][i]))
teff=str(int(df['Teff'][i]))
kp = float(df['Kp'][i])
kp = str('{0:.2f}'.format(kp))
r,rp,rn=df['Radius'][i],df['Radp'][i],df['Radn'][i]
r,rp,rn=float(r),float(rp),float(rn)
r,rp,rn=str('{0:.2f}'.format(r)),'{+'+str('{0:.2f}'.format(rp))+'}','{'+str('{0:.2f}'.format(rn))+'}'
l,lp,ln=df['True_Logg'][i],df['Loggp'][i],df['Loggn'][i]
l,lp,ln=float(l),float(lp),float(ln)
l,lp,ln=str('{0:.2f}'.format(l)),'{+'+str('{0:.2f}'.format(lp))+'}','{'+str('{0:.2f}'.format(ln))+'}'
il,ilp,iln=df['Inferred_Logg'][i],df['ILoggp'][i],df['ILoggn'][i]
il,ilp,iln=float(il),float(ilp),float(iln)
il,ilp,iln=str('{0:.2f}'.format(il)),'{+'+str('{0:.2f}'.format(ilp))+'}','{-'+str('{0:.2f}'.format(iln))+'}'
tm,tmp,tmn=df['True_Mass'][i],df['TMassp'][i],df['TMassn'][i]
tm,tmp,tmn=float(tm),float(tmp),float(tmn)
tm,tmp,tmn=str('{0:.2f}'.format(tm)),'{+'+str('{0:.2f}'.format(tmp))+'}','{'+str('{0:.2f}'.format(tmn))+'}'
m,mp,mn=df['Inferred_Mass'][i],df['IMassp'][i],df['IMassn'][i]
m,mp,mn=float(m),float(mp),float(mn)
m,mp,mn=str('{0:.2f}'.format(m)),'{+'+str('{0:.2f}'.format(mp))+'}','{-'+str('{0:.2f}'.format(mn))+'}'
out=str(int(df['Outlier'][i]))
snr=str('{0:.2f}'.format(df['SNR'][i]))
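    # Assemble one LaTeX table row: each value with asymmetric errors is set
    # as $v^{+e}_{-e}$, columns are joined with ' & ', and the row ends '\\'.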
line=kic+d+kp+d+teff+d+r'${}^{}_{}$'.format(r,rp,rn)+d+'${}^{}_{}$'.format(l,lp,ln)+d+'${}^{}_{}$'.format(il,ilp,iln)+d+'${}^{}_{}$'.format(tm,tmp,tmn)+d+'${}^{}_{}$'.format(m,mp,mn)+d+snr+d+out+'\\\\'
print(line)
w.append(line)
# Save format in file:
# outF = open("Pande_latex_table_for_paper.txt", "w")
outF=open('test_pande_table.txt','w')
for line in w:
# write line to output file
outF.write(line)
outF.write("\n")
outF.close()
filename='LLR_seismic/Seismic_Sample_v5.csv'
header = ['KICID','Kp', 'Teff', 'Radius','Radp','Radn','True_Logg','Loggp','Loggn','Inferred_Logg','ILoggp','ILoggn','True_Mass','TMassp','TMassn','Inferred_Mass','IMassp','IMassn','SNR','Outlier']
df = pd.read_csv(filename,names=header,skiprows=1)
print('Unsorted:',df.head(10))
df.sort_values(by=['KICID'], inplace=True)
df_short=df.head(10)
# dat=np.array(df_short)
print('Sorted:', df_short)
# df=df[df['Outlier']==0]
w=[]
for i in df.index[0:10]:
d= ' & '
kic=str(int(df['KICID'][i]))
teff=str(int(df['Teff'][i]))
kp = float(df['Kp'][i])
kp = str('{0:.2f}'.format(kp))
r,rp,rn=df['Radius'][i],df['Radp'][i],df['Radn'][i]
r,rp,rn=float(r),float(rp),float(rn)
r,rp,rn=str('{0:.2f}'.format(r)),'{+'+str('{0:.2f}'.format(rp))+'}','{'+str('{0:.2f}'.format(rn))+'}'
l,lp,ln=df['True_Logg'][i],df['Loggp'][i],df['Loggn'][i]
l,lp,ln=float(l),float(lp),float(ln)
l,lp,ln=str('{0:.2f}'.format(l)),'{+'+str('{0:.2f}'.format(lp))+'}','{-'+str('{0:.2f}'.format(ln))+'}'
il,ilp,iln=df['Inferred_Logg'][i],df['ILoggp'][i],df['ILoggn'][i]
il,ilp,iln=float(il),float(ilp),float(iln)
il,ilp,iln=str('{0:.2f}'.format(il)),'{+'+str('{0:.2f}'.format(ilp))+'}','{-'+str('{0:.2f}'.format(iln))+'}'
tm,tmp,tmn=df['True_Mass'][i],df['TMassp'][i],df['TMassn'][i]
tm,tmp,tmn=float(tm),float(tmp),float(tmn)
tm,tmp,tmn=str('{0:.2f}'.format(tm)),'{+'+str('{0:.2f}'.format(tmp))+'}','{-'+str('{0:.2f}'.format(tmn))+'}'
m,mp,mn=df['Inferred_Mass'][i],df['IMassp'][i],df['IMassn'][i]
m,mp,mn=float(m),float(mp),float(mn)
m,mp,mn=str('{0:.2f}'.format(m)),'{+'+str('{0:.2f}'.format(mp))+'}','{-'+str('{0:.2f}'.format(mn))+'}'
out=str(int(df['Outlier'][i]))
snr=str('{0:.2f}'.format(df['SNR'][i]))
line=kic+d+kp+d+teff+d+r'${}^{}_{}$'.format(r,rp,rn)+d+'${}^{}_{}$'.format(l,lp,ln)+d+'${}^{}_{}$'.format(il,ilp,iln)+d+'${}^{}_{}$'.format(tm,tmp,tmn)+d+'${}^{}_{}$'.format(m,mp,mn)+d+snr+d+out+'\\\\'
w.append(line)
# Save format in file:
# outF = open("Astero_latex_table_for_paper.txt", "w")
outF=open('test_astero_table.txt','w')
for line in w:
# write line to output file
outF.write(line)
outF.write("\n")
outF.close()
| 36.72093 | 202 | 0.588769 | 850 | 4,737 | 3.212941 | 0.138824 | 0.051263 | 0.076895 | 0.15379 | 0.878067 | 0.859026 | 0.859026 | 0.859026 | 0.859026 | 0.808495 | 0 | 0.020904 | 0.060798 | 4,737 | 128 | 203 | 37.007813 | 0.592942 | 0.11864 | 0 | 0.841463 | 0 | 0 | 0.253789 | 0.019485 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04878 | 0 | 0.04878 | 0.060976 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
482c457160273d6bd1805fc4c5e3be35f47acb21 | 25,817 | py | Python | test-generate.py | maandree/libzahl | cf4b5d338225ac30d8f7434768c45619928bf3bf | [
"0BSD"
] | 26 | 2016-03-06T10:34:02.000Z | 2022-03-30T10:40:14.000Z | test-generate.py | maandree/libzahl | cf4b5d338225ac30d8f7434768c45619928bf3bf | [
"0BSD"
] | 3 | 2016-05-09T12:34:47.000Z | 2017-04-22T13:11:49.000Z | test-generate.py | maandree/libzahl | cf4b5d338225ac30d8f7434768c45619928bf3bf | [
"0BSD"
] | 4 | 2016-10-14T12:23:43.000Z | 2021-03-23T12:10:26.000Z | #!/usr/bin/env python3
# See LICENSE file for copyright and license details.
import sys, random
def mod(a, b):
r = (abs(a) % abs(b)) * (-1 if a < 0 else 1)
q = div(a, b)
if a != q * b + r:
print('zdivmod does not satisfly n = qd + r', file = sys.stderr)
sys.exit(1)
return r
def div(a, b): # Python's division is floored, not truncated.
r = abs(a) // abs(b)
if a < 0:
r = -r
if b < 0:
r = -r
return r
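# Illustrative self-check (an addition): Python's // floors toward negative
# infinity, while div/mod above truncate toward zero as C and libzahl do.
assert -7 // 2 == -4        # floored
assert div(-7, 2) == -3     # truncated
assert mod(-7, 2) == -1     # remainder takes the sign of the dividend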
def gcd(u, v):
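    # Binary (Stein's) GCD: strip shared factors of two, then repeatedly
    # subtract the smaller odd operand from the larger until one reaches zero.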
if u == 0:
return v
if v == 0:
return u
uneg = u < 0
vneg = v < 0
u = abs(u)
v = abs(v)
shift = 0
while ((u | v) & 1) == 0:
u >>= 1
v >>= 1
shift += 1
while (u & 1) == 0:
u >>= 1
while True:
while (v & 1) == 0:
v >>= 1
if u > v:
(u, v) = (v, u)
v -= u
if v == 0:
break
u <<= shift
if uneg and vneg:
u = -u
return u
def zabs():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
print('zabs(b, a);')
print('zabs(a, a);')
print('assert(zcmp(a, b), == 0);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % abs(a))
def zadd():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = a + b
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zadd(c, a, b);')
print('zset(d, b);')
print('zadd(d, a, d);')
print('zadd(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
def zadd_unsigned():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = abs(a) + abs(b)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zadd_unsigned(c, a, b);')
print('zset(d, b);')
print('zadd_unsigned(d, a, d);')
print('zadd_unsigned(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
c = abs(b) * 2
print('zadd_unsigned(c, b, b);')
print('zadd_unsigned(b, b, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert(zcmp(b, c), == 0);')
def zand():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = abs(a) & abs(b)
if a < 0 and b < 0:
c = -c
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zand(c, a, b);')
print('zset(d, b);')
print('zand(d, a, d);')
print('zand(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
print('zsets(a, "%i");' % a)
print('zand(d, a, a);')
print('zand(a, a, a);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % a)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % a)
def zbits():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
a = abs(a)
if a == 0:
b = 1
else:
b = 0
while a > 0:
b += 1
a >>= 1
print('assert_zu(zbits(a), %i);' % b)
def zbset():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(0, 2 * LIMIT)
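    # cs/cc/cf: expected results of setting, clearing and flipping bit b of
    # the magnitude while preserving the sign of a.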
cs = (abs(a) | (1 << b)) * (-1 if a < 0 else 1)
cc = (abs(a) & ~(1 << b)) * (-1 if a < 0 else 1)
cf = (abs(a) ^ (1 << b)) * (-1 if a < 0 else 1)
print('zsets(a, "%i");' % a)
print('zset(d, a);')
print('zbset(b, a, %i, 1);' % b)
print('assert_s(zstr(b, buf, BUF_N), "%i");' % cs)
print('zbset(b, a, %i, 0);' % b)
print('assert_s(zstr(b, buf, BUF_N), "%i");' % cc)
print('zbset(b, a, %i, -1);' % b)
print('assert_s(zstr(b, buf, BUF_N), "%i");' % cf)
print('zset(a, d);')
print('zbset(a, a, %i, 1);' % b)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % cs)
print('zset(a, d);')
print('zbset(a, a, %i, 0);' % b)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % cc)
print('zset(a, d);')
print('zbset(a, a, %i, -1);' % b)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % cf)
def zbtest():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(0, 2 * LIMIT)
c = (abs(a) >> b) & 1
print('zsets(a, "%i");' % a)
print('assert(zbtest(a, %i), == %i);' % (b, c))
def zcmp():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = -1 if a < b else (1 if a > b else 0)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('assert(zcmp(a, b), == %i);' % c)
def zcmpmag():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
a = abs(a)
b = abs(b)
c = -1 if a < b else (1 if a > b else 0)
print('assert(zcmpmag(a, b), == %i);' % c)
def zlsb():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
a = abs(a)
if a == 0:
b = "SIZE_MAX"
else:
b = 0
while (a & 1) == 0:
b += 1
a >>= 1
b = str(b)
print('assert_zu(zlsb(a), %s);' % b)
def zlsh():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, 2 * LIMIT)
c = a << bits
print('zsets(a, "%i");' % a)
print('zlsh(b, a, %i);' % bits)
print('zlsh(a, a, %i);' % bits)
print('assert(zcmp(a, b), == 0);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
def zneg():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
print('zneg(b, a);')
print('zneg(a, a);')
print('assert(zcmp(a, b), == 0);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % -a)
def zor():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = abs(a) | abs(b)
if a < 0 or b < 0:
c = -c
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zor(c, a, b);')
print('zset(d, b);')
print('zor(d, a, d);')
print('zor(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
print('zsets(a, "%i");' % a)
print('zor(d, a, a);')
print('zor(a, a, a);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % a)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % a)
def zrsh():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
c = (abs(a) >> bits) * (-1 if a < 0 else 1)
print('zsets(a, "%i");' % a)
print('zrsh(b, a, %i);' % bits)
print('zrsh(a, a, %i);' % bits)
print('assert(zcmp(a, b), == 0);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
def zsplit():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, 2 * LIMIT)
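    # zsplit(high, low, a, n) semantics: high = a shifted down by n bits,
    # low = the n bits shifted away, both carrying a's sign.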
sign = -1 if a < 0 else 1
c = (abs(a) >> bits) * sign
d = (abs(a) - (abs(c) << bits)) * sign
print('zsets(a, "%i");' % a)
print('zset(b, a);')
print('zsplit(b, d, b, %i);' % bits)
print('assert_s(zstr(b, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zsplit(c, d, a, %i);' % bits)
print('assert(zcmp(b, c), == 0);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zsplit(c, a, a, %i);' % bits)
print('assert(zcmp(a, d), == 0);')
print('assert(zcmp(b, c), == 0);')
def zstr():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % a)
def zstr_length():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
print('assert_zu(zstr_length(a, 10), %i);' % len(str(a)))
def zsub():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = a - b
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsub(c, a, b);')
print('zset(d, b);')
print('zsub(d, a, d);')
print('zsub(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
def zsub_unsigned():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = abs(a) - abs(b)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsub_unsigned(c, a, b);')
print('zset(d, b);')
print('zsub_unsigned(d, a, d);')
print('zsub_unsigned(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
print('zsub_unsigned(a, b, b);')
print('assert(zzero(a), == 1);')
print('zsub_unsigned(b, b, b);')
print('assert(zzero(b), == 1);')
def ztrunc():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, 2 * LIMIT)
c = (abs(a) & ((1 << bits) - 1)) * (-1 if a < 0 else 1)
print('zsets(a, "%i");' % a)
print('ztrunc(b, a, %i);' % bits)
print('ztrunc(a, a, %i);' % bits)
print('assert(zcmp(a, b), == 0);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
def zxor():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = abs(a) ^ abs(b)
if (a < 0) != (b < 0):
c = -c
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zxor(c, a, b);')
print('zset(d, b);')
print('zxor(d, a, d);')
print('zxor(a, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
print('zsets(a, "%i");' % a)
print('zxor(d, a, a);')
print('zxor(a, a, a);')
print('assert(zzero(d), == 1);')
print('assert(zzero(a), == 1);')
def zeven():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = 1 if (abs(a) & 1) == 0 else 0
print('zsets(a, "%i");' % a)
print('assert(zeven(a), == %i);' % b)
def zodd():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = 1 if (abs(a) & 1) != 0 else 0
print('zsets(a, "%i");' % a)
print('assert(zodd(a), == %i);' % b)
def zeven_nonzero():
bits = random.randint(0, LIMIT)
a = 0
while a == 0:
a = random.randint(-(1 << bits), 1 << bits)
b = 1 if (abs(a) & 1) == 0 else 0
print('zsets(a, "%i");' % a)
print('assert(zeven_nonzero(a), == %i);' % b)
def zodd_nonzero():
bits = random.randint(0, LIMIT)
a = 0
while a == 0:
a = random.randint(-(1 << bits), 1 << bits)
b = 1 if (abs(a) & 1) != 0 else 0
print('zsets(a, "%i");' % a)
print('assert(zodd_nonzero(a), == %i);' % b)
def zzero():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = 1 if a == 0 else 0
print('zsets(a, "%i");' % a)
print('assert(zzero(a), == %i);' % b)
def zsignum():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = -1 if a < 0 else (1 if a > 0 else 0)
print('zsets(a, "%i");' % a)
print('assert(zsignum(a), == %i);' % b)
def zdiv():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = 0
while b == 0:
b = random.randint(-(1 << bits), 1 << bits)
c = div(a, b)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(d, "%i");' % c)
print('zdiv(c, a, b);')
print('zdiv(a, a, b);')
print('assert(zcmp(c, d), == 0);')
print('assert(zcmp(a, d), == 0);')
print('zsets(a, "%i");' % a)
print('zdiv(b, a, b);')
print('assert(zcmp(b, d), == 0);')
def zmod():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = 0
while b == 0:
b = random.randint(-(1 << bits), 1 << bits)
c = mod(a, b)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(d, "%i");' % c)
print('zmod(c, a, b);')
print('zmod(a, a, b);')
print('assert(zcmp(c, d), == 0);')
print('assert(zcmp(a, d), == 0);')
print('zsets(a, "%i");' % a)
print('zmod(b, a, b);')
print('assert(zcmp(b, d), == 0);')
def zdivmod():
bits = random.randint(0, LIMIT)
ap = random.randint(0, 1 << bits)
bits = random.randint(0, LIMIT)
bp = 0
while bp == 0:
bp = random.randint(0, 1 << bits)
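    # Cover every sign combination of numerator and denominator, including
    # the aliased calls where output operands overlap the inputs.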
for (a_sign, b_sign) in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
a = ap * a_sign
b = bp * b_sign
(c, d) = (div(a, b), mod(a, b))
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zdivmod(c, d, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zdivmod(a, b, a, b);')
print('assert(zcmp(a, c), == 0);')
print('assert(zcmp(b, d), == 0);')
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zdivmod(b, a, a, b);')
print('assert(zcmp(b, c), == 0);')
print('assert(zcmp(a, d), == 0);')
print('zsets(b, "%i");' % b)
print('zdivmod(b, a, b, b);')
print('assert(zcmpu(b, 1), == 0);')
print('assert(zcmpu(a, 0), == 0);')
print('zsets(b, "%i");' % b)
print('zdivmod(a, b, b, b);')
print('assert(zcmpu(a, 1), == 0);')
print('assert(zcmpu(b, 0), == 0);')
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zdivmod(a, d, a, b);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zdivmod(c, b, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(b, buf, BUF_N), "%i");' % d)
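        # Repeat with equal magnitudes (|a| == |b|) to hit the quotient +/-1,
        # remainder 0 special cases through the same aliasing patterns.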
a = bp * a_sign
b = bp * b_sign
(c, d) = (div(a, b), mod(a, b))
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zdivmod(c, d, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zdivmod(a, b, a, b);')
print('assert(zcmp(a, c), == 0);')
print('assert(zcmp(b, d), == 0);')
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zdivmod(b, a, a, b);')
print('assert(zcmp(b, c), == 0);')
print('assert(zcmp(a, d), == 0);')
print('zsets(b, "%i");' % b)
print('zdivmod(b, a, b, b);')
print('assert(zcmpu(b, 1), == 0);')
print('assert(zcmpu(a, 0), == 0);')
print('zsets(b, "%i");' % b)
print('zdivmod(a, b, b, b);')
print('assert(zcmpu(a, 1), == 0);')
print('assert(zcmpu(b, 0), == 0);')
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zdivmod(a, d, a, b);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zdivmod(c, b, a, b);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % c)
print('assert_s(zstr(b, buf, BUF_N), "%i");' % d)
def zmul():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = a * b
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(d, "%i");' % c)
print('zmul(c, a, b);')
print('assert(zcmp(c, d), == 0);')
print('zmul(c, b, a);')
print('assert(zcmp(c, d), == 0);')
print('zmul(a, a, b);')
print('assert(zcmp(a, d), == 0);')
print('zsets(a, "%i");' % a)
print('zmul(b, a, b);')
print('assert(zcmp(b, d), == 0);')
c = a * a
print('zsets(d, "%i");' % c)
print('zmul(c, a, a);')
print('assert(zcmp(c, d), == 0);')
print('zmul(a, a, a);')
print('assert(zcmp(a, d), == 0);')
def zsqr():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
c = a * a
print('zsets(a, "%i");' % a)
print('zsets(d, "%i");' % c)
print('zsqr(c, a);')
print('assert(zcmp(c, d), == 0);')
print('zsqr(a, a);')
print('assert(zcmp(a, d), == 0);')
def zmodmul():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
c = 0
while c == 0:
c = random.randint(-(1 << bits), 1 << bits)
d = mod(a * b, c)
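    # Verify zmodmul across every output/input aliasing permutation and both
    # operand orders, then the squaring and modulus-aliasing special cases.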
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(c, "%i");' % c)
print('zmodmul(d, a, b, c);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zmodmul(a, a, b, c);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zmodmul(b, a, b, c);')
print('assert_s(zstr(b, buf, BUF_N), "%i");' % d)
print('zsets(b, "%i");' % b)
print('zmodmul(c, a, b, c);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % d)
print('zsets(c, "%i");' % c)
print('zmodmul(d, b, a, c);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zmodmul(a, b, a, c);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zmodmul(b, b, a, c);')
print('assert_s(zstr(b, buf, BUF_N), "%i");' % d)
print('zsets(b, "%i");' % b)
print('zmodmul(c, b, a, c);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % d)
print('zsets(c, "%i");' % c)
d = mod(a * a, c)
print('zmodmul(d, a, a, c);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('zmodmul(a, a, a, c);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zmodmul(c, a, a, c);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % d)
if a != 0:
d = mod(a * b, a)
print('zsets(d, "%i");' % d)
print('zmodmul(c, a, b, a);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % d)
print('zmodmul(a, a, b, a);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zmodmul(b, a, b, a);')
print('assert_s(zstr(b, buf, BUF_N), "%i");' % d)
print('zsets(b, "%i");' % b)
print('zmodmul(c, b, a, a);')
print('assert_s(zstr(c, buf, BUF_N), "%i");' % d)
print('zmodmul(a, b, a, a);')
print('assert_s(zstr(a, buf, BUF_N), "%i");' % d)
print('zsets(a, "%i");' % a)
print('zmodmul(b, b, a, a);')
print('assert_s(zstr(b, buf, BUF_N), "%i");' % d)
print('zmodmul(b, a, a, a);')
print('assert(zzero(b), == 1);')
print('zmodmul(a, a, a, a);')
print('assert(zzero(a), == 1);')
def zmodsqr():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = 0
while b == 0:
b = random.randint(-(1 << bits), 1 << bits)
c = mod(a ** 2, b)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(d, "%i");' % c)
print('zmodsqr(c, a, b);')
print('assert(zcmp(c, d), == 0);')
print('zset(c, a);')
print('zmodsqr(a, a, b);')
print('assert(zcmp(a, d), == 0);')
print('zset(a, c);')
print('zset(c, b);')
print('zmodsqr(b, a, b);')
print('assert(zcmp(b, d), == 0);')
if a != 0:
c = mod(a ** 2, a)
print('zmodsqr(b, a, a);')
print('assert(zzero(b), == 1);')
print('zmodsqr(a, a, a);')
print('assert(zzero(a), == 1);')
def zcmpi():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(-(1 << 63), (1 << 63) - 1)
c = -1 if a < b else (1 if a > b else 0)
print('zsets(a, "%i");' % a)
if b >= 0:
print('assert(zcmpi(a, %iLL), == %i);' % (b, c))
else:
print('assert(zcmpi(a, %iLL - 1LL), == %i);' % (b + 1, c))
def zcmpu():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(0, (1 << 64) - 1)
c = -1 if a < b else (1 if a > b else 0)
print('zsets(a, "%i");' % a)
print('assert(zcmpu(a, %iULL), == %i);' % (b, c))
def zgcd():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
bits = random.randint(0, LIMIT)
b = random.randint(-(1 << bits), 1 << bits)
c = gcd(a, b)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(d, "%i");' % c)
print('zgcd(c, a, b);')
print('assert(zcmp(c, d), == 0);')
def zpow():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(1, 16)
c = a ** b
print('zsets(a, "%i");' % a)
print('zsetu(b, %i);' % b)
print('zsets(d, "%i");' % c)
print('zpow(c, a, b);')
print('zpow(a, a, b);')
print('assert(zcmp(c, d), == 0);')
print('assert(zcmp(a, d), == 0);')
print('zsets(a, "%i");' % a)
print('zpow(b, a, b);')
print('assert(zcmp(b, d), == 0);')
def zpowu():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(1, 16)
c = a ** b
print('zsets(a, "%i");' % a)
print('zsets(d, "%i");' % c)
print('zpowu(c, a, %i);' % b)
print('zpowu(a, a, %i);' % b)
print('assert(zcmp(c, d), == 0);')
print('assert(zcmp(a, d), == 0);')
def zmodpowu():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(1, 16)
bits = random.randint(0, LIMIT)
c = 0
while c == 0:
c = random.randint(-(1 << bits), 1 << bits)
d = mod(a ** b, c)
print('zsets(a, "%i");' % a)
print('zsets(c, "%i");' % c)
print('zsets(d, "%i");' % d)
print('zmodpowu(b, a, %i, c);' % b)
print('zmodpowu(a, a, %i, c);' % b)
print('assert(zcmp(b, d), == 0);')
print('assert(zcmp(a, d), == 0);')
print('zsets(a, "%i");' % a)
print('zmodpowu(c, a, %i, c);' % b)
print('assert(zcmp(c, d), == 0);')
def zmodpow():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
b = random.randint(1, 16)
bits = random.randint(0, LIMIT)
c = 0
while c == 0:
c = random.randint(-(1 << bits), 1 << bits)
d = mod(a ** b, c)
print('zsets(a, "%i");' % a)
print('zsets(b, "%i");' % b)
print('zsets(c, "%i");' % c)
print('zsets(d, "%i");' % d)
print('zmodpow(d, a, b, c);')
print('zmodpow(a, a, b, c);')
print('assert_s(zstr(d, buf, BUF_N), "%i");' % d)
print('assert(zcmp(a, d), == 0);')
print('zsets(a, "%i");' % a)
print('zmodpow(b, a, b, c);')
print('assert(zcmp(b, d), == 0);')
print('zsets(b, "%i");' % b)
print('zmodpow(c, a, b, c);')
print('assert(zcmp(c, d), == 0);')
def znot():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
sign = -(-1 if a < 0 else 1)
b = abs(a)
bits = 0
x = b
while x > 0:
bits += 1
x >>= 1
b = ~b
b &= (1 << bits) - 1
b *= sign
print('zsets(a, "%i");' % a)
print('zsets(c, "%i");' % b)
print('znot(b, a);')
print('znot(a, a);')
print('assert(zcmp(b, c), == 0);')
print('assert(zcmp(a, c), == 0);')
def zsave_zload():
bits = random.randint(0, LIMIT)
a = random.randint(-(1 << bits), 1 << bits)
print('zsets(a, "%i");' % a)
print('n = zsave(a, 0);')
print('assert_zu(zsave(a, buf), n);')
print('assert_zu(zload(b, buf), n);')
print('assert(zcmp(a, b), == 0);')
functions = [zzero, zsignum, zeven_nonzero, zodd_nonzero, zeven, zcmp, zcmpi, zcmpu, zcmpmag,
zodd, zabs, zneg, zlsh, zrsh, ztrunc, zsplit, zand, zor, zxor, zbits, zlsb, znot,
zbtest, zbset, zadd_unsigned, zsub_unsigned, zadd, zsub, zmul, zsqr, zdivmod,
zdiv, zmod, zmodmul, zmodsqr, zsave_zload, zgcd, zpow, zpowu, zmodpow, zmodpowu,
zstr_length, zstr] # untested: zptest, zrand
limits = [200]
for LIMIT in limits:
for function in functions:
print('/* %s */' % function.__qualname__)
for i in range(100):
function()
print()
print()
| 32.150685 | 94 | 0.467831 | 4,124 | 25,817 | 2.884336 | 0.032735 | 0.141488 | 0.083565 | 0.095502 | 0.841866 | 0.807987 | 0.796385 | 0.774275 | 0.75124 | 0.718873 | 0 | 0.024048 | 0.268738 | 25,817 | 802 | 95 | 32.190773 | 0.606017 | 0.0055 | 0 | 0.620783 | 0 | 0 | 0.33506 | 0.002766 | 0 | 0 | 0 | 0 | 0.206478 | 1 | 0.062078 | false | 0 | 0.00135 | 0 | 0.070175 | 0.54386 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
48653979564ed27235aca2aace98d97b3eb31326 | 8,125 | py | Python | tests/test_cifar_format.py | certiware/posemaro | 3f9bed71dd4a1053ea27bed1a85f2ff01fdcf800 | [
"MIT"
] | null | null | null | tests/test_cifar_format.py | certiware/posemaro | 3f9bed71dd4a1053ea27bed1a85f2ff01fdcf800 | [
"MIT"
] | null | null | null | tests/test_cifar_format.py | certiware/posemaro | 3f9bed71dd4a1053ea27bed1a85f2ff01fdcf800 | [
"MIT"
] | null | null | null | from unittest import TestCase
import os.path as osp

import numpy as np

from datumaro.components.dataset import Dataset
from datumaro.components.extractor import (
    AnnotationType, DatasetItem, Label, LabelCategories,
)
from datumaro.plugins.cifar_format import CifarConverter, CifarImporter
from datumaro.util.image import Image
from datumaro.util.test_utils import TestDir, compare_datasets

from .requirements import Requirements, mark_requirement


class CifarFormatTest(TestCase):
    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load(self):
        source_dataset = Dataset.from_iterable([
            DatasetItem(id='image_2', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(0)]
            ),
            DatasetItem(id='image_3', subset='test',
                image=np.ones((32, 32, 3))
            ),
            DatasetItem(id='image_4', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(1)]
            )
        ], categories=['label_0', 'label_1'])

        with TestDir() as test_dir:
            CifarConverter.convert(source_dataset, test_dir, save_images=True)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, source_dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load_without_saving_images(self):
        source_dataset = Dataset.from_iterable([
            DatasetItem(id='a', subset='train_1',
                annotations=[Label(0)]
            ),
            DatasetItem(id='b', subset='train_first',
                annotations=[Label(1)]
            ),
        ], categories={
            AnnotationType.label: LabelCategories.from_iterable(
                'label' + str(label) for label in range(2)),
        })

        with TestDir() as test_dir:
            CifarConverter.convert(source_dataset, test_dir, save_images=False)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, source_dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load_with_different_image_size(self):
        source_dataset = Dataset.from_iterable([
            DatasetItem(id='image_1',
                image=np.ones((10, 8, 3)),
                annotations=[Label(0)]
            ),
            DatasetItem(id='image_2',
                image=np.ones((32, 32, 3)),
                annotations=[Label(1)]
            ),
        ], categories={
            AnnotationType.label: LabelCategories.from_iterable(
                'label' + str(label) for label in range(2)),
        })

        with TestDir() as test_dir:
            CifarConverter.convert(source_dataset, test_dir, save_images=True)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, source_dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_dataset_with_cyrillic_and_spaces_in_filename(self):
        source_dataset = Dataset.from_iterable([
            DatasetItem(id="кириллица с пробелом",
                image=np.ones((32, 32, 3)),
                annotations=[Label(0)]
            ),
        ], categories=['label_0'])

        with TestDir() as test_dir:
            CifarConverter.convert(source_dataset, test_dir, save_images=True)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, source_dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load_image_with_arbitrary_extension(self):
        dataset = Dataset.from_iterable([
            DatasetItem(id='q/1', image=Image(path='q/1.JPEG',
                data=np.zeros((32, 32, 3)))),
            DatasetItem(id='a/b/c/2', image=Image(path='a/b/c/2.bmp',
                data=np.zeros((32, 32, 3)))),
        ], categories=[])

        with TestDir() as test_dir:
            CifarConverter.convert(dataset, test_dir, save_images=True)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load_empty_image(self):
        dataset = Dataset.from_iterable([
            DatasetItem(id='a', annotations=[Label(0)]),
            DatasetItem(id='b')
        ], categories=['label_0'])

        with TestDir() as test_dir:
            CifarConverter.convert(dataset, test_dir, save_images=True)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load_cifar100(self):
        source_dataset = Dataset.from_iterable([
            DatasetItem(id='image_2', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(0)]
            ),
            DatasetItem(id='image_3', subset='test',
                image=np.ones((32, 32, 3))
            ),
            DatasetItem(id='image_4', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(1)]
            )
        ], categories=[['class_0', 'superclass_0'], ['class_1', 'superclass_0']])

        with TestDir() as test_dir:
            CifarConverter.convert(source_dataset, test_dir, save_images=True)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, source_dataset, parsed_dataset,
                require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_save_and_load_cifar100_without_saving_images(self):
        source_dataset = Dataset.from_iterable([
            DatasetItem(id='a', subset='train_1',
                annotations=[Label(0)]
            ),
            DatasetItem(id='b', subset='train_1',
                annotations=[Label(1)]
            ),
        ], categories=[['class_0', 'superclass_0'], ['class_1', 'superclass_0']])

        with TestDir() as test_dir:
            CifarConverter.convert(source_dataset, test_dir, save_images=False)
            parsed_dataset = Dataset.import_from(test_dir, 'cifar')

            compare_datasets(self, source_dataset, parsed_dataset,
                require_images=True)
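
# Note on the category formats exercised above (comment added for clarity;
# the examples restate what the tests themselves pass in): CIFAR-10-style
# datasets use a flat list of label names, while the CIFAR-100 tests
# describe each label as a [class, superclass] pair, e.g.:
#
#   categories=['label_0', 'label_1']                           # CIFAR-10
#   categories=[['class_0', 'superclass_0'],
#               ['class_1', 'superclass_0']]                    # CIFAR-100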

DUMMY_DATASET_DIR = osp.join(osp.dirname(__file__), 'assets', 'cifar_dataset')


class CifarImporterTest(TestCase):
    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_import(self):
        expected_dataset = Dataset.from_iterable([
            DatasetItem(id='image_1', subset='train_1',
                image=np.ones((32, 32, 3)),
                annotations=[Label(0)]
            ),
            DatasetItem(id='image_2', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(1)]
            ),
            DatasetItem(id='image_3', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(3)]
            ),
            DatasetItem(id='image_4', subset='test',
                image=np.ones((32, 32, 3)),
                annotations=[Label(2)]
            ),
            DatasetItem(id='image_5', subset='test',
                image=np.array([[[1., 2., 3.], [4., 5., 6.]],
                                [[1., 2., 3.], [4., 5., 6.]]]),
                annotations=[Label(3)]
            )
        ], categories=['airplane', 'automobile', 'bird', 'cat'])

        dataset = Dataset.import_from(DUMMY_DATASET_DIR, 'cifar')

        compare_datasets(self, expected_dataset, dataset, require_images=True)

    @mark_requirement(Requirements.DATUM_GENERAL_REQ)
    def test_can_detect(self):
        self.assertTrue(CifarImporter.detect(DUMMY_DATASET_DIR))
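
# Run sketch (assuming standard unittest/pytest discovery, which this module
# does not configure itself):
#
#   python -m pytest tests/test_cifar_format.py
#
# The importer tests additionally require the bundled
# tests/assets/cifar_dataset fixture referenced by DUMMY_DATASET_DIR.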
| 38.875598 | 81 | 0.600738 | 895 | 8,125 | 5.185475 | 0.12514 | 0.036199 | 0.015083 | 0.033613 | 0.803706 | 0.794872 | 0.778711 | 0.759966 | 0.731523 | 0.712777 | 0 | 0.024645 | 0.280862 | 8,125 | 208 | 82 | 39.0625 | 0.769639 | 0 | 0 | 0.718391 | 0 | 0 | 0.052677 | 0 | 0 | 0 | 0 | 0 | 0.005747 | 1 | 0.057471 | false | 0 | 0.12069 | 0 | 0.189655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
486d849db7e6a9f7d9e126fd828abc8ac8b70e03 | 90,258 | py | Python | azure-mgmt-web/azure/mgmt/web/operations/app_service_plans_operations.py | SUSE/azure-sdk-for-python | 324f99d26dd6f4ee9793b9bf1d4d5f928e4b6c2f | [
"MIT"
] | 2 | 2020-07-29T14:22:17.000Z | 2020-11-06T18:47:40.000Z | azure-mgmt-web/azure/mgmt/web/operations/app_service_plans_operations.py | SUSE/azure-sdk-for-python | 324f99d26dd6f4ee9793b9bf1d4d5f928e4b6c2f | [
"MIT"
] | 1 | 2016-08-01T07:37:04.000Z | 2016-08-01T07:37:04.000Z | azure-mgmt-web/azure/mgmt/web/operations/app_service_plans_operations.py | SUSE/azure-sdk-for-python | 324f99d26dd6f4ee9793b9bf1d4d5f928e4b6c2f | [
"MIT"
] | 1 | 2020-12-12T21:04:41.000Z | 2020-12-12T21:04:41.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
from msrestazure.azure_operation import AzureOperationPoller
import uuid
from .. import models
class AppServicePlansOperations(object):
"""AppServicePlansOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: API Version. Constant value: "2016-09-01".
"""
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2016-09-01"
self.config = config
def list(
self, detailed=None, custom_headers=None, raw=False, **operation_config):
"""Get all App Service plans for a subcription.
Get all App Service plans for a subcription.
:param detailed: Specify <code>true</code> to return all App Service
plan properties. The default is <code>false</code>, which returns a
subset of the properties.
Retrieval of all properties may increase the API latency.
:type detailed: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`AppServicePlanPaged
<azure.mgmt.web.models.AppServicePlanPaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/serverfarms'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if detailed is not None:
query_parameters['detailed'] = self._serialize.query("detailed", detailed, 'bool')
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.AppServicePlanPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.AppServicePlanPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
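# Usage sketch (assuming a configured azure.mgmt.web.WebSiteManagementClient
# named `client`; illustrative only, not part of the generated module):
#
#   for plan in client.app_service_plans.list(detailed=True):
#       print(plan.name)
#
# The AppServicePlanPaged result follows next links transparently, so the
# loop covers every page of the subscription's plans.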
def list_by_resource_group(
self, resource_group_name, custom_headers=None, raw=False, **operation_config):
"""Get all App Service plans in a resource group.
Get all App Service plans in a resource group.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`AppServicePlanPaged
<azure.mgmt.web.models.AppServicePlanPaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.AppServicePlanPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.AppServicePlanPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
def get(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""Get an App Service plan.
Get an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`AppServicePlan <azure.mgmt.web.models.AppServicePlan>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('AppServicePlan', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
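# Usage sketch (names are placeholders; `client` as above):
#
#   plan = client.app_service_plans.get('my-resource-group', 'my-plan')
#
# Only an HTTP 200 is deserialized into an AppServicePlan; any other status
# raises CloudError carrying the service's x-ms-request-id.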
def create_or_update(
self, resource_group_name, name, app_service_plan, custom_headers=None, raw=False, **operation_config):
"""Creates or updates an App Service Plan.
Creates or updates an App Service Plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param app_service_plan: Details of the App Service plan.
:type app_service_plan: :class:`AppServicePlan
<azure.mgmt.web.models.AppServicePlan>`
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:rtype:
:class:`AzureOperationPoller<msrestazure.azure_operation.AzureOperationPoller>`
instance that returns :class:`AppServicePlan
<azure.mgmt.web.models.AppServicePlan>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(app_service_plan, 'AppServicePlan')
# Construct and send request
def long_running_send():
request = self._client.put(url, query_parameters)
return self._client.send(
request, header_parameters, body_content, **operation_config)
def get_long_running_status(status_link, headers=None):
request = self._client.get(status_link)
if headers:
request.headers.update(headers)
return self._client.send(
request, header_parameters, **operation_config)
def get_long_running_output(response):
if response.status_code not in [200, 202]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('AppServicePlan', response)
if response.status_code == 202:
deserialized = self._deserialize('AppServicePlan', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
if raw:
response = long_running_send()
return get_long_running_output(response)
long_running_operation_timeout = operation_config.get(
'long_running_operation_timeout',
self.config.long_running_operation_timeout)
return AzureOperationPoller(
long_running_send, get_long_running_output,
get_long_running_status, long_running_operation_timeout)
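# Usage sketch (assuming an AppServicePlan model built by the caller):
#
#   poller = client.app_service_plans.create_or_update(
#       'my-resource-group', 'my-plan', app_service_plan)
#   plan = poller.result()  # AzureOperationPoller blocks until the
#                           # long-running operation (200/202) completes
#
# Passing raw=True skips the poller and returns the first response directly.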
def delete(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""Delete an App Service plan.
Delete an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
def list_capabilities(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""List all capabilities of an App Service plan.
List all capabilities of an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: list of :class:`Capability <azure.mgmt.web.models.Capability>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/capabilities'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('[Capability]', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_hybrid_connection(
self, resource_group_name, name, namespace_name, relay_name, custom_headers=None, raw=False, **operation_config):
"""Retrieve a Hybrid Connection in use in an App Service plan.
Retrieve a Hybrid Connection in use in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param namespace_name: Name of the Service Bus namespace.
:type namespace_name: str
:param relay_name: Name of the Service Bus relay.
:type relay_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`HybridConnection
<azure.mgmt.web.models.HybridConnection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/hybridConnectionNamespaces/{namespaceName}/relays/{relayName}'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'namespaceName': self._serialize.url("namespace_name", namespace_name, 'str'),
'relayName': self._serialize.url("relay_name", relay_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('HybridConnection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def delete_hybrid_connection(
self, resource_group_name, name, namespace_name, relay_name, custom_headers=None, raw=False, **operation_config):
"""Delete a Hybrid Connection in use in an App Service plan.
Delete a Hybrid Connection in use in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param namespace_name: Name of the Service Bus namespace.
:type namespace_name: str
:param relay_name: Name of the Service Bus relay.
:type relay_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/hybridConnectionNamespaces/{namespaceName}/relays/{relayName}'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'namespaceName': self._serialize.url("namespace_name", namespace_name, 'str'),
'relayName': self._serialize.url("relay_name", relay_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200, 204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
def list_hybrid_connection_keys(
self, resource_group_name, name, namespace_name, relay_name, custom_headers=None, raw=False, **operation_config):
"""Get the send key name and value of a Hybrid Connection.
Get the send key name and value of a Hybrid Connection.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param namespace_name: The name of the Service Bus namespace.
:type namespace_name: str
:param relay_name: The name of the Service Bus relay.
:type relay_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`HybridConnectionKey
<azure.mgmt.web.models.HybridConnectionKey>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/hybridConnectionNamespaces/{namespaceName}/relays/{relayName}/listKeys'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'namespaceName': self._serialize.url("namespace_name", namespace_name, 'str'),
'relayName': self._serialize.url("relay_name", relay_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('HybridConnectionKey', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
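# Usage sketch (namespace and relay names are placeholders; the attribute
# names on HybridConnectionKey are assumed from the model name):
#
#   key = client.app_service_plans.list_hybrid_connection_keys(
#       'my-resource-group', 'my-plan', 'my-namespace', 'my-relay')
#   print(key.send_key_name, key.send_key_value)
#
# Note this issues a POST to the .../listKeys action rather than a GET.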
def list_web_apps_by_hybrid_connection(
self, resource_group_name, name, namespace_name, relay_name, custom_headers=None, raw=False, **operation_config):
"""Get all apps that use a Hybrid Connection in an App Service Plan.
Get all apps that use a Hybrid Connection in an App Service Plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param namespace_name: Name of the Hybrid Connection namespace.
:type namespace_name: str
:param relay_name: Name of the Hybrid Connection relay.
:type relay_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`StrPaged <azure.mgmt.web.models.StrPaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/hybridConnectionNamespaces/{namespaceName}/relays/{relayName}/sites'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'namespaceName': self._serialize.url("namespace_name", namespace_name, 'str'),
'relayName': self._serialize.url("relay_name", relay_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.StrPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.StrPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
def get_hybrid_connection_plan_limit(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""Get the maximum number of Hybrid Connections allowed in an App Service
plan.
Get the maximum number of Hybrid Connections allowed in an App Service
plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`HybridConnectionLimits
<azure.mgmt.web.models.HybridConnectionLimits>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/hybridConnectionPlanLimits/limit'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('HybridConnectionLimits', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def list_hybrid_connections(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""Retrieve all Hybrid Connections in use in an App Service plan.
Retrieve all Hybrid Connections in use in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`HybridConnectionPaged
<azure.mgmt.web.models.HybridConnectionPaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/hybridConnectionRelays'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.HybridConnectionPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.HybridConnectionPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
def list_metric_defintions(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""Get metrics that can be queried for an App Service plan, and their
definitions.
Get metrics that can be queried for an App Service plan, and their
definitions.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ResourceMetricDefinitionPaged
<azure.mgmt.web.models.ResourceMetricDefinitionPaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/metricdefinitions'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.ResourceMetricDefinitionPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.ResourceMetricDefinitionPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
def list_metrics(
self, resource_group_name, name, details=None, filter=None, custom_headers=None, raw=False, **operation_config):
"""Get metrics for an App Serice plan.
Get metrics for an App Serice plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param details: Specify <code>true</code> to include instance details.
The default is <code>false</code>.
:type details: bool
:param filter: Return only usages/metrics specified in the filter.
Filter conforms to odata syntax. Example: $filter=(name.value eq
'Metric1' or name.value eq 'Metric2') and startTime eq
'2014-01-01T00:00:00Z' and endTime eq '2014-12-31T23:59:59Z' and
timeGrain eq duration'[Hour|Minute|Day]'.
:type filter: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ResourceMetricPaged
<azure.mgmt.web.models.ResourceMetricPaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/metrics'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if details is not None:
query_parameters['details'] = self._serialize.query("details", details, 'bool')
if filter is not None:
query_parameters['$filter'] = self._serialize.query("filter", filter, 'str', skip_quote=True)
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.ResourceMetricPaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.ResourceMetricPaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
def restart_web_apps(
self, resource_group_name, name, soft_restart=None, custom_headers=None, raw=False, **operation_config):
"""Restart all apps in an App Service plan.
Restart all apps in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param soft_restart: Specify <code>true</code> to perform a soft
restart, which applies the configuration settings and restarts the apps
if necessary. The default is <code>false</code>, which always restarts
and reprovisions the apps.
:type soft_restart: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/restartSites'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if soft_restart is not None:
query_parameters['softRestart'] = self._serialize.query("soft_restart", soft_restart, 'bool')
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
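# Usage sketch (placeholders as above):
#
#   client.app_service_plans.restart_web_apps(
#       'my-resource-group', 'my-plan', soft_restart=True)
#
# A soft restart reapplies configuration and restarts apps only where
# needed; the default hard restart reprovisions every app in the plan.
# Success is an empty HTTP 204 response.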
def list_web_apps(
self, resource_group_name, name, skip_token=None, filter=None, top=None, custom_headers=None, raw=False, **operation_config):
"""Get all apps associated with an App Service plan.
Get all apps associated with an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param skip_token: Skip to a web app in the list of web apps associated
with the App Service plan. If specified, the resulting list will
contain web apps starting from (and including) the skipToken.
Otherwise, the resulting list contains web apps from the start of the
list.
:type skip_token: str
:param filter: Supported filter: $filter=state eq running. Returns
only web apps that are currently running.
:type filter: str
:param top: List page size. If specified, results are paged.
:type top: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`SitePaged <azure.mgmt.web.models.SitePaged>`
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
def internal_paging(next_link=None, raw=False):
if not next_link:
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/sites'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if skip_token is not None:
query_parameters['$skipToken'] = self._serialize.query("skip_token", skip_token, 'str')
if filter is not None:
query_parameters['$filter'] = self._serialize.query("filter", filter, 'str', skip_quote=True)
if top is not None:
query_parameters['$top'] = self._serialize.query("top", top, 'str')
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
else:
url = next_link
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
return response
# Deserialize response
deserialized = models.SitePaged(internal_paging, self._deserialize.dependencies)
if raw:
header_dict = {}
client_raw_response = models.SitePaged(internal_paging, self._deserialize.dependencies, header_dict)
return client_raw_response
return deserialized
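# Usage sketch (the filter string follows the convention documented above;
# names are placeholders):
#
#   running_sites = client.app_service_plans.list_web_apps(
#       'my-resource-group', 'my-plan', filter='state eq running')
#   for site in running_sites:
#       print(site.name)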
def list_vnets(
self, resource_group_name, name, custom_headers=None, raw=False, **operation_config):
"""Get all Virtual Networks associated with an App Service plan.
Get all Virtual Networks associated with an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: list of :class:`VnetInfo <azure.mgmt.web.models.VnetInfo>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('[VnetInfo]', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_vnet_from_server_farm(
self, resource_group_name, name, vnet_name, custom_headers=None, raw=False, **operation_config):
"""Get a Virtual Network associated with an App Service plan.
Get a Virtual Network associated with an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`VnetInfo <azure.mgmt.web.models.VnetInfo>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}'
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern='^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200, 404]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('VnetInfo', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
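    # A minimal usage sketch (illustrative only; assumes an authenticated
    # azure.mgmt.web.WebSiteManagementClient instance named `client` that
    # exposes this operations class as `client.app_service_plans`, and
    # resources that already exist):
    #
    #   vnet = client.app_service_plans.get_vnet_from_server_farm(
    #       'my-resource-group', 'my-plan', 'my-vnet')
    #   if vnet is not None:
    #       print(vnet.name)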
def get_vnet_gateway(
self, resource_group_name, name, vnet_name, gateway_name, custom_headers=None, raw=False, **operation_config):
"""Get a Virtual Network gateway.
Get a Virtual Network gateway.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param gateway_name: Name of the gateway. Only the 'primary' gateway
is supported.
:type gateway_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`VnetGateway <azure.mgmt.web.models.VnetGateway>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/gateways/{gatewayName}'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'gatewayName': self._serialize.url("gateway_name", gateway_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('VnetGateway', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def update_vnet_gateway(
self, resource_group_name, name, vnet_name, gateway_name, connection_envelope, custom_headers=None, raw=False, **operation_config):
"""Update a Virtual Network gateway.
Update a Virtual Network gateway.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param gateway_name: Name of the gateway. Only the 'primary' gateway
is supported.
:type gateway_name: str
:param connection_envelope: Definition of the gateway.
:type connection_envelope: :class:`VnetGateway
<azure.mgmt.web.models.VnetGateway>`
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`VnetGateway <azure.mgmt.web.models.VnetGateway>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/gateways/{gatewayName}'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'gatewayName': self._serialize.url("gateway_name", gateway_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(connection_envelope, 'VnetGateway')
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('VnetGateway', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
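    # A minimal usage sketch (illustrative; assumes `client` as above and that
    # VnetGateway is imported from azure.mgmt.web.models -- the field name is
    # an assumption about that model):
    #
    #   gateway = VnetGateway(vpn_package_uri='https://example.com/pkg.zip')
    #   updated = client.app_service_plans.update_vnet_gateway(
    #       'my-resource-group', 'my-plan', 'my-vnet', 'primary', gateway)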
def list_routes_for_vnet(
self, resource_group_name, name, vnet_name, custom_headers=None, raw=False, **operation_config):
"""Get all routes that are associated with a Virtual Network in an App
Service plan.
Get all routes that are associated with a Virtual Network in an App
Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: list of :class:`VnetRoute <azure.mgmt.web.models.VnetRoute>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/routes'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('[VnetRoute]', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_route_for_vnet(
self, resource_group_name, name, vnet_name, route_name, custom_headers=None, raw=False, **operation_config):
"""Get a Virtual Network route in an App Service plan.
Get a Virtual Network route in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param route_name: Name of the Virtual Network route.
:type route_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: list of :class:`VnetRoute <azure.mgmt.web.models.VnetRoute>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/routes/{routeName}'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'routeName': self._serialize.url("route_name", route_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200, 404]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('[VnetRoute]', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def create_or_update_vnet_route(
self, resource_group_name, name, vnet_name, route_name, route, custom_headers=None, raw=False, **operation_config):
"""Create or update a Virtual Network route in an App Service plan.
Create or update a Virtual Network route in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param route_name: Name of the Virtual Network route.
:type route_name: str
:param route: Definition of the Virtual Network route.
:type route: :class:`VnetRoute <azure.mgmt.web.models.VnetRoute>`
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`VnetRoute <azure.mgmt.web.models.VnetRoute>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/routes/{routeName}'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'routeName': self._serialize.url("route_name", route_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(route, 'VnetRoute')
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200, 400, 404]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('VnetRoute', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
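    # A minimal usage sketch (illustrative; the VnetRoute field names are
    # assumptions about the model in azure.mgmt.web.models):
    #
    #   route = VnetRoute(start_address='10.0.0.0', end_address='10.0.255.255',
    #                     route_type='STATIC')
    #   client.app_service_plans.create_or_update_vnet_route(
    #       'my-resource-group', 'my-plan', 'my-vnet', 'my-route', route)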
def delete_vnet_route(
self, resource_group_name, name, vnet_name, route_name, custom_headers=None, raw=False, **operation_config):
"""Delete a Virtual Network route in an App Service plan.
Delete a Virtual Network route in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param route_name: Name of the Virtual Network route.
:type route_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/routes/{routeName}'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'routeName': self._serialize.url("route_name", route_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200, 404]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
def update_vnet_route(
self, resource_group_name, name, vnet_name, route_name, route, custom_headers=None, raw=False, **operation_config):
"""Create or update a Virtual Network route in an App Service plan.
Create or update a Virtual Network route in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param vnet_name: Name of the Virtual Network.
:type vnet_name: str
:param route_name: Name of the Virtual Network route.
:type route_name: str
:param route: Definition of the Virtual Network route.
:type route: :class:`VnetRoute <azure.mgmt.web.models.VnetRoute>`
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`VnetRoute <azure.mgmt.web.models.VnetRoute>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/virtualNetworkConnections/{vnetName}/routes/{routeName}'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'vnetName': self._serialize.url("vnet_name", vnet_name, 'str'),
'routeName': self._serialize.url("route_name", route_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(route, 'VnetRoute')
# Construct and send request
request = self._client.patch(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200, 400, 404]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('VnetRoute', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def reboot_worker(
self, resource_group_name, name, worker_name, custom_headers=None, raw=False, **operation_config):
"""Reboot a worker machine in an App Service plan.
Reboot a worker machine in an App Service plan.
:param resource_group_name: Name of the resource group to which the
resource belongs.
:type resource_group_name: str
:param name: Name of the App Service plan.
:type name: str
:param worker_name: Name of worker machine, which typically starts
with RD.
:type worker_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: None
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/serverfarms/{name}/workers/{workerName}/reboot'
path_format_arguments = {
            'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+[^\.]$'),
'name': self._serialize.url("name", name, 'str'),
'workerName': self._serialize.url("worker_name", worker_name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
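    # A minimal usage sketch (illustrative; the worker instance name is an
    # assumption -- per the docstring above it typically starts with "RD"):
    #
    #   client.app_service_plans.reboot_worker(
    #       'my-resource-group', 'my-plan', 'RD00155D000000')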
| 47.831479 | 201 | 0.654479 | 9916 | 90258 | 5.763514 | 0.033078 | 0.03685 | 0.037182 | 0.032755 | 0.934682 | 0.92763 | 0.925636 | 0.91344 | 0.908278 | 0.904341 | 0 | 0.00475 | 0.244311 | 90258 | 1886 | 202 | 47.85684 | 0.833155 | 0.281825 | 0 | 0.854369 | 0 | 0.015102 | 0.189836 | 0.093662 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039914 | false | 0 | 0.005394 | 0 | 0.108954 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f9f4192433035087f095d6f0faa163a517120c1 | 8214 | py | Python | minecraft_monitor/third_party.py | iwcharlton/minecraft-monitor | fd4c10213bfab8c45a1c2884ef2f2fffd77b2e5b | [
"MIT"
] | null | null | null | minecraft_monitor/third_party.py | iwcharlton/minecraft-monitor | fd4c10213bfab8c45a1c2884ef2f2fffd77b2e5b | [
"MIT"
] | null | null | null | minecraft_monitor/third_party.py | iwcharlton/minecraft-monitor | fd4c10213bfab8c45a1c2884ef2f2fffd77b2e5b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Ultimately, I'll build this a bit better...
third_party_list = {
'Flask': {
'link': 'https://flask.palletsprojects.com/',
'license': [
'Copyright 2010 Pallets',
'Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:',
'* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.',
'* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.',
'* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.',
'THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.'
]
},
    'flask-appconfig': {
'link': 'https://github.com/mbr/flask-appconfig/',
'license': [
'Copyright 2015, Marc Brinkmann',
'Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:',
'The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.',
'THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.'
]
},
    'flask-bootstrap': {
'link': 'https://pythonhosted.org/Flask-Bootstrap/',
'license': [
'Copyright 2013, Marc Brinkmann',
'Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:',
'* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.',
'* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.',
'* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.',
'THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.'
]
},
    'flask-debug': {
'link': 'https://github.com/mbr/flask-debug/',
'license': [
'Copyright 2015, Marc Brinkmann',
'Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:',
'The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.',
'THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.'
]
},
    'Flask-Login': {
'link': 'https://github.com/maxcountryman/flask-login/',
'license': [
'Copyright (c) 2011 Matthew Frazier',
'Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:',
'The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.',
'THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.'
]
},
    'Flask-Nav': {
'link': 'https://pythonhosted.org/flask-nav/',
'license': [
'Copyright (c) 2015 Marc Brinkmann',
'Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:',
'The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.',
'THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.'
]
}
}
def get_third_party_list():
return third_party_list
def register_global_functions(app):
app.add_template_global(get_third_party_list, 'get_third_party_list')
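# A minimal usage sketch (illustrative; assumes a Flask app object and a
# Jinja2 template elsewhere in this project):
#
#   from flask import Flask
#   app = Flask(__name__)
#   register_global_functions(app)
#   # ...then, inside a template:
#   # {% for name, info in get_third_party_list().items() %}
#   #   <a href="{{ info['link'] }}">{{ name }}</a>
#   # {% endfor %}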
| 117.342857 | 763 | 0.76747 | 1201 | 8214 | 5.234804 | 0.169026 | 0.055989 | 0.021632 | 0.029267 | 0.931446 | 0.92222 | 0.913949 | 0.913949 | 0.913949 | 0.913949 | 0 | 0.003543 | 0.175432 | 8214 | 69 | 764 | 119.043478 | 0.924701 | 0.007792 | 0 | 0.46875 | 0 | 0.34375 | 0.916053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0 | 0.015625 | 0.046875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
6fab999435b51c67eb139fd5d443900e79397d17 | 201 | py | Python | lib/python/treadmill_vagrant/bootstrap/__init__.py | vrautela/treadmill-vagrant | efdd50e28d6118adfc9bbfb1a243f0eb39c47eb8 | [
"Apache-2.0"
] | null | null | null | lib/python/treadmill_vagrant/bootstrap/__init__.py | vrautela/treadmill-vagrant | efdd50e28d6118adfc9bbfb1a243f0eb39c47eb8 | [
"Apache-2.0"
] | null | null | null | lib/python/treadmill_vagrant/bootstrap/__init__.py | vrautela/treadmill-vagrant | efdd50e28d6118adfc9bbfb1a243f0eb39c47eb8 | [
"Apache-2.0"
] | null | null | null | """Treadmill Vagrant/CentOS7 bootstrap module.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
| 25.125 | 46 | 0.850746 | 24 | 201 | 6.333333 | 0.583333 | 0.263158 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005587 | 0.109453 | 201 | 7 | 47 | 28.714286 | 0.843575 | 0.21393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6ff7eff49278c682ed7a55a0b6c87e110eb3d99b | 307 | py | Python | codes/course1/demo3_5.py | BigShuang/big-shuang-python-introductory-course | c4fd1343c4c539567180072c749b68bda7c28075 | [
"MIT"
] | null | null | null | codes/course1/demo3_5.py | BigShuang/big-shuang-python-introductory-course | c4fd1343c4c539567180072c749b68bda7c28075 | [
"MIT"
] | null | null | null | codes/course1/demo3_5.py | BigShuang/big-shuang-python-introductory-course | c4fd1343c4c539567180072c749b68bda7c28075 | [
"MIT"
] | null | null | null | n = 100
k = 1
s = 0
for i in range(n):
s += k / (i + 1)
k = -k
print("n=%s: S=%s" % (n, s))
n = 1000
k = 1
s = 0
for i in range(n):
s += k / (i + 1)
k = -k
print("n=%s: S=%s" % (n, s))
n = 10000
k = 1
s = 0
for i in range(n):
s += k / (i + 1)
k = -k
print("n=%s: S=%s" % (n, s))
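# Note: s is the nth partial sum of the alternating harmonic series
# 1 - 1/2 + 1/3 - 1/4 + ..., so the printed values approach ln(2)
# (about 0.693147) as n grows.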
| 11.807692 | 28 | 0.371336 | 72 | 307 | 1.583333 | 0.180556 | 0.157895 | 0.078947 | 0.105263 | 0.885965 | 0.885965 | 0.885965 | 0.885965 | 0.885965 | 0.885965 | 0 | 0.107692 | 0.364821 | 307 | 25 | 29 | 12.28 | 0.476923 | 0 | 0 | 0.857143 | 0 | 0 | 0.09772 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
965bceb6de1e93c2ea53be38f5bdb0e3e542da61 | 37814 | py | Python | config/muxcable.py | monipko/sonic-utilities | ad801bfb81633812b4aa25f45bdd555a27121845 | [
"Apache-2.0"
] | null | null | null | config/muxcable.py | monipko/sonic-utilities | ad801bfb81633812b4aa25f45bdd555a27121845 | [
"Apache-2.0"
] | 1 | 2021-01-10T15:12:29.000Z | 2021-01-10T15:12:29.000Z | config/muxcable.py | monipko/sonic-utilities | ad801bfb81633812b4aa25f45bdd555a27121845 | [
"Apache-2.0"
] | null | null | null | import json
import os
import sys
import click
import re
import utilities_common.cli as clicommon
from sonic_py_common import multi_asic
from swsscommon.swsscommon import SonicV2Connector, ConfigDBConnector
from tabulate import tabulate
from utilities_common import platform_sfputil_helper
platform_sfputil = None
REDIS_TIMEOUT_MSECS = 0
CONFIG_SUCCESSFUL = 0
CONFIG_FAIL = 1
VENDOR_NAME = "Credo"
VENDOR_MODEL_REGEX = re.compile(r"CAC\w{3}321P2P\w{2}MS")
# Helper functions
def get_value_for_key_in_dict(mdict, port, key, table_name):
value = mdict.get(key, None)
if value is None:
click.echo("could not retrieve key {} value for port {} inside table {}".format(key, port, table_name))
sys.exit(CONFIG_FAIL)
return value
def get_value_for_key_in_config_tbl(config_db, port, key, table):
    info_dict = config_db.get_entry(table, port)
    if info_dict is None:
        click.echo("could not retrieve key {} value for port {} inside table {}".format(key, port, table))
        sys.exit(CONFIG_FAIL)

    value = get_value_for_key_in_dict(info_dict, port, key, table)

    return value


#
# 'muxcable' command ("config muxcable")
#
@click.group(name='muxcable', cls=clicommon.AliasedGroup)
def muxcable():
"""SONiC command line - 'show muxcable' command"""
if os.geteuid() != 0:
click.echo("Root privileges are required for this operation")
sys.exit(CONFIG_FAIL)
global platform_sfputil
# Load platform-specific sfputil class
platform_sfputil_helper.load_platform_sfputil()
# Load port info
platform_sfputil_helper.platform_sfputil_read_porttab_mappings()
platform_sfputil = platform_sfputil_helper.platform_sfputil
def lookup_statedb_and_update_configdb(per_npu_statedb, config_db, port, state_cfg_val, port_status_dict):
muxcable_statedb_dict = per_npu_statedb.get_all(per_npu_statedb.STATE_DB, 'MUX_CABLE_TABLE|{}'.format(port))
configdb_state = get_value_for_key_in_config_tbl(config_db, port, "state", "MUX_CABLE")
ipv4_value = get_value_for_key_in_config_tbl(config_db, port, "server_ipv4", "MUX_CABLE")
ipv6_value = get_value_for_key_in_config_tbl(config_db, port, "server_ipv6", "MUX_CABLE")
state = get_value_for_key_in_dict(muxcable_statedb_dict, port, "state", "MUX_CABLE_TABLE")
if (state == "active" and configdb_state == "active") or (state == "standby" and configdb_state == "active") or (state == "unknown" and configdb_state == "active"):
if state_cfg_val == "active":
# status is already active, so right back error
port_status_dict[port] = 'OK'
if state_cfg_val == "auto":
# display ok and write to cfgdb auto
port_status_dict[port] = 'OK'
config_db.set_entry("MUX_CABLE", port, {"state": "auto",
"server_ipv4": ipv4_value, "server_ipv6": ipv6_value})
elif state == "active" and configdb_state == "auto":
if state_cfg_val == "active":
# make the state active and write back OK
config_db.set_entry("MUX_CABLE", port, {"state": "active",
"server_ipv4": ipv4_value, "server_ipv6": ipv6_value})
port_status_dict[port] = 'OK'
if state_cfg_val == "auto":
# dont write anything to db, write OK to user
port_status_dict[port] = 'OK'
elif (state == "standby" and configdb_state == "auto") or (state == "unknown" and configdb_state == "auto"):
if state_cfg_val == "active":
# make the state active
config_db.set_entry("MUX_CABLE", port, {"state": "active",
"server_ipv4": ipv4_value, "server_ipv6": ipv6_value})
port_status_dict[port] = 'INPROGRESS'
if state_cfg_val == "auto":
# dont write anything to db
port_status_dict[port] = 'OK'
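# In summary, lookup_statedb_and_update_configdb implements these transitions:
# a "mode active" request writes "active" to CONFIG_DB for auto-configured
# ports (reporting INPROGRESS until the observed state catches up), while a
# "mode auto" request writes "auto" back where needed and always reports OK.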
# 'muxcable' command ("config muxcable mode <port|all> active|auto")
@muxcable.command()
@click.argument('state', metavar='<operation_status>', required=True, type=click.Choice(["active", "auto"]))
@click.argument('port', metavar='<port_name>', required=True, default=None)
@click.option('--json', 'json_output', required=False, is_flag=True, type=click.BOOL)
def mode(state, port, json_output):
"""Show muxcable summary information"""
port_table_keys = {}
y_cable_asic_table_keys = {}
per_npu_configdb = {}
per_npu_statedb = {}
mux_tbl_cfg_db = {}
    # Getting all front asic namespaces and the corresponding config and state DB connectors
namespaces = multi_asic.get_front_end_namespaces()
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
# replace these with correct macros
per_npu_configdb[asic_id] = ConfigDBConnector(use_unix_socket_path=True, namespace=namespace)
per_npu_configdb[asic_id].connect()
per_npu_statedb[asic_id] = SonicV2Connector(use_unix_socket_path=True, namespace=namespace)
per_npu_statedb[asic_id].connect(per_npu_statedb[asic_id].STATE_DB)
mux_tbl_cfg_db[asic_id] = per_npu_configdb[asic_id].get_table("MUX_CABLE")
port_table_keys[asic_id] = per_npu_statedb[asic_id].keys(
per_npu_statedb[asic_id].STATE_DB, 'MUX_CABLE_TABLE|*')
if port is not None and port != "all":
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
sys.exit(CONFIG_FAIL)
if per_npu_statedb[asic_index] is not None:
y_cable_asic_table_keys = port_table_keys[asic_index]
logical_key = "MUX_CABLE_TABLE|{}".format(port)
if logical_key in y_cable_asic_table_keys:
port_status_dict = {}
lookup_statedb_and_update_configdb(
per_npu_statedb[asic_index], per_npu_configdb[asic_index], port, state, port_status_dict)
if json_output:
click.echo("{}".format(json.dumps(port_status_dict, indent=4)))
else:
headers = ['port', 'state']
data = sorted([(k, v) for k, v in port_status_dict.items()])
click.echo(tabulate(data, headers=headers))
sys.exit(CONFIG_SUCCESSFUL)
else:
click.echo("this is not a valid port present on mux_cable".format(port))
sys.exit(CONFIG_FAIL)
else:
click.echo("there is not a valid asic table for this asic_index".format(asic_index))
sys.exit(CONFIG_FAIL)
elif port == "all" and port is not None:
port_status_dict = {}
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for key in port_table_keys[asic_id]:
logical_port = key.split("|")[1]
lookup_statedb_and_update_configdb(
per_npu_statedb[asic_id], per_npu_configdb[asic_id], logical_port, state, port_status_dict)
if json_output:
click.echo("{}".format(json.dumps(port_status_dict, indent=4)))
else:
data = sorted([(k, v) for k, v in port_status_dict.items()])
headers = ['port', 'state']
click.echo(tabulate(data, headers=headers))
sys.exit(CONFIG_SUCCESSFUL)
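# Example invocations of the command above (port names are illustrative):
#   config muxcable mode active Ethernet0
#   config muxcable mode auto all --json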
@muxcable.group(cls=clicommon.AbbreviationGroup)
def prbs():
"""Enable/disable PRBS mode on a port"""
pass
@prbs.command()
@click.argument('port', required=True, default=None, type=click.INT)
@click.argument('target', required=True, default=None, type=click.INT)
@click.argument('mode_value', required=True, default=None, type=click.INT)
@click.argument('lane_map', required=True, default=None, type=click.INT)
def enable(port, target, mode_value, lane_map):
"""Enable PRBS mode on a port"""
import sonic_y_cable.y_cable
res = sonic_y_cable.y_cable.enable_prbs_mode(port, target, mode_value, lane_map)
if res != True:
click.echo("PRBS config unsuccesful")
sys.exit(CONFIG_FAIL)
click.echo("PRBS config sucessful")
sys.exit(CONFIG_SUCCESSFUL)
@prbs.command()
@click.argument('port', required=True, default=None, type=click.INT)
@click.argument('target', required=True, default=None, type=click.INT)
def disable(port, target):
"""Disable PRBS mode on a port"""
import sonic_y_cable.y_cable
res = sonic_y_cable.y_cable.disable_prbs_mode(port, target)
if res != True:
click.echo("PRBS disable unsuccesful")
sys.exit(CONFIG_FAIL)
click.echo("PRBS disable sucessful")
sys.exit(CONFIG_SUCCESSFUL)
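# Example invocations (the integers -- physical port, target, PRBS mode value,
# lane map -- are illustrative values, not defaults):
#   config muxcable prbs enable 1 0 3 15
#   config muxcable prbs disable 1 0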
@muxcable.group(cls=clicommon.AbbreviationGroup)
def loopback():
"""Enable/disable loopback mode on a port"""
pass
@loopback.command()
@click.argument('port', required=True, default=None, type=click.INT)
@click.argument('target', required=True, default=None, type=click.INT)
@click.argument('lane_map', required=True, default=None, type=click.INT)
def enable(port, target, lane_map):
"""Enable loopback mode on a port"""
import sonic_y_cable.y_cable
res = sonic_y_cable.y_cable.enable_loopback_mode(port, target, lane_map)
if res != True:
click.echo("loopback config unsuccesful")
sys.exit(CONFIG_FAIL)
click.echo("loopback config sucessful")
sys.exit(CONFIG_SUCCESSFUL)
@loopback.command()
@click.argument('port', required=True, default=None, type=click.INT)
@click.argument('target', required=True, default=None, type=click.INT)
def disable(port, target):
"""Disable loopback mode on a port"""
import sonic_y_cable.y_cable
res = sonic_y_cable.y_cable.disable_loopback_mode(port, target)
if res != True:
click.echo("loopback disable unsuccesful")
sys.exit(CONFIG_FAIL)
click.echo("loopback disable sucessful")
sys.exit(CONFIG_SUCCESSFUL)
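# Example invocations (integer arguments are illustrative values):
#   config muxcable loopback enable 1 0 15
#   config muxcable loopback disable 1 0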
@muxcable.group(cls=clicommon.AbbreviationGroup)
def hwmode():
"""Configure muxcable hardware directly"""
pass
@hwmode.command()
@click.argument('state', metavar='<operation_status>', required=True, type=click.Choice(["active", "standby"]))
@click.argument('port', metavar='<port_name>', required=True, default=None)
def state(state, port):
"""Configure the muxcable mux state {active/standby}"""
per_npu_statedb = {}
transceiver_table_keys = {}
transceiver_dict = {}
    # Getting all front asic namespaces and the corresponding config and state DB connectors
namespaces = multi_asic.get_front_end_namespaces()
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
per_npu_statedb[asic_id] = SonicV2Connector(use_unix_socket_path=False, namespace=namespace)
per_npu_statedb[asic_id].connect(per_npu_statedb[asic_id].STATE_DB)
transceiver_table_keys[asic_id] = per_npu_statedb[asic_id].keys(
per_npu_statedb[asic_id].STATE_DB, 'TRANSCEIVER_INFO|*')
if port is not None and port != "all":
click.confirm(('Muxcable at port {} will be changed to {} state. Continue?'.format(port, state)), abort=True)
logical_port_list = platform_sfputil_helper.get_logical_list()
if port not in logical_port_list:
click.echo("ERR: This is not a valid port, valid ports ({})".format(", ".join(logical_port_list)))
sys.exit(CONFIG_FAIL)
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil_helper.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
sys.exit(CONFIG_FAIL)
if platform_sfputil is not None:
physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port)
if not isinstance(physical_port_list, list):
click.echo(("ERR: Unable to locate physical port information for {}".format(port)))
sys.exit(CONFIG_FAIL)
if len(physical_port_list) != 1:
click.echo("ERR: Found multiple physical ports ({}) associated with {}".format(
", ".join(physical_port_list), port))
sys.exit(CONFIG_FAIL)
transceiver_dict[asic_index] = per_npu_statedb[asic_index].get_all(
per_npu_statedb[asic_index].STATE_DB, 'TRANSCEIVER_INFO|{}'.format(port))
vendor_value = get_value_for_key_in_dict(transceiver_dict[asic_index], port, "manufacturer", "TRANSCEIVER_INFO")
model_value = get_value_for_key_in_dict(transceiver_dict[asic_index], port, "model", "TRANSCEIVER_INFO")
""" This check is required for checking whether or not this port is connected to a Y cable
or not. The check gives a way to differentiate between non Y cable ports and Y cable ports.
        TODO: this should be removed once there is support for multiple vendors on Y cable"""
if vendor_value != VENDOR_NAME or not re.match(VENDOR_MODEL_REGEX, model_value):
click.echo("ERR: Got invalid vendor value and model for port {}".format(port))
sys.exit(CONFIG_FAIL)
physical_port = physical_port_list[0]
logical_port_list_for_physical_port = platform_sfputil_helper.get_physical_to_logical()
logical_port_list_per_port = logical_port_list_for_physical_port.get(physical_port, None)
""" This check is required for checking whether or not this logical port is the one which is
actually mapped to physical port and by convention it is always the first port.
TODO: this should be removed with more logic to check which logical port maps to actual physical port
being used"""
if port != logical_port_list_per_port[0]:
click.echo("ERR: This logical Port {} is not on a muxcable".format(port))
sys.exit(CONFIG_FAIL)
import sonic_y_cable.y_cable
read_side = sonic_y_cable.y_cable.check_read_side(physical_port)
if read_side == False or read_side == -1:
click.echo(("ERR: Unable to get read_side for the cable port {}".format(port)))
sys.exit(CONFIG_FAIL)
mux_direction = sonic_y_cable.y_cable.check_mux_direction(physical_port)
if mux_direction == False or mux_direction == -1:
click.echo(("ERR: Unable to get mux direction for the cable port {}".format(port)))
sys.exit(CONFIG_FAIL)
        if int(read_side) == 1:
            if state == "active":
                res = sonic_y_cable.y_cable.toggle_mux_to_torA(physical_port)
            elif state == "standby":
                res = sonic_y_cable.y_cable.toggle_mux_to_torB(physical_port)
        elif int(read_side) == 2:
            if state == "active":
                res = sonic_y_cable.y_cable.toggle_mux_to_torB(physical_port)
            elif state == "standby":
                res = sonic_y_cable.y_cable.toggle_mux_to_torA(physical_port)

        if res == False:
            click.echo("ERR: Unable to toggle port {} to {}".format(port, state))
            sys.exit(CONFIG_FAIL)
        click.echo("Success in toggling port {} to {}".format(port, state))
elif port == "all" and port is not None:
click.confirm(('Muxcables at all ports will be changed to {} state. Continue?'.format(state)), abort=True)
logical_port_list = platform_sfputil_helper.get_logical_list()
rc = True
for port in logical_port_list:
if platform_sfputil is not None:
physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port)
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil_helper.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
if not isinstance(physical_port_list, list):
click.echo(("ERR: Unable to locate physical port information for {}".format(port)))
continue
if len(physical_port_list) != 1:
click.echo("ERR: Found multiple physical ports ({}) associated with {}".format(
", ".join(physical_port_list), port))
continue
transceiver_dict[asic_index] = per_npu_statedb[asic_index].get_all(
per_npu_statedb[asic_index].STATE_DB, 'TRANSCEIVER_INFO|{}'.format(port))
vendor_value = transceiver_dict[asic_index].get("manufacturer", None)
model_value = transceiver_dict[asic_index].get("model", None)
""" This check is required for checking whether or not this port is connected to a Y cable
or not. The check gives a way to differentiate between non Y cable ports and Y cable ports.
            TODO: this should be removed once there is support for multiple vendors on Y cable"""
if vendor_value != VENDOR_NAME or not re.match(VENDOR_MODEL_REGEX, model_value):
continue
physical_port = physical_port_list[0]
logical_port_list_for_physical_port = platform_sfputil_helper.get_physical_to_logical()
logical_port_list_per_port = logical_port_list_for_physical_port.get(physical_port, None)
""" This check is required for checking whether or not this logical port is the one which is
actually mapped to physical port and by convention it is always the first port.
TODO: this should be removed with more logic to check which logical port maps to actual physical port
being used"""
if port != logical_port_list_per_port[0]:
continue
import sonic_y_cable.y_cable
read_side = sonic_y_cable.y_cable.check_read_side(physical_port)
if read_side == False or read_side == -1:
click.echo(("ERR: Unable to get read side for the cable port {}".format(port)))
rc = False
continue
mux_direction = sonic_y_cable.y_cable.check_mux_direction(physical_port)
if mux_direction == False or mux_direction == -1:
click.echo(("ERR: Unable to get mux direction for the cable port {}".format(port)))
rc = False
continue
            if int(read_side) == 1:
                if state == "active":
                    res = sonic_y_cable.y_cable.toggle_mux_to_torA(physical_port)
                elif state == "standby":
                    res = sonic_y_cable.y_cable.toggle_mux_to_torB(physical_port)
            elif int(read_side) == 2:
                if state == "active":
                    res = sonic_y_cable.y_cable.toggle_mux_to_torB(physical_port)
                elif state == "standby":
                    res = sonic_y_cable.y_cable.toggle_mux_to_torA(physical_port)

            if res == False:
                rc = False
                click.echo("ERR: Unable to toggle port {} to {}".format(port, state))
            else:
                click.echo("Success in toggling port {} to {}".format(port, state))
if rc == False:
click.echo("ERR: Unable to toggle one or more ports to {}".format(state))
sys.exit(CONFIG_FAIL)
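# Example invocations of the hwmode state command (port name is illustrative):
#   config muxcable hwmode state active Ethernet4
#   config muxcable hwmode state standby all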
@hwmode.command()
@click.argument('state', metavar='<operation_status>', required=True, type=click.Choice(["auto", "manual"]))
@click.argument('port', metavar='<port_name>', required=True, default=None)
def setswitchmode(state, port):
"""Configure the muxcable mux switching mode {auto/manual}"""
per_npu_statedb = {}
transceiver_dict = {}
    # Getting all front asic namespaces and the corresponding config and state DB connectors
namespaces = multi_asic.get_front_end_namespaces()
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
per_npu_statedb[asic_id] = SonicV2Connector(use_unix_socket_path=False, namespace=namespace)
per_npu_statedb[asic_id].connect(per_npu_statedb[asic_id].STATE_DB)
if port is not None and port != "all":
click.confirm(('Muxcable at port {} will be changed to {} switching mode. Continue?'.format(port, state)), abort=True)
logical_port_list = platform_sfputil_helper.get_logical_list()
if port not in logical_port_list:
click.echo("ERR: This is not a valid port, valid ports ({})".format(", ".join(logical_port_list)))
sys.exit(CONFIG_FAIL)
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil_helper.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
sys.exit(CONFIG_FAIL)
if platform_sfputil is not None:
physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port)
if not isinstance(physical_port_list, list):
click.echo(("ERR: Unable to locate physical port information for {}".format(port)))
sys.exit(CONFIG_FAIL)
if len(physical_port_list) != 1:
click.echo("ERR: Found multiple physical ports ({}) associated with {}".format(
", ".join(physical_port_list), port))
sys.exit(CONFIG_FAIL)
transceiver_dict[asic_index] = per_npu_statedb[asic_index].get_all(
per_npu_statedb[asic_index].STATE_DB, 'TRANSCEIVER_INFO|{}'.format(port))
vendor_value = get_value_for_key_in_dict(transceiver_dict[asic_index], port, "manufacturer", "TRANSCEIVER_INFO")
model_value = get_value_for_key_in_dict(transceiver_dict[asic_index], port, "model", "TRANSCEIVER_INFO")
""" This check is required for checking whether or not this port is connected to a Y cable
or not. The check gives a way to differentiate between non Y cable ports and Y cable ports.
        TODO: this should be removed once there is support for multiple vendors on Y cable"""
if vendor_value != VENDOR_NAME or not re.match(VENDOR_MODEL_REGEX, model_value):
click.echo("ERR: Got invalid vendor value and model for port {}".format(port))
sys.exit(CONFIG_FAIL)
physical_port = physical_port_list[0]
logical_port_list_for_physical_port = platform_sfputil_helper.get_physical_to_logical()
logical_port_list_per_port = logical_port_list_for_physical_port.get(physical_port, None)
""" This check is required for checking whether or not this logical port is the one which is
actually mapped to physical port and by convention it is always the first port.
TODO: this should be removed with more logic to check which logical port maps to actual physical port
being used"""
if port != logical_port_list_per_port[0]:
click.echo("ERR: This logical Port {} is not on a muxcable".format(port))
sys.exit(CONFIG_FAIL)
if state == "auto":
mode = sonic_y_cable.y_cable.SWITCHING_MODE_AUTO
elif state == "manual":
mode = sonic_y_cable.y_cable.SWITCHING_MODE_MANUAL
import sonic_y_cable.y_cable
result = sonic_y_cable.y_cable.set_switching_mode(physical_port, mode)
if result == False:
click.echo(("ERR: Unable to set switching mode for the cable port {}".format(port)))
sys.exit(CONFIG_FAIL)
click.echo("Success in switching mode on port {} to {}".format(port, state))
elif port == "all" and port is not None:
click.confirm(('Muxcable at port {} will be changed to {} switching mode. Continue?'.format(port, state)), abort=True)
logical_port_list = platform_sfputil_helper.get_logical_list()
rc = True
for port in logical_port_list:
if platform_sfputil is not None:
physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port)
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil_helper.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
if not isinstance(physical_port_list, list):
click.echo(("ERR: Unable to locate physical port information for {}".format(port)))
continue
if len(physical_port_list) != 1:
click.echo("ERR: Found multiple physical ports ({}) associated with {}".format(
", ".join(physical_port_list), port))
continue
transceiver_dict[asic_index] = per_npu_statedb[asic_index].get_all(
per_npu_statedb[asic_index].STATE_DB, 'TRANSCEIVER_INFO|{}'.format(port))
vendor_value = transceiver_dict[asic_index].get("manufacturer", None)
model_value = transceiver_dict[asic_index].get("model", None)
""" This check is required for checking whether or not this port is connected to a Y cable
or not. The check gives a way to differentiate between non Y cable ports and Y cable ports.
TODO: this should be removed once their is support for multiple vendors on Y cable"""
            if vendor_value != VENDOR_NAME or model_value is None or not re.match(VENDOR_MODEL_REGEX, model_value):
                continue
physical_port = physical_port_list[0]
logical_port_list_for_physical_port = platform_sfputil_helper.get_physical_to_logical()
logical_port_list_per_port = logical_port_list_for_physical_port.get(physical_port, None)
""" This check is required for checking whether or not this logical port is the one which is
actually mapped to physical port and by convention it is always the first port.
TODO: this should be removed with more logic to check which logical port maps to actual physical port
being used"""
if port != logical_port_list_per_port[0]:
continue
if state == "auto":
mode = sonic_y_cable.y_cable.SWITCHING_MODE_AUTO
elif state == "manual":
mode = sonic_y_cable.y_cable.SWITCHING_MODE_MANUAL
import sonic_y_cable.y_cable
result = sonic_y_cable.y_cable.set_switching_mode(physical_port, mode)
if result == False:
rc = False
click.echo("ERR: Unable to set switching mode on port {} to {}".format(port, state))
click.echo("Success in switching mode on port {} to {}".format(port, state))
        if rc is False:
click.echo("ERR: Unable to set switching mode one or more ports to {}".format(state))
sys.exit(CONFIG_FAIL)
def get_per_npu_statedb(per_npu_statedb, port_table_keys):
    # Get all front-end ASIC namespaces and the corresponding config and state DB connectors
namespaces = multi_asic.get_front_end_namespaces()
for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
# replace these with correct macros
per_npu_statedb[asic_id] = SonicV2Connector(use_unix_socket_path=True, namespace=namespace)
per_npu_statedb[asic_id].connect(per_npu_statedb[asic_id].STATE_DB)
port_table_keys[asic_id] = per_npu_statedb[asic_id].keys(
per_npu_statedb[asic_id].STATE_DB, 'MUX_CABLE_TABLE|*')
def get_physical_port_list(port):
physical_port_list = []
if platform_sfputil is not None:
physical_port_list = platform_sfputil_helper.logical_port_name_to_physical_port_list(port)
asic_index = None
if platform_sfputil is not None:
asic_index = platform_sfputil_helper.get_asic_id_for_logical_port(port)
if asic_index is None:
# TODO this import is only for unit test purposes, and should be removed once sonic_platform_base
# is fully mocked
import sonic_platform_base.sonic_sfp.sfputilhelper
asic_index = sonic_platform_base.sonic_sfp.sfputilhelper.SfpUtilHelper().get_asic_id_for_logical_port(port)
if asic_index is None:
click.echo("Got invalid asic index for port {}, cant retreive mux status".format(port))
if not isinstance(physical_port_list, list):
click.echo(("ERR: Unable to locate physical port information for {}".format(port)))
sys.exit(CONFIG_FAIL)
if len(physical_port_list) != 1:
click.echo("ERR: Found multiple physical ports ({}) associated with {}".format(
", ".join(physical_port_list), port))
sys.exit(CONFIG_FAIL)
return (physical_port_list, asic_index)
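# A sketch of how the two helpers above combine (the port name "Ethernet0" is
# illustrative only, not taken from the original):
#   per_npu_statedb, port_table_keys = {}, {}
#   get_per_npu_statedb(per_npu_statedb, port_table_keys)
#   physical_port_list, asic_index = get_physical_port_list("Ethernet0")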
def perform_download_firmware(physical_port, fwfile, port):
import sonic_y_cable.y_cable
result = sonic_y_cable.y_cable.download_firmware(physical_port, fwfile)
if result == sonic_y_cable.y_cable.FIRMWARE_DOWNLOAD_SUCCESS:
click.echo("firmware download successful {}".format(port))
return True
else:
click.echo("firmware download failure {}".format(port))
return False
def perform_activate_firmware(physical_port, port):
import sonic_y_cable.y_cable
result = sonic_y_cable.y_cable.activate_firmware(physical_port)
if result == sonic_y_cable.y_cable.FIRMWARE_ACTIVATE_SUCCESS:
click.echo("firmware activate successful for {}".format(port))
return True
else:
click.echo("firmware activate failure for {}".format(port))
return False
def perform_rollback_firmware(physical_port, port):
import sonic_y_cable.y_cable
result = sonic_y_cable.y_cable.rollback_firmware(physical_port)
if result == sonic_y_cable.y_cable.FIRMWARE_ROLLBACK_SUCCESS:
click.echo("firmware rollback successful {}".format(port))
return True
else:
click.echo("firmware rollback failure {}".format(port))
return False
@muxcable.group(cls=clicommon.AbbreviationGroup)
def firmware():
"""Configure muxcable firmware command"""
pass
@firmware.command()
@click.argument('fwfile', metavar='<firmware_file>', required=True)
@click.argument('port', metavar='<port_name>', required=True, default=None)
def download(fwfile, port):
"""Config muxcable firmware download"""
per_npu_statedb = {}
y_cable_asic_table_keys = {}
port_table_keys = {}
get_per_npu_statedb(per_npu_statedb, port_table_keys)
if port is not None and port != "all":
physical_port_list = []
physical_port_list, asic_index = get_physical_port_list(port)
physical_port = physical_port_list[0]
if per_npu_statedb[asic_index] is not None:
y_cable_asic_table_keys = port_table_keys[asic_index]
logical_key = "MUX_CABLE_TABLE|{}".format(port)
if logical_key in y_cable_asic_table_keys:
perform_download_firmware(physical_port, fwfile, port)
else:
click.echo("this is not a valid port present on mux_cable".format(port))
sys.exit(CONFIG_FAIL)
else:
click.echo("there is not a valid asic table for this asic_index".format(asic_index))
sys.exit(CONFIG_FAIL)
elif port == "all" and port is not None:
rc = CONFIG_SUCCESSFUL
        namespaces = multi_asic.get_front_end_namespaces()
        for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for key in port_table_keys[asic_id]:
port = key.split("|")[1]
physical_port_list = []
(physical_port_list, asic_index) = get_physical_port_list(port)
physical_port = physical_port_list[0]
status = perform_download_firmware(physical_port, fwfile, port)
if status is not True:
rc = CONFIG_FAIL
sys.exit(rc)
@firmware.command()
@click.argument('port', metavar='<port_name>', required=True, default=None)
def activate(port):
"""Config muxcable firmware activate"""
per_npu_statedb = {}
y_cable_asic_table_keys = {}
port_table_keys = {}
get_per_npu_statedb(per_npu_statedb, port_table_keys)
if port is not None and port != "all":
physical_port_list = []
(physical_port_list, asic_index) = get_physical_port_list(port)
physical_port = physical_port_list[0]
if per_npu_statedb[asic_index] is not None:
y_cable_asic_table_keys = port_table_keys[asic_index]
logical_key = "MUX_CABLE_TABLE|{}".format(port)
if logical_key in y_cable_asic_table_keys:
perform_activate_firmware(physical_port, port)
else:
click.echo("this is not a valid port present on mux_cable".format(port))
sys.exit(CONFIG_FAIL)
else:
click.echo("there is not a valid asic table for this asic_index".format(asic_index))
sys.exit(CONFIG_FAIL)
elif port == "all" and port is not None:
rc = CONFIG_SUCCESSFUL
        namespaces = multi_asic.get_front_end_namespaces()
        for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for key in port_table_keys[asic_id]:
port = key.split("|")[1]
physical_port_list = []
(physical_port_list, asic_index) = get_physical_port_list(port)
physical_port = physical_port_list[0]
status = perform_activate_firmware(physical_port, port)
if status is not True:
rc = CONFIG_FAIL
sys.exit(rc)
@firmware.command()
@click.argument('port', metavar='<port_name>', required=True, default=None)
def rollback(port):
"""Config muxcable firmware rollback"""
port_table_keys = {}
y_cable_asic_table_keys = {}
per_npu_statedb = {}
get_per_npu_statedb(per_npu_statedb, port_table_keys)
if port is not None and port != "all":
physical_port_list = []
(physical_port_list, asic_index) = get_physical_port_list(port)
physical_port = physical_port_list[0]
if per_npu_statedb[asic_index] is not None:
y_cable_asic_table_keys = port_table_keys[asic_index]
logical_key = "MUX_CABLE_TABLE|{}".format(port)
if logical_key in y_cable_asic_table_keys:
perform_rollback_firmware(physical_port, port)
else:
click.echo("this is not a valid port present on mux_cable".format(port))
sys.exit(CONFIG_FAIL)
else:
click.echo("there is not a valid asic table for this asic_index".format(asic_index))
sys.exit(CONFIG_FAIL)
elif port == "all" and port is not None:
rc = CONFIG_SUCCESSFUL
        namespaces = multi_asic.get_front_end_namespaces()
        for namespace in namespaces:
asic_id = multi_asic.get_asic_index_from_namespace(namespace)
for key in port_table_keys[asic_id]:
port = key.split("|")[1]
physical_port_list = []
(physical_port_list, asic_index) = get_physical_port_list(port)
physical_port = physical_port_list[0]
status = perform_rollback_firmware(physical_port, port)
if status is not True:
rc = CONFIG_FAIL
sys.exit(rc)
| 43.715607 | 168 | 0.661633 | 5,048 | 37,814 | 4.674525 | 0.056854 | 0.059499 | 0.037971 | 0.019833 | 0.905878 | 0.88685 | 0.863245 | 0.850447 | 0.828792 | 0.814214 | 0 | 0.002252 | 0.2484 | 37,814 | 864 | 169 | 43.766204 | 0.828015 | 0.052785 | 0 | 0.795608 | 0 | 0 | 0.127587 | 0.000631 | 0 | 0 | 0 | 0.015046 | 0 | 1 | 0.038851 | false | 0.006757 | 0.045608 | 0 | 0.099662 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
738d346396861d44dbbc117d09027ee14a9dcf8f | 407 | py | Python | Secao8_FuncoesPython/Exercicios/Exerc.7.py | PauloFTeixeira/curso_python | 9040c7dcc5262620f6330bb9637710bb8899bc6b | [
"MIT"
] | null | null | null | Secao8_FuncoesPython/Exercicios/Exerc.7.py | PauloFTeixeira/curso_python | 9040c7dcc5262620f6330bb9637710bb8899bc6b | [
"MIT"
] | null | null | null | Secao8_FuncoesPython/Exercicios/Exerc.7.py | PauloFTeixeira/curso_python | 9040c7dcc5262620f6330bb9637710bb8899bc6b | [
"MIT"
] | null | null | null | """
Write a function that receives a 3 x 3 matrix. Compute the sum of the
elements above the main diagonal.
Write a function that receives a 3 x 3 matrix. Compute and return the sum
of the elements below the main diagonal.
Write a function that receives a 3 x 3 matrix. Compute and return the sum
of the elements on the main diagonal.
"""
| 31.307692 | 82 | 0.773956 | 72 | 407 | 4.375 | 0.319444 | 0.066667 | 0.12381 | 0.152381 | 0.904762 | 0.904762 | 0.825397 | 0.825397 | 0.825397 | 0.825397 | 0 | 0.018349 | 0.19656 | 407 | 12 | 83 | 33.916667 | 0.944954 | 0.972973 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
739a6bc3f7daeca050f5d3ee8cfa66ca5e8e816d | 7,380 | py | Python | apicomponents/policy.py | tonnyhideyori/dependencytrack-pywrap | 58d4a8ac8862bbdb7007ab483f38f5871ab55c48 | [
"MIT"
] | null | null | null | apicomponents/policy.py | tonnyhideyori/dependencytrack-pywrap | 58d4a8ac8862bbdb7007ab483f38f5871ab55c48 | [
"MIT"
] | 5 | 2021-11-18T20:35:12.000Z | 2021-11-25T19:03:16.000Z | apicomponents/policy.py | tonnyhideyori/dependencytrack-pywrap | 58d4a8ac8862bbdb7007ab483f38f5871ab55c48 | [
"MIT"
] | 2 | 2021-11-15T19:58:15.000Z | 2021-11-23T12:55:04.000Z | import json
class DependencyTackPolicy(object):
def get_policy(self, uuid):
"""Returns a specific policy
Args:
uuid (string): The UUID of the policy to retrieve.
Returns:
dictionary: policy
"""
response = self.session.get(self.apicall + f"/v1/policy/{uuid}")
if response.status_code == 200:
return response.json()
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
def list_policy(self, pageSize=100):
"""Returns a list of all policies
Args:
pageSize (int, optional): size of the page. Defaults to 100.
Returns:
List: list of all policies
"""
policylist = list()
pageNumber = 1
response = self.session.get(
self.apicall + "/v1/policy", params={'pageSize': pageSize, 'pageNumber': pageNumber})
        policylist.extend(response.json())
while len(response.json()) == pageSize:
pageNumber += 1
response = self.session.get(
self.apicall + "/v1/policy", params={'pageSize': pageSize, 'pageNumber': pageNumber})
            policylist.extend(response.json())
if response.status_code == 200:
return policylist
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
def delete_policy(self, uuid):
""" Deletes a specific policy
Args:
uuid (string): The UUID of the policy to delete.
"""
response = self.session.delete(self.apicall + f"/v1/policy/{uuid}")
if response.status_code >= 200 and response.status_code <= 299:
return ("Successful operation")
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
def create_policy(self, name, operator="ANY", violationState="INFO", policyCondition=None, projects=None, globals=None):
# TODO: create better comments explaining the args
""" Create a policy
Args:
name (string): Name of the policy
            operator (string, optional): Operator of the policy (ANY or ALL). Defaults to "ANY".
violationState (str, optional): [description]. Defaults to "INFO".
policyCondition ([type], optional): [description]. Defaults to None.
projects ([type], optional): [description]. Defaults to None.
globals ([type], optional): [description]. Defaults to None.
Returns:
[type]: [description]
"""
data = {"name": name,
"violationState": violationState}
if operator:
data["operator"] = operator
if policyCondition:
if isinstance(policyCondition, list):
data["policyCondition"] = policyCondition
else:
return "Error! The policyCondition should be a list"
if projects:
if isinstance(projects, list):
data["projects"] = projects
else:
return "Error! The projects should be a list"
if globals:
data["globals"] = globals
response = self.session.put(
self.apicall + f"/v1/policy", data=json.dumps(data))
if response.status_code == 201:
return response.json()
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
    def update_policy(self, uuid, name=None, operator=None, violationState=None, policyCondition=None, projects=None, globals=None):
# TODO: create better comments explaining the args
""" Create a policy
Args:
name (string): Name of the policy
operator ([type], optional): Operator of the policy. Defaults to None.
violationState (str, optional): [description]. Defaults to "INFO".
policyCondition ([type], optional): [description]. Defaults to None.
projects ([type], optional): [description]. Defaults to None.
globals ([type], optional): [description]. Defaults to None.
"""
data ={"uuid":uuid}
if name:
data['name'] = name
if violationState:
data['violationState'] = violationState
if operator:
data["operator"] = operator
if policyCondition:
if isinstance(policyCondition, list):
data["policyCondition"] = policyCondition
else:
return "Error! The policyCondition should be a list"
        if projects:
if isinstance(projects, list):
data["projects"] = projects
else:
return "Error! The projects should be a list"
if globals:
data["globals"] = globals
response = self.session.post(self.apicall + f"/v1/policy", data=json.dumps(data))
if response.status_code == 200:
return ("Successful operation")
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
def add_policyToproject(self, policyUuid, projectUuid):
"""Adds project to a policy.
Args:
policyUuid (string): The UUID of the policy
projectUuid (string): The UUID of the project
"""
response = self.session.post(self.apicall + f"/v1/policy/{policyUuid}/projects/{projectUuid}")
if response.status_code == 200:
return (f"Successful operation")
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
elif response.status_code == 304:
return (f"The policy already has the specified project assigned, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
def delete_policyFromproject(self, policyUuid, projectUuid):
"""Removes a project from a policy.
Args:
policyUuid (string): The UUID of the policy
projectUuid (string): The UUID of the project
"""
response=self.session.delete(self.apicall + f"/v1/policy/{policyUuid}/projects/{projectUuid}")
if response.status_code == 200:
return (f"Successful operation")
elif response.status_code == 401:
return (f"Unauthorized, {response.status_code}")
elif response.status_code == 304:
return (f"The policy does not have the specified project assigned, {response.status_code}")
else:
return (f"{(response.content).decode('utf-8')}, {response.status_code}")
| 41.931818 | 131 | 0.584824 | 773 | 7,380 | 5.531695 | 0.141009 | 0.108045 | 0.138915 | 0.046305 | 0.834659 | 0.82203 | 0.809401 | 0.809401 | 0.809401 | 0.800281 | 0 | 0.01509 | 0.299594 | 7,380 | 175 | 132 | 42.171429 | 0.812149 | 0.21477 | 0 | 0.706422 | 0 | 0 | 0.255934 | 0.129347 | 0 | 0 | 0 | 0.011429 | 0 | 1 | 0.06422 | false | 0 | 0.009174 | 0 | 0.330275 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
73d78bfd8e4d631319d06de9a70f87dffd90232e | 1,672 | py | Python | OtherPlatform/Resolution_switch.py | ftoorto/JiraTest | 4ab79385c30f70e2fe879e639816b93a18bfc3fc | [
"MIT"
] | null | null | null | OtherPlatform/Resolution_switch.py | ftoorto/JiraTest | 4ab79385c30f70e2fe879e639816b93a18bfc3fc | [
"MIT"
] | null | null | null | OtherPlatform/Resolution_switch.py | ftoorto/JiraTest | 4ab79385c30f70e2fe879e639816b93a18bfc3fc | [
"MIT"
] | null | null | null | import os
import time
def adb_press(key, times=1, delay=1):
    # Send an adb keyevent `times` times, pausing `delay` seconds after each press.
    for _ in range(times):
        os.system("adb shell input keyevent {}".format(key))
        time.sleep(delay)
def start_test():
    i = 0
    while i <= 3000:
        i = i + 1
        os.system("adb connect 192.168.1.102")
        os.system("adb devices")
        time.sleep(5)
        adb_press("DPAD_UP", 4)
        adb_press("DPAD_RIGHT", 4)
        adb_press("ENTER")
        adb_press("DPAD_DOWN", 6)
        adb_press("DPAD_UP")
        adb_press("ENTER")
        adb_press("DPAD_DOWN", 11)
        adb_press("ENTER")
        adb_press("ENTER")
        adb_press("DPAD_DOWN", i % 14)
        adb_press("ENTER", delay=3)
        adb_press("DPAD_DOWN")
        adb_press("ENTER")
        adb_press("DPAD_RIGHT")
        adb_press("ENTER")
        print(i, " times reboot")
        time.sleep(100)
if __name__ == "__main__":
start_test() | 28.338983 | 60 | 0.552033 | 229 | 1,672 | 3.951965 | 0.200873 | 0.150276 | 0.20663 | 0.265193 | 0.81547 | 0.81547 | 0.81547 | 0.81547 | 0.81547 | 0.81547 | 0 | 0.038078 | 0.340311 | 1,672 | 59 | 61 | 28.338983 | 0.782412 | 0 | 0 | 0.62 | 0 | 0 | 0.32636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0 | 0.04 | 0 | 0.06 | 0.02 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fb857f5f852a1cbd758150ba9cb64739ce6f83d4 | 40,238 | py | Python | test/test_box.py | OpenJarbas/intentBox | 2b12d67fc43f08cf316d3b0eaadd4df239020a5a | [
"Apache-2.0"
] | 2 | 2021-04-27T16:32:35.000Z | 2021-04-29T18:45:58.000Z | test/test_box.py | OpenJarbas/intentBox | 2b12d67fc43f08cf316d3b0eaadd4df239020a5a | [
"Apache-2.0"
] | null | null | null | test/test_box.py | OpenJarbas/intentBox | 2b12d67fc43f08cf316d3b0eaadd4df239020a5a | [
"Apache-2.0"
] | 1 | 2021-04-27T16:48:19.000Z | 2021-04-27T16:48:19.000Z | import unittest
from intentBox import IntentBox, IntentDeterminationStrategy
class TestDefaults(unittest.TestCase):
def setUp(self) -> None:
intents = IntentBox()
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"], ["say"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_defaults(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'door_open', 'lights_off'})
test_intents("Call mom and tell her hello",
{'call_person'})
test_intents("tell me a joke and the weather",
{'weather', 'joke'})
test_intents("turn on the lights close the door",
{'lights_on', 'door_close'})
test_intents("close the pod bay doors play some music", {})
test_intents("play the music satan and friends",
{'play_music'})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'door_close', 'joke', 'lights_on'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'adapt'})
test_engines("Call mom and tell her hello",
{'padacioso'})
test_engines("tell me a joke and the weather",
{'adapt'})
test_engines("turn on the lights close the door",
{'nebulento', 'adapt'})
test_engines("close the pod bay doors play some music", {})
test_engines("play the music satan and friends",
{'adapt'})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'adapt'})
class TestAll(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": True},
"adapt": {"enabled": True},
"palavreado": {"enabled": True},
"padacioso": {"enabled": True},
"padaos": {"enabled": True},
"padatious": {"enabled": True}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res
if i["conf"] > 0.5]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'door_open', 'lights_off'})
test_intents("Call mom and tell her hello",
{'call_person', 'hello'})
test_intents("tell me a joke and the weather",
{'weather', 'joke'})
test_intents("close the pod bay doors play some music",
{'door_close', 'play_music'})
test_intents("play the music satan and friends",
{'play_music'})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'door_close', 'joke', 'lights_on'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'adapt'})
test_engines("Call mom and tell her hello",
{'padatious', 'nebulento', 'palavreado'})
test_engines("tell me a joke and the weather",
{'adapt'})
test_engines("turn on the lights close the door",
{'palavreado', 'adapt'})
test_engines("close the pod bay doors play some music",
{'adapt', 'palavreado'})
test_engines("play the music satan and friends",
{'adapt'})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'adapt'})
class TestPadatious(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": False},
"adapt": {"enabled": False},
"palavreado": {"enabled": False},
"padacioso": {"enabled": False},
"padaos": {"enabled": False},
"padatious": {"enabled": True}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'lights_off'})
test_intents("Call mom and tell her hello",
{'call_person'})
test_intents("tell me a joke and the weather",
{'joke'})
test_intents("turn on the lights close the door",
{})
test_intents("close the pod bay doors play some music", {})
test_intents("play the music satan and friends",
{})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'joke', 'lights_on'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'padatious'})
test_engines("Call mom and tell her hello",
{'padatious'})
test_engines("tell me a joke and the weather",
{'padatious'})
test_engines("turn on the lights close the door", {})
test_engines("close the pod bay doors play some music", {})
test_engines("play the music satan and friends", {})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'padatious'})
class TestPadaos(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": False},
"adapt": {"enabled": False},
"palavreado": {"enabled": False},
"padacioso": {"enabled": False},
"padaos": {"enabled": True},
"padatious": {"enabled": False}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'lights_off'})
test_intents("Call mom and tell her hello",
{'call_person'})
test_intents("tell me a joke and the weather",
{'joke'})
test_intents("turn on the lights close the door",
{})
test_intents("close the pod bay doors play some music", {})
test_intents("play the music satan and friends",
{})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'joke', 'lights_on'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'padaos'})
test_engines("Call mom and tell her hello",
{'padaos'})
test_engines("tell me a joke and the weather",
{'padaos'})
test_engines("turn on the lights close the door", {})
test_engines("close the pod bay doors play some music", {})
test_engines("play the music satan and friends", {})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'padaos'})
class TestPadacioso(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": False},
"adapt": {"enabled": False},
"palavreado": {"enabled": False},
"padacioso": {"enabled": True},
"padaos": {"enabled": False},
"padatious": {"enabled": False}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'lights_off'})
test_intents("Call mom and tell her hello",
{'call_person'})
test_intents("tell me a joke and the weather",
{'joke'})
test_intents("turn on the lights close the door",
{})
test_intents("close the pod bay doors play some music", {})
test_intents("play the music satan and friends",
{})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'joke', 'lights_on'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'padacioso'})
test_engines("Call mom and tell her hello",
{'padacioso'})
test_engines("tell me a joke and the weather",
{'padacioso'})
test_engines("turn on the lights close the door", {})
test_engines("close the pod bay doors play some music", {})
test_engines("play the music satan and friends", {})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'padacioso'})
class TestPalavreado(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": False},
"adapt": {"enabled": False},
"palavreado": {"enabled": True},
"padacioso": {"enabled": False},
"padaos": {"enabled": False},
"padatious": {"enabled": False}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'door_open', 'lights_off'})
test_intents("Call mom and tell her hello",
{'hello'})
test_intents("tell me a joke and the weather",
{'joke', 'weather'})
test_intents("turn on the lights close the door",
{'lights_off', 'door_open'})
test_intents("close the pod bay doors play some music",
{'play_music', 'door_close'})
test_intents("play the music satan and friends",
{'play_music'})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'lights_on', 'door_close', 'joke'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'palavreado'})
test_engines("Call mom and tell her hello",
{'palavreado'})
test_engines("tell me a joke and the weather",
{'palavreado'})
test_engines("turn on the lights close the door", {'palavreado'})
test_engines("close the pod bay doors play some music", {'palavreado'})
test_engines("play the music satan and friends", {'palavreado'})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'palavreado'})
class TestAdapt(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": False},
"adapt": {"enabled": True},
"palavreado": {"enabled": False},
"padacioso": {"enabled": False},
"padaos": {"enabled": False},
"padatious": {"enabled": False}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'door_open', 'lights_off'})
test_intents("Call mom and tell her hello",
{})
test_intents("tell me a joke and the weather",
{'joke', 'weather'})
test_intents("turn on the lights close the door",
{})
test_intents("close the pod bay doors play some music",
{})
test_intents("play the music satan and friends",
{'play_music'})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'play_music', 'lights_on', 'door_close', 'joke'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'adapt'})
test_engines("Call mom and tell her hello",
{})
test_engines("tell me a joke and the weather",
{'adapt'})
test_engines("turn on the lights close the door", {})
test_engines("close the pod bay doors play some music", {})
test_engines("play the music satan and friends", {'adapt'})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'adapt'})
class TestNebulento(unittest.TestCase):
def setUp(self) -> None:
config = {
"engines": {"nebulento": {"enabled": True},
"adapt": {"enabled": False},
"palavreado": {"enabled": False},
"padacioso": {"enabled": False},
"padaos": {"enabled": False},
"padatious": {"enabled": False}}
}
intents = IntentBox(config=config,
strategy=IntentDeterminationStrategy.SEGMENT_REMAINDER)
# sample based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
joke = ["tell me a joke", "i want a joke", "say a joke",
"tell joke"]
lights_on = ["turn on the lights", "lights on", "turn lights on",
"turn the lights on"]
lights_off = ["turn off the lights", "lights off",
"turn lights off",
"turn the lights off"]
music = ["play music", "play some songs", "play heavy metal",
"play some jazz", "play rock", "play some music"]
call = ["call {person}", "phone {person}"]
intents.register_intent("weather", weather)
intents.register_intent("hello", hello)
intents.register_intent("joke", joke)
intents.register_intent("lights_on", lights_on)
intents.register_intent("lights_off", lights_off)
intents.register_intent("play_music", music)
intents.register_intent("call_person", call)
# keyword based intents
weather = ["weather"]
hello = ["hey", "hello", "hi", "greetings"]
name = ["name is"]
joke = ["joke"]
play = ["play"]
say = ["say", "tell"]
music = ["music", "jazz", "metal", "rock", "songs"]
door = ["door", "doors"]
light = ["light", "lights"]
on = ["activate", "on", "engage", "open"]
off = ["deactivate", "off", "disengage", "close"]
intents.register_entity("weather", weather)
intents.register_entity("hello", hello)
intents.register_entity("name", name)
intents.register_entity("joke", joke)
intents.register_entity("door", door)
intents.register_entity("lights", light)
intents.register_entity("on", on)
intents.register_entity("off", off)
intents.register_entity("play", play)
intents.register_entity("music", music)
intents.register_entity("say", say)
intents.register_keyword_intent("weather", ["weather"])
intents.register_keyword_intent("hello", ["hello"])
intents.register_keyword_intent("name", ["name"])
intents.register_keyword_intent("joke", ["joke"], ["say"])
intents.register_keyword_intent("lights_on", ["lights", "on"])
intents.register_keyword_intent("lights_off", ["lights", "off"])
intents.register_keyword_intent("door_open", ["door", "on"])
intents.register_keyword_intent("door_close", ["door", "off"])
intents.register_keyword_intent("play_music", ["play", "music"])
self.intents = intents
def test_all(self):
def test_intents(utterance, expected_intents):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_type"] for i in res]),
set(expected_intents))
test_intents("turn off the lights, open the door",
{'lights_off'})
test_intents("Call mom and tell her hello",
{})
test_intents("tell me a joke and the weather",
{'joke', 'weather'})
test_intents("turn on the lights close the door",
{'lights_on'})
test_intents("close the pod bay doors play some music", {})
test_intents("play the music satan and friends", {})
test_intents(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{'lights_on', 'play_music', 'joke'})
def test_engines(self):
def test_engines(utterance, expected_engines):
res = self.intents.calc(utterance)
self.assertEqual(set([i["intent_engine"] for i in res]),
set(expected_engines))
test_engines("turn off the lights, open the door",
{'nebulento'})
test_engines("Call mom and tell her hello",
{})
test_engines("tell me a joke and the weather",
{'nebulento'})
test_engines("turn on the lights close the door",
{'nebulento'})
test_engines("close the pod bay doors play some music", {})
test_engines("play the music satan and friends", {})
test_engines(
"tell me a joke and order some pizza and turn on the lights and close the door and play some songs",
{"nebulento"})
| 44.072289 | 112 | 0.559247 | 4,354 | 40,238 | 5.02136 | 0.024116 | 0.148196 | 0.084526 | 0.092211 | 0.980972 | 0.976398 | 0.974935 | 0.974935 | 0.971459 | 0.963271 | 0 | 0.000072 | 0.306775 | 40,238 | 912 | 113 | 44.120614 | 0.783717 | 0.008524 | 0 | 0.948232 | 0 | 0.020202 | 0.29658 | 0 | 0 | 0 | 0 | 0 | 0.020202 | 1 | 0.050505 | false | 0 | 0.002525 | 0 | 0.063131 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fbcbbd157721742d55dad5f01deb50666ea8bfe5 | 19,975 | py | Python | src/tests/unit/fixtures/endpoint_standard/mock_enriched_events.py | fslds/carbon-black-cloud-sdk-python | 248a3c63d6b36d6fcdbcb3f51fb7751f062ed372 | [
"MIT"
] | 24 | 2020-10-16T22:07:38.000Z | 2022-03-24T14:58:03.000Z | src/tests/unit/fixtures/endpoint_standard/mock_enriched_events.py | fslds/carbon-black-cloud-sdk-python | 248a3c63d6b36d6fcdbcb3f51fb7751f062ed372 | [
"MIT"
] | 63 | 2020-10-26T18:26:15.000Z | 2022-03-31T17:31:02.000Z | src/tests/unit/fixtures/endpoint_standard/mock_enriched_events.py | fslds/carbon-black-cloud-sdk-python | 248a3c63d6b36d6fcdbcb3f51fb7751f062ed372 | [
"MIT"
] | 10 | 2020-11-09T11:54:23.000Z | 2022-03-24T20:44:00.000Z | """Mock responses for enriched event queries."""
POST_ENRICHED_EVENTS_SEARCH_JOB_RESP = {
"job_id": "08ffa932-b633-4107-ba56-8741e929e48b"
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP = {
"contacted": 41,
"completed": 41,
"query": {
"cb.event_docs": True,
"cb.max_backend_timestamp": 1603973841000,
"cb.min_backend_timestamp": 0,
"cb.min_device_timestamp": 0,
"cb.preview_results": 500,
"cb.use_agg": True,
"facet": False,
"fq": '{!collapse field=event_id sort="device_timestamp desc"}',
"q": "(process_pid:1000 OR process_pid:2000)",
"rows": 500,
"start": 0,
},
"search_initiated_time": 1603973841206,
"connector_id": "P1PFUIAN32",
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_0 = {
"contacted": 0,
"completed": 0,
"query": {
"cb.event_docs": True,
"cb.max_backend_timestamp": 1603973841000,
"cb.min_backend_timestamp": 0,
"cb.min_device_timestamp": 0,
"cb.preview_results": 500,
"cb.use_agg": True,
"facet": False,
"fq": '{!collapse field=event_id sort="device_timestamp desc"}',
"q": "(process_pid:1000 OR process_pid:2000)",
"rows": 500,
"start": 0,
},
"search_initiated_time": 1603973841206,
"connector_id": "P1PFUIAN32",
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_ZERO_COMP = {
"contacted": 10,
"completed": 0,
"query": {
"cb.event_docs": True,
"cb.max_backend_timestamp": 1603973841000,
"cb.min_backend_timestamp": 0,
"cb.min_device_timestamp": 0,
"cb.preview_results": 500,
"cb.use_agg": True,
"facet": False,
"fq": '{!collapse field=event_id sort="device_timestamp desc"}',
"q": "(process_pid:1000 OR process_pid:2000)",
"rows": 500,
"start": 0,
},
"search_initiated_time": 1603973841206,
"connector_id": "P1PFUIAN32",
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_ZERO = {
"num_found": 0,
"num_available": 0,
"results": []
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_1 = {
"num_found": 808,
"num_available": 1,
"contacted": 6,
"completed": 6,
"results": [
{
"backend_timestamp": "2020-10-23T08:25:24.797Z",
"device_group_id": 0,
"device_id": 215209,
"device_name": "scarpaci-win10-eap01",
"device_policy_id": 2203,
"device_timestamp": "2020-10-23T08:24:22.624Z",
"enriched": True,
"enriched_event_type": "SYSTEM_API_CALL",
"event_description": 'The application "<share><link hash="6c02d54afe705d7df7db7ee94d92afdefb2fb91f9d1805c970126a096df52786">C:\\windows\\system32\\wbem\\scrcons.exe</link></share>" attempted to open itself for modification, by calling the function "OpenProcess". The operation was successful.', # noqa: E501
"event_id": "27a278d5150911eb86f1011a55e73b72",
"event_type": "crossproc",
"ingress_time": 1603441488750,
"legacy": True,
"org_id": "WNEXFKQ7",
"parent_guid": "WNEXFKQ7-000348a9-00000374-00000000-1d691b52d77fbcd",
"parent_pid": 884,
"process_guid": "WNEXFKQ7-000348a9-000003e8-00000000-1d6a915e8ccce86",
"process_hash": [
"47a61bee31164ea1dd671d695424722e",
"6c02d54afe705d7df7db7ee94d92afdefb2fb91f9d1805c970126a096df52786",
],
"process_name": "c:\\windows\\system32\\wbem\\scrcons.exe",
"process_pid": [1000],
"process_username": ["NT AUTHORITY\\SYSTEM"],
},
],
}
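# Reader note (an assumption about the backend semantics, not taken from this
# module): num_found appears to be the total number of matching events, while
# num_available is how many can actually be paged back from this result set,
# hence the 808 found but only 1 available in the fixture above.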
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_2 = {
"num_found": 808,
"num_available": 52,
"contacted": 6,
"completed": 6,
"results": [
{
"backend_timestamp": "2020-10-23T08:25:24.797Z",
"device_group_id": 0,
"device_id": 215209,
"device_name": "scarpaci-win10-eap01",
"device_policy_id": 2203,
"device_timestamp": "2020-10-23T08:24:22.624Z",
"enriched": True,
"enriched_event_type": "SYSTEM_API_CALL",
"event_description": 'The application "<share><link hash="6c02d54afe705d7df7db7ee94d92afdefb2fb91f9d1805c970126a096df52786">C:\\windows\\system32\\wbem\\scrcons.exe</link></share>" attempted to open itself for modification, by calling the function "OpenProcess". The operation was successful.', # noqa: E501
"event_id": "27a278d5150911eb86f1011a55e73b72",
"event_type": "crossproc",
"ingress_time": 1603441488750,
"legacy": True,
"org_id": "WNEXFKQ7",
"parent_guid": "WNEXFKQ7-000348a9-00000374-00000000-1d691b52d77fbcd",
"parent_pid": 884,
"process_guid": "WNEXFKQ7-000348a9-000003e8-00000000-1d6a915e8ccce86",
"process_hash": [
"47a61bee31164ea1dd671d695424722e",
"6c02d54afe705d7df7db7ee94d92afdefb2fb91f9d1805c970126a096df52786",
],
"process_name": "c:\\windows\\system32\\wbem\\scrcons.exe",
"process_pid": [1000],
"process_username": ["NT AUTHORITY\\SYSTEM"],
},
{
"backend_timestamp": "2020-10-23T08:25:24.797Z",
"device_group_id": 0,
"device_id": 215209,
"device_name": "scarpaci-win10-eap01",
"device_policy_id": 2203,
"device_timestamp": "2020-10-23T08:24:22.271Z",
"enriched": True,
"enriched_event_type": "SYSTEM_API_CALL",
"event_description": 'The application "<share><link hash="6c02d54afe705d7df7db7ee94d92afdefb2fb91f9d1805c970126a096df52786">C:\\windows\\system32\\wbem\\scrcons.exe</link></share>" attempted to open the process "C:\\ProgramData\\Microsoft\\Windows Defender\\Platform\\4.18.2009.7-0\\MsMpEng.exe", by calling the function "OpenProcess". The operation was successful.', # noqa: E501
"event_id": "27a278d2150911eb86f1011a55e73b72",
"event_type": "crossproc",
"ingress_time": 1603441488750,
"legacy": True,
"org_id": "WNEXFKQ7",
"parent_guid": "WNEXFKQ7-000348a9-00000374-00000000-1d691b52d77fbcd",
"parent_pid": 884,
"process_guid": "WNEXFKQ7-000348a9-000003e8-00000000-1d6a915e8ccce86",
"process_hash": [
"47a61bee31164ea1dd671d695424722e",
"6c02d54afe705d7df7db7ee94d92afdefb2fb91f9d1805c970126a096df52786",
],
"process_name": "c:\\windows\\system32\\wbem\\scrcons.exe",
"process_pid": [2000],
"process_username": ["NT AUTHORITY\\SYSTEM"],
},
],
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_STILL_QUERYING = {
"num_found": 808,
"num_available": 1,
"contacted": 6,
"completed": 0,
"results": [],
}
GET_ENRICHED_EVENTS_AGG_JOB_RESULTS_RESP_1 = {
"results": [
{
"alert_id": ["null/99FI049P"],
"backend_timestamp": "2020-06-25T21:05:10.787Z",
"device_id": 195940,
"device_name": "desktop-8qonquj",
"device_os": "WINDOWS",
"device_policy": "default",
"device_policy_id": 2198,
"device_timestamp": "2020-06-25T20:36:06.608Z",
"enriched": True,
"enriched_event_type": "CREATE_PROCESS",
"event_description": "test",
"event_id": "8ff185c2b72311eaab6d9f3b90c54099",
"event_type": "childproc",
"ingress_time": 1593117428851,
"legacy": True,
"num_devices": 1,
"num_events": 2,
"org_id": "WNEXFKQ7",
"parent_guid": "WNEXFKQ7-0002fd64-00001ffc-00000000-1d64b3039bb7130",
"parent_pid": 8188,
"process_effective_reputation": "LOCAL_WHITE",
"process_guid": "WNEXFKQ7-0002fd64-000007d0-00000000-1d64b30404d93d8",
"process_hash": [
"0dde659f0854d78f137119e13e1368ef",
"de74b04a291133b8c6c5a30bff6b2cef8ad4141cd1813d063c8c62f2671652e8",
],
"process_name": "c:\\users\\dragon\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\bin\\rustc.exe",
"process_pid": [2000],
"process_sha256": "de74b04a291133b8c6c5a30bff6b2cef8ad4141cd1813d063c8c62f2671652e8",
"process_username": ["DESKTOP-8QONQUJ\\dragon"],
}
],
"num_found": 1,
"num_available": 1,
"contacted": 32,
"completed": 32,
}
GET_ENRICHED_EVENTS_DETAIL_JOB_RESULTS_RESP_1 = {
"results": [
{
"alert_id": ["null/99FI049P"],
"backend_timestamp": "2020-10-09T14:17:24.704Z",
"childproc_cmdline": '"c:\\Windows\\System32\\cmd.exe"',
"childproc_cmdline_length": 4877,
"childproc_effective_reputation": "TRUSTED_WHITE_LIST",
"childproc_guid": "WNEXFKQ7-0002fd64-00001bec-00000000-1d64b304076f88d",
"childproc_hash": [
"ff79d3c4a0b7eb191783c323ab8363ebd1fd10be58d8bcc96b07067743ca81d5"
],
"childproc_name": "c:\\windows\\system32\\cmd.exe",
"childproc_pid": 7148,
"childproc_reputation": "COMMON_WHITE_LIST",
"device_id": 195940,
"device_installed_by": "user@vmware.com",
"device_internal_ip": "111.222.111.222",
"device_location": "OFFSITE",
"device_name": "desktop",
"device_os": "WINDOWS",
"device_os_version": "Windows 10 x64",
"device_policy": "default",
"device_policy_id": 2198,
"device_target_priority": "MEDIUM",
"device_timestamp": "2020-06-25T20:36:06.608Z",
"document_guid": "udUDvAqqSIib030hecSTrw",
"enriched": True,
"enriched_event_type": "CREATE_PROCESS",
"event_description": "test",
"event_id": "8ff185c2b72311eaab6d9f3b90c54099",
"event_type": "childproc",
"ingress_time": 1602253036356,
"legacy": True,
"org_id": "test",
"parent_effective_reputation": "LOCAL_WHITE",
"parent_guid": "WNEXFKQ7-0002fd64-00001ffc-00000000-1d64b3039bb7130",
"parent_hash": [
"574fefcfd4f2f9de53a63cbc791698f93637e709b0631478ac2c40f43f9a08cb"
],
"parent_name": "cargo.exe",
"parent_pid": 8188,
"parent_publisher_state": ["FILE_SIGNATURE_STATE_NOT_SIGNED"],
"parent_reputation": "ADAPTIVE_WHITE_LIST",
"process_cmdline": ['"rustc.exe"'],
"process_cmdline_length": [796],
"process_effective_reputation": "LOCAL_WHITE",
"process_guid": "WNEXFKQ7-0002fd64-000007d0-00000000-1d64b30404d93d8",
"process_hash": ["0dde659f0854d78f137119e13e1368ef"],
"process_name": "rustc.exe",
"process_pid": [2000],
"process_publisher_state": ["FILE_SIGNATURE_STATE_NOT_SIGNED"],
"process_reputation": "ADAPTIVE_WHITE_LIST",
"process_sha256": "de74b04a291133b8c6c5a30bff6b2cef8ad4141cd1813d063c8c62f2671652e8",
"process_start_time": "2020-06-25T20:36:06.334Z",
"process_username": ["DESKTOP-8QONQUJ\\dragon"],
"ttp": ["ADAPTIVE_WHITE_APP"],
}
],
"num_found": 1,
"num_available": 1,
"contacted": 32,
"completed": 32,
}
GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_ALERTS = {
"approximate_unaggregated": 2,
"completed": 7,
"contacted": 7,
"num_aggregated": 2,
"num_available": 2,
"num_found": 2,
"results": [
{
"alert_category": ["OBSERVED"],
"alert_id": ["62802DCE"],
"backend_timestamp": "2021-05-13T00:21:13.086Z",
"device_external_ip": "66.170.99.2",
"device_group_id": 0,
"device_id": 8612331,
"device_installed_by": "Administrator",
"device_internal_ip": "10.169.255.100",
"device_location": "OFFSITE",
"device_name": "win-2016-devrel",
"device_os": "WINDOWS",
"device_os_version": "Windows Server 2019 x64",
"device_policy": "standard",
"device_policy_id": 7113786,
"device_target_priority": "MEDIUM",
"device_timestamp": "2021-05-13T00:20:13.044Z",
"document_guid": "VNs_NgMIQ-u3_06Sa4Sclg",
"enriched": True,
"enriched_event_type": "NETWORK",
"event_attack_stage": ["RECONNAISSANCE"],
"event_description": 'The application "<share><link '
'hash="7fd065bac18c5278777ae44908101cdfed72d26fa741367f0ad4d02020787ab6"'
'-k RPCSS -p</link></share>" accepted a '
"<accent>TCP/135</accent> connection from "
"<share><accent>10.169.255.100</accent></share><accent>:38240</accent> "
"to "
"<share><accent>10.126.6.201</accent></share><accent>:135</accent>. "
"The device was off the corporate network "
"using the public address "
"<accent>66.170.99.2</accent> "
"(<accent>win-2016-devrel</accent>, located "
"in San Jose CA, United States). The "
"operation was successful.",
"event_id": "0980efd1b38111eba4bfa5e98aa30b19",
"event_network_inbound": True,
"event_network_local_ipv4": "10.126.6.201",
"event_network_protocol": "TCP",
"event_network_remote_ipv4": "10.169.255.100",
"event_network_remote_port": 38240,
"event_report_code": "SUB_RPT_NONE",
"event_threat_score": [1],
"event_type": "netconn",
"ingress_time": 1620865258371,
"legacy": True,
"netconn_inbound": True,
"netconn_ipv4": 178913124,
"netconn_local_ipv4": 176031433,
"netconn_local_port": 135,
"netconn_port": 135,
"netconn_protocol": "PROTO_TCP",
"org_id": "4JDT3MX9Q",
"parent_effective_reputation": "LOCAL_WHITE",
"parent_effective_reputation_source": "CERT",
"parent_guid": "4JDT3MX9Q-008369eb-00000268-00000000-1d6f5ba1abcf6fc",
"parent_hash": [
"e8ea65fb51db75b1cb93890bee4364fc0b804f8f68e1817887e4a7f767ceb9ab"
],
"parent_name": "c:\\windows\\system32\\services.exe",
"parent_pid": 616,
"parent_reputation": "NOT_LISTED",
"process_cmdline": ["C:\\Windows\\system32\\svchost.exe -k RPCSS " "-p"],
"process_cmdline_length": [43],
"process_effective_reputation": "LOCAL_WHITE",
"process_effective_reputation_source": "CERT",
"process_guid": "4JDT3MX9Q-008369eb-00000364-00000000-1d6f5ba1b173ce9",
"process_hash": [
"8a0a29438052faed8a2532da50455756",
"7fd065bac18c5278777ae44908101cdfed72d26fa741367f0ad4d02020787ab6",
],
"process_name": "c:\\windows\\system32\\svchost.exe",
"process_pid": [868],
"process_reputation": "ADAPTIVE_WHITE_LIST",
"process_sha256": "7fd065bac18c5278777ae44908101cdfed72d26fa741367f0ad4d02020787ab6",
"process_start_time": "2021-01-28T13:12:17.823Z",
"process_username": ["NT AUTHORITY\\NETWORK SERVICE"],
"ttp": [
"PORTSCAN",
"MITRE_T1046_NETWORK_SERVICE_SCANNING",
"NETWORK_ACCESS",
"ACTIVE_SERVER",
],
},
{
"alert_category": ["OBSERVED"],
"alert_id": ["62802DCE"],
"backend_timestamp": "2021-05-13T00:21:08.028Z",
"device_external_ip": "66.170.99.2",
"device_group_id": 0,
"device_id": 8612331,
"device_installed_by": "Administrator",
"device_internal_ip": "10.169.255.100",
"device_location": "OFFSITE",
"device_name": "win-2016-devrel",
"device_os": "WINDOWS",
"device_os_version": "Windows Server 2019 x64",
"device_policy": "standard",
"device_policy_id": 7113786,
"device_target_priority": "MEDIUM",
"device_timestamp": "2021-05-13T00:20:13.043Z",
"document_guid": "WwZPWPLITqSNpAmQKagBYw",
"enriched": True,
"enriched_event_type": "NETWORK",
"event_attack_stage": ["RECONNAISSANCE"],
"event_description": 'The application "<share><link '
'hash="7fd065bac18c5278777ae44908101cdfed72d26fa741367f0ad4d02020787ab6"'
'-k termsvcs -s TermService</link></share>" '
"accepted a <accent>TCP/3389</accent> "
"connection from "
"<share><accent>10.169.255.100</accent></share><accent>:38604</accent> "
"to "
"<share><accent>10.126.6.201</accent></share><accent>:3389</accent>. "
"The device was off the corporate network "
"using the public address "
"<accent>66.170.99.2</accent> "
"(<accent>win-2016-devrel</accent>, located "
"in San Jose CA, United States). The "
"operation was successful.",
"event_id": "0980efd0b38111eba4bfa5e98aa30b19",
"event_network_inbound": True,
"event_network_local_ipv4": "10.126.6.201",
"event_network_protocol": "TCP",
"event_network_remote_ipv4": "10.169.255.100",
"event_network_remote_port": 38604,
"event_report_code": "SUB_RPT_NONE",
"event_threat_score": [1],
"event_type": "netconn",
"ingress_time": 1620865258370,
"legacy": True,
"netconn_inbound": True,
"netconn_ipv4": 178913124,
"netconn_local_ipv4": 176031433,
"netconn_local_port": 3389,
"netconn_port": 3389,
"netconn_protocol": "PROTO_TCP",
"org_id": "4JDT3MX9Q",
"parent_effective_reputation": "LOCAL_WHITE",
"parent_effective_reputation_source": "CERT",
"parent_guid": "4JDT3MX9Q-008369eb-00000268-00000000-1d6f5ba1abcf6fc",
"parent_hash": [
"e8ea65fb51db75b1cb93890bee4364fc0b804f8f68e1817887e4a7f767ceb9ab"
],
"parent_name": "c:\\windows\\system32\\services.exe",
"parent_pid": 616,
"parent_reputation": "NOT_LISTED",
"process_cmdline": [
"C:\\Windows\\System32\\svchost.exe -k " "termsvcs -s TermService"
],
"process_cmdline_length": [58],
"process_effective_reputation": "LOCAL_WHITE",
"process_effective_reputation_source": "CERT",
"process_guid": "4JDT3MX9Q-008369eb-0000016c-00000000-1d6f5ba1b4bd98d",
"process_hash": [
"8a0a29438052faed8a2532da50455756",
"7fd065bac18c5278777ae44908101cdfed72d26fa741367f0ad4d02020787ab6",
],
"process_name": "c:\\windows\\system32\\svchost.exe",
"process_pid": [364],
"process_reputation": "ADAPTIVE_WHITE_LIST",
"process_sha256": "7fd065bac18c5278777ae44908101cdfed72d26fa741367f0ad4d02020787ab6",
"process_start_time": "2021-01-28T13:12:18.168Z",
"process_username": ["NT AUTHORITY\\NETWORK SERVICE"],
"ttp": [
"PORTSCAN",
"MITRE_T1046_NETWORK_SERVICE_SCANNING",
"NETWORK_ACCESS",
"ACTIVE_SERVER",
],
},
],
}
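# ---------------------------------------------------------------------------
# Illustrative sketch (reader aid, not part of the fixtures' contract): tests
# that poll a search job typically treat it as finished once every contacted
# shard has completed. search_job_finished is a hypothetical helper, not an
# SDK API, exercised here against the fixtures defined above.
def search_job_finished(status):
    contacted = status.get("contacted", 0)
    completed = status.get("completed", 0)
    return contacted > 0 and contacted == completed

assert search_job_finished(GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP)
assert not search_job_finished(GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_ZERO_COMP)
assert not search_job_finished(GET_ENRICHED_EVENTS_SEARCH_JOB_RESULTS_RESP_STILL_QUERYING)
# ---------------------------------------------------------------------------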
| 43.142549 | 393 | 0.586583 | 1,809 | 19,975 | 6.186844 | 0.201769 | 0.011258 | 0.020014 | 0.018495 | 0.83515 | 0.816387 | 0.794764 | 0.771533 | 0.747051 | 0.741065 | 0 | 0.184596 | 0.279149 | 19,975 | 462 | 394 | 43.235931 | 0.59268 | 0.003805 | 0 | 0.710468 | 0 | 0.017817 | 0.573727 | 0.277462 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fbd3d0c2b0afe09da7bc6aeb29de36cbf84cf76a | 216,520 | py | Python | msgraph-cli-extensions/v1_0/applications_v1_0/azext_applications_v1_0/vendored_sdks/applications/aio/operations/_service_principals_operations.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | msgraph-cli-extensions/v1_0/applications_v1_0/azext_applications_v1_0/vendored_sdks/applications/aio/operations/_service_principals_operations.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | 22 | 2022-03-29T22:54:37.000Z | 2022-03-29T22:55:27.000Z | msgraph-cli-extensions/v1_0/applications_v1_0/azext_applications_v1_0/vendored_sdks/applications/aio/operations/_service_principals_operations.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from azure.mgmt.core.exceptions import ARMErrorFormat
from ... import models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class ServicePrincipalsOperations:
"""ServicePrincipalsOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~applications.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def list_app_role_assigned_to(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum32"]]] = None,
select: Optional[List[Union[str, "models.Enum33"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfAppRoleAssignment0"]:
"""Get appRoleAssignedTo from servicePrincipals.
Get appRoleAssignedTo from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum32]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum33]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfAppRoleAssignment0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfAppRoleAssignment0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfAppRoleAssignment0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_app_role_assigned_to.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfAppRoleAssignment0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_app_role_assigned_to.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo'} # type: ignore
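# ---------------------------------------------------------------------------
# Illustrative sketch (reader aid, not generated code): how AsyncItemPaged
# drives the prepare_request/extract_data/get_next trio built by the list
# operations above. The two-page in-memory payload is hypothetical; only
# azure-core and the standard library are assumed.
import asyncio
from azure.core.async_paging import AsyncItemPaged, AsyncList

_FAKE_PAGES = {None: (["a", "b"], "next-token"), "next-token": (["c"], None)}

async def _demo_get_next(continuation=None):
    # The real get_next issues the HTTP request for `continuation` (next_link).
    return _FAKE_PAGES[continuation]

async def _demo_extract_data(response):
    items, next_link = response
    # Return (continuation token or None, async iterable over this page).
    return next_link, AsyncList(items)

async def _demo_paging():
    async for item in AsyncItemPaged(_demo_get_next, _demo_extract_data):
        print(item)  # prints a, b, c across the two fake pages

asyncio.run(_demo_paging())
# ---------------------------------------------------------------------------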
async def create_app_role_assigned_to(
self,
service_principal_id: str,
body: "models.MicrosoftGraphAppRoleAssignment",
**kwargs
) -> "models.MicrosoftGraphAppRoleAssignment":
"""Create new navigation property to appRoleAssignedTo for servicePrincipals.
Create new navigation property to appRoleAssignedTo for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property.
:type body: ~applications.models.MicrosoftGraphAppRoleAssignment
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphAppRoleAssignment, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphAppRoleAssignment
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphAppRoleAssignment"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_app_role_assigned_to.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'MicrosoftGraphAppRoleAssignment')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphAppRoleAssignment', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_app_role_assigned_to.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo'} # type: ignore
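# ---------------------------------------------------------------------------
# Illustrative sketch (reader aid, not generated code): the error_map dispatch
# used by every operation above. map_error raises the mapped exception class
# for a known status code and is a no-op otherwise; the fake response object
# is hypothetical and carries just the attributes HttpResponseError is
# expected to read.
from types import SimpleNamespace
from azure.core.exceptions import ClientAuthenticationError, ResourceNotFoundError, map_error

_demo_error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError}
_fake_404 = SimpleNamespace(status_code=404, reason="Not Found", text=lambda: "{}")

try:
    map_error(status_code=404, response=_fake_404, error_map=_demo_error_map)
except ResourceNotFoundError as exc:
    assert exc.status_code == 404
map_error(status_code=201, response=_fake_404, error_map=_demo_error_map)  # no-op
# ---------------------------------------------------------------------------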
async def get_app_role_assigned_to(
self,
service_principal_id: str,
app_role_assignment_id: str,
select: Optional[List[Union[str, "models.Enum34"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> "models.MicrosoftGraphAppRoleAssignment":
"""Get appRoleAssignedTo from servicePrincipals.
Get appRoleAssignedTo from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param app_role_assignment_id: key: id of appRoleAssignment.
:type app_role_assignment_id: str
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum34]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphAppRoleAssignment, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphAppRoleAssignment
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphAppRoleAssignment"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_app_role_assigned_to.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'appRoleAssignment-id': self._serialize.url("app_role_assignment_id", app_role_assignment_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphAppRoleAssignment', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_app_role_assigned_to.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}'} # type: ignore
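# ---------------------------------------------------------------------------
# Illustrative sketch (reader aid, not generated code): the OData query-string
# shape the operations above emit. List-valued $select/$expand/$orderby values
# are comma-joined (the div=',' passed to _serialize.query). _demo_odata_query
# is a hypothetical helper for demonstration only.
from urllib.parse import urlencode

def _demo_odata_query(select=None, expand=None, orderby=None, top=None):
    params = {}
    if top is not None:
        params["$top"] = top
    if orderby:
        params["$orderby"] = ",".join(orderby)
    if select:
        params["$select"] = ",".join(select)
    if expand:
        params["$expand"] = ",".join(expand)
    # Keep '$' and ',' literal so the output mirrors the requests built above.
    return urlencode(params, safe="$,")

assert _demo_odata_query(select=["id", "appRoleId"], top=5) == "$top=5&$select=id,appRoleId"
# ---------------------------------------------------------------------------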
async def update_app_role_assigned_to(
self,
service_principal_id: str,
app_role_assignment_id: str,
body: "models.MicrosoftGraphAppRoleAssignment",
**kwargs
) -> None:
"""Update the navigation property appRoleAssignedTo in servicePrincipals.
Update the navigation property appRoleAssignedTo in servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param app_role_assignment_id: key: id of appRoleAssignment.
:type app_role_assignment_id: str
:param body: New navigation property values.
:type body: ~applications.models.MicrosoftGraphAppRoleAssignment
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_app_role_assigned_to.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'appRoleAssignment-id': self._serialize.url("app_role_assignment_id", app_role_assignment_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'MicrosoftGraphAppRoleAssignment')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
update_app_role_assigned_to.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}'} # type: ignore
async def delete_app_role_assigned_to(
self,
service_principal_id: str,
app_role_assignment_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property appRoleAssignedTo for servicePrincipals.
Delete navigation property appRoleAssignedTo for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param app_role_assignment_id: key: id of appRoleAssignment.
:type app_role_assignment_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_app_role_assigned_to.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'appRoleAssignment-id': self._serialize.url("app_role_assignment_id", app_role_assignment_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
delete_app_role_assigned_to.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}'} # type: ignore
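# ---------------------------------------------------------------------------
# Illustrative sketch (reader aid, not generated code): the optional If-Match
# precondition used by the update/delete operations above. The header is sent
# only when the caller supplies an ETag, so the request becomes conditional on
# the server-side resource version; _demo_delete_headers is hypothetical.
def _demo_delete_headers(if_match=None):
    headers = {"Accept": "application/json"}
    if if_match is not None:
        headers["If-Match"] = if_match
    return headers

assert "If-Match" not in _demo_delete_headers()
assert _demo_delete_headers('W/"1"')["If-Match"] == 'W/"1"'  # 'W/"1"' is a made-up ETag
# ---------------------------------------------------------------------------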
def list_app_role_assignments(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum35"]]] = None,
select: Optional[List[Union[str, "models.Enum36"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfAppRoleAssignment1"]:
"""Get appRoleAssignments from servicePrincipals.
Get appRoleAssignments from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum35]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum36]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfAppRoleAssignment1 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfAppRoleAssignment1]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfAppRoleAssignment1"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_app_role_assignments.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfAppRoleAssignment1', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_app_role_assignments.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignments'} # type: ignore
async def create_app_role_assignments(
self,
service_principal_id: str,
body: "models.MicrosoftGraphAppRoleAssignment",
**kwargs
) -> "models.MicrosoftGraphAppRoleAssignment":
"""Create new navigation property to appRoleAssignments for servicePrincipals.
Create new navigation property to appRoleAssignments for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property.
:type body: ~applications.models.MicrosoftGraphAppRoleAssignment
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphAppRoleAssignment, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphAppRoleAssignment
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphAppRoleAssignment"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_app_role_assignments.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'MicrosoftGraphAppRoleAssignment')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphAppRoleAssignment', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_app_role_assignments.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignments'} # type: ignore
async def get_app_role_assignments(
self,
service_principal_id: str,
app_role_assignment_id: str,
select: Optional[List[Union[str, "models.Enum37"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> "models.MicrosoftGraphAppRoleAssignment":
"""Get appRoleAssignments from servicePrincipals.
Get appRoleAssignments from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param app_role_assignment_id: key: id of appRoleAssignment.
:type app_role_assignment_id: str
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum37]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphAppRoleAssignment, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphAppRoleAssignment
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphAppRoleAssignment"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_app_role_assignments.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'appRoleAssignment-id': self._serialize.url("app_role_assignment_id", app_role_assignment_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphAppRoleAssignment', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_app_role_assignments.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignments/{appRoleAssignment-id}'} # type: ignore
async def update_app_role_assignments(
self,
service_principal_id: str,
app_role_assignment_id: str,
body: "models.MicrosoftGraphAppRoleAssignment",
**kwargs
) -> None:
"""Update the navigation property appRoleAssignments in servicePrincipals.
Update the navigation property appRoleAssignments in servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param app_role_assignment_id: key: id of appRoleAssignment.
:type app_role_assignment_id: str
:param body: New navigation property values.
:type body: ~applications.models.MicrosoftGraphAppRoleAssignment
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_app_role_assignments.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'appRoleAssignment-id': self._serialize.url("app_role_assignment_id", app_role_assignment_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'MicrosoftGraphAppRoleAssignment')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
update_app_role_assignments.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignments/{appRoleAssignment-id}'} # type: ignore
async def delete_app_role_assignments(
self,
service_principal_id: str,
app_role_assignment_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property appRoleAssignments for servicePrincipals.
Delete navigation property appRoleAssignments for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param app_role_assignment_id: key: id of appRoleAssignment.
:type app_role_assignment_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_app_role_assignments.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'appRoleAssignment-id': self._serialize.url("app_role_assignment_id", app_role_assignment_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
delete_app_role_assignments.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/appRoleAssignments/{appRoleAssignment-id}'} # type: ignore
def list_claims_mapping_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum38"]]] = None,
select: Optional[List[Union[str, "models.Enum39"]]] = None,
expand: Optional[List[Union[str, "models.Enum40"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfClaimsMappingPolicy"]:
"""Get claimsMappingPolicies from servicePrincipals.
Get claimsMappingPolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum38]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum39]
:param expand: Expand related entities.
:type expand: list[str or ~applications.models.Enum40]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfClaimsMappingPolicy or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfClaimsMappingPolicy]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfClaimsMappingPolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_claims_mapping_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfClaimsMappingPolicy', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_claims_mapping_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/claimsMappingPolicies'} # type: ignore
def list_ref_claims_mapping_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum41"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfClaimsMappingPolicy"]:
"""Get ref of claimsMappingPolicies from servicePrincipals.
Get ref of claimsMappingPolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum41]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfLinksOfClaimsMappingPolicy or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfClaimsMappingPolicy]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfClaimsMappingPolicy"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_claims_mapping_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfClaimsMappingPolicy', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_claims_mapping_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/claimsMappingPolicies/$ref'} # type: ignore
async def create_ref_claims_mapping_policies(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to claimsMappingPolicies for servicePrincipals.
Create new navigation property ref to claimsMappingPolicies for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_claims_mapping_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_claims_mapping_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/claimsMappingPolicies/$ref'} # type: ignore
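# ---------------------------------------------------------------------------
# Hedged example (an assumption based on Microsoft Graph $ref conventions, not
# taken from this module): the body for create_ref_claims_mapping_policies is
# a dict whose "@odata.id" points at the claimsMappingPolicy to link. The
# policy id below is hypothetical.
_demo_ref_body = {
    "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies"
                 "/8782712b-4368-4bc2-84d9-d47cd0146621"
}
# ---------------------------------------------------------------------------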
def list_created_objects(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum42"]]] = None,
select: Optional[List[Union[str, "models.Enum43"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfDirectoryObject0"]:
"""Get createdObjects from servicePrincipals.
Get createdObjects from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum42]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum43]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfDirectoryObject0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfDirectoryObject0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfDirectoryObject0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_created_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfDirectoryObject0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_created_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/createdObjects'} # type: ignore
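# Usage sketch (values are placeholders; Enum42/Enum43 members may be passed
# as plain strings because the parameters are typed Union[str, Enum]). The
# orderby/select/expand lists are serialized comma-separated into $orderby,
# $select and $expand, while $top/$skip/$search/$filter/$count come from the
# client configuration rather than per-call arguments:
#
#     pager = client.service_principals.list_created_objects(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         orderby=["id"],
#         select=["id"],
#     )
#     async for directory_object in pager:
#         print(directory_object)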
def list_ref_created_objects(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum44"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfDirectoryObject0"]:
"""Get ref of createdObjects from servicePrincipals.
Get ref of createdObjects from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum44]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfLinksOfDirectoryObject0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfDirectoryObject0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfDirectoryObject0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_created_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfDirectoryObject0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_created_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/createdObjects/$ref'} # type: ignore
async def create_ref_created_objects(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to createdObjects for servicePrincipals.
Create new navigation property ref to createdObjects for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_created_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_created_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/createdObjects/$ref'} # type: ignore
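# Sketch of the two hooks every operation in this class accepts (illustrative;
# IDs are placeholders): "error_map" overrides which exception is raised for a
# given status code before the default HttpResponseError path, and "cls"
# replaces the default return value with whatever the callable returns. For
# the non-paged operations the callable receives the pipeline response, the
# deserialized body, and the response headers:
#
#     from azure.core.exceptions import ResourceNotFoundError
#
#     raw = await client.service_principals.create_ref_created_objects(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         body={"@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/{object-id}"},
#         error_map={404: ResourceNotFoundError},
#         cls=lambda pipeline_response, deserialized, response_headers: deserialized,
#     )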
def list_endpoints(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum45"]]] = None,
select: Optional[List[Union[str, "models.Enum46"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfEndpoint"]:
"""Get endpoints from servicePrincipals.
Get endpoints from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum45]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum46]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfEndpoint or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfEndpoint]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfEndpoint"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_endpoints.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfEndpoint', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_endpoints.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/endpoints'} # type: ignore
async def create_endpoints(
self,
service_principal_id: str,
body: "models.MicrosoftGraphEndpoint",
**kwargs
) -> "models.MicrosoftGraphEndpoint":
"""Create new navigation property to endpoints for servicePrincipals.
Create new navigation property to endpoints for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property.
:type body: ~applications.models.MicrosoftGraphEndpoint
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphEndpoint, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphEndpoint
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphEndpoint"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_endpoints.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'MicrosoftGraphEndpoint')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphEndpoint', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_endpoints.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/endpoints'} # type: ignore
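# Usage sketch (the constructor keyword "capability" is an assumption based on
# the Microsoft Graph endpoint resource, not on this file): unlike the $ref
# operations, this POST takes a typed model and deserializes the 201 response
# back into a MicrosoftGraphEndpoint:
#
#     from applications import models  # package name taken from the docstring cross-references
#
#     created = await client.service_principals.create_endpoints(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         body=models.MicrosoftGraphEndpoint(capability="Conversations"),
#     )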
async def get_endpoints(
self,
service_principal_id: str,
endpoint_id: str,
select: Optional[List[Union[str, "models.Enum47"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> "models.MicrosoftGraphEndpoint":
"""Get endpoints from servicePrincipals.
Get endpoints from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param endpoint_id: key: id of endpoint.
:type endpoint_id: str
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum47]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphEndpoint, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphEndpoint
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphEndpoint"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_endpoints.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'endpoint-id': self._serialize.url("endpoint_id", endpoint_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphEndpoint', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_endpoints.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/endpoints/{endpoint-id}'} # type: ignore
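# Usage sketch (placeholder IDs): single-entity reads address the endpoint by
# both path keys and can trim the payload with $select; Enum47 members may be
# given as plain strings:
#
#     endpoint = await client.service_principals.get_endpoints(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         endpoint_id="11111111-1111-1111-1111-111111111111",
#         select=["id"],
#     )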
async def update_endpoints(
self,
service_principal_id: str,
endpoint_id: str,
body: "models.MicrosoftGraphEndpoint",
**kwargs
) -> None:
"""Update the navigation property endpoints in servicePrincipals.
Update the navigation property endpoints in servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param endpoint_id: key: id of endpoint.
:type endpoint_id: str
:param body: New navigation property values.
:type body: ~applications.models.MicrosoftGraphEndpoint
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.update_endpoints.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'endpoint-id': self._serialize.url("endpoint_id", endpoint_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'MicrosoftGraphEndpoint')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
update_endpoints.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/endpoints/{endpoint-id}'} # type: ignore
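# Usage sketch (placeholder IDs; the "capability" kwarg is an assumption about
# the generated model): the update is a PATCH, so only the properties set on
# the body model are sent, and a successful call answers 204 with no body,
# hence the None return type:
#
#     await client.service_principals.update_endpoints(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         endpoint_id="11111111-1111-1111-1111-111111111111",
#         body=models.MicrosoftGraphEndpoint(capability="Conversations"),
#     )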
async def delete_endpoints(
self,
service_principal_id: str,
endpoint_id: str,
if_match: Optional[str] = None,
**kwargs
) -> None:
"""Delete navigation property endpoints for servicePrincipals.
Delete navigation property endpoints for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param endpoint_id: key: id of endpoint.
:type endpoint_id: str
:param if_match: ETag.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.delete_endpoints.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
'endpoint-id': self._serialize.url("endpoint_id", endpoint_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
delete_endpoints.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/endpoints/{endpoint-id}'} # type: ignore
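# Usage sketch (placeholder IDs and ETag): if_match is forwarded as an
# If-Match header, so passing the ETag read from a prior GET makes the delete
# conditional on the resource being unchanged since that read:
#
#     await client.service_principals.delete_endpoints(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         endpoint_id="11111111-1111-1111-1111-111111111111",
#         if_match='W/"placeholder-etag"',
#     )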
def list_home_realm_discovery_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum48"]]] = None,
select: Optional[List[Union[str, "models.Enum49"]]] = None,
expand: Optional[List[Union[str, "models.Enum50"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfHomeRealmDiscoveryPolicy0"]:
"""Get homeRealmDiscoveryPolicies from servicePrincipals.
Get homeRealmDiscoveryPolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum48]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum49]
:param expand: Expand related entities.
:type expand: list[str or ~applications.models.Enum50]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfHomeRealmDiscoveryPolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfHomeRealmDiscoveryPolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfHomeRealmDiscoveryPolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_home_realm_discovery_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfHomeRealmDiscoveryPolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_home_realm_discovery_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/homeRealmDiscoveryPolicies'} # type: ignore
def list_ref_home_realm_discovery_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum51"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfHomeRealmDiscoveryPolicy0"]:
"""Get ref of homeRealmDiscoveryPolicies from servicePrincipals.
Get ref of homeRealmDiscoveryPolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum51]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfLinksOfHomeRealmDiscoveryPolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfHomeRealmDiscoveryPolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfHomeRealmDiscoveryPolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_home_realm_discovery_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfHomeRealmDiscoveryPolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_home_realm_discovery_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/homeRealmDiscoveryPolicies/$ref'} # type: ignore
async def create_ref_home_realm_discovery_policies(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to homeRealmDiscoveryPolicies for servicePrincipals.
Create new navigation property ref to homeRealmDiscoveryPolicies for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_home_realm_discovery_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_home_realm_discovery_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/homeRealmDiscoveryPolicies/$ref'} # type: ignore
def list_member_of(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum52"]]] = None,
select: Optional[List[Union[str, "models.Enum53"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfDirectoryObject1"]:
"""Get memberOf from servicePrincipals.
Get memberOf from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum52]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum53]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfDirectoryObject1 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfDirectoryObject1]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfDirectoryObject1"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_member_of.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfDirectoryObject1', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_member_of.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/memberOf'} # type: ignore
def list_ref_member_of(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum54"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfDirectoryObject1"]:
"""Get ref of memberOf from servicePrincipals.
Get ref of memberOf from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum54]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator-like instance of either CollectionOfLinksOfDirectoryObject1 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfDirectoryObject1]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfDirectoryObject1"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_member_of.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfDirectoryObject1', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_member_of.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/memberOf/$ref'} # type: ignore
async def create_ref_member_of(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to memberOf for servicePrincipals.
Create new navigation property ref to memberOf for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_member_of.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_member_of.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/memberOf/$ref'} # type: ignore
async def add_key(
self,
service_principal_id: str,
body: "models.PathsN3Fx9GServiceprincipalsServiceprincipalIdMicrosoftGraphAddkeyPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> "models.MicrosoftGraphKeyCredential":
"""Invoke action addKey.
Invoke action addKey.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.PathsN3Fx9GServiceprincipalsServiceprincipalIdMicrosoftGraphAddkeyPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphKeyCredential, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphKeyCredential
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphKeyCredential"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.add_key.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'PathsN3Fx9GServiceprincipalsServiceprincipalIdMicrosoftGraphAddkeyPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphKeyCredential', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
add_key.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.addKey'} # type: ignore
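# Usage sketch (the "key_credential" and "proof" keywords are assumptions
# mirroring the Microsoft Graph addKey action, not read from this file): the
# long generated Paths... model wraps the action parameters, and the call
# returns the newly added MicrosoftGraphKeyCredential:
#
#     body = models.PathsN3Fx9GServiceprincipalsServiceprincipalIdMicrosoftGraphAddkeyPostRequestbodyContentApplicationJsonSchema(
#         key_credential=models.MicrosoftGraphKeyCredential(type="AsymmetricX509Cert", usage="Verify"),
#         proof="<self-signed-jwt-proof-of-possession>",  # placeholder
#     )
#     new_key = await client.service_principals.add_key(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         body=body,
#     )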
async def add_password(
self,
service_principal_id: str,
body: "models.PathsIeboplServiceprincipalsServiceprincipalIdMicrosoftGraphAddpasswordPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> "models.MicrosoftGraphPasswordCredential":
"""Invoke action addPassword.
Invoke action addPassword.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.PathsIeboplServiceprincipalsServiceprincipalIdMicrosoftGraphAddpasswordPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphPasswordCredential, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphPasswordCredential
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphPasswordCredential"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.add_password.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'PathsIeboplServiceprincipalsServiceprincipalIdMicrosoftGraphAddpasswordPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphPasswordCredential', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
add_password.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.addPassword'} # type: ignore
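# Usage sketch ("password_credential" and "display_name" are assumed keywords
# based on the Graph addPassword action): the response model carries the
# generated secret, which per Microsoft Graph semantics is only returned by
# this call and cannot be retrieved later:
#
#     body = models.PathsIeboplServiceprincipalsServiceprincipalIdMicrosoftGraphAddpasswordPostRequestbodyContentApplicationJsonSchema(
#         password_credential=models.MicrosoftGraphPasswordCredential(display_name="automation-secret"),
#     )
#     credential = await client.service_principals.add_password(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         body=body,
#     )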
async def check_member_groups(
self,
service_principal_id: str,
body: "models.PathsO5Kx2YServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmembergroupsPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> List[str]:
"""Invoke action checkMemberGroups.
Invoke action checkMemberGroups.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.PathsO5Kx2YServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmembergroupsPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of str, or the result of cls(response)
:rtype: list[str]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[str]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.check_member_groups.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'PathsO5Kx2YServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmembergroupsPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('[str]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
check_member_groups.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.checkMemberGroups'} # type: ignore
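# Usage sketch ("group_ids" is an assumed keyword mirroring the Graph
# checkMemberGroups action; GUIDs are placeholders): the action answers with
# the subset of the supplied group IDs the service principal belongs to, as a
# plain list of strings:
#
#     matched = await client.service_principals.check_member_groups(
#         service_principal_id="00000000-0000-0000-0000-000000000000",
#         body=models.PathsO5Kx2YServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmembergroupsPostRequestbodyContentApplicationJsonSchema(
#             group_ids=["22222222-2222-2222-2222-222222222222"],
#         ),
#     )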
async def check_member_objects(
self,
service_principal_id: str,
body: "models.Paths1Ffhl47ServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmemberobjectsPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> List[str]:
"""Invoke action checkMemberObjects.
Invoke action checkMemberObjects.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.Paths1Ffhl47ServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmemberobjectsPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of str, or the result of cls(response)
:rtype: list[str]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[str]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.check_member_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'Paths1Ffhl47ServiceprincipalsServiceprincipalIdMicrosoftGraphCheckmemberobjectsPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('[str]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
check_member_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.checkMemberObjects'} # type: ignore
async def get_member_groups(
self,
service_principal_id: str,
body: "models.Paths1850388ServiceprincipalsServiceprincipalIdMicrosoftGraphGetmembergroupsPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> List[str]:
"""Invoke action getMemberGroups.
Invoke action getMemberGroups.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.Paths1850388ServiceprincipalsServiceprincipalIdMicrosoftGraphGetmembergroupsPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of str, or the result of cls(response)
:rtype: list[str]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[str]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.get_member_groups.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'Paths1850388ServiceprincipalsServiceprincipalIdMicrosoftGraphGetmembergroupsPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('[str]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_member_groups.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.getMemberGroups'} # type: ignore
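    # Usage sketch (illustrative): the `security_enabled_only` keyword is an assumption
    # based on the Graph getMemberGroups action schema; the model name is taken verbatim
    # from the serializer call above, and `client.service_principals` is a stand-in for
    # however the enclosing client exposes this operations class.
    #
    #     body = models.Paths1850388ServiceprincipalsServiceprincipalIdMicrosoftGraphGetmembergroupsPostRequestbodyContentApplicationJsonSchema(
    #         security_enabled_only=False)
    #     group_ids = await client.service_principals.get_member_groups(
    #         "<servicePrincipal-id>", body)  # -> List[str] of group ids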
async def get_member_objects(
self,
service_principal_id: str,
body: "models.Paths1Md6PmhServiceprincipalsServiceprincipalIdMicrosoftGraphGetmemberobjectsPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> List[str]:
"""Invoke action getMemberObjects.
Invoke action getMemberObjects.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.Paths1Md6PmhServiceprincipalsServiceprincipalIdMicrosoftGraphGetmemberobjectsPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of str, or the result of cls(response)
:rtype: list[str]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[str]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.get_member_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'Paths1Md6PmhServiceprincipalsServiceprincipalIdMicrosoftGraphGetmemberobjectsPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('[str]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_member_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.getMemberObjects'} # type: ignore
async def remove_key(
self,
service_principal_id: str,
body: "models.Paths1UhuhlbServiceprincipalsServiceprincipalIdMicrosoftGraphRemovekeyPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> None:
"""Invoke action removeKey.
Invoke action removeKey.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.Paths1UhuhlbServiceprincipalsServiceprincipalIdMicrosoftGraphRemovekeyPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.remove_key.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'Paths1UhuhlbServiceprincipalsServiceprincipalIdMicrosoftGraphRemovekeyPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
remove_key.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.removeKey'} # type: ignore
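    # Usage sketch (illustrative): removeKey returns None on a 204 response. The
    # `key_id` and `proof` keywords are assumptions based on the Graph removeKey action
    # schema (a key identifier plus a proof-of-possession token).
    #
    #     body = models.Paths1UhuhlbServiceprincipalsServiceprincipalIdMicrosoftGraphRemovekeyPostRequestbodyContentApplicationJsonSchema(
    #         key_id="<keyId-guid>", proof="<proof-of-possession-jwt>")
    #     await client.service_principals.remove_key("<servicePrincipal-id>", body)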
async def remove_password(
self,
service_principal_id: str,
body: "models.Paths1Idoj4GServiceprincipalsServiceprincipalIdMicrosoftGraphRemovepasswordPostRequestbodyContentApplicationJsonSchema",
**kwargs
) -> None:
"""Invoke action removePassword.
Invoke action removePassword.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: Action parameters.
:type body: ~applications.models.Paths1Idoj4GServiceprincipalsServiceprincipalIdMicrosoftGraphRemovepasswordPostRequestbodyContentApplicationJsonSchema
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.remove_password.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, 'Paths1Idoj4GServiceprincipalsServiceprincipalIdMicrosoftGraphRemovepasswordPostRequestbodyContentApplicationJsonSchema')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
remove_password.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.removePassword'} # type: ignore
async def restore(
self,
service_principal_id: str,
**kwargs
) -> "models.MicrosoftGraphDirectoryObject":
"""Invoke action restore.
Invoke action restore.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: MicrosoftGraphDirectoryObject, or the result of cls(response)
:rtype: ~applications.models.MicrosoftGraphDirectoryObject
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.MicrosoftGraphDirectoryObject"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.restore.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.post(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('MicrosoftGraphDirectoryObject', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
restore.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/microsoft.graph.restore'} # type: ignore
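    # Usage sketch (illustrative): restore takes no request body. As with every
    # operation in this class, the default status-code-to-exception mapping can be
    # overridden per call by passing `error_map` through kwargs, since it is popped
    # and merged at the top of the method.
    #
    #     restored = await client.service_principals.restore(
    #         "<servicePrincipal-id>",
    #         error_map={404: ResourceNotFoundError})  # -> MicrosoftGraphDirectoryObject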
def list_oauth2_permission_grants(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum55"]]] = None,
select: Optional[List[Union[str, "models.Enum56"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfOAuth2PermissionGrant"]:
"""Get oauth2PermissionGrants from servicePrincipals.
Get oauth2PermissionGrants from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum55]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum56]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfOAuth2PermissionGrant or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfOAuth2PermissionGrant]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfOAuth2PermissionGrant"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_oauth2_permission_grants.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfOAuth2PermissionGrant', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_oauth2_permission_grants.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/oauth2PermissionGrants'} # type: ignore
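    # Usage sketch (illustrative): list_* methods return an AsyncItemPaged that lazily
    # issues GET requests and follows odata_next_link between pages, so they are
    # consumed with `async for` rather than awaited. Note that $top/$skip/$search/
    # $filter/$count come from the client configuration (self._config), not from
    # per-call parameters.
    #
    #     async for grant in client.service_principals.list_oauth2_permission_grants(
    #             "<servicePrincipal-id>", select=["id", "scope"]):
    #         print(grant.id)  # property names follow the Graph oAuth2PermissionGrant resource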
def list_ref_oauth2_permission_grants(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum57"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfOAuth2PermissionGrant"]:
"""Get ref of oauth2PermissionGrants from servicePrincipals.
Get ref of oauth2PermissionGrants from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum57]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfLinksOfOAuth2PermissionGrant or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfOAuth2PermissionGrant]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfOAuth2PermissionGrant"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_oauth2_permission_grants.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfOAuth2PermissionGrant', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_oauth2_permission_grants.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/oauth2PermissionGrants/$ref'} # type: ignore
async def create_ref_oauth2_permission_grants(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to oauth2PermissionGrants for servicePrincipals.
Create new navigation property ref to oauth2PermissionGrants for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_oauth2_permission_grants.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_oauth2_permission_grants.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/oauth2PermissionGrants/$ref'} # type: ignore
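    # Usage sketch (illustrative): create_ref_* operations take a plain dict that is
    # serialized as '{object}'. For Graph $ref links the payload is typically a single
    # "@odata.id" entry pointing at the object to reference; the exact URL shape below
    # is an assumption.
    #
    #     body = {"@odata.id": "https://graph.microsoft.com/beta/oauth2PermissionGrants/<grant-id>"}
    #     ref = await client.service_principals.create_ref_oauth2_permission_grants(
    #         "<servicePrincipal-id>", body)  # dict echoed back on a 201 response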
def list_owned_objects(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum58"]]] = None,
select: Optional[List[Union[str, "models.Enum59"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfDirectoryObject2"]:
"""Get ownedObjects from servicePrincipals.
Get ownedObjects from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum58]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum59]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfDirectoryObject2 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfDirectoryObject2]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfDirectoryObject2"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_owned_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfDirectoryObject2', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_owned_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/ownedObjects'} # type: ignore
def list_ref_owned_objects(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum60"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfDirectoryObject2"]:
"""Get ref of ownedObjects from servicePrincipals.
Get ref of ownedObjects from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum60]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfLinksOfDirectoryObject2 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfDirectoryObject2]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfDirectoryObject2"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_owned_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfDirectoryObject2', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_owned_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/ownedObjects/$ref'} # type: ignore
async def create_ref_owned_objects(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to ownedObjects for servicePrincipals.
Create new navigation property ref to ownedObjects for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_owned_objects.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_owned_objects.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/ownedObjects/$ref'} # type: ignore
def list_owners(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum61"]]] = None,
select: Optional[List[Union[str, "models.Enum62"]]] = None,
expand: Optional[List[str]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfDirectoryObject3"]:
"""Get owners from servicePrincipals.
Get owners from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum61]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum62]
:param expand: Expand related entities.
:type expand: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfDirectoryObject3 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfDirectoryObject3]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfDirectoryObject3"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_owners.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfDirectoryObject3', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_owners.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/owners'} # type: ignore
def list_ref_owners(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum63"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfDirectoryObject3"]:
"""Get ref of owners from servicePrincipals.
Get ref of owners from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum63]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfLinksOfDirectoryObject3 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfDirectoryObject3]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfDirectoryObject3"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_owners.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfDirectoryObject3', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_owners.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/owners/$ref'} # type: ignore
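    # Usage sketch (illustrative): the $ref variant pages through
    # CollectionOfLinksOfDirectoryObject3, whose `value` holds reference links to the
    # owners rather than full directoryObject models; it is consumed exactly like
    # list_owners.
    #
    #     async for owner_ref in client.service_principals.list_ref_owners(
    #             "<servicePrincipal-id>"):
    #         print(owner_ref)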
async def create_ref_owners(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to owners for servicePrincipals.
Create new navigation property ref to owners for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_owners.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_owners.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/owners/$ref'} # type: ignore
def list_token_issuance_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum64"]]] = None,
select: Optional[List[Union[str, "models.Enum65"]]] = None,
expand: Optional[List[Union[str, "models.Enum66"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfTokenIssuancePolicy0"]:
"""Get tokenIssuancePolicies from servicePrincipals.
Get tokenIssuancePolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum64]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum65]
:param expand: Expand related entities.
:type expand: list[str or ~applications.models.Enum66]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfTokenIssuancePolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfTokenIssuancePolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfTokenIssuancePolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_token_issuance_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfTokenIssuancePolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_token_issuance_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/tokenIssuancePolicies'} # type: ignore
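    # Usage sketch (illustrative): orderby/select/expand are typed as
    # Optional[List[Union[str, models.Enum64/Enum65/Enum66]]], so either the generated
    # enum members or plain property-name strings may be passed; both are serialized
    # into a comma-separated OData query option.
    #
    #     async for policy in client.service_principals.list_token_issuance_policies(
    #             "<servicePrincipal-id>", select=["id", "displayName"]):
    #         print(policy.id)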
def list_ref_token_issuance_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum67"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfTokenIssuancePolicy0"]:
"""Get ref of tokenIssuancePolicies from servicePrincipals.
Get ref of tokenIssuancePolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum67]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfLinksOfTokenIssuancePolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfTokenIssuancePolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfTokenIssuancePolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_token_issuance_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfTokenIssuancePolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_token_issuance_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/tokenIssuancePolicies/$ref'} # type: ignore
async def create_ref_token_issuance_policies(
self,
service_principal_id: str,
body: Dict[str, object],
**kwargs
) -> Dict[str, object]:
"""Create new navigation property ref to tokenIssuancePolicies for servicePrincipals.
Create new navigation property ref to tokenIssuancePolicies for servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param body: New navigation property ref value.
:type body: dict[str, object]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to object, or the result of cls(response)
:rtype: dict[str, object]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, object]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.create_ref_token_issuance_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(body, '{object}')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize(models.OdataError, response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('{object}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_ref_token_issuance_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/tokenIssuancePolicies/$ref'} # type: ignore
def list_token_lifetime_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum68"]]] = None,
select: Optional[List[Union[str, "models.Enum69"]]] = None,
expand: Optional[List[Union[str, "models.Enum70"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfTokenLifetimePolicy0"]:
"""Get tokenLifetimePolicies from servicePrincipals.
Get tokenLifetimePolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum68]
:param select: Select properties to be returned.
:type select: list[str or ~applications.models.Enum69]
:param expand: Expand related entities.
:type expand: list[str or ~applications.models.Enum70]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfTokenLifetimePolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfTokenLifetimePolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfTokenLifetimePolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_token_lifetime_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
if select is not None:
query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
if expand is not None:
query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfTokenLifetimePolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_token_lifetime_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/tokenLifetimePolicies'} # type: ignore
def list_ref_token_lifetime_policies(
self,
service_principal_id: str,
orderby: Optional[List[Union[str, "models.Enum71"]]] = None,
**kwargs
) -> AsyncIterable["models.CollectionOfLinksOfTokenLifetimePolicy0"]:
"""Get ref of tokenLifetimePolicies from servicePrincipals.
Get ref of tokenLifetimePolicies from servicePrincipals.
:param service_principal_id: key: id of servicePrincipal.
:type service_principal_id: str
:param orderby: Order items by property values.
:type orderby: list[str or ~applications.models.Enum71]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either CollectionOfLinksOfTokenLifetimePolicy0 or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfTokenLifetimePolicy0]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.CollectionOfLinksOfTokenLifetimePolicy0"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list_ref_token_lifetime_policies.metadata['url'] # type: ignore
path_format_arguments = {
'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if self._config.top is not None:
query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
if self._config.skip is not None:
query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
if self._config.search is not None:
query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
if self._config.filter is not None:
query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
if self._config.count is not None:
query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
if orderby is not None:
query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('CollectionOfLinksOfTokenLifetimePolicy0', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.odata_next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
error = self._deserialize(models.OdataError, response)
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list_ref_token_lifetime_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/tokenLifetimePolicies/$ref'} # type: ignore

    async def create_ref_token_lifetime_policies(
        self,
        service_principal_id: str,
        body: Dict[str, object],
        **kwargs
    ) -> Dict[str, object]:
        """Create new navigation property ref to tokenLifetimePolicies for servicePrincipals.

        Create new navigation property ref to tokenLifetimePolicies for servicePrincipals.

        :param service_principal_id: key: id of servicePrincipal.
        :type service_principal_id: str
        :param body: New navigation property ref value.
        :type body: dict[str, object]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: dict mapping str to object, or the result of cls(response)
        :rtype: dict[str, object]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[Dict[str, object]]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.create_ref_token_lifetime_policies.metadata['url']  # type: ignore
        path_format_arguments = {
            'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, '{object}')
        body_content_kwargs['content'] = body_content
        request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [201]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.OdataError, response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize('{object}', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    create_ref_token_lifetime_policies.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/tokenLifetimePolicies/$ref'}  # type: ignore
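
    # NOTE (editor's addition, not autorest output; kept commented): for the
    # create_ref_* operations, Microsoft Graph expects the body to carry an
    # "@odata.id" link to the object being referenced. The exact URL shape and
    # client wiring below are assumptions for illustration.
    #
    #   ref_body = {
    #       "@odata.id": "https://graph.microsoft.com/beta/policies/tokenLifetimePolicies/{policy-id}",
    #   }
    #   created = await graph_client.service_principals.create_ref_token_lifetime_policies(sp_id, ref_body)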

    def list_transitive_member_of(
        self,
        service_principal_id: str,
        orderby: Optional[List[Union[str, "models.Enum72"]]] = None,
        select: Optional[List[Union[str, "models.Enum73"]]] = None,
        expand: Optional[List[str]] = None,
        **kwargs
    ) -> AsyncIterable["models.CollectionOfDirectoryObject4"]:
        """Get transitiveMemberOf from servicePrincipals.

        Get transitiveMemberOf from servicePrincipals.

        :param service_principal_id: key: id of servicePrincipal.
        :type service_principal_id: str
        :param orderby: Order items by property values.
        :type orderby: list[str or ~applications.models.Enum72]
        :param select: Select properties to be returned.
        :type select: list[str or ~applications.models.Enum73]
        :param expand: Expand related entities.
        :type expand: list[str]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either CollectionOfDirectoryObject4 or the result of cls(response)
        :rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfDirectoryObject4]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.CollectionOfDirectoryObject4"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        def prepare_request(next_link=None):
            # Construct headers
            header_parameters = {}  # type: Dict[str, Any]
            header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

            if not next_link:
                # Construct URL
                url = self.list_transitive_member_of.metadata['url']  # type: ignore
                path_format_arguments = {
                    'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
                }
                url = self._client.format_url(url, **path_format_arguments)
                # Construct parameters
                query_parameters = {}  # type: Dict[str, Any]
                if self._config.top is not None:
                    query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
                if self._config.skip is not None:
                    query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
                if self._config.search is not None:
                    query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
                if self._config.filter is not None:
                    query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
                if self._config.count is not None:
                    query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
                if orderby is not None:
                    query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
                if select is not None:
                    query_parameters['$select'] = self._serialize.query("select", select, '[str]', div=',')
                if expand is not None:
                    query_parameters['$expand'] = self._serialize.query("expand", expand, '[str]', div=',')
                request = self._client.get(url, query_parameters, header_parameters)
            else:
                url = next_link
                query_parameters = {}  # type: Dict[str, Any]
                request = self._client.get(url, query_parameters, header_parameters)
            return request

        async def extract_data(pipeline_response):
            deserialized = self._deserialize('CollectionOfDirectoryObject4', pipeline_response)
            list_of_elem = deserialized.value
            if cls:
                list_of_elem = cls(list_of_elem)
            return deserialized.odata_next_link or None, AsyncList(list_of_elem)

        async def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                error = self._deserialize(models.OdataError, response)
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

            return pipeline_response

        return AsyncItemPaged(
            get_next, extract_data
        )
    list_transitive_member_of.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/transitiveMemberOf'}  # type: ignore

    def list_ref_transitive_member_of(
        self,
        service_principal_id: str,
        orderby: Optional[List[Union[str, "models.Enum74"]]] = None,
        **kwargs
    ) -> AsyncIterable["models.CollectionOfLinksOfDirectoryObject4"]:
        """Get ref of transitiveMemberOf from servicePrincipals.

        Get ref of transitiveMemberOf from servicePrincipals.

        :param service_principal_id: key: id of servicePrincipal.
        :type service_principal_id: str
        :param orderby: Order items by property values.
        :type orderby: list[str or ~applications.models.Enum74]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either CollectionOfLinksOfDirectoryObject4 or the result of cls(response)
        :rtype: ~azure.core.async_paging.AsyncItemPaged[~applications.models.CollectionOfLinksOfDirectoryObject4]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["models.CollectionOfLinksOfDirectoryObject4"]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        def prepare_request(next_link=None):
            # Construct headers
            header_parameters = {}  # type: Dict[str, Any]
            header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

            if not next_link:
                # Construct URL
                url = self.list_ref_transitive_member_of.metadata['url']  # type: ignore
                path_format_arguments = {
                    'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
                }
                url = self._client.format_url(url, **path_format_arguments)
                # Construct parameters
                query_parameters = {}  # type: Dict[str, Any]
                if self._config.top is not None:
                    query_parameters['$top'] = self._serialize.query("self._config.top", self._config.top, 'int', minimum=0)
                if self._config.skip is not None:
                    query_parameters['$skip'] = self._serialize.query("self._config.skip", self._config.skip, 'int', minimum=0)
                if self._config.search is not None:
                    query_parameters['$search'] = self._serialize.query("self._config.search", self._config.search, 'str')
                if self._config.filter is not None:
                    query_parameters['$filter'] = self._serialize.query("self._config.filter", self._config.filter, 'str')
                if self._config.count is not None:
                    query_parameters['$count'] = self._serialize.query("self._config.count", self._config.count, 'bool')
                if orderby is not None:
                    query_parameters['$orderby'] = self._serialize.query("orderby", orderby, '[str]', div=',')
                request = self._client.get(url, query_parameters, header_parameters)
            else:
                url = next_link
                query_parameters = {}  # type: Dict[str, Any]
                request = self._client.get(url, query_parameters, header_parameters)
            return request

        async def extract_data(pipeline_response):
            deserialized = self._deserialize('CollectionOfLinksOfDirectoryObject4', pipeline_response)
            list_of_elem = deserialized.value
            if cls:
                list_of_elem = cls(list_of_elem)
            return deserialized.odata_next_link or None, AsyncList(list_of_elem)

        async def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                error = self._deserialize(models.OdataError, response)
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

            return pipeline_response

        return AsyncItemPaged(
            get_next, extract_data
        )
    list_ref_transitive_member_of.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/transitiveMemberOf/$ref'}  # type: ignore
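
    # NOTE (editor's addition, not autorest output; kept commented): unlike the plain
    # list operations, the $ref variants page over reference links rather than full
    # directoryObject models, so each yielded element is a link-shaped value.
    #
    #   async for ref in graph_client.service_principals.list_ref_transitive_member_of(sp_id):
    #       print(ref)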

    async def create_ref_transitive_member_of(
        self,
        service_principal_id: str,
        body: Dict[str, object],
        **kwargs
    ) -> Dict[str, object]:
        """Create new navigation property ref to transitiveMemberOf for servicePrincipals.

        Create new navigation property ref to transitiveMemberOf for servicePrincipals.

        :param service_principal_id: key: id of servicePrincipal.
        :type service_principal_id: str
        :param body: New navigation property ref value.
        :type body: dict[str, object]
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: dict mapping str to object, or the result of cls(response)
        :rtype: dict[str, object]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[Dict[str, object]]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.create_ref_transitive_member_of.metadata['url']  # type: ignore
        path_format_arguments = {
            'servicePrincipal-id': self._serialize.url("service_principal_id", service_principal_id, 'str'),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, '{object}')
        body_content_kwargs['content'] = body_content
        request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [201]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.OdataError, response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize('{object}', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    create_ref_transitive_member_of.metadata = {'url': '/servicePrincipals/{servicePrincipal-id}/transitiveMemberOf/$ref'}  # type: ignore

    async def delta(
        self,
        **kwargs
    ) -> List["models.MicrosoftGraphServicePrincipal"]:
        """Invoke function delta.

        Invoke function delta.

        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: list of MicrosoftGraphServicePrincipal, or the result of cls(response)
        :rtype: list[~applications.models.MicrosoftGraphServicePrincipal]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[List["models.MicrosoftGraphServicePrincipal"]]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        accept = "application/json"

        # Construct URL
        url = self.delta.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.get(url, query_parameters, header_parameters)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.OdataError, response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize('[MicrosoftGraphServicePrincipal]', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    delta.metadata = {'url': '/servicePrincipals/microsoft.graph.delta()'}  # type: ignore
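
    # NOTE (editor's addition, not autorest output; kept commented): delta() is a
    # single (non-paged) GET against /servicePrincipals/microsoft.graph.delta() that
    # deserializes the response into a list of MicrosoftGraphServicePrincipal.
    # `graph_client` is an assumed name for illustration.
    #
    #   changed = await graph_client.service_principals.delta()
    #   for sp in changed:
    #       print(sp)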

    async def get_available_extension_properties(
        self,
        body: "models.PathsGo2T4HServiceprincipalsMicrosoftGraphGetavailableextensionpropertiesPostRequestbodyContentApplicationJsonSchema",
        **kwargs
    ) -> List["models.MicrosoftGraphExtensionProperty"]:
        """Invoke action getAvailableExtensionProperties.

        Invoke action getAvailableExtensionProperties.

        :param body: Action parameters.
        :type body: ~applications.models.PathsGo2T4HServiceprincipalsMicrosoftGraphGetavailableextensionpropertiesPostRequestbodyContentApplicationJsonSchema
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: list of MicrosoftGraphExtensionProperty, or the result of cls(response)
        :rtype: list[~applications.models.MicrosoftGraphExtensionProperty]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[List["models.MicrosoftGraphExtensionProperty"]]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.get_available_extension_properties.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'PathsGo2T4HServiceprincipalsMicrosoftGraphGetavailableextensionpropertiesPostRequestbodyContentApplicationJsonSchema')
        body_content_kwargs['content'] = body_content
        request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.OdataError, response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize('[MicrosoftGraphExtensionProperty]', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    get_available_extension_properties.metadata = {'url': '/servicePrincipals/microsoft.graph.getAvailableExtensionProperties'}  # type: ignore

    async def get_by_ids(
        self,
        body: "models.Paths15YkyvsServiceprincipalsMicrosoftGraphGetbyidsPostRequestbodyContentApplicationJsonSchema",
        **kwargs
    ) -> List["models.MicrosoftGraphDirectoryObject"]:
        """Invoke action getByIds.

        Invoke action getByIds.

        :param body: Action parameters.
        :type body: ~applications.models.Paths15YkyvsServiceprincipalsMicrosoftGraphGetbyidsPostRequestbodyContentApplicationJsonSchema
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: list of MicrosoftGraphDirectoryObject, or the result of cls(response)
        :rtype: list[~applications.models.MicrosoftGraphDirectoryObject]
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[List["models.MicrosoftGraphDirectoryObject"]]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.get_by_ids.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'Paths15YkyvsServiceprincipalsMicrosoftGraphGetbyidsPostRequestbodyContentApplicationJsonSchema')
        body_content_kwargs['content'] = body_content
        request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.OdataError, response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize('[MicrosoftGraphDirectoryObject]', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized
    get_by_ids.metadata = {'url': '/servicePrincipals/microsoft.graph.getByIds'}  # type: ignore
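
    # NOTE (editor's addition, not autorest output; kept commented): getByIds takes a
    # generated request-body model; the `ids`/`types` keyword names mirror the Graph
    # action parameters and are an assumption about the generated model's constructor.
    #
    #   body = models.Paths15YkyvsServiceprincipalsMicrosoftGraphGetbyidsPostRequestbodyContentApplicationJsonSchema(
    #       ids=["<object-id-1>", "<object-id-2>"],
    #       types=["servicePrincipal"],
    #   )
    #   objects = await graph_client.service_principals.get_by_ids(body)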

    async def validate_properties(
        self,
        body: "models.PathsYq15M4ServiceprincipalsMicrosoftGraphValidatepropertiesPostRequestbodyContentApplicationJsonSchema",
        **kwargs
    ) -> None:
        """Invoke action validateProperties.

        Invoke action validateProperties.

        :param body: Action parameters.
        :type body: ~applications.models.PathsYq15M4ServiceprincipalsMicrosoftGraphValidatepropertiesPostRequestbodyContentApplicationJsonSchema
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: None, or the result of cls(response)
        :rtype: None
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType[None]
        error_map = {
            401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
        }
        error_map.update(kwargs.pop('error_map', {}))
        content_type = kwargs.pop("content_type", "application/json")
        accept = "application/json"

        # Construct URL
        url = self.validate_properties.metadata['url']  # type: ignore

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        body_content_kwargs = {}  # type: Dict[str, Any]
        body_content = self._serialize.body(body, 'PathsYq15M4ServiceprincipalsMicrosoftGraphValidatepropertiesPostRequestbodyContentApplicationJsonSchema')
        body_content_kwargs['content'] = body_content
        request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
        pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize(models.OdataError, response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        if cls:
            return cls(pipeline_response, None, {})

    validate_properties.metadata = {'url': '/servicePrincipals/microsoft.graph.validateProperties'}  # type: ignore
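
    # NOTE (editor's addition, not autorest output; kept commented): validateProperties
    # succeeds with 204 No Content, so the coroutine resolves to None; any other status
    # raises HttpResponseError carrying the deserialized OdataError model. Names below
    # are illustrative.
    #
    #   try:
    #       await graph_client.service_principals.validate_properties(body)
    #   except HttpResponseError as exc:
    #       print("validation failed:", exc.model)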
| 50.826291 | 175 | 0.656466 | 22,446 | 216,520 | 6.121492 | 0.016172 | 0.03302 | 0.03537 | 0.017321 | 0.912076 | 0.908241 | 0.90017 | 0.891844 | 0.882906 | 0.863635 | 0 | 0.006788 | 0.244061 | 216,520 | 4,259 | 176 | 50.838225 | 0.832694 | 0.133013 | 0 | 0.833703 | 0 | 0 | 0.130868 | 0.056855 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017369 | false | 0.004435 | 0.002956 | 0 | 0.077605 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8384aeb9c9ba1558e7d3dc73430573160aabae52 | 86 | py | Python | galry/managers/__init__.py | fiath/test | b50898dafa90e93da48f573e0b3feb1bb6acd8de | ["MIT", "BSD-3-Clause"] | 55 | 2015-01-12T06:08:36.000Z | 2021-08-13T17:24:50.000Z | galry/managers/__init__.py | fiath/test | b50898dafa90e93da48f573e0b3feb1bb6acd8de | ["MIT", "BSD-3-Clause"] | 2 | 2017-03-08T12:04:22.000Z | 2017-07-27T07:13:00.000Z | galry/managers/__init__.py | fiath/test | b50898dafa90e93da48f573e0b3feb1bb6acd8de | ["MIT", "BSD-3-Clause"] | 10 | 2015-01-01T10:51:38.000Z | 2021-12-10T02:53:45.000Z |
from default_manager import *
from plot_manager import *
from mesh_manager import *
| 14.333333 | 29 | 0.802326 | 12 | 86 | 5.5 | 0.5 | 0.590909 | 0.515152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 86 | 5 | 30 | 17.2 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
83853daaeb7a27833289565909a9faf3f625c441 | 68,578 | py | Python | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_heteroFair/cmp_gamess/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | ["Unlicense"] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_heteroFair/cmp_gamess/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | ["Unlicense"] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_heteroFair/cmp_gamess/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | ["Unlicense"] | null | null | null |
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.256718,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.404326,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.08133,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.739555,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.28064,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.734484,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.75468,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.565236,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 8.13574,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.204286,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0268094,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.302379,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.198272,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.506665,
'Execution Unit/Register Files/Runtime Dynamic': 0.225082,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.801002,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.7912,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 5.58067,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00216373,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00216373,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.0018916,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000736091,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0028482,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00906725,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0204958,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.190604,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.4688,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.647378,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.33635,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0233446,
'L2/Runtime Dynamic': 0.00573752,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 6.80765,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.6844,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.18022,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.18022,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 7.66216,
'Load Store Unit/Runtime Dynamic': 3.7534,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.444392,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.888783,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.157716,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.15798,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0771076,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.831154,
'Memory Management Unit/Runtime Dynamic': 0.235088,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 30.1828,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.712707,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.046393,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.376861,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.13596,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 12.0472,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.109326,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.288558,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.461329,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.272271,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.439164,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.221675,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.93311,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.240671,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.18075,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.087155,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0114203,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.128761,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.08446,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.215916,
'Execution Unit/Register Files/Runtime Dynamic': 0.0958803,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.2986,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.669578,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.3925,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00104465,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00104465,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000945046,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000385072,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00121327,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00424762,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00875981,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0811935,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.1646,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.199906,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.27577,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.63377,
'Instruction Fetch Unit/Runtime Dynamic': 0.569877,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0103058,
'L2/Runtime Dynamic': 0.00252047,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.61397,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.14534,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0768967,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0768968,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.97709,
'Load Store Unit/Runtime Dynamic': 1.60146,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.189614,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.379229,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0672947,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0674091,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.321116,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0328912,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.592825,
'Memory Management Unit/Runtime Dynamic': 0.1003,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 20.9842,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.229265,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0150742,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.136134,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.380474,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.04714,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.110002,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.289088,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.464537,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.272456,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.439461,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.221825,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.933743,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.240389,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.1861,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0877611,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.011428,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.129087,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0845173,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.216848,
'Execution Unit/Register Files/Runtime Dynamic': 0.0959453,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.299457,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.671208,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.39536,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00103477,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00103477,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00093636,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000381663,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0012141,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00422001,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00866821,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0812486,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.16811,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.198969,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.275957,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.63744,
'Instruction Fetch Unit/Runtime Dynamic': 0.569063,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0104483,
'L2/Runtime Dynamic': 0.00255517,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.61335,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.14516,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0768766,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0768765,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.97637,
'Load Store Unit/Runtime Dynamic': 1.60117,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.189565,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.379129,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0672771,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0673958,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.321334,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0327306,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.593013,
'Memory Management Unit/Runtime Dynamic': 0.100126,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 20.9929,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.230859,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.015102,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.136199,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.38216,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.05043,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.105723,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.285728,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.448209,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.262747,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.423801,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.21392,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.900468,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.23179,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.13661,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0846762,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0110208,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.124265,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0815054,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.208941,
'Execution Unit/Register Files/Runtime Dynamic': 0.0925262,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.288229,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.646128,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.33023,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00100681,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00100681,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000911401,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000371669,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00117083,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00409586,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00842181,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0783532,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.98394,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.19161,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.266123,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.44434,
'Instruction Fetch Unit/Runtime Dynamic': 0.548604,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.010087,
'L2/Runtime Dynamic': 0.00248151,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.52347,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.10185,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0739688,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0739688,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.87277,
'Load Store Unit/Runtime Dynamic': 1.5406,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.182395,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.364789,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0647324,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0648489,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.309883,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.031515,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.577191,
'Memory Management Unit/Runtime Dynamic': 0.0963639,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 20.6305,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.222744,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0145652,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.131325,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.368634,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.88691,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 0.046124918629534245,
'Runtime Dynamic': 0.046124918629534245,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.0284366,
'Runtime Dynamic': 0.0177567,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 92.8188,
'Peak Power': 125.931,
'Runtime Dynamic': 27.0494,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 92.7904,
'Total Cores/Runtime Dynamic': 27.0317,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.0284366,
'Total L3s/Runtime Dynamic': 0.0177567,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | 75.030635 | 124 | 0.681939 | 8,082 | 68,578 | 5.7805 | 0.067681 | 0.123635 | 0.113019 | 0.093497 | 0.93921 | 0.931183 | 0.918532 | 0.886789 | 0.862173 | 0.842994 | 0 | 0.131442 | 0.224431 | 68,578 | 914 | 125 | 75.030635 | 0.746931 | 0 | 0 | 0.642232 | 0 | 0 | 0.657709 | 0.04812 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8387c58006ba97f940d85ff053eccb733dd3eef1 | 82 | py | Python | src/chapter32/__init__.py | Peefy/CLRS_dugu_code-master | 98f00e75e1b0ebc13a7affb2604bec8501692a19 | [
"Apache-2.0"
] | 3 | 2018-01-31T03:08:50.000Z | 2018-04-25T12:57:01.000Z | src/chapter32/__init__.py | HideLakitu/IntroductionToAlgorithm.Python | 33662f46dc346203b220d7481d1a4439feda05d2 | [
"Apache-2.0"
] | null | null | null | src/chapter32/__init__.py | HideLakitu/IntroductionToAlgorithm.Python | 33662f46dc346203b220d7481d1a4439feda05d2 | [
"Apache-2.0"
] | 3 | 2019-03-03T04:49:53.000Z | 2020-07-13T10:18:58.000Z |
# python src/chapter32/chapter32note.py
# python3 src/chapter32/chapter32note.py
| 20.5 | 40 | 0.817073 | 10 | 82 | 6.7 | 0.6 | 0.358209 | 0.746269 | 0.80597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 0.085366 | 82 | 3 | 41 | 27.333333 | 0.773333 | 0.926829 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
839517fae0f8d1b9b8285351902afcbd8d595b21 | 72,242 | py | Python | lrs/tests/OAuthTests.py | varunasingh/ADL_LRS | 65130b0f4d1b022bd2b4fb42261457bdc47b3ad2 | [
"Apache-2.0"
] | null | null | null | lrs/tests/OAuthTests.py | varunasingh/ADL_LRS | 65130b0f4d1b022bd2b4fb42261457bdc47b3ad2 | [
"Apache-2.0"
] | null | null | null | lrs/tests/OAuthTests.py | varunasingh/ADL_LRS | 65130b0f4d1b022bd2b4fb42261457bdc47b3ad2 | [
"Apache-2.0"
] | null | null | null | from django.test import TestCase
from django.core.urlresolvers import reverse
from lrs import views, models
from oauth_provider.oauth.oauth import OAuthRequest, OAuthSignatureMethod_HMAC_SHA1
from django.test.utils import setup_test_environment
from django.contrib.auth.models import User
from lrs.objects.StatementManager import StatementManager
import time
from django.conf import settings
import uuid
import json
import urllib
import ast
import os
import hashlib
import base64
import re
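# These tests exercise the LRS's full three-legged OAuth 1.0a flow:
# obtain a request token from /XAPI/OAuth/initiate, have the user approve it
# at /XAPI/OAuth/authorize, trade it for an access token at /XAPI/OAuth/token,
# and then sign resource requests (statements, activity state) with HMAC-SHA1.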
class OAuthTests(TestCase):
@classmethod
def setUpClass(cls):
print "\n%s" % __name__
def setUp(self):
if not settings.OAUTH_ENABLED:
settings.OAUTH_ENABLED = True
# Create a user
self.user = User.objects.create_user('jane', 'jane@example.com', 'toto')
user = self.client.login(username='jane', password='toto')
#Register a client
self.name = "test client"
self.desc = "test desc"
form = {"name":self.name, "description":self.desc, "scopes":"all"}
response = self.client.post(reverse(views.reg_client),form, X_Experience_API_Version="1.0.0")
self.consumer = models.Consumer.objects.get(name=self.name)
self.client.logout()
# Create a user
self.user2 = User.objects.create_user('dick', 'dick@example.com', 'lassie')
user2 = self.client.login(username='dick', password='lassie')
#Register a client
self.name2 = "test client2"
self.desc2 = "test desc2"
form2 = {"name":self.name2, "description":self.desc2, "scopes":"all"}
response2 = self.client.post(reverse(views.reg_client),form2, X_Experience_API_Version="1.0.0")
self.consumer2 = models.Consumer.objects.get(name=self.name2)
self.client.logout()
def perform_oauth_handshake(self, scope=True, scope_type=None, request_nonce=None,
access_nonce=None, resource_nonce=None):
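# Drives the complete handshake for the first client: request token,
# user authorization (as 'jane'), and access-token exchange. Returns the
# Authorization header parameters for a resource request plus the access
# token itself; note the *_nonce keyword arguments are accepted but the
# nonces below are fixed strings.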
# TEST REQUEST TOKEN
oauth_header_request_params = "OAuth realm=\"test\","\
"oauth_consumer_key=\"%s\","\
"oauth_signature_method=\"PLAINTEXT\","\
"oauth_signature=\"%s&\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"requestnonce\","\
"oauth_version=\"1.0\","\
"oauth_callback=\"http://example.com/request_token_ready\"" % (self.consumer.key,self.consumer.secret,str(int(time.time())))
if scope:
if scope_type:
# Test sending in scope as query param with REQUEST_TOKEN
param = {
"scope":scope_type
}
else:
# Test sending in scope as query param with REQUEST_TOKEN
param = {
"scope":"all"
}
path = "%s?%s" % ("/XAPI/OAuth/initiate", urllib.urlencode(param))
else:
path = "/XAPI/OAuth/initiate"
request_resp = self.client.get(path, Authorization=oauth_header_request_params, X_Experience_API_Version="1.0.0")
self.assertEqual(request_resp.status_code, 200)
self.assertIn('oauth_token_secret=', request_resp.content)
self.assertIn('oauth_token=', request_resp.content)
self.assertIn('&oauth_callback_confirmed=true', request_resp.content)
token = models.Token.objects.get(consumer=self.consumer)
self.assertIn(token.key, request_resp.content)
self.assertIn(token.secret, request_resp.content)
self.assertEqual(token.callback, 'http://example.com/request_token_ready')
self.assertEqual(token.callback_confirmed, True)
# Test AUTHORIZE
oauth_auth_params = {'oauth_token': token.key}
auth_resp = self.client.get("/XAPI/OAuth/authorize", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 302)
self.assertIn('http://testserver/XAPI/accounts/login?next=/XAPI/OAuth/authorize%3F', auth_resp['Location'])
self.assertIn(token.key, auth_resp['Location'])
self.client.login(username='jane', password='toto')
self.assertEqual(token.is_approved, False)
auth_resp = self.client.get("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 200) # Show return/display OAuth authorized view
html = auth_resp.content
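# The authorize view renders an HTML form; scrape the hidden obj_id and
# the pre-checked scope checkboxes out of it, then POST them back to
# approve access on behalf of the logged-in user.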
cap = re.search('name="obj_id"\Wvalue="(.*?)"', html)
oauth_auth_params['obj_id'] = cap.group(1)
caps = re.findall('checked="checked".*?value="(.*?)"', html)
oauth_auth_params['scopes'] = [c for c in caps]
oauth_auth_params['authorize_access'] = 1
auth_post = self.client.post("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_post.status_code, 302)
self.assertIn('http://example.com/request_token_ready?oauth_verifier=', auth_post['Location'])
token = models.Token.objects.get(consumer=self.consumer)
self.assertIn(token.key, auth_post['Location'])
self.assertEqual(token.is_approved, True)
# Test ACCESS TOKEN
oauth_header_access_params = "OAuth realm=\"test\","\
"oauth_consumer_key=\"%s\","\
"oauth_token=\"%s\","\
"oauth_signature_method=\"PLAINTEXT\","\
"oauth_signature=\"%s&%s\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"accessnonce\","\
"oauth_version=\"1.0\","\
"oauth_verifier=\"%s\"" % (self.consumer.key,token.key,self.consumer.secret,token.secret,str(int(time.time())),token.verifier)
access_resp = self.client.get("/XAPI/OAuth/token/", Authorization=oauth_header_access_params,
X_Experience_API_Version="1.0.0")
self.assertEqual(access_resp.status_code, 200)
access_token = models.Token.objects.filter(token_type=models.Token.ACCESS, consumer=self.consumer)[0]
self.assertIn(access_token.key, access_resp.content)
self.assertEqual(access_token.user.username, u'jane')
# Test ACCESS RESOURCE
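# The token endpoints above used PLAINTEXT signatures; resource requests
# switch to HMAC-SHA1, so each caller must compute oauth_signature itself
# and append it to these header parameters before sending the request.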
oauth_header_resource_params = "OAuth realm=\"test\", "\
"oauth_consumer_key=\"%s\","\
"oauth_token=\"%s\","\
"oauth_signature_method=\"HMAC-SHA1\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"accessresourcenonce\","\
"oauth_version=\"1.0\"" % (self.consumer.key, access_token.key, str(int(time.time())))
return oauth_header_resource_params, access_token
def perform_oauth_handshake2(self, scope=True, scope_type=None, request_nonce=None,
access_nonce=None, resource_nonce=None):
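# Mirrors perform_oauth_handshake, but drives the second registered
# client (consumer2) and authorizes as the second user, 'dick'.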
# TEST REQUEST TOKEN
oauth_header_request_params = "OAuth realm=\"test\","\
"oauth_consumer_key=\"%s\","\
"oauth_signature_method=\"PLAINTEXT\","\
"oauth_signature=\"%s&\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"requestnonc2e\","\
"oauth_version=\"1.0\","\
"oauth_callback=\"http://example2.com/request_token_ready\"" % (self.consumer2.key,self.consumer2.secret,str(int(time.time())))
if scope:
if scope_type:
# Test sending in scope as query param with REQUEST_TOKEN
param = {
"scope":scope_type
}
else:
# Test sending in scope as query param with REQUEST_TOKEN
param = {
"scope":"all"
}
path = "%s?%s" % ("/XAPI/OAuth/initiate", urllib.urlencode(param))
else:
path = "/XAPI/OAuth/initiate"
request_resp = self.client.get(path, Authorization=oauth_header_request_params, X_Experience_API_Version="1.0.0")
self.assertEqual(request_resp.status_code, 200)
self.assertIn('oauth_token_secret=', request_resp.content)
self.assertIn('oauth_token=', request_resp.content)
self.assertIn('&oauth_callback_confirmed=true', request_resp.content)
token = models.Token.objects.get(consumer=self.consumer2)
self.assertIn(token.key, request_resp.content)
self.assertIn(token.secret, request_resp.content)
self.assertEqual(token.callback, 'http://example2.com/request_token_ready')
self.assertEqual(token.callback_confirmed, True)
# Test AUTHORIZE
oauth_auth_params = {'oauth_token': token.key}
auth_resp = self.client.get("/XAPI/OAuth/authorize", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 302)
self.assertIn('http://testserver/XAPI/accounts/login?next=/XAPI/OAuth/authorize%3F', auth_resp['Location'])
self.assertIn(token.key, auth_resp['Location'])
self.client.login(username='dick', password='lassie')
self.assertEqual(token.is_approved, False)
auth_resp = self.client.get("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 200) # Show return/display OAuth authorized view
html = auth_resp.content
cap = re.search('name="obj_id"\Wvalue="(.*?)"', html)
oauth_auth_params['obj_id'] = cap.group(1)
caps = re.findall('checked="checked".*?value="(.*?)"', html)
oauth_auth_params['scopes'] = [c for c in caps]
oauth_auth_params['authorize_access'] = 1
auth_post = self.client.post("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_post.status_code, 302)
self.assertIn('http://example2.com/request_token_ready?oauth_verifier=', auth_post['Location'])
token = models.Token.objects.get(consumer=self.consumer2)
self.assertIn(token.key, auth_post['Location'])
self.assertEqual(token.is_approved, True)
# Test ACCESS TOKEN
oauth_header_access_params = "OAuth realm=\"test\","\
"oauth_consumer_key=\"%s\","\
"oauth_token=\"%s\","\
"oauth_signature_method=\"PLAINTEXT\","\
"oauth_signature=\"%s&%s\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"accessnonce2\","\
"oauth_version=\"1.0\","\
"oauth_verifier=\"%s\"" % (self.consumer2.key,token.key,self.consumer2.secret,token.secret,str(int(time.time())),token.verifier)
access_resp = self.client.get("/XAPI/OAuth/token/", Authorization=oauth_header_access_params,
X_Experience_API_Version="1.0.0")
self.assertEqual(access_resp.status_code, 200)
access_token = models.Token.objects.filter(token_type=models.Token.ACCESS, consumer=self.consumer2)[0]
self.assertIn(access_token.key, access_resp.content)
self.assertEqual(access_token.user.username, u'dick')
# Test ACCESS RESOURCE
oauth_header_resource_params = "OAuth realm=\"test\", "\
"oauth_consumer_key=\"%s\","\
"oauth_token=\"%s\","\
"oauth_signature_method=\"HMAC-SHA1\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"accessresourcenonce2\","\
"oauth_version=\"1.0\"" % (self.consumer2.key, access_token.key, str(int(time.time())))
return oauth_header_resource_params, access_token
def tearDown(self):
if settings.OAUTH_ENABLED:
settings.OAUTH_ENABLED = False
# Delete everything
models.Token.objects.all().delete()
models.Consumer.objects.all().delete()
models.Nonce.objects.all().delete()
User.objects.all().delete()
attach_folder_path = os.path.join(settings.MEDIA_ROOT, "activity_state")
for the_file in os.listdir(attach_folder_path):
file_path = os.path.join(attach_folder_path, the_file)
try:
os.unlink(file_path)
except Exception:
# re-raise without resetting the original traceback
raise
def test_all_error_flows(self):
# Test request_token without appropriate headers
resp = self.client.get("/XAPI/OAuth/initiate/", X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 401)
self.assertIn('WWW-Authenticate', resp._headers['www-authenticate'])
self.assertIn('OAuth realm="http://localhost:8000/XAPI"', resp._headers['www-authenticate'])
self.assertEqual(resp.content, 'Invalid request parameters.')
oauth_header_request_params = "OAuth realm=\"test\","\
"oauth_consumer_key=\"%s\","\
"oauth_signature_method=\"PLAINTEXT\","\
"oauth_signature=\"%s&\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"requestnonce\","\
"oauth_version=\"1.0\","\
"oauth_callback=\"http://example.com/request_token_ready\"" % (self.consumer.key,self.consumer.secret,str(int(time.time())))
# Test passing scope as form param
form_data = {
'scope':'all',
}
request_resp = self.client.get("/XAPI/OAuth/initiate/", Authorization=oauth_header_request_params, data=form_data, X_Experience_API_Version="1.0.0")
self.assertEqual(request_resp.status_code, 200)
self.assertIn('oauth_token_secret=', request_resp.content)
self.assertIn('oauth_token=', request_resp.content)
self.assertIn('&oauth_callback_confirmed=true', request_resp.content)
token = models.Token.objects.get(consumer=self.consumer)
self.assertIn(token.key, request_resp.content)
self.assertIn(token.secret, request_resp.content)
self.assertEqual(token.callback, 'http://example.com/request_token_ready')
self.assertEqual(token.callback_confirmed, True)
# Test wrong scope
form_data['scope'] = 'videos'
scope_resp = self.client.get("/XAPI/OAuth/initiate/", Authorization=oauth_header_request_params, data=form_data, X_Experience_API_Version="1.0.0")
self.assertEqual(scope_resp.status_code, 401)
self.assertEqual(scope_resp.content, 'Resource videos is not allowed.')
form_data['scope'] = 'all'
# Test wrong callback
oauth_header_request_params += ',oauth_callback="wrongcallback"'
call_resp = self.client.get("/XAPI/OAuth/initiate/", Authorization=oauth_header_request_params, data=form_data, X_Experience_API_Version="1.0.0")
self.assertEqual(call_resp.status_code, 401)
self.assertEqual(call_resp.content, 'Invalid callback URL.')
# Test AUTHORIZE
oauth_auth_params = {'oauth_token': token.key}
auth_resp = self.client.get("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 302)
self.assertIn('http://testserver/XAPI/accounts/login?next=/XAPI/OAuth/authorize/%3F', auth_resp['Location'])
self.assertIn(token.key, auth_resp['Location'])
self.client.login(username='jane', password='toto')
self.assertEqual(token.is_approved, False)
auth_resp = self.client.get("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 200) # Show return/display OAuth authorized view
html = auth_resp.content
# <input type="hidden" name="obj_id" value="38" id="id_obj_id">
# hidden.*name="obj_id"\Wvalue="(.*?)"
cap = re.search('name="obj_id"\Wvalue="(.*?)"', html)
oauth_auth_params['obj_id'] = cap.group(1)
# <input checked="checked" type="checkbox" name="scopes" value="statements/write">
# input\Wchecked="checked".*?value="(.*?)"
caps = re.findall('checked="checked".*?value="(.*?)"', html)
oauth_auth_params['scopes'] = [c for c in caps]
oauth_auth_params['authorize_access'] = 1
auth_post = self.client.post("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_post.status_code, 302)
self.assertIn('http://example.com/request_token_ready?oauth_verifier=', auth_post['Location'])
token = models.Token.objects.get(consumer=self.consumer)
self.assertIn(token.key, auth_post['Location'])
self.assertEqual(token.is_approved, True)
# Test without session param (previous POST removed it)
auth_post = self.client.post("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_post.status_code, 401)
self.assertEqual(auth_post.content, 'Action not allowed.')
# Test fake access
auth_resp = self.client.get("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
oauth_auth_params['authorize_access'] = 0
auth_resp = self.client.post("/XAPI/OAuth/authorize/", oauth_auth_params, X_Experience_API_Version="1.0.0")
self.assertEqual(auth_resp.status_code, 302)
self.assertEqual(auth_resp['Location'], 'http://example.com/request_token_ready?error=Access%20not%20granted%20by%20user.')
self.client.logout()
# Test ACCESS TOKEN
oauth_header_access_params = "OAuth realm=\"test\","\
"oauth_consumer_key=\"%s\","\
"oauth_token=\"%s\","\
"oauth_signature_method=\"PLAINTEXT\","\
"oauth_signature=\"%s&%s\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"accessnonce\","\
"oauth_version=\"1.0\","\
"oauth_verifier=\"%s\"" % (self.consumer.key,token.key,self.consumer.secret,token.secret,str(int(time.time())),token.verifier)
access_resp = self.client.get("/XAPI/OAuth/token/", Authorization=oauth_header_access_params, X_Experience_API_Version="1.0.0")
self.assertEqual(access_resp.status_code, 200)
access_token = models.Token.objects.filter(token_type=models.Token.ACCESS, consumer=self.consumer)[0]
self.assertIn(access_token.key, access_resp.content)
self.assertEqual(access_token.user.username, u'jane')
# Test same Nonce
access_resp = self.client.get("/XAPI/OAuth/token/", Authorization=oauth_header_access_params, X_Experience_API_Version="1.0.0")
self.assertEqual(access_resp.status_code, 401)
self.assertEqual(access_resp.content, 'Nonce already used: accessnonce')
# Test missing/invalid verifier
oauth_header_access_params += ',oauth_nonce="yetanotheraccessnonce"'
oauth_header_access_params += ',oauth_verifier="invalidverifier"'
access_resp = self.client.get("/XAPI/OAuth/token/", Authorization=oauth_header_access_params, X_Experience_API_Version="1.0.0")
self.assertEqual(access_resp.status_code, 401)
self.assertEqual(access_resp.content, 'Consumer key or token key does not match. Make sure your request token is approved. Check your verifier too if you use OAuth 1.0a.')
oauth_header_access_params += ',oauth_verifier="token.verifier"'
# Test token not approved
oauth_header_access_params += ',oauth_nonce="anotheraccessnonce"'
token.is_approved = False
token.save()
access_resp = self.client.get("/XAPI/OAuth/token/", Authorization=oauth_header_access_params, X_Experience_API_Version="1.0.0")
self.assertEqual(access_resp.status_code, 401)
self.assertEqual(access_resp.content, 'Consumer key or token key does not match. Make sure your request token is approved. Check your verifier too if you use OAuth 1.0a.')
# Test ACCESS RESOURCE
oauth_header_resource_params = "OAuth realm=\"test\", "\
"oauth_consumer_key=\"%s\","\
"oauth_token=\"%s\","\
"oauth_signature_method=\"HMAC-SHA1\","\
"oauth_timestamp=\"%s\","\
"oauth_nonce=\"accessresourcenonce\","\
"oauth_version=\"1.0\"" % (self.consumer.key, access_token.key, str(int(time.time())))
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
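# Rebuild the request client-side as an OAuthRequest so the HMAC-SHA1
# signature is computed over the same method, URL, and parameter set the
# server will verify, then append it to the Authorization header.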
oauth_request = OAuthRequest.from_token_and_callback(access_token,
http_url='http://testserver/XAPI/statements/', parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get("/XAPI/statements/", Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 200)
self.assertEqual(resp.content, '{"statements": [], "more": ""}')
# Test wrong signature
oauth_header_resource_params += ',oauth_signature="wrongsignature"'
oauth_header_resource_params += ',oauth_nonce="anotheraccessresourcenonce"'
resp = self.client.get("/XAPI/statements/", Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 401)
self.assertIn('Invalid signature.', resp.content)
# Test missing auth - unlike the oauth example, this will not return 'Invalid request parameters.'
# because no Authorization header is sent at all; the LRS treats that as no authentication supplied
resp = self.client.get("/XAPI/statements/", X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 401)
self.assertEqual(resp.content, 'Auth is enabled but no authentication was sent with the request.')
# Test revoke access
access_token.delete()
oauth_header_resource_params += ',oauth_signature="%s"' % signature
oauth_header_resource_params += ',oauth_nonce="yetanotheraccessresourcenonce"'
resp = self.client.get("/XAPI/statements/", Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 401)
self.assertIn('Invalid access token', resp.content)
def test_oauth_disabled(self):
# Disable oauth
if settings.OAUTH_ENABLED:
settings.OAUTH_ENABLED = False
# TEST REQUEST TOKEN
oauth_header_request_params = {
'oauth_consumer_key': self.consumer.key,
'oauth_signature_method': 'PLAINTEXT',
'oauth_signature':'%s&' % self.consumer.secret,
'oauth_timestamp': str(int(time.time())),
'oauth_nonce': 'requestnonce',
'oauth_version': '1.0',
'oauth_callback':'http://example.com/request_token_ready'
}
# Test sending in scope as query param with REQUEST_TOKEN
param = {
"scope":"all"
}
path = "%s?%s" % ("/XAPI/OAuth/initiate", urllib.urlencode(param))
request_resp = self.client.get(path, Authorization=oauth_header_request_params, X_Experience_API_Version="1.0.0")
self.assertEqual(request_resp.status_code, 400)
self.assertEqual(request_resp.content, 'OAuth is not enabled. To enable, set the OAUTH_ENABLED flag to true in settings')
def test_stmt_put(self):
# build stmt data and path
put_guid = str(uuid.uuid1())
stmt = json.dumps({"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bill"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/accessed","display": {"en-US":"accessed"}},
"object": {"id":"act:test_put"}})
param = {"statementId":put_guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(request_nonce='stmtputrequestnonce',
access_nonce='stmtputaccessnonce', resource_nonce='stmtputresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add put data
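# OAuth 1.0 signatures cover query parameters as well, so statementId must
# be folded into the signed parameter set or verification will fail.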
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='PUT',
http_url=path, parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
# Put statements
resp = self.client.put(path, data=stmt, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 204)
def test_stmt_post_no_scope(self):
stmt = {"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_post"}}
stmt_json = json.dumps(stmt)
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope=False,
request_nonce='stmtpostrequestnonce', access_nonce='stmtpostaccessnonce',
resource_nonce='stmtpostresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='POST',
http_url='http://testserver/XAPI/statements/', parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
post = self.client.post('/XAPI/statements/', data=stmt_json, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(post.status_code, 200)
def test_stmt_simple_get(self):
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_simple_get"}}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(request_nonce='stmtgetrequestnonce',
access_nonce='stmtgetaccessnonce', resource_nonce='stmtgetresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 200)
rsp = resp.content
self.assertIn(guid, rsp)
def test_stmt_complex_get(self):
stmt_data = {"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_complex_get"}}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"activity":"act:test_complex_get"}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(request_nonce='stmtcomplexrequestnonce',
access_nonce='stmtcomplexaccessnonce', resource_nonce='stmtcomplexresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 200)
def test_stmt_get_then_wrong_scope(self):
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_simple_get"}}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type="statements/read,profile",
request_nonce='stmtgetrequestnonce',access_nonce='stmtgetaccessnonce',
resource_nonce='stmtgetresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 200)
rsp = resp.content
self.assertIn(guid, rsp)
# Test POST (not allowed)
post_stmt = {"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_post"}}
post_stmt_json = json.dumps(post_stmt)
# change nonce
oauth_header_resource_params_dict['oauth_nonce'] = 'wrongpostnonce'
# delete statementId from get
del oauth_header_resource_params_dict['statementId']
# create another oauth request
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='POST',
http_url='http://testserver/XAPI/statements/', parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
# replace headers with the nonce you added in dict
new_oauth_headers = oauth_header_resource_params.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="wrongpostnonce"')
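# Note: OAuth nonces are single-use per token, so the server would typically reject a
# request that reuses the handshake nonce; both the signed parameter dict (above) and
# the literal Authorization header must carry the fresh nonce, hence the string replace.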
post = self.client.post('/XAPI/statements/', data=post_stmt_json, content_type="application/json",
Authorization=new_oauth_headers, X_Experience_API_Version="1.0.0")
self.assertEqual(post.status_code, 403)
self.assertEqual(post.content, 'Incorrect permissions to POST at /statements')
def test_activity_state_put_then_wrong_scope(self):
url = 'http://testserver/XAPI/activities/state'
testagent = '{"name":"jane","mbox":"mailto:jane@example.com"}'
activityId = "http://www.iana.org/domains/example/"
stateId = "id:the_state_id"
activity = models.Activity(activity_id=activityId)
activity.save()
testparams = {"stateId": stateId, "activityId": activityId, "agent": testagent}
teststate = {"test":"put activity state 1"}
path = '%s?%s' % (url, urllib.urlencode(testparams))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type='state',
request_nonce='stateforbiddenrequestnonce', access_nonce='stateforbiddenaccessnonce',
resource_nonce='stateforbiddenresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add put data
oauth_header_resource_params_dict.update(testparams)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='PUT',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
put = self.client.put(path, data=teststate, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(put.status_code, 204)
# Set up for Get
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_simple_get"}}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
# change nonce
oauth_header_resource_params_dict['oauth_nonce'] = 'differnonce'
# delete state parameters left over from the put
del oauth_header_resource_params_dict['stateId']
del oauth_header_resource_params_dict['activityId']
del oauth_header_resource_params_dict['agent']
# update dict with stmt data
oauth_header_resource_params_dict.update(param)
# create another oauth request
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature2 = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params_new = oauth_header_resource_params.replace('"%s"' % signature, '"%s"' % signature2)
# replace headers with the nonce you added in dict
new_oauth_headers = oauth_header_resource_params_new.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="differnonce"')
get = self.client.get(path, content_type="application/json",
Authorization=new_oauth_headers, X_Experience_API_Version="1.0.0")
self.assertEqual(get.status_code, 403)
self.assertEqual(get.content, 'Incorrect permissions to GET at /statements')
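# NOTE: the next method lacks the 'test_' prefix, so the unittest runner will not
# collect it; it appears to have been left disabled intentionally.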
def stmt_get_then_wrong_profile_scope(self):
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_simple_get"}}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type="statements/read",
request_nonce='stmtgetrequestnonce',access_nonce='stmtgetaccessnonce',
resource_nonce='stmtgetresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 200)
rsp = resp.content
self.assertIn(guid, rsp)
url = 'http://testserver/XAPI/agents/profile'
params = {"agent": {"mbox":"mailto:test@example.com"}}
path = "%s?%s" %(url, urllib.urlencode(params))
oauth_header_resource_params_dict['oauth_nonce'] = 'differnonce'
del oauth_header_resource_params_dict['statementId']
oauth_header_resource_params_dict.update(params)
# create another oauth request
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature2 = signature_method.build_signature(oauth_request, self.consumer, access_token)
new_sig_params = oauth_header_resource_params.replace('"%s"' % signature, '"%s"' % signature2 )
# replace headers with the nonce you added in dict
new_oauth_headers = new_sig_params.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="differnonce"')
r = self.client.get(path, Authorization=new_oauth_headers, X_Experience_API_Version="1.0.0")
self.assertEqual(r.status_code, 200)
def test_consumer_state(self):
stmt_data = {"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bob"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_complex_get"}}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"object":{"objectType": "Activity", "id":"act:test_complex_get"}}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(request_nonce='consumerstaterequestnonce',
access_nonce='consumerstateaccessnonce', resource_nonce='consumerstateresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
consumer = access_token.consumer
consumer.status = 4
consumer.save()
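# Status 4 presumably maps to a non-accepted entry (e.g. 'rejected') in the OAuth
# provider's consumer state choices, which is why the request below is refused with 401.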
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 401)
self.assertEqual(resp.content, 'test client has not been authorized')
def test_simple_stmt_get_mine_only(self):
guid = str(uuid.uuid1())
username = "tester1"
email = "test1@tester.com"
password = "test"
auth = "Basic %s" % base64.b64encode("%s:%s" % (username, password))
form = {"username":username, "email":email,"password":password,"password2":password}
response = self.client.post(reverse(views.register),form, X_Experience_API_Version="1.0.0")
param = {"statementId":guid}
path = "%s?%s" % (reverse(views.statements), urllib.urlencode(param))
stmt = json.dumps({"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_put"},"actor":{"objectType":"Agent", "mbox":"mailto:t@t.com"}})
putResponse = self.client.put(path, stmt, content_type="application/json", Authorization=auth, X_Experience_API_Version="1.0.0")
self.assertEqual(putResponse.status_code, 204)
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type="statements/read/mine",
request_nonce='stmtgetrequestnonce',access_nonce='stmtgetaccessnonce', resource_nonce='stmtgetresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 403)
# build stmt data and path
oauth_agent1 = models.Agent.objects.get(account_name=self.consumer.key)
oauth_agent2 = models.Agent.objects.get(mbox="mailto:test1@tester.com")
oauth_group = models.Agent.objects.get(member__in=[oauth_agent1, oauth_agent2])
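# Statements written over OAuth carry an anonymous-group authority that pairs the
# consumer's account agent with the authorizing user; fetching that group lets the
# seeded statement below use the same authority so the 'mine only' scope can match it.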
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bill"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/accessed","display": {"en-US":"accessed"}},
"object": {"id":"act:test_put"}, "authority":oauth_group.get_agent_json()}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
# add get data
oauth_header_resource_params_dict['statementId'] = guid
oauth_header_resource_params_dict['oauth_nonce'] = 'getdiffernonce'
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
get_signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
replace_sig = oauth_header_resource_params.replace('"%s"' % signature, '"%s"' % get_signature)
new_oauth_headers = replace_sig.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="getdiffernonce"')
get = self.client.get(path, content_type="application/json",
Authorization=new_oauth_headers, X_Experience_API_Version="1.0.0")
self.assertEqual(get.status_code, 200)
def test_complex_stmt_get_mine_only(self):
guid = str(uuid.uuid1())
username = "tester1"
email = "test1@tester.com"
password = "test"
auth = "Basic %s" % base64.b64encode("%s:%s" % (username, password))
form = {"username":username, "email":email,"password":password,"password2":password}
response = self.client.post(reverse(views.register),form, X_Experience_API_Version="1.0.0")
param = {"statementId":guid}
path = "%s?%s" % (reverse(views.statements), urllib.urlencode(param))
stmt = json.dumps({"verb":{"id": "http://adlnet.gov/expapi/verbs/passed","display": {"en-US":"passed"}},
"object": {"id":"act:test_put"},"actor":{"objectType":"Agent", "mbox":"mailto:t@t.com"}})
putResponse = self.client.put(path, stmt, content_type="application/json", Authorization=auth, X_Experience_API_Version="1.0.0")
self.assertEqual(putResponse.status_code, 204)
param = {"statementId":guid}
path = "%s?%s" % ('http://testserver/XAPI/statements', urllib.urlencode(param))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type="statements/read/mine",
request_nonce='stmtgetrequestnonce',access_nonce='stmtgetaccessnonce', resource_nonce='stmtgetresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
resp = self.client.get(path,Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 403)
# build stmt data and path
oauth_agent1 = models.Agent.objects.get(account_name=self.consumer.key)
oauth_agent2 = models.Agent.objects.get(mbox="mailto:test1@tester.com")
oauth_group = models.Agent.objects.get(member__in=[oauth_agent1, oauth_agent2])
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent", "mbox":"mailto:t@t.com", "name":"bill"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/accessed","display": {"en-US":"accessed"}},
"object": {"id":"act:test_put"}, "authority":oauth_group.get_agent_json()}
stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
# update params for get
oauth_header_resource_params_dict['oauth_nonce'] = 'getdiffernonce'
del oauth_header_resource_params_dict['statementId']
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url='http://testserver/XAPI/statements', parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
get_signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
replace_sig = oauth_header_resource_params.replace('"%s"' % signature, '"%s"' % get_signature)
new_oauth_headers = replace_sig.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="getdiffernonce"')
# Get statements
get = self.client.get('http://testserver/XAPI/statements', content_type="application/json",
Authorization=new_oauth_headers, X_Experience_API_Version="1.0.0")
get_content = json.loads(get.content)
self.assertEqual(get.status_code, 200)
self.assertEqual(get_content['statements'][0]['actor']['name'], 'bill')
self.assertEqual(len(get_content['statements']), 1)
def test_state_wrong_auth(self):
url = 'http://testserver/XAPI/activities/state'
testagent = '{"name":"joe","mbox":"mailto:joe@example.com"}'
activityId = "http://www.iana.org/domains/example/"
stateId = "id:the_state_id"
activity = models.Activity(activity_id=activityId)
activity.save()
testparams = {"stateId": stateId, "activityId": activityId, "agent": testagent}
teststate = {"test":"put activity state 1"}
path = '%s?%s' % (url, urllib.urlencode(testparams))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type='state',
request_nonce='stateforbiddenrequestnonce', access_nonce='stateforbiddenaccessnonce',
resource_nonce='stateforbiddenresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add put data
oauth_header_resource_params_dict.update(testparams)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='PUT',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
put = self.client.put(path, data=teststate, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(put.status_code, 404)
self.assertEqual(put.content, "Agent in state cannot be found to match user in authorization")
def test_profile_wrong_auth(self):
agent = models.Agent(name="joe", mbox="mailto:joe@example.com")
agent.save()
url = 'http://testserver/XAPI/agents/profile'
testparams = {"agent": '{"name":"joe","mbox":"mailto:joe@example.com"}'}
path = '%s?%s' % (url, urllib.urlencode(testparams))
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type='profile',
request_nonce='profileforbiddenrequestnonce', access_nonce='profileforbiddenaccessnonce',
resource_nonce='profileforbiddenresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add get data
oauth_header_resource_params_dict.update(testparams)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
get = self.client.get(path, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(get.status_code, 403)
self.assertEqual(get.content, "Authorization doesn't match agent in profile")
def test_define_scope_activity(self):
url = 'http://testserver/XAPI/statements'
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent",
"mbox":"mailto:bob@bob.com", "name":"bob"},"verb":{"id": "http://adlnet.gov/expapi/verbs/passed",
"display": {"en-US":"passed"}},"object": {"id":"test://test/define/scope"}}
existing_stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
# build stmt data and path
put_guid = str(uuid.uuid1())
stmt = json.dumps({"actor":{"objectType": "Agent", "mbox":"mailto:bill@bill.com", "name":"bill"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/accessed","display": {"en-US":"accessed"}},
"object": {"id":"test://test/define/scope",
'definition': {'name': {'en-US':'testname', 'en-GB': 'altname'},
'description': {'en-US':'testdesc', 'en-GB': 'altdesc'},'type': 'type:course',
'interactionType': 'other'}}})
param = {"statementId":put_guid}
path = "%s?%s" % (url, urllib.urlencode(param))
# START PUT STMT
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type='statements/write,statements/read',
request_nonce='anotherstmtputrequestnonce',access_nonce='anotherstmtputaccessnonce',
resource_nonce='anotherstmtputresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add put data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='PUT',
http_url=path, parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
# Put statements - the token does not have define scope, therefore the PUT creates
# another activity with canonical_version set to False
resp = self.client.put(path, data=stmt, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 204)
acts = models.Activity.objects.all()
self.assertEqual(len(acts), 2)
self.assertEqual(acts[0].activity_id, acts[1].activity_id)
# START GET STMT
get_params = {"activity":"test://test/define/scope"}
path = "%s?%s" % (url, urllib.urlencode(get_params))
del oauth_header_resource_params_dict['statementId']
oauth_header_resource_params_dict.update(get_params)
oauth_header_resource_params_dict['oauth_nonce'] = 'getdiffernonce'
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
get_signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
replace_sig = oauth_header_resource_params.replace('"%s"' % signature, '"%s"' % get_signature)
new_oauth_headers = replace_sig.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="getdiffernonce"')
get_resp = self.client.get(path, X_Experience_API_Version="1.0.0",
Authorization=new_oauth_headers)
self.assertEqual(get_resp.status_code, 200)
content = json.loads(get_resp.content)
self.assertEqual(len(content['statements']), 2)
self.client.logout()
# START OF POST WITH ANOTHER HANDSHAKE
post_stmt = {"actor":{"objectType": "Agent", "mbox":"mailto:dom@dom.com", "name":"dom"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/tested","display": {"en-US":"tested"}},
"object": {"id":"test://test/define/scope",
'definition': {'name': {'en-US':'definename', 'en-GB': 'definealtname'},
'description': {'en-US':'definedesc', 'en-GB': 'definealtdesc'},'type': 'type:course',
'interactionType': 'other'}}}
stmt_json = json.dumps(post_stmt)
post_oauth_header_resource_params, post_access_token = self.perform_oauth_handshake2(scope_type='define,statements/write',
request_nonce='stmtpostrequestnonce', access_nonce='stmtpostaccessnonce',
resource_nonce='stmtpostresourcenonce')
# from_token_and_callback takes a dictionary
post_param_list = post_oauth_header_resource_params.split(",")
post_oauth_header_resource_params_dict = {}
for p in post_param_list:
item = p.split("=")
post_oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del post_oauth_header_resource_params_dict['OAuth realm']
post_oauth_request = OAuthRequest.from_token_and_callback(post_access_token, http_method='POST',
http_url='http://testserver/XAPI/statements/',
parameters=post_oauth_header_resource_params_dict)
post_signature_method = OAuthSignatureMethod_HMAC_SHA1()
post_signature = post_signature_method.build_signature(post_oauth_request, self.consumer2,
post_access_token)
post_oauth_header_resource_params += ',oauth_signature="%s"' % post_signature
# This adds the act_def to the very first activity created in this test since this has define scope
post = self.client.post('/XAPI/statements/', data=stmt_json, content_type="application/json",
Authorization=post_oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(post.status_code, 200)
acts = models.Activity.objects.all()
self.assertEqual(len(acts), 2)
global_act = models.Activity.objects.get(canonical_version=True)
global_name_list = global_act.activity_definition_name.values()
self.assertIn('definename', global_name_list)
self.assertIn('definealtname', global_name_list)
global_desc_list = global_act.activity_definition_description.values()
self.assertIn('definedesc', global_desc_list)
self.assertIn('definealtdesc', global_desc_list)
non_global_act = models.Activity.objects.get(canonical_version=False)
non_global_name_list = non_global_act.activity_definition_name.values()
self.assertIn('testname', non_global_name_list)
self.assertIn('altname', non_global_name_list)
non_global_desc_list = non_global_act.activity_definition_description.values()
self.assertIn('testdesc', non_global_desc_list)
self.assertIn('altdesc', non_global_desc_list)
def test_define_scope_agent(self):
url = 'http://testserver/XAPI/statements'
guid = str(uuid.uuid1())
stmt_data = {"id":guid,"actor":{"objectType": "Agent",
"mbox":"mailto:bob@bob.com", "name":"bob"},"verb":{"id": "http://adlnet.gov/expapi/verbs/helped",
"display": {"en-US":"helped"}},"object": {"objectType":"Agent", "mbox":"mailto:tim@tim.com",
"name":"tim"}}
existing_stmt = StatementManager(stmt_data, stmt_json=json.dumps(stmt_data))
# build stmt data and path
put_guid = str(uuid.uuid1())
stmt = json.dumps({"actor":{"objectType": "Agent", "mbox":"mailto:bill@bill.com", "name":"bill"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/talked","display": {"en-US":"talked"}},
"object": {"objectType":"Agent", "mbox":"mailto:tim@tim.com","name":"tim timson"}})
param = {"statementId":put_guid}
path = "%s?%s" % (url, urllib.urlencode(param))
# START PUT STMT
oauth_header_resource_params, access_token = self.perform_oauth_handshake(scope_type='statements/write,statements/read',
request_nonce='anotherstmtputrequestnonce',access_nonce='anotherstmtputaccessnonce',
resource_nonce='anotherstmtputresourcenonce')
# from_token_and_callback takes a dictionary
param_list = oauth_header_resource_params.split(",")
oauth_header_resource_params_dict = {}
for p in param_list:
item = p.split("=")
oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del oauth_header_resource_params_dict['OAuth realm']
# add put data
oauth_header_resource_params_dict.update(param)
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='PUT',
http_url=path, parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
oauth_header_resource_params += ',oauth_signature="%s"' % signature
# Put statements
resp = self.client.put(path, data=stmt, content_type="application/json",
Authorization=oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(resp.status_code, 204)
agents = models.Agent.objects.all().values_list('name', flat=True)
# Jane, Anonymous agent for account, Group for jane and account, bill, bob, tim, tim timson
self.assertEqual(len(agents), 7)
self.assertIn('tim', agents)
self.assertIn('tim timson', agents)
tim = models.Agent.objects.get(name='tim timson')
self.assertFalse(tim.canonical_version)
tim = models.Agent.objects.get(name='tim')
self.assertTrue(tim.canonical_version)
# START GET STMT
get_params = {"agent":{"objectType": "Agent", "mbox":"mailto:tim@tim.com"}, "related_agents":True}
path = "%s?%s" % (url, urllib.urlencode(get_params))
del oauth_header_resource_params_dict['statementId']
oauth_header_resource_params_dict.update(get_params)
oauth_header_resource_params_dict['oauth_nonce'] = 'getdiffernonce'
oauth_request = OAuthRequest.from_token_and_callback(access_token, http_method='GET',
http_url=path, parameters=oauth_header_resource_params_dict)
# build signature and add to the params
signature_method = OAuthSignatureMethod_HMAC_SHA1()
get_signature = signature_method.build_signature(oauth_request, self.consumer, access_token)
replace_sig = oauth_header_resource_params.replace('"%s"' % signature, '"%s"' % get_signature)
new_oauth_headers = replace_sig.replace('oauth_nonce="accessresourcenonce"','oauth_nonce="getdiffernonce"')
get_resp = self.client.get(path, X_Experience_API_Version="1.0.0",
Authorization=new_oauth_headers)
self.assertEqual(get_resp.status_code, 200)
content = json.loads(get_resp.content)
# Should only be one since we query by tim's email; it will only pick up the global tim object
self.assertEqual(len(content['statements']), 1)
self.client.logout()
# START OF POST WITH ANOTHER HANDSHAKE
# Anonymous group that will make 2 canonical agents
ot = "Group"
members = [{"name":"john doe","mbox":"mailto:jd@example.com"},
{"name":"jan doe","mbox":"mailto:jandoe@example.com"}]
kwargs = {"objectType":ot, "member": members, "name": "doe group"}
global_group, created = models.Agent.objects.retrieve_or_create(**kwargs)
# Anonymous group that will retrieve two agents and create one more canonical agent
members = [{"name":"john doe","mbox":"mailto:jd@example.com"},
{"name":"jan doe","mbox":"mailto:jandoe@example.com"},
{"name":"dave doe", "mbox":"mailto:dd@example.com"}]
kwargs1 = {"objectType":ot, "member": members, "name": "doe group"}
post_stmt = {"actor":{"objectType": "Agent", "mbox":"mailto:dom@dom.com", "name":"dom"},
"verb":{"id": "http://adlnet.gov/expapi/verbs/assisted","display": {"en-US":"assisted"}},
"object": kwargs1}
stmt_json = json.dumps(post_stmt)
post_oauth_header_resource_params, post_access_token = self.perform_oauth_handshake2(scope_type='statements/write,statements/read',
request_nonce='stmtpostrequestnonce', access_nonce='stmtpostaccessnonce',
resource_nonce='stmtpostresourcenonce')
# from_token_and_callback takes a dictionary
post_param_list = post_oauth_header_resource_params.split(",")
post_oauth_header_resource_params_dict = {}
for p in post_param_list:
item = p.split("=")
post_oauth_header_resource_params_dict[str(item[0]).strip()] = str(item[1]).strip('"')
# from_request ignores realm, must remove so not input to from_token_and_callback
del post_oauth_header_resource_params_dict['OAuth realm']
post_oauth_request = OAuthRequest.from_token_and_callback(post_access_token, http_method='POST',
http_url='http://testserver/XAPI/statements/',
parameters=post_oauth_header_resource_params_dict)
post_signature_method = OAuthSignatureMethod_HMAC_SHA1()
post_signature = post_signature_method.build_signature(post_oauth_request, self.consumer2,
post_access_token)
post_oauth_header_resource_params += ',oauth_signature="%s"' % post_signature
post = self.client.post('/XAPI/statements/', data=stmt_json, content_type="application/json",
Authorization=post_oauth_header_resource_params, X_Experience_API_Version="1.0.0")
self.assertEqual(post.status_code, 200)
agents = models.Agent.objects.all()
# These 4 agents are all non-global since they were created w/o define scope
non_globals = models.Agent.objects.filter(canonical_version=False).values_list('name', flat=True)
self.assertEqual(len(non_globals), 4)
self.assertIn('bill', non_globals)
self.assertIn('tim timson', non_globals)
self.assertIn('dom', non_globals)
self.assertIn('doe group', non_globals)
# 2 oauth group objects, all of the agents above that were created via member lists or
# manually, and 2 anon account agents for the accounts in the oauth groups
global_agents = models.Agent.objects.filter(canonical_version=True).values_list('name', flat=True)
self.assertEqual(len(global_agents), 12)
self.assertIn('bob', global_agents)
self.assertIn('tim', global_agents)
self.assertIn('jan doe', global_agents)
self.assertIn('john doe', global_agents)
self.assertIn('dave doe', global_agents)
self.assertIn('jane', global_agents)
self.assertIn('dick', global_agents)
self.assertIn('doe group', global_agents)
| 54.44009 | 184 | 0.669943 | 8,625 | 72,242 | 5.334841 | 0.054609 | 0.052594 | 0.080934 | 0.106492 | 0.872297 | 0.856258 | 0.83672 | 0.821616 | 0.797514 | 0.790994 | 0 | 0.009435 | 0.203386 | 72,242 | 1,326 | 185 | 54.481146 | 0.790109 | 0.076521 | 0 | 0.730256 | 0 | 0.002051 | 0.19711 | 0.047239 | 0 | 0 | 0 | 0 | 0.158974 | 0 | null | null | 0.02359 | 0.017436 | null | null | 0.001026 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f7ef0ae4e6eb388b285d30fb246de90afbbc0e1a | 16396 | py | Python | tests/advanced_tests/advanced_install_tests.py | loaiabdalslam/Brainless | d363e0d713fc9b024a4fac990b9c39cd59769454 | ["MIT"] | 1 | 2020-02-28T12:12:21.000Z | 2020-02-28T12:12:21.000Z | tests/advanced_tests/advanced_install_tests.py | loaiabdalslam/Brainless | d363e0d713fc9b024a4fac990b9c39cd59769454 | ["MIT"] | null | null | null | tests/advanced_tests/advanced_install_tests.py | loaiabdalslam/Brainless | d363e0d713fc9b024a4fac990b9c39cd59769454 | ["MIT"] | null | null | null |
import datetime
import os
import random
import sys
import warnings
import numpy as np
from sklearn.model_selection import train_test_split
import tests.utils_testing as utils
from brainless import Predictor
from brainless.utils_models import load_ml_model
sys.path = [os.path.abspath(os.path.dirname(__file__))] + sys.path
os.environ['is_test_suite'] = 'True'
def test_feature_learning_getting_single_predictions_classification(model_name=None):
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
column_descriptions = {
'survived': 'output',
'sex': 'categorical',
'embarked': 'categorical',
'pclass': 'categorical'
}
ml_predictor = Predictor(
type_of_estimator='classifier', column_descriptions=column_descriptions)
# NOTE: this is bad practice to pass in our same training set as our fl_data set,
# but we don't have enough data to do it any other way
df_titanic_train, fl_data = train_test_split(df_titanic_train, test_size=0.2)
ml_predictor.train(
df_titanic_train, model_names=model_name, feature_learning=True, fl_data=fl_data)
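# fl_data presumably trains the auxiliary feature-learning model (a deep net whose
# penultimate-layer activations are appended as engineered features), so it is held
# out from the final estimator's training data via the split above.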
file_name = ml_predictor.save(str(random.random()))
saved_ml_pipeline = load_ml_model(file_name)
os.remove(file_name)
try:
keras_file_name = file_name[:-5] + '_keras_deep_learning_model.h5'
os.remove(keras_file_name)
except:
pass
df_titanic_test_dictionaries = df_titanic_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
first_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
lower_bound = -0.16
if model_name == 'DeepLearningClassifier':
lower_bound = -0.187
assert lower_bound < first_score < -0.133
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_titanic_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_titanic_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro,
# this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() < 15
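# Worked out: 1000 predictions in ~0.8 s is ~0.8 ms each; tripling that for a weak
# test box gives ~2.4 s total, comfortably inside the 0.2-15 s window asserted above.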
# 3. make sure we're not modifying the dictionaries
# (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
print('df_titanic_test_dictionaries')
print(df_titanic_test_dictionaries)
second_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert lower_bound < second_score < -0.133
def test_feature_learning_categorical_ensembling_getting_single_predictions_classification(
model_name=None):
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
column_descriptions = {
'survived': 'output',
'sex': 'categorical',
'embarked': 'categorical',
'pclass': 'categorical'
}
ml_predictor = Predictor(
type_of_estimator='classifier', column_descriptions=column_descriptions)
# NOTE: this is bad practice to pass in our same training set as our fl_data set,
# but we don't have enough data to do it any other way
df_titanic_train, fl_data = train_test_split(df_titanic_train, test_size=0.2)
ml_predictor.train_categorical_ensemble(
df_titanic_train,
model_names=model_name,
feature_learning=True,
fl_data=fl_data,
categorical_column='embarked')
file_name = ml_predictor.save(str(random.random()))
from brainless.utils_models import load_ml_model
saved_ml_pipeline = load_ml_model(file_name)
os.remove(file_name)
try:
keras_file_name = file_name[:-5] + '_keras_deep_learning_model.h5'
os.remove(keras_file_name)
except:
pass
df_titanic_test_dictionaries = df_titanic_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
first_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
lower_bound = -0.175
if model_name == 'DeepLearningClassifier':
lower_bound = -0.245
if model_name == 'CatBoostClassifier':
lower_bound = -0.265
assert lower_bound < first_score < -0.14
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_titanic_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_titanic_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro,
# this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() < 15
# 3. make sure we're not modifying the dictionaries
# (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
print('df_titanic_test_dictionaries')
print(df_titanic_test_dictionaries)
second_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert lower_bound < second_score < -0.147
def test_feature_learning_getting_single_predictions_regression(model_name=None):
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
column_descriptions = {'MEDV': 'output', 'CHAS': 'categorical'}
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
# NOTE: this is bad practice to pass in our same training set as our fl_data set,
# but we don't have enough data to do it any other way
df_boston_train, fl_data = train_test_split(df_boston_train, test_size=0.2)
ml_predictor.train(
df_boston_train, model_names=model_name, feature_learning=True, fl_data=fl_data)
file_name = ml_predictor.save(str(random.random()))
# from brainless.utils_models import load_keras_model
# saved_ml_pipeline = load_keras_model(file_name)
saved_ml_pipeline = load_ml_model(file_name)
os.remove(file_name)
try:
keras_file_name = file_name[:-5] + '_keras_deep_learning_model.h5'
os.remove(keras_file_name)
except:
pass
df_boston_test_dictionaries = df_boston_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
first_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
lower_bound = -4.0
assert lower_bound < first_score < -2.8
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_boston_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_boston_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro,
# this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() / 1.0 < 20
# 3. make sure we're not modifying the dictionaries
# (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
second_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert lower_bound < second_score < -2.8
def test_feature_learning_categorical_ensembling_getting_single_predictions_regression(
model_name=None):
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
column_descriptions = {'MEDV': 'output', 'CHAS': 'categorical'}
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
# NOTE: this is bad practice to pass in our same training set as our fl_data set,
# but we don't have enough data to do it any other way
df_boston_train, fl_data = train_test_split(df_boston_train, test_size=0.2)
ml_predictor.train_categorical_ensemble(
df_boston_train,
model_names=model_name,
feature_learning=True,
fl_data=fl_data,
categorical_column='CHAS')
# print('Score on training data')
# ml_predictor.score(df_boston_train, df_boston_train.MEDV)
file_name = ml_predictor.save(str(random.random()))
from brainless.utils_models import load_ml_model
saved_ml_pipeline = load_ml_model(file_name)
# with open(file_name, 'rb') as read_file:
# saved_ml_pipeline = dill.load(read_file)
os.remove(file_name)
try:
keras_file_name = file_name[:-5] + '_keras_deep_learning_model.h5'
os.remove(keras_file_name)
except:
pass
df_boston_test_dictionaries = df_boston_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
first_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
lower_bound = -4.5
assert lower_bound < first_score < -3.4
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_boston_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_boston_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro,
# this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() / 1.0 < 15
# 3. make sure we're not modifying the dictionaries
# (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
second_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert lower_bound < second_score < -3.4
def test_all_algos_classification(model_name=None):
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
column_descriptions = {
'survived': 'output',
'sex': 'categorical',
'embarked': 'categorical',
'pclass': 'categorical'
}
ml_predictor = Predictor(
type_of_estimator='classifier', column_descriptions=column_descriptions)
ml_predictor.train(
df_titanic_train,
model_names=[
'LogisticRegression', 'RandomForestClassifier', 'RidgeClassifier',
'GradientBoostingClassifier', 'ExtraTreesClassifier', 'AdaBoostClassifier',
'SGDClassifier', 'Perceptron', 'PassiveAggressiveClassifier', 'DeepLearningClassifier',
'XGBClassifier', 'LGBMClassifier', 'LinearSVC'
])
test_score = ml_predictor.score(df_titanic_test, df_titanic_test.survived)
print('test_score')
print(test_score)
# Linear models aren't super great on this dataset...
assert -0.215 < test_score < -0.131
def test_all_algos_regression():
# a random seed of 42 has ExtraTreesRegressor getting the best CV score,
# and that model doesn't generalize as well as GradientBoostingRegressor.
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
column_descriptions = {'MEDV': 'output', 'CHAS': 'categorical'}
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
ml_predictor.train(
df_boston_train,
model_names=[
'LinearRegression', 'RandomForestRegressor', 'Ridge', 'GradientBoostingRegressor',
'AdaBoostRegressor', 'SGDRegressor', 'PassiveAggressiveRegressor', 'Lasso', 'LassoLars',
'ElasticNet', 'OrthogonalMatchingPursuit', 'BayesianRidge', 'ARDRegression',
'MiniBatchKMeans', 'DeepLearningRegressor', 'LGBMRegressor', 'XGBClassifier',
'LinearSVR', 'CatBoostRegressor'
])
test_score = ml_predictor.score(df_boston_test, df_boston_test.MEDV)
print('test_score')
print(test_score)
assert -3.4 < test_score < -2.8
def test_throws_warning_when_fl_data_equals_df_train():
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
column_descriptions = {
'survived': 'output',
'sex': 'categorical',
'embarked': 'categorical',
'pclass': 'categorical'
}
ml_predictor = Predictor(
type_of_estimator='classifier', column_descriptions=column_descriptions)
with warnings.catch_warnings(record=True) as w:
try:
ml_predictor.train(df_titanic_train, feature_learning=True, fl_data=df_titanic_train)
except KeyError as e:
pass
# We should not be getting to this line - we should be throwing an error above
for thing in w:
print(thing)
assert len(w) >= 1
assert True
| 34.81104 | 100 | 0.71493 | 2,232 | 16,396 | 4.986111 | 0.120968 | 0.031539 | 0.030371 | 0.031449 | 0.869081 | 0.854704 | 0.837272 | 0.825411 | 0.814628 | 0.806901 | 0 | 0.014623 | 0.203342 | 16,396 | 470 | 101 | 34.885106 | 0.837391 | 0.236033 | 0 | 0.77305 | 0 | 0 | 0.116623 | 0.042299 | 0 | 0 | 0 | 0 | 0.056738 | 1 | 0.024823 | false | 0.024823 | 0.042553 | 0 | 0.067376 | 0.14539 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f7f0a92ae25810a74be778e4f6589c45e75f3a2e | 75914 | py | Python | ansible/roles/openshift_client_python/library/openshift_client_python.py | pehala/openshift-client-python | 39fcf605f0e6774c13e2219d8051db87d90f51f6 | ["Apache-2.0"] | 41 | 2019-04-12T21:07:02.000Z | 2022-02-21T20:01:18.000Z | ansible/roles/openshift_client_python/library/openshift_client_python.py | pehala/openshift-client-python | 39fcf605f0e6774c13e2219d8051db87d90f51f6 | ["Apache-2.0"] | 27 | 2019-07-11T21:26:27.000Z | 2021-11-29T17:28:42.000Z | ansible/roles/openshift_client_python/library/openshift_client_python.py | pehala/openshift-client-python | 39fcf605f0e6774c13e2219d8051db87d90f51f6 | ["Apache-2.0"] | 33 | 2019-04-10T17:37:01.000Z | 2022-03-08T01:05:45.000Z |
#!/usr/bin/env python
# THIS IS A GENERATED FILE. DO NOT MODIFY IT
# Modify: openshift_client_python.template.py and then run rebuild_module.sh to regenerate this file
from __future__ import print_function
from __future__ import absolute_import
from ansible.module_utils.basic import AnsibleModule
import os
import six
import tempfile
import shutil
import tarfile
import base64
import sys
import pprint
# Allows modules to trigger errors
def error(msg, **kwargs):
import openshift as oc
raise oc.OpenShiftPythonException(msg, **kwargs)
def main():
import openshift as oc
script = module.params["script"]
time = module.params["timeout"]
oc.ansible.reset()
oc.ansible.vars = module.params["vars"]
if time is not None:
time = int(time) # Allow time to come in as a string
if module.params["project"] is not None:
oc.context.default_project = module.params["project"]
with oc.timeout(time):
with oc.tracking() as ct:
try:
with oc.util.OutputCapture() as capture:
exec(script)
module.debug("openshift_client_python module invocation result:\n" + str(ct.get_result()))
module.exit_json(rc=ct.get_result().status(),
changed=module.params['changes'],
ansible_facts=oc.ansible.new_facts,
stdout=capture.out.getvalue().decode('UTF-8'),
stderr=capture.err.getvalue().decode('UTF-8'),
result=ct.get_result().as_dict()
)
except oc.OpenShiftPythonException as ose:
module.debug("openshift_client_python module invocation exception: " + str(ose))
module.debug("openshift_client_python module invocation result:\n" + str(ct.get_result()))
module.fail_json(msg=ose.msg,
rc=ose.result.status(),
exception_attributes=ose.attributes(),
changed=module.params['changes'] or oc.ansible.changed,
ansible_facts=oc.ansible.new_facts,
stdout=capture.out.getvalue().decode('UTF-8'),
stderr=capture.err.getvalue().decode('UTF-8'),
result=ct.get_result().as_dict()
)
except KeyboardInterrupt:
print('Received KeyboardInterrupt during module', file=sys.stderr)
pprint.pprint(ct.get_result().as_dict(), stream=sys.stderr)
raise
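# A minimal illustration of the 'script' a playbook might supply (hypothetical; it runs
# inside the exec() above with the openshift-client-python API bound to 'oc'):
# script: |
#     for node in oc.selector('nodes').objects():
#         oc.ansible.new_facts['node_' + node.name()] = node.model.metadata.uid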
if __name__ == '__main__':
# When openshift-client-python/ansible/rebuild_module.sh is executed, it will read in this template
# and replace the following variable with a b64 encoded tarball of the openshift-client-library
# package. The client_python_extract_dir path will contain the 'openshift' package directory.
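# (Hedged sketch, not part of the template: unpacking such a payload would look roughly
# like the following, using the stdlib modules imported at the top of this file.)
# extract_dir = tempfile.mkdtemp()
# tarball = tarfile.open(fileobj=six.BytesIO(base64.b64decode(REPLACED_BY_REBUILD_MODULE)), mode='r:gz')
# tarball.extractall(extract_dir)  # yields the 'openshift' package directory
# sys.path.insert(0, extract_dir)  # lets 'import openshift' resolve to the bundled copy
# shutil.rmtree(extract_dir)       # cleanup once the module run finishes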
REPLACED_BY_REBUILD_MODULE = 'H4sIAAAAAAAAA...'  # base64-encoded gzipped tarball of the openshift package (payload elided)
WJ1la0AYDXi7y5Z9QuJZlJDELrP2vAWjw45J0Jv9wJ9RFL9FvUPkReQOUpFcQfOLAPmTnAxMtZx3HWB1nIIYtrTmEeS9lBcjAaIFhTCPJeCsxA3F4u6p/Wm23a/XwlPy3qBmViUvsi6BVfZ5GK8v5YoqyIDXJKyUkFScMc7SvMOQMDIRFWg0IinWkJx1rAfsSj5oCRSmcUXFhyYksjTpcyfXRwLCSnMnoFmQ36Eo+8qiOOhuoc9q3gpWp6ApCq6zEEmtKPg7L4Dfv4IWjkYpUgulbs4tU1IUA3TGmZYQnCHiaXRQYdRIlBQ2XHEsSNNYHzFjNYXopqh91mm8uZCuqXB4cm8lxC8Y2AA3zy3n2q+xFPp7tqBfeMqQ2hQmctirRyrs/EtIOtIqSYvqTv/IfoWOy8755H4tkG9Ba+x1IoqsVrACFJkX87WOQzV9AHs3QRQErFuougW5d9K2EDLlPUyAl0oI6hIm60jmHSx5hozU6UujyMLm8LNLLRJi7d1ZyY7rbBgYAulGgq4xKYIXhBBEzKPss8wSexxe9R9MrRls/GCNuSH4jua7dEF5uK18eRys4j7+Bj6OGwuk2fYqK0fG793+Ez9/n797D8a10Uw4g1d1AVkzpMf3VGR4VC9v5ewFN9okvcOuZHpzpSTlX7TRqJG0+RnEM7hwYXaXOp7HouBWJtVhba8AciGXDRg88ZspiTda3SIUHzMfYebxMG17rhk1a37gp2wlpde2SyDzZEdXgtw0YzgwyFGMndbs/9aZmbG/n9lK1o0B7Sg/SpNS3eIb2eX1Oyi5sU+e8DoLopfluCONmIusiRoDtQXYMD+aPhWx9S51Y3IJRlKH0lH7PivKD+jUK9mcU7E+lbLA/ow/ozxkOGroFoD7yipx1LkYIiZjaCjAtXdeKWXju7a65QehHH0vucjurDwXvABH4E61al9PBj0JsN5Z57GfI/1pCf5BBqeIxDW9JBsXIGJL1MvEWxHkgu1f1v265rH41yRuhatfljf0fDM0gCSscPggP6TLF9G8JxiFAMlIG2/jXYIL6bIkRuvMbY4bfQlPi7r+quOaavWOUwLnlgOv4cskx1zVC1WXwZj21d3WISIPyF8alDS71JC8MsyXFxHIE5b9YvwsgctulbbNYa1Yfe6XnujMYSS4W6lkbOyj/07GzV1kHLsIxSNJriWlOBk4UKXB1TYpEcko3yRXNqMa5Ml5R7tptoFWRDEPWcp5EBytx007TJbHi6CQyz28pagzi1mhJ2gFJHEtiGeIJ+1GSkANSwqhXRTZgCDny6+Cbd++77973DBPm7ghnydzFqo6gYaPZbHlFyeWVNhqtBTDVlKZK6bU48/FxeVfGKjDsfam6QovmZW5trfB6BQMD2Y66X1oiJ1/FaWU8yb2oDWXVCA8Oc4rzYQFrP0uXV7ntV60TV2LRo+ilUxMjO8LpgROgwZNrE8Xe6ur8UlhOz2Av/uQ6//leEpw2jA4W22tOw6rfvISrVNJCAvPVwgT1RSED/etig/W9DbkgtMF/fLRwfoXueCyNLZDobHI3xGxDKBbfCZbTYxCx6Z6hL3k/y+PDlneTQBL01Y9pR1KS4pS5N9ziccZ9MVdGSmskDjVqqS211FjtXyKWfLGEwQLEBIJxl6NbEeuRcHbWKxUXM7rJEmU0wHrMIr9YWXfpmBGVUmOzSrYvHoeUiWq+ml0AOcwnanrY0xARD7aAqzphrHZm44imCdWvFPGDdZxafUSJjefWZFmaT54cE85tUqTllW7Ild8TyksOJZFRk9Akd7HiHPUKIItY6mChZhZzDiYiphUzjpelcyU6oLjgVTozrKi3QLHvOUpxHRQMAW6nYpS6REQ/GYlQCYVpScWQVq80hzpi61ycxxNjJLEP6JbMS7b9pByYei2NHjBwd+VdP9pKrYTQqaArM1zDRN1inPz0DF5esH0H48eFZ+xhuGi1NxKF9Jj8klOG0d6wrg26WQyrZJnEsDIxm3gbuVxdwlIhLs5zDcdqXWOasSbB8bBdtz0H9g0rJ2ZfdyG3S3vYjg8kQxkKfBW8Vr0mjbwsRYb6RPRJAYmG9E0FVumq/fslJg7sedlgxjlTRswEYU+Cr+zAzwO0z2PdZBXJFXXLVNxIvK4ol/mCkYmMyVG1SlNFWVRlG6i8onr9XW6KDGSkj9A2jb7KZHMKDldexSrD9dw3z7gtWWmLwI2t9NpLUnY2edeisSUkJ+R+V6HA8XG07+f2cy507Vmpxkaz3yo17XFQTVtBy2CClcCyIBTpbFgB/CD6E8amjoju63hQro+DufViW7hKI2KLQG10A0aKfW8Atl+X4RPacAMnkp0Qr8V0OGq67yBvZwo8aEempvuNKe4HOemMYZ9B/L/lK7U9UOzx2RjWntFZHlE2DLGcW0JnU6Rb1kzNkjurJ6vF2IkMpVJwUW3M7uFGiCJyx6YXInDjDQMGLaJTlzrz72TRAus+VMYsFd5Q2ZtoIw9iCveDZkQ+PtnWJiBMalceSh0zBEI5JBFSSUwbWJH7TRzx1S12cZ7bV4jGBty1oxwlcyTUFylLsZ96uZyxBY88nrHdgLk2KvC1pbaryfenBc8GXLg19wyYSplqDqDmgGrah21b2xfq8FpCHTaM6UdbWYFvYi1jhuMbc2PPjT3MbhMiK+rZYLL9WpjzshpYbgSsue/FkVnx7EqfhAd4R4S8lmGSbjahD4aIs9GnBc/0h17P09UXLVgJIL5t9Hhr8aINt9Ge01jf1pa8RDEM6XPRw7wVCxEIYWkFCNVt5KtlQO8c4CysDGgHzp72NoFgjOXVIC4V1e1AGKh3xGhVoMlNI22v2Sk/l2ItqjwWOmXYUABIK8YZvrN2hljdwNk9WOaDMeW9sLgi39TPOicxu4cqap3pKH07jdHoSX7Fk4JGUx0fnTJGHCI/JOLjxf9ITCr+09jm9F0/JXVKkXnkfK1t4q5M99qdjax6lap/3u/1n3/NfscMMqLsOjbo6AfkPXY2fvBGTrdjouXgUnZ6xKJymFpdJGUNV9kNXCrj5wEg8fJKPGwYiYPlAjxHfQ5HIR+Op4fznqIxvQSW5BR9bV9RxNfnaqd0OyY2E7OhN6kiPno/v3v/n0azIQgQvmQATCsCOUClj2aqWhHT++k4AJT0inVdbntRVVjaXrzPnSYXcI4whaYbr0qArUZuG6vYcQiYxFKjFca82a1d1/Pd2p1rAurRGjdorYNzHcW48dz3H5PW9dU1Qw1T2xtLFrKd4Knp8gynwAmYdaPGIvHtg2GzbirSth8ajGyprXVicH1KIkopvUO+tn11bguZqN4OmXGdpqjvRc0fnKXlUonYcpzifR/NnWJaq0eohJCjfuFBipoZFIIKwmB+bmncc9f8cyJLJjHbZICWSnsGI60KYnr4JmKdErSw+cFAF9Ci2CdXG1sr4PRS3kQ645IoN5LJEi1y1T4QpUsQ4StCu4VG4o0b9LljTKrgUKPYnk28sTiI7CmPPGz2HVyR6pCInVje4/dGekzLW9AfU/ljESELwkehRKb9j0uOBI72ETJgPyVZUr
3wKZO/9Fb3qtTJerkBibIW8jOd2oxOGfRqIlMaw6z1Oa7Bs2Yith0VW0fGFmi+qCLz4ne6jiCLEOj15d3xrnzLRrtVBWZQgKNmasS32ggNRlGH2tzjXQzcIV1oMgu2JSvTrAwEhndsJwqwxye5AazMDXX9oVrUId3spl5zPCMfHBFiK8+5qh5E+YVhp3BbidFMo1ZQi1NyQzJto8OGUkCmRqtyiTY6qwvOKeQoJayzax6R/b3zvgztRC9hhqIephIROqVxkazjWnaJOpg9B/O8oxWBjKvqqDEe8z0yK28kYjCZkiozAxsAnka5RK//gfzgsQTJ/eeVFHXho5AIBzUglIQk3Er4MFU8m5gaMR4wlI+Hx9ajXDZkrCapgqLdV/lYff0JiAZ9d+X5ZTqz4dP3s1qUPBo8Okco5D0hcwpTwFMD/Yk5DBVeQ01TvIfqdBSrwKFiDU5xptrUzaBUm+OKJ9Xgd11OOsNWnHUo31Hn3G0PB5oGkpuozwOlGPNCb/cpia4eiHTeut9Tp+0bcdq7SFlrphR347EH5xYmCZ32kvEMHRqTS1RojfFIxSkZkC/cH/HJN9EgP8aUQv/QzsiUYMj2JuQg3hK32wOURB3sc0d1Wt8VYt6iODrNyX/uNkXmFgsL0nS8IYttz4xtbzlsjzpYq9f4GrECiaLOuD/oRKOLVe6NkYzUJgTDD6uWoFh33CZyta+JGnsLr5MXmlR3+FFGlbYZZY4WIfr0BJqHodEUh1h7WDr3yJRxyr9J4TDwqJ+bcpKxMZLtEh2/BaiO3pZVuSzTE5fPUo5t5j2ZLK60/ZhqW5isT8W9Gc21bkujkzAsiGh0cyRdJMUdTpmaI5eo2mTSGm0wmrr13nWJ09PvyCgOwVBFN2NxRIHtWaJf91GymVv9iYlu+RrMQIMLL2J7pcg1cC+7xMPcuDnIJACXHKXqwo4DcWEnheaF+Txeit2NuSGpV2N7E2SNLLMZtSxNF542lkjxKnvToIvm4AA9S4ts3fTnwwz9R5TMSKhqb+8H0bNcrn1kQq8SNhRBr5PidUou3NC//7LRVPTWniDs1bCSTAYPR8lwrp52Oh39/Tn7stvVvMbtVwM7+H50c+Ci1zQfXb/Eys/oYhmLLH07IBMAIPCSjgOKpyJxAZ7q0k45Vo4Uq4u7wVU6neaD27yYjgdud1YZtPV4/+Gjr/YPHw2SJwePBwcH6deDr79+dDDYTx49GT366tF4ku478+LsvQKD+M/brUHFo6neDQyape898QjSgQjxqgRf0vjqVPRkWuUjrbXCpMCYATKi8/oURPb7xMiChCdnQhQY3jS+PHqfqMno8fzGtouOLXt6tyc2OBtrgfaVsQ5sGmf5noPHDOSZdp18SoEorAKMfKMinQ5m+TwDeb4cQJsDMowCCAPMU+CVJ3nsyIRTVTVtGhr5odiPot2Dgz88+Wr/4NHXf7C5csLsJ189Tg4uDv8wGH/95FAw++HXDwf7h+OvH+1/dfD4D+ODMGZ/OG6678Tvxy7A4w2tdxvsNZ54jLlosE08YB23ZIS/AKfA5neYEXGcgqhZpCYiDvPemRVywkh6tC+ygrdChZfSXbISwqC731U6uibS4e2gEFfTXeR4056RTxGJEr2A1GrHLdHWtGGuhku6bs5kqYC94hkdXtwNYTU1LbBXSGTHWJ9kwVIakGIipJSTaZPVXHqSmuMr4p3fEOWUbqWjVT6bXrqokt5oFcSaGItB/ODqgiY6ga9tupyY5WOccAmNiHQSloSMHHCfYy541rwX5JeI0NEiU7dlLO2lC+RQlBSjq6D5RhO6UAMh/YiDH+JEy+JnEknyUieXOJneS3dEGrSSqVqY0uTkLA2jp7OuXPF29uNxdhhqpxfLwjjxEWTCa5E7m5jjrhG53ZOxFsNVifVIrvq9GY5b/a3iuHpjW98hrxqih83oPr6DBchGJnSqEMMp+RslwWQ9EzeeD/n5EHSVMtbQG++i1IqNZDo24E1gdpOkMhunHLVED+b4WDIZKZcMTkSlwjdVoyGl1Eq8BpIjLdvArDBTFlCeKTsGlTF5vriz6EBQyAx3wWrM6wV3ApGHv1iguBm7qlIYKrfGULiaE3+pVZhcRe2UOlL3wTofvXVuDBsnt96eaHwlJjiOstJIWH6a5K4jdb55+ezlEeoLogLd8NDXl5wKhUoBpxf9Vz0Xo2Ej1QG8EPsfp4x1IIUifFRIEuFjra4TP06WbHZSba5AuQb0TVB5/K5DLuPYY3R+7hzZs/Pei5+ihujkWjeSrDsaMdsw71Gd5vA4FK0i3Kgf0K2u8Uq57YCsb36jhq1dU9eyIwBQ+QFX4Kwem0CzQrrUQbOKxNcrOPXnKUgJCBpQbLAxxH/mF3WQ4FVtexXVebP5VgWPd3+eoz/8nBziVabAd++t0F50b6OJCPPHVpBwa//pgQVCdoYYSB7fsTLsaBuvm0JJW/47y3yID0UVS9ojpZZV3va0n+Vhu+igr9ICB0gXSvmIoKKoV4glxBx5rVyHhB2Q6pNcHqfG7VepRoN349JrUrAmRZHcOY6TyLGpOxNpTilUTTjtgE87jB1bvAN+A73aOaq6zofJ3uAUfUHuu3TIcFGjVbvqTOGRuhHDsaNiYYa3AwBlTh6junBfcyCl9N2KrCGx4ngYFYB2PNbXhMyOiXHmTkah2SLt1swWqkF2uwh6FflaZGt1wmpku4DLvKIiVIyXXWcWXhmnFSlrol5lXhAwb+qbKw90aTuMh9uEsy0t7S4OpE61W4Ea1PNSC7aSV/rXt08+suLedfZs7UU464ABX4/p22apEKw5XB9JtkoR3btzoIW8+ef49Yzi0Z07FNA1+bZGd7Z/DjyPmEyTIkYCzwau33/3+dP+ow/6PbpBwDwVZby4u1cY+/B58uQR/YWP+/fgq4cHjx//7uDho4PDx4dPnuDzg8NHjx7/Ltq/117UfFbIOUTRpwD1W/zQ9eBwOFnB5kmHQ4xygeFZOO6JigmxU1MquSjzKZx5Q/69syPP81J9Q6R68kj9ynL1rbzTRZYF0J6LZHStH2TAlsl3vE1T3/HSXtfP3u5wr2It0cmrU/nd14ySFBQXFdVHRYDlrVBg9Xq0KoaaKGsha54Psbt4gy/V5ByUWmLwJvC02Zy81QooKcDWdPLyRw6o8aNKDV/Hd0pdVQ2j1HiPUKeB5po7xOPRSna/4CPkiy+ub8kujkm4826STVMnlpEuLE0Nh+j+IJKyTJ4x2vOy91gcJN0/8m8nqpW494aZSMVAWszjGLlHWURmWDAghiW7c/gh7SqJPaI2EhXORNudKC8uet0qA1X1Mt/ODf1/8fsf4cs3wpIpZcRkig5Dc8Un2jf5hqm1AegZ465QbN+kGCsmFKSK1JhYmfjLzkjsWdZsZjYH4MbQpmQ/dyuf4JzPYIlER7AoUovdcjDCP/um8LznIx56UWE3pZ2AJmYiFk6AXdM7reLJLVWtZitLcn88dXAuxPZIogtGlmPDa1m7GpkIP+NUR9ynOzWX6PrbRqHhHKS3fwjzUsNbwVhNjH4SI8WeTgXDR
s6pYyeYIorHfJGKz6v4VCM6ymZmT6ZFN+Rcr/I/Nu73XXiJF3s1mUM0EFIefCAYaqMJEEn/HwyKWlkPDC9V5ESgtQxCVPQr6B9g+yxISyYa24VEpRJPMHkvDqfqJ7WtjiyUHilgBYYauwBSdIVBvZU89+efv33+9OVP3734k4a1Ivn6H2wXhU/+Ebv7TFpWw0Q7MXvnxNYUdDUSe5WOfM7cey+I6ZrG4AtW0HCzNbYx3j5WNT3jmF/smDQ91aA2evnZCoavcnN6s656IaIFG7iYXBeEDrdXORy4zUjw0SzBnGSCqvOrUgxC9Hras8z93Wx+pU44gNwG04o9azOn5Ca0Wl6B+Hedzn9rc9vButCvjz3HcB4tO1ujMHWx7WSjW002SjGDHiZls+a+TIbsn9W0COSxj/TEbafEtgfUThT9UVr65h8Ov5MMw75c0lIkTTF7Yem7kA9n61qzFp8IF8yVoNfJsrMOMWC4zrq0Qw1vVl0c0VOMd6z+YrXEGRfAZqjD9wK6v//SvVq656xSUk4oZ4ncE8r+GEiHO5E6WtVxKgUkR4tKikjRG+i21/ZHUCMLhGJQADZbarmWcVbY73F1edGYTCvHvFBhtnt9YEVlJYGuVVZSib9NS3qaLkuHX1GzCCu2Kl3pQV4JQZVyKpAHLS8JSVBNDZWWRomOSvbDq+UEEDRusT4fD+PUBIjDphqbN+1VCmCN7sMxw2pMwlc7C1bd/ZLsiyKTcaBBWajInXhHtlA3R64swaZqO5LrK5t38Whk6oMTh+l8mlHnOYvXZHeEDViXQtqEwZBa1TqJ/gpATMHfViUHnlbhFVEsvuZVYnMeSkMkyd6SRTZkJ1LYKQrNbFxRgBhNNVjL5j9XEfQz59hRneKK6leLih/xvKolUtSJzRCQq7j8yQrQzqw7Zl8JrX5bPCQInV4NlqHspoQOhpeTpRBTgGC0uL6ktaM5VU+ycjFN7px7zPHMCUikJkwlVk0oOJAtejW6oSupCRU+1IJdxe7zUfSMw7UbdQmn4HLbSaYYbv3OThj8ETGHW7bmTfBZrNZK+5W5QlfVrMmt1ON3lTow+1acvo64dKHcr+pqdRqgmvKRS27ybGymWqw4WE84Snh/8eyjmvMmm6aXEjaKjVEkLr2xOf6iV1FRqTOK3csks7YYMmJXmE518N8OZ/wz/tua/SOzQtwjeDrl14a2BZdXXOVUBOustCMnZ7NZOkYjLwzvOiH+baQCVGKiYhu1nCtTY7LDMPcs6s7XmdVszN/4YYmkE/YWlBOBglhZ17UC1sYhc6+ry+r88p3BwCracTbtuYDA5mzcWtcelx2INs+uqlq0qaGFbLtGKQd4Gb7JraWRHajTcSikjcYq0oXJ1mA6r0ika4bSkjLb3beh14NDWj0or7OFmPUMyIO+c67P+JC8INvJooXO5a3ci9ejyo4J+ekR8rrQn5d4NzQEtiDLx0KrAZi5O2hiL8jLRowmdKI4TTHijcg4d1pIht5Ebx0TXvoJxLvzVk6w2pikr5D+VqOwISDKUYuw7BbsWSBKaaXRLBX/PRhQsQEX8y1mlF0+Tp6lul9wT+ixGeenkqVV5oWgpMxd8XblsI//FztJvueKYWdnMLW/zBnNkOCeYP76SC+TZfKAP9LlKOYsRCAASdoSKsFTVZeIrjGm2SaxU6Ehe0XDXrqhBu0FPrY2nt1azwDhxW5skYrs6j3fitiogK62qtUQH97PIW/EJk2EUBbZZg3ERU7UP6H3gLrIYw84icgMbVHs+9VC719tf8dzcXsFvIB1JBrc2PRQ9MK+8SXpMqdYGZQXCV0k+Rdd9Q71jTBmUscCQ3J/yCce6foJOzHNfhXqpX1blY+pFcPhj9j8N84mbwtUySmqcbqfJM/WOHqq7NwSunbfs7xa+e6L44HIpbUGQQTQciOZS+DNXGsCrHibapBjayBs9K7zGWDtWwlIE7yqpJjbig9St7us9vHYMAPOSndIL1S7Q93Ose1yRfybvj2Vzcpj7HO6ikz9jspcJVq7BqyInr3+m60jyEr0PU/mo7Q1YvSpXcuwsm1FHF/bsmJfR9ngKAhEWwRyg0zXWPhJxgK3rJmJrG/sInynFAxwwFYSDbXL7G2sXJHvFmlZ9WzJdLZSy0q48y6UOTFTsW84rm7WJjIC1sFNEpfJJB1iRaxnd9hb/KxP28c3rycZkIKTkXBsx4eUfRnhAJkIELqRpzu15ehrsBR0obeuE7R7A3PO4YUJA7Rzd+b4IpwsI8nvlWEG81udn4KMChJuQadzAelpevcrBx8gy9nVCO2IojQps7Sw2z1NJT0JAdVWAOx4izbLwEOM+2SgagcXIdSlfZwsRqmDKsYh2aIfTiA0ojihcu7EBMmE665JlOJWx7hBOsZhbezbCivkCS2fiLMyf5LUDUrdFnBMS8CGOTAj0+w6tSCx8fiLGcilp2SKjJBid+hulBsOatOT4TLqNIWEe4B0rMOp5KQ4+epRtioWhUm3BkcHpvjC4eIiSaKlQvl2upPoEmElsWU62IwTKMJsvurG81sSnibzGrClEa9KP7CkSkaBvTgfotOJeLbRd5IekQb+4ggtOuV8jX+LCDzTfJRMh4hhEhnPEWSQlQBudFimI8z/zrKO35LN57klfa1X2KbKYq87NERlJd3E8hqK0TAVIQquy2mP+EBV5edUXWRh9a1GKvx+basVjt2sUQM/3NEsuynesdQP/joeWdsROSlSY5DQQmUwXnjSQQuwS45nhJt5jCH90YPWimf1QIKPU0nkZzLkacerEXM0JgbXo/irqGugEAUcZwUD6sVedyY5SkGSBzwj+/sl2Y/coDIcDS5v0bAM+JoFsFIXGcwq50qBitclbXCrRQxJRYklKSeKOPp28lEnusjmCaVMQk8LIgv5ZJlaIctv2F8fVpCuC6cZSIlDeWh5vy0yzBQFpeQVL3S3E1v5xGfJP3PEYjRb5PJn++fW62zuvz44dw6uH3BllDchcEkvn/KR9Y9kurhK/iEcJAxNdZoPmDjCzKroBHiZ20EJHlDQSApGxEMHjhfvoQiAHHjsJK8b5MtIUWnO8tKeZKTxU8lAwyGRMVjC3iWroSV9DSsh2RuhVIZzhDwO+e/QiCholQZdyAR+Ez3CH12ZzmP4iacCT983x9FX64Ln1CJhA8VuaMTaL+3F146Ir1YVj4rakjUN0H/vSHN1adahAgjbJva210gvLNXX9iFY6JtocNDcE0/274Rkf79L2+os+WywiG9deKFKklprMkImjZX7n/GK2DcCiDsDITqsrO6Cd5YXclKz4mADcapJY/hagoWJNkLFCrPiYRRatFVx24jU6lgack1hibYssVPz6NlIDKEOnEBhQnRoWwGLlpxbyvMnyJxdgsiAIsweBblT0ffCkjqBaZDWP74O0Djje57Zo0R5OrIGlxhQCsWgLIS7KNTYTP0oLzDbZM9ZVKoY4vXgnNhSWWPpBeaSDgGg08kg+T0Szqs7V1aYVjREMfN09ALYxYrBhtZQnakjbIb+5LoYBwE8UpE5zXMTzQff3hzY75SU
08FdZj1n9vuIu0KP3zv7bY3lLC9Ju7Tc/LmXBIL4MdbZDbl5Kx/XJruCG+5lxQbkpeYiw+7Yfdxp4GGCpzm6jNqG8ptSjN/4PcUr73oiCPVjUiccw2qE3ryT1dSYVBsNp5uQqNSkPCGVdFfuijkcjauP/Lh0icPqMO+h8rWgBsIiSTY9yov19Ei54f4rqJCTy8jy0ugMJvRv53ybO5pOFbc7H3RH05JP2+COxmdyHRGetkJnIxueNtn4PiDedltivI78Vm+hmfJZJmL4JtV3amErVREhFTVnu7Ch8IyeyqTqmuZzhymlLgXQsrstabrEgFaoqw07eMWRoKxJC4ZMgh0t2xAnZJzxcgOh3H10PqzGY0yppSxDTUQGy4CGmV1JlokwUAGIqeaNgUvFPW1r5y9riVtyIpbVQ73/lYMXTR6M6D2/rQfjC2q9ZNc4Zda4kpzb4eNvU1fEe8YGPeY6H0TXnlHiXfwL8Omewlxo/6Rx6uNpbT52J+LHiURy6Jm+ijPqKFngVUe9tXiHkW8TKt5EWBFfjmvdItWn4sXTgqjrABL4bQPi3trf0SL2PwVWhh1AeLbEEtzzULDd8/LxcIY51UZlF7+jy+aabWoEUQzg/iNX7ljBxVUy7UrqHgAgpuixqODe7SJXtHu0a1ra7e8ajgjeSO/i669LjrF6ATzRAZRSvNHu0btdcoQ72i3v5qPBrzdfj67hvZ5NeGGMF1FPAi8xfsYP2fwa3u0BuHIvDGbPmEHuuW1gwKVyzwGoAre+yWboXzhbQOOH+wdfDw4OBod/eHPwh6PHj4/2H/0/u+/7u8v6Mo+OHu7/P9DcLcxMfgvvD2b7OCs6rkq5e3TmDBlersrkMsWZGC1W8HSf5geknjv48fDxkyeP/pztvn9//t4mB7LeuHth+i2ne6tM0I+56rPs0w5EUDjMnwMtU1gnV2eVAy6MRLVUoEhuB7JWQgrU1kSO13DIgwGUtFnptsv87j0vrcWSykTZQWr7kf1Q3cCdb8BhEgFCn6ROwFnwQzSGDUQB8y7QxNCNszMVu+pqqYWHzI92nqvW0c5dqlNiYNbmi8NaukNWMBogRhJI6ZJ8dOWnOMCQUSqojpzlhhyZqI8K2et6UjV5pAQa3d23d7/uon5hl0gC/uqRulBJnj6yV8PGxqzoxFsjJf969jj/yrvFB9ECHd+WkRofp/8gfSxlZsGULPROV7mH68hw/MT2MRLXx0bUoN+bMLEVjRTtOtt/z7uQa+G/bt1lEa8z4kvAO7nkJNN4sZV9GB8cxIdfdzzth037pLWHG0myTGekaqfqXxLyctSllaJEQqhbV6wPhXE7khIwtBsawteH8sTEJ4xuDvDN/pfjR6NkNNqXAhM4LoHlA77826TMRoOTFbCEfzo9xQx1f4ZeAwkro9NXPz3/00uqMaFbhTml39HyKyIVPnTyWMCWwWeOFRN2sCZQJ5dlCfjs4DyeslTcuQkpE2RuHn3AIviRNPASohNw/WmxNPKT1F6uRZYmu3YxNy+6a7hkNwU9RI0GUB2U19FGRHaDzK7TXswbQ1i2+DJTXysHR7tqgflvlx9YWQ+pXUdnHk6Jc1OmJsb36k6LD9zaknGRt3ZCylDZ3p939/rd/UDqnfIsXi2Xi/Job4/MpkA4xyBRvJ/TYrAqB2lSLgeHVv4FYD2PHj16qDqsnm/Yb4fGqIzuIWKzjtoY+C2JjtLE2civqhBXVoEhM6WzPPiA1G4T079jF+RhBWQlATvqGNxGHPjK/OTFfJIHbUS9XS+5fozuHduno7ZvRZN/FO9Ho+mqRENrZuw4hjSrCWht0PRHpRElzuxCZcp4AEwC4cuSoua7O1XHBZLAMYCJ8f2Q96plQAHMB1BQ1LmjETbp3usJvUR/pP7hUkltst/+TVF6bft504rC+8U3ouYUFdi1LsJduZaeU2raDW4iTYLUoGOshdWTFBN0Ki8JdVqU1avLZgXKCak0S0caKbXzBSrHlKhCG/wDXBbIDjhbyjFl34WG8pM/5Ts4mWpLhUc+DVAT7ST6xquC0SeZG2Ibi/EEikD2Yy2ZGGDojo6XfjB15nZArgst0DNMzoZnEx+zgDc6RDH64ZONnTNBVhphoDB21mArJDFHfWFMqbs4sNTCpMMNBqGrwQhLVyLJDI3rdqLT4Wl3G5xTFM1tt37jxguVdjT2ke2JcrPxE/0A6iA9zMvUf2Msbhb5YsWBtXU8Zu7RIrlDciIdlqsXBKJmy8JIMj9XJzQwblerCzx/98zhan/NynIFf77af/LVIzpgb6/uNMEvcyASqAwjiHM0RSxBQPp0wfQYh/RNCsm2rz2/GJqMcA4+9CbWKKexO3Q9qYvpq9N/t4vt34ahjXIeQCNTvF/JeY8og1uF+eroKNUWXHCATHUnQVjflxY5IamJzLK4U3y99mcywekRQ4x1ry6hOE9t4qausGjPdMrKnhQjGO4Qblq25xrn6ExAezzWCiAy6iLNag3FcZwrAHh5xVtspIhehiw+EjCYrXQyyUYZhe6JumIJB3vIXBtJV+QKyGpa4oHytktGwMOXhkYb5UyP9USKRFysLn+F/ZnERTq+SohX3wNyjDEPL+PRZfZf2fj44KvDr/5w8JWteipUAiHr3CDfK5gNZAU8LKWTl7aGm3/Vdc+BOi6jK7KfrilJ6QQZ6tRCdgvV/He0eu4x/l9OFYo1YMGtMs94eav2huSvxWiCXbNh+rBheo31kA91cI5S4TqAYx+VutUmEfaZgUsZISwom9i/0a77YPO3rdKbm89HsYTboCWLPTRfFftapJSnfAMGtt4obt1V3memsh1TKVZ2fxUHW8qvhuZ9ZKXHRneKPRTrO1nFFsyltjlQDEnFrPk3zv5s7H7wme35ePbFgnefKWwDhSWfI4nhN0sWhGJd85Mv08ZZMVwkyyvENfyrw2TBc4zNK/6n6ims5BAqy0+83ZJEQwHbZBMsy6J5lrfvhEID6A6R+X6RztiLyTGbFUJEreJjcmRwRDVmQ8WQr9cTH1pxPR0j10K+amW6V6S/rLKC+EA2zQfGgKLBUOvCykiHgVt148K4s8fnhOMnzNHmaMrsepVZPsKQJBRhnznozL+0hRqE6sC4Wk4m6mmWkmkRi7oCTzwIdDx77mgc/ZQvxUcMX5h2Z+g6cSFZ7Ch8Rm6PHTkYySpNDXczinQl9lgUcT8qgfvm2w2aql7sj9nCoCPxdJnQqCNy9GW5RRQWRm+hBiRXhmedeHGHmsUYjrXOec/EgCRwlM/JtEaZQ9HR0Yv4qFCXTwfqyHV6F92ge320SLKitNauMo1RV8028PQX6HTOGKcWliDBPJDfI+DxBVBlltzUirraG71xnENSkgRW+8EBLISzkB5pvPPPPnu3UdZKU0cxwdWrdzU91ZgN+o3JXZhNrBFUa1jvTJ0xHiZEglSDdiAvf3s0+tArFyYOgbZUQVBwk9iXFZY1QQXA2igRlRpodlN5aCWiQmqm3uOyNY1JlUeEwXRiUV7G2Huoo7sauGegpo+xMH6L/5lnc128z431diq1cLGkRlbi3uv
WNG8thr1va08jbIsPEfw21HvQ6iJdwuDlAYGsbSmbeE3EuH8xHidxK/P2fcJPNcCI/aFtm+V0o9blmUO267hTdHAORzm66R7vrpaTwdcYHAF4z+YpGCIPISYgatzq0Zpxj2dnTgsocU5iDP6nHO7HYQ6Mk6z/mCy2YcOc7lC0I3jq8QVuGd7MUMrsbFPC5uoExHgW4OjGDltSpnBYL5knke+/EYaEe0Pn3W+cG7Hm7TMr8i9hRT6YwVDI9sHcBTsHICxzB3Lx5BERNDw0a31ItmdFpO/t+RCp8JtgQrQszQnJmPK7im4DlOeVKiCNuO5HNxQWC1gION4KkmENW2P1enZ2jTSdYcR6Pbo3n3mfT8X7tGId7oeVuT82pp6F2Yib+63zOpV9oZifZu7nlOjIvbE+Nvvxkfie0RS9rCg41+v0ErC9uEM7MbQo6ua2LbPOokDXWMKbaDetpCB3azvcYXKBMUszbBkActNI7a8wvq1cclnlPZrLWfxQzT0cStZn1chwVUztUO4miDsyRVObv8IPJWq168LS2T/dgjqI/rGG4BbQwfKPNWCbvOF+om646EqPoErn3fv/TtnYL86LS21Co2D1XGCqGv0Ncqn56DotmJFBfb7LsdLkD/VgKVUOTnl5P/xox02A7vel87+dYa2dXvJzxPM3YUfbKSqH5bomuNV+q2zPxmwMzgH/5mBhyKPkE7P/Yb/RGBJ6Q2ubqEgwcgXEuDEG0gltdax4n0VCV861k657BlR7qDbU0NqvFaIOu/HIsmMDCLEhK/hLVe7FUgMwcJzyNw2OxnyGxW2ycu6cEESLaUBH4e4Z4q3YP2Wfox1RYWru8lV0y4IAh5+8o/s2egGMjrUB+XYPzzUmp/Yr5+ii3kOv6O+O04Mfkl/vIjJJwhWjJAbjIrm8JMeJubYUwkhxsIlhhjJhRe2MuNLWT5TtpcrS4mGABAU6KjGtYXZ8ShJaOzK/HK9mC+BvreKwbtiV5fGj4JpJb07IfSYbXWtG1Yg8mP4B79Rp2vjq35quuELljoJd7ttT+W/DLSDPjZDXEPYAf6FmrEmzQleKlMLXP244ZmxJSW+Ki2xZIFoped7Y/YhIQQSMbiTJzt9Eq3Ivbemi940tNpmrXol24gefKjncbdl3fMjKCiUN9gAfOsNSF4bOlaqdnpOnBf0YS+KqyM/ROrpr7tvExzMceZM/5havoZB1/teUAMqECaSh+0VTMdztTe/xuB0COmFzgTBDVreVV1hTa0XKOVQX+TQb3TWVZH8LYi+H1VhG4TqSX26o8guu643J2NI4MpqCGbbYWAxTPokZ0LG7/52P7B/hN4Vb0EUrp7JlxWSRISEbGufMG00vbFphlBuBsMPG3km/ZCsl9cskayUP6v22XaF9gLfxZj+YjiDemT7gL9cHGT/vKjPY0b14BTUIbLF0J9qiief6m+n8WYdg0Qjpm+6RbE3bf/OFfupGZKd83DkwlyWC1+HZicblcyc0NLG4y2wKgvXQcJOkIelK256crSDCbMjX8DjkJY1EvuuxWJv/KFhZCjzLCqpvlTcrZPIZAUU1myDcIJxgqyJb3j2VPHEBHso0CAvnOqAavFgH5yafwjnwI+fBPG+DMoKgHWx5wNUDe7NDMF8lxOZ19rBwqBRqEl7Op3eVEfAo1DdRY6Em1d0tJg4BtHBmBnbu7w+XUoYjQ2HzZKWIJV9RQZoSt66JW1pDUpsat+r8Cau8ohqnHAGUgNW1qsGGqLIPRsqccBHyM1bGln5VF1PkZPLbs99RS/YDQwzVkeVX1y+YEKpfjUjKNRm/7hcz8bUgZrURKrFQaBuo/r4ZRdHJ3xUqbD7WOtMswq6Y4FfoWR7kctVX6y3OD3K68MfBdZXEKfev8njphbtSP6scsvNRtlwNRWBncBQF/+BwPq2YMJq/Il/mQNiPO2+evqo787lrPIJ1DBd+1nN3NBI4Q447T9nd70UjcJ83cZkQu2u28s08xX1u5m3HrvxhHEx4XT8eF1OlyMwOHLXfrXYKDDMngYBJ1DTyP/Nl157MYFFBI2Rp5GugFMN7ZZqt74C10e1NKPuDZt7fKkqExD/OjKv809vSiVOuf2+0QvqzTqn6SRWqQbF7cvm/W4/apNL7X6FHfcA2BUi9tJU72b2j8x3XpOQ0ohh1dGTj6GKaXyi/pJQfTfP8mrOsqOgA7+QvwT346jB+uB8f7n8dH+x/dfR4f39/98gpQsWUChLeeVnbd/t+WaWlxLJ/lA6qZ99UStNtRbVZ++KjCgJX3W5e5zbWcHatKu/t+ru8UwYKg2LY0Qnaz5c3o1g82WMK3H8fkxHHMflaiNXDx9eVK/MKXHDxU0su8pt0W9V4k0bcUXo7te5Zob6F+ly1hMeW3bAnIAoE0cvqy0G3FKGoNETfvfebqeo/kaK8UNWwWwp7mhTjVMbSiN+j/ntyKdPDMOwD9n+HthsGuLGam1150E1G+4eWfiibH5Nr4gNGlIMmj/6BSaQX2UBX+Ifl3lpaSZ/ZhmzAh09H0lnhiXaTpbdAOKRD2kf2BrAaz0056jGCx3x6Zxy81fVRQrSMaF1KLmLlKMU0OXnJ0T80UyFUlU8kTy+vQowtRMuA0XdWBTroIn2i2ykVUmSP0nRhzAs+JwcD9Lcqj9EDeIr5sEo5jiVWiAFDaIrk7055mVFCVBgUxpbCQH5e7FLK0SMhczvOJHsRoEwXlOmcGg8HjHWXdMihULrwwI2sh7sOCa5xlOpyUdSJ1UpWGPGz7h1NiJUTCu8SUJWwRq6b5pdDHeCyKVAbly2HlCltbZvlkPtzcbds0QPoLJpjDA5qC3GQQ0yDhrh3nNIX2S/KlkcHeKSnpAWdXZN9lTW3/HJRpJPsrW+LZa1AZIW4U2Kr0wU7eaX9HF3uktnFODmig10kGqdIt/OnBF3fOEiKMYNB/LeORBzP5BcJWSldIdZ8ng8xdtY1JW9101VinawcIiHtnkkovg5JLICtngEbhSmqwZmwjk59HnAIURUFCHU6nOB0ls6APjKnq1vtMzWRoEDWgOvaTalFaNqKBplwFEfUhnMCsdquV83NpuO5SYxqfyjBog5+2NRo2KIMGlaOk0IEOHMCp2TvWpmGSMXX6dV4NliLRmvG6QzxsReaJ7+dp8VrlYSwjIGyDWEuR1fdGk2dOlGf0ZBOgWhVyr2vsbuTyR/mnDPQxIZ0xmg66YRt1CMONo36Zq91MS6EGa033vN2Ub36CUd+ep1xpkl7h1lyIZl7qoUmhMvnFDxbykjMUSd6USNE/Hhbtq4YH1guLrhmjkyevgQmCqljvHyLkWfZ7PF2A7NH7A9N1xCb6U7WemVaJ4F7MKytyceCOSHWg0KKr2n/2uL2ceKfL9a2qptG2MOjIrtIt5rKSUyxYAjZVUNdy22VEytUE9PKhsFNyNsUeDxsg3Y6hiITNtrbgXVjYIOKLdDA7n9SDsn6r8qJUIDfZiYEi4Sew4yQWf7wn8D2AKM5XM2BOQN2KVnms2w0cMNx45kENSjQGrPQnaC/cjmeU/ThWvZhMl0BnztuLjRfzY
bKI0H3T0Sk4wMQ8vdra/mjWlurkUvB0wDmFc1F61gTxW6okVfVA/rNsU5dgxbb1kRU6zhvTT0Qn4+8jeNxFvbQAgyrGhFHEPbONnopvaUYJG+XXTRaIcJL5/vcGuaE4s0HDhLM40ixn4Wo9mlsgeiHBloV/yVfgXjgqys2KR5LwgNKnQJbGn8C5maLJYpE5aBMbtIwfrb56CtXtg7pANCmi5PGT4XiVLkJh3Q4vK1ZLD00ooW9IEHpEEHpNJnTK6LiTq4d3rBljbQouj5nRLYQi7yk/DN9Di+YKEQm9FGpmkugquMVZdWe62jf7ElkF6fb01IUuB4ounwvI9nltlMYSJxRmaP3ivS5wq+PUwyLVMV8a8/VYb+7ae9lB1hNHlWRw5dCngo3RB7Z4rKlJgFjlVRyVsa1nE0rvFNAYgECO4+urMIo2MKjI7yvrUkI7+3a7deRjo2W04ZN2hk8w5Nr7yYp9oAT2ZNKjTXorJPhNpWjBKUNB1a4bgN1qtKMUKnmvRyQ017MSS3DUZznab4qI4n/hTqiyTSnNezDBs0xgKFoL5O5vvwtcVlv0+m00jiaRgXZiTAONCG1NGNgbond+GmH4QLx3jAcPxbuVqw8/E87DKaSdViMy/tXCsyj6KfQTiSTegrXtO3gO+fuzrQp7YhtePHGKkG9MiqV1YzxPJd3IObPxgN5OMAAwst0DUy9d2rYNurHTyuloUB/RMBHQVvowiUpZapYUwv2vLoz1OeBvjLk/OFpRCjMylHgpKzwoeKnO8lHK9S71jaJJwc2gkfHJhtEfSw0ism5DzVEgxVMGlY+D2xzXXMrMmt935bhsT/tCBl+2jMm1XTe+PEpignLHmBDlH+4aKyVRqmethA4jvkV6TjKSAHSOo2iCjk9MpSNQFisCNKyv8/DMPsRsvQpbjV5C2sysmIwa85+UeQShLFOAy0lmsyiULaldNsliXcjo4BEeRcGlk5WMO1LepsupvkdXnPjL9FWFlSxoIra7b6WAd9K691e591O491Cg72JSrxJilQXFq8kNDDlmA5EOdD+UpdzeDXCkzchOk5hD/Cuju1ILjOgf2pd/XgDbJchilvdtl3IRgg78U7h5ZBVEVqlvN0EIYydb4ce4HYThOfuUqBlHdhB2om6ctPEBDwFOYsueeSylFvCy3ACaNCtpJRQGC9NIRh8T5ej2AmGUK8NtnpLBl/VAZexlyqYiYUnPOimRbkPZwXpyi+0CUs61jSlpIaF5GQl9cCEkaKDhkQJt+XIRGWQBp1VdreCEzZRnWDmNYLGcSAye43ILjkCiYm+UC/yEb3CtcOeVarglsEa+LdVBWsDYT36OaCf66p7lzfRj2zOxNYyS7loFe9CiX9JFTERRxz9Vc0h+xsyj0ItKm8gpOBGuyx3TPkoTceACc8sJJ3ZgKk79LhcjkEOdu2H9IZXzemtVqOMd8gzGfnhz87HusZa2cw24jZLtwzVYa/tjqkz0FGC6Yu5xgsucz8k219bo1EM3370T4zBkkTTNLnBWKIUGaVMKJr7LcZ1z1LHA1uHYnT6F7Pxtwrc3u2hBL7foKL14znQ5YdOvfwhCttV5917yqChs2/UTald279eoM6qw92pd9RwBxEYFa9/g9qo5bAwdAd6bbBbirp159Y7PZMiwqmL+/vin6Q2VMtG9H1dUrDgrYgoYqtjNOr5KoMZurPnpvqNN/P2p55fqX3Trl3DqGxwB6NrbngZo+s13cq0bWW4/aUTfjxuyf1Z4XFZsuxSyNRaPhdLD0eIS7JbmnhEKowuMEWGp6up0uC8Z/IYr2VWraK65ValtSX6eMg8eX0tOelBclZlG/q+3X3KtjcqGzDrrXj19az6Bpx6G0b9GZrXqVDzJhw3c6AEjK9PlGJFGNFI3JcVP6/tvx3uTaGxBPvWpTOVFAC5aDLkNpoH3ZLD7Fcw3mEFFSfuMLAUzVoiUQNANI0b8PbqR+6FHw+pAi+waTaGilwvQf7CAjr8gl7JXuDLLmzOmz1/Fx5Fs4SkeOa2UZNpuPCLO2XVFkenxNbf2enIuYrpdAzC/UJne08C4gJLCTX9MfNhJkkd4Rv2TNcynWNkZikQ5lP3ddNe+iQGVi/F9Zjnt3MRvtyuOhHcQHgb0S6wxDDClB7moyCQynqvrA5LS4la0Vb60+x0u0LjHFQT3d/T188iFXOMWHqoS+0Z1p4CAKLyMJ/N0EJ3HK3m07Qssa+81jBiZbgELfCOOErGwK4DiiKYW4z8/vrbk6dsPjXPLq+Ws8RDziYyexR9n98CsmJ6IbH4pn2Be3yyghVQytQubg/WPKq2elUoNUS5DopWkIpKM48WCFNia3yJN/tfBu/9e42S5wtL3PSlTWv+/2ud4KnFNn3X5jagcrg+vur0Km05Eum6lvzhNMmnv0lZlIVOlkKr6b5/2jtxJVDMZu2RzKqxQaWEY6gQIHGNTehCda34JKi5Q35h06ojjapj9d4EZwUN+Pru/atGewZGzMbg3YZBG8t+6GhxmVp6QcAHCsRWRpe5pGMARGO1mpw0gGIpaR5Y00IEgbROg4E0yymRLlbFOJ1HnNeekjjqpGrlEn1i+CKDxPy0EFK9zrq1SsMdtgyGxEFZkYRovykm1Ms+HSbzlPNPwfwo5dzIRB6gTYEu93AQwFZ5jldDZDXe9+AIn7BMcTfz/Q2Fa8cDhB1x3i4KPBRu1RFzld4pW5BIZ290g+DNxx4UTpvJo1BsAJ1AJ7DVV6MrDqDJKamgyGSaXFKUAGW86x5SgODhUSS4v3JWKrqWDEj2cK59ubsDDztG7g6I2HSJSxVtk4ZyBAcCalM6P2mk7IRvrRibYZm6qh25QTEI4ZhC4XhecQ4qlW4FkG6FVFRyiBBrkE3YmANRlG5zoBGjK8QPWUXZPXG0JyZX7AAteQIjfxB9R7dEShvL7LfOY4rwtIwfV2pbYzKzPS73CJhMeLeHuaWGaAus7IXX3WGtOt9R0MV37/WAccXl1krrlKbpvKt6YN+Dh66rsBxMtDsBFbDm7kxyJprJo144WkJp0l7jB9HPZeolkweaNErQMQQo0KUYryyA1FKwJhPzOOrI+07Uzadw0PWQobUW0Lyep7dpYYbr2cv5qe3FdfddR8NFC2up1Hnf97rLWiizWepu+aw1si4Ya9fJ7qSjLMPUhejnE7Q1tPVdWgcCkyJ1Otai1ysDbQhi6F2ycerGxmRaI6jnmFrTuG5sVTfulsq2t32nVp1TzCqtkhPyJatehFAqck9paBp6ylG961tily2nJWsTnGJ8FXWnh+umgnO7AivSHXJHqEimhtLQ3T2LdcKQuHNvI/jv8cGgyKdp7Dq+MdgOszjvezG3Z+tMK64VFWHbmSlTNHyzod+raMTawYAPsmtMLwlcBRD3K9T947u9t3e/VoGIRIsni27UPlT8TXmyXOL5IbyLyInkKkAaA2chLQDWVLCxKSV41lNtl9ypTBp5yUmtulPWM9uu2diyoo79YINSlZU12u5afem7ltb2j8bWmjR+jZY66xqtUz9ubjoXWnVPh9xU0Vo8S1TBX
WXWP5sE1Gt4FtUqwRyA6qli7UF+YBFunR2xOeralxy0sE/uWJq+DVpmgWWDCoy7m7RfAqXaoAZsnyJZV/68590/8sa7DCnm+iRXiKrRUrxZCWNLQ8lNR77oiw6zgjObYwc+kK2ur2Nd5l1VsTl4ANg8DVazMY2hxDF0bYYYpVIaBnAR7qyR3llPB1sxAD/MEUkqGyggw6pXHq10rsQzbcZS1lBKxcTXEUt1X953Gm4kmFErmyj14bsV+ndNyfprQuvkWNfGB1ytbXOduPlF4odeIW5NtiOi3HQzyOdseSUcWJfvdoeaN8D0D7+siexJMbYOD8MvVVCIpvoSLqKhyHV6N1QJHpru/9BKEjbOEI11G+7KgFjOEWkRIfLV8vhJ3e3b8qrIV5dqdtY1C5BR7UH3+kP0XDveff6WYw88++m0r76/eNX/HhqiKCp1M1IAF3idD1VHydC86S7Ou3Z7zdmw0TtZmrKz60irrAtaitXaWFtxGhcSCZrEChgQ9sgfhfUi07yU9OQ71oyidDwYUNwCupAYcXzKWTJPLlVDWivj3qvUIZ5tVzfRLi6R5ITVtgiYxhY6mc6XKms6FVvywJe5l3GQFfqk76LAFKSc0NsAOl/mo4w0FxwOSqVbN3igxtYnlezczchDsd2o4widorhZb3XEHiqhEylgQt/SuZdUO8Mt2CndhArKktDaI1xB/aLkxwt9IZbd4I0oJjYCkWkPFzMtKEEFxtIhu0gn94SzSvYGc66Q8A3GskBNCU8PNEq2l57639t7RxjTWqGOPLOLB3aglVoYZT57TTJUj1AwmRKtDnHyTV4rOE5xRXesjS3Wy6YB2iW6QzHqX2jO93QwH2xZtoCpRkDgKFwkICLxEsq9GIrlRTYep3P/GtElFTgPs1kSYaRsOAAweJW6zEM8lvIULa6EzYl/9NDIsgsG6FAGSvFejGF1wvkra2gMRRLTyDJmnSsGQUHT1FvKqqxSmQjxWMFeLqak3j49/d6jMbiqbPPnXa9Ym41ZQ/GOoOm1aUckxMO9jHkQ/Q8QK+BhB6TtFotXHVToKoGV0kGE7mLMj5bfllIVVeG82SIKdoVRUTq00VH/DxK0vnNCwkk3onjZxCMQUOq1c0dST8T00gi9AYIylAGKQtM6hM3Fi5UDqrbtfqTTi1tCMhcHaay2ntjiO04AvzQoIRo64Ec70MA12/0L2xYqFa7E9ZJNoO5XVWeV0pzt7aSUFUDMrwe7/sesxPPImmdKMP7iJTkPdHfZhwB13nQxrZr0CT1qNHaVPkM648UgIQJp7V7R7ri7mVNNdTt9W0cuunG6ps4KNNWUTa2twzG0RYHHMrTjXOFRMXGkdHZ5l9JmqTlys3JVpmnC72JsXue1Oj52uq+eU3KxRcDh0sy/p/w2G/pY744YKMJThdTh0jGGAxPDNCKmQziZyq5n/4hQnbOnqvM3LYLIMJwxQugWJTxzV/fsBJo7GY85kHPFz4+qAWpcw2B+soOMm5f2eNdsZ28kgTONTjKv3dDtip02PrGzJfb1AccHhnuQyVVltqw0STDRi7GUyLNeJ2gw+n23etdC9AAxAup2BTP6xPJUyxK11YU7B4dfxfvwvwOQOferpe01cLtJ6u/h6CqB8U27u2z5NViOFtlit2961LcAhv046T4MTVaWGH1uyqwjcwbJPGeHOD2JfeZr3cRcYnmSVGf2gcNIkNWFygkX8SVSked0J0d/0c6ZDj9SoFDF6gWZCnGvGMhGkasWL1UBf5NZEDTn2Six1UJQBXwINUyHRTH8NxQUqo50SNnulYhTxw7+HVPUaSOBhuMgNgzv2CRMcwRQ+0ermDIoY3p8LzCngNnHGsfXtvPFFzVTVIkd8JxvHYFpmqPZPDBteOgF1yGmQvEJ/vs6/WWVlsvvgRKB0NetJwG8++D0lLsaF76KGqyrSwBA9zz2vQrnd/qYI4GLTtbuu/ekxXNPZf+s7UfVg9rVbSS3SbaVakPWCsMdYwT+Wj1BkwZkvQJkvf6jnfqjpfZjI53Gfak0mjQaofK+vx8lORDDyYTvpEqLUrK9MHGhJR19fDX5WcPwWcPwWcPw/1INw097J300paaVuOXgA9OkXIo9D9bTSZmRKWfMiHKMx1p6kcm9g4CCpS673tOeW5Suuo7pd4z/KF7o9gr98hC7DHMVNrtqr5yvl83XsRWRxSqtL7s5LxXiptbX2Z7d8o9B51eL2v7VwOHjFpVCR2rgWZvOV87bKr/TZsrDp20dB0kWbQ1yp+9SuSaAA00Jon05TdNF92C/KgGbTRF9422ZL/3dFugPMpPhGyyKuLHVNRZ9MGxHuWzM4YcfzHU+b1Ow8W6MPi0uyLil9bdk9Gl5VUaftvdl9Gl9aUafzW7OuDf3dX1Gn83v0OjjMZ7PObSL5ZKgEsblKhkGxuZhhuQmS+h4x9eF3MAVo74YlveVl3v0dJqj7lHy1c4tR4VkAmNEga38X8y6yh7jvqvpJKtxmuvAzNq19dY7ik7ZOwSHsVosQDJgp45sHnVhBnA8rM5Q6svPrPNn1vkz69yCda4SLZcd/gTcaRuudBNudBMudBvuczuu0z9TfSXhZufrZiznh7Ga98Vi4jOi2X31lbGOv6N/3rGt8cUzYihnRlcOEjfytDkfKlpjeqVM7lWx6r0UF5tMV+VV4NaK38rdQ1xeraDHt/Mht+qYUpM32bE1rBidq7o6ybrY4PfFY+e4wxE8OraPhDUDwj1s2gbv6eGIg9hbnVFDACH4BsO9EP+9XDlXb8IAWG1UCMPvPn9+ax9tlLrHVsfx4u7eYWDEiidPHtFf+Hh/D548eXjwu4OHjw4OHx8+eYLPDw4ef3X4u2j/3nsS+KxQkoyiTwHqt/ihO8nhcLKiAJBDZbKSXJT5FBjsIf/e2ZHnGl2QRucj9ZjzZe04mbdI4sbrILR0lryPnsSkmQhlg0hcnIgIphrb+FgJAp1GhO5Qhi60Dyi7+ShWkZLamdJHkZ24iNy7NgnE2Rnkx5xxYKNKgyLZDE6L0Orz3PjTKXc6jBGlTTRkXYbuwgzJiqY7vB7xNzvbJpyT+nnFNkm/oReugZAUgdX4kVrHlUPrbIbgY0YviDFDMTTlUYU7qLGJ+WHT2eiFZPOa3pnr8mQEOGbhFtq7x2FsNEEyVGhabfZKTmESYdE05mbKusYgxOsmWn/TGbJUIAD8yeIJz4C21bnW+RctdZf4anGKHadKzH5b1opQ2Zrp5s2nrunZmuvTzvrS01JolYHY+MEMUOoz09oHz7rMy/XorCPQBgKtc14zT2LGYM8Tq4uijztbWvbWYJVuhGeHNTEgR96QaocUG/azwIzWYr/KxUo6HR05VUFC1PQaYsuVd4J+R3Ec962M0DH9RvmLvr7/4GVD13lrIqoBLZy3xxvi+vXIsuJTZZxNyM+sTWh1yGw9tJpz1ssh4srG06mm5hMdNGYU/0Ax2lKc42U0bmCLbv4LNrAilL7+rxb9PgQPQgu51XJbM9xmv4datPc/NAkERTzS6wgJv+7a67UBIbGrrScEk+qeNfzURzg+gcGJ
JOYssplOtXr6Iqhj91XSDZtLJaIzbLgBxMTWUQ5Q6ZmDhH03wLBdR/F7ikOVchoUDnyEALlfFIIquUUFpBjD6hjIhFlWu9iaaTy6pfgJKkrKRaoTSUsEh1fPf+Q5nt8DhiNhsVY6QOnst/WoHyYKLqVrxW5YHfIonY2QQUrnVFMr24bSle6gsnmZjlBoKa+zxXA5LTEkQDa562IU+xvYP5tuqdOUnY05mAyydwxggAAGAGDAAGiawjjblTce4iqfUYqNpCq5Mb6405ZGPGWrt7WdsKjtGsJg8a4UXjkfQzt0neP0qx8i0p1S11fKaqrvK6O3ph5qEcSmNIxGH4Tl1paiBtdLiZZMCLg3UJS8XgqzITflrhnUreqxHeNlWQgq95ShfYODbUU2rEqCso0YvZ2JIyo3oizLir5tfCD99Solxlz7wGDM/Jtkmo1tKmoyFyeLDHB8vMgzFXFqlgBVxfD7y5wvXjgpcaGyTrH/jd2Y6m1GVyl5NM6JyvejQif+JLQzVfjm1FSLZhgH+yKVWRnzQeXJF/HnjbXJxnqAlFQinTOa89AkSBfa74q5fOjUNsE2bdIG00QHBgFoexAwIXUQ2dmhUdMYfspvKXRMtuTcO2h+TKF+eRzqcryWPvP56fE9G3WfIwtv3v8gm2mm2uzxIU71xhvdYfMDm1FWsMu3+b3603LDbQXztKBwsTX7KmG4sossHkKueRs33/3sNc0CY/+RAaQYSlDt7eP9P0hyDmNsYMtJ42zMvhQ5u5o1cLTMqWXpdHwvfOVHIgqap1TvidmvthIQfzwRilqhuTt2GkNXAzi6a+epo7zR1EhJHNCje8BxnrFdNDYgBbZeL1g7dLoQnh5ogBE9hC/lCvHFk0dyu4YttWVdm3ZjQn+35F7X7EnKDDAneXCGcVcUX1+nyVR7k/sk27JhyyPKM/fSk6AAVxjFUR1sanYxjRm1L7N+kU6wN2zwQTS3uTtrSUUymWxIKRBqDZG4Z9Iw30v+HbbtA5pYayhqjvlcQ16Nwoeh3CzpP2rOc0otAlOLxaXp2kMTl4Hsl1ZpHH2P7Y8S2sz0Jpnm88sygw2ZZhxcjIhoifKwtAzdAaaeuE4KE6sX1YykL3lKyUEPTW5G10fYvMOx+D2T5gllRtM0KYKjtjaEONLGzBffpj4vsSTehCNsSuNlMkkBmRYgkuhkaTx6RVY2Z9yb6CpvwSePhkJcDUnjjapokcMScY+dTsJ0r0S9Am9RI9JSvnoQXS2Xi/Job+8S1nN1EY/y2Z4Jn0dfR8vpXlaWK/j5eP/gAX2VOLWDR/tPvv7D/uGTg0a5rSmLqCKA8bv3ccNJokQyezYbZDE1px8krH2i+1/n/p/I3f0bADTf/z8+hIX17v8P4fHn+/9P8dnw/l+YcG0OUO7sUAsYhsyOrqF+h9/id6BCsENEzkNDJSK//J4kfWk5LtJyNV2qV6/p184OxjAGKQyDeerEdkjrLdtjc2GIYTHQZhS923eePf/u5Ocf3gxPT78ffv/y9M1PJz8+5wRHcFSm85vu7stXz386/f7Fd2+GT3948fynN8NXf3vz/cufhqGau4rLtF/+fPr89XbNqprBZl+9fP1G3Ik2bBZrQpO7h4e7PbfNk5/fvByePHu2eVdVTWyXJONdrafCk6G7s3uXlvgSjzr6i//c4T8Hu6YTP7w8eTY8/dvpm+c/0qQO//z8b6eb9SbYhIa8Ua92yNEfTo1+VK+ls0yRKco6GfijiUA+Qu4jHwmm/emHl9+e/DA8/fOLV8M3P5wO//L89Yvv/tZubF6lLSd5Bz28b7Iin88oqG5SZKRVtjOFmFyQV3C+I4tlWfno5LuckliZ3QvHTk/Er2fnxxNYgNfDNy9+fP7y5w3w1K0HPR8caGMQJ2LAkS1cqdMKtvbo+mxwgNdroylaMz+V8rkd5oZsXoZoOj4cYhjfiRVrBX/GwN5zeA7Hy5he5aMhBkEMvrOYsdoyySIbropp8N0S+KN58A0wX8iR1DfrZTmsFpjml9P0Jp0OmUELFfFUTuEuSooBNIAFOn4ZLmXlIoD3pLrymhFHLX0WBKd6wbh2bGWEwM8DZc8PE70qjDeDGYgdYSUwTnitAj9UGtclJKrJ4WH1lRUnI9y6FeYiXED5jB5HwOJUXztW4OEZxGLBADhQHHlGe7p09KAyXxUjvBUeC4uOJkJJQXpxEuBU1H1KjCB7x4U7wYQqQ8pa6Q0OD1x6i7RI6edoR/oxmEHqWE1TIgpcCqgCP+tSA2f75xXHP6mDriFd/h4P2SdgSJkRTNBeVmg4JezQq6Zk3OkFExG8MTNlzc9FahLpwuQwhP+McNNIHgnseqW5kURucWNdVKfSmgp6jo9rZuMCFul6x6JkKSofqqQMZg0j3jvEEb0mvZycLsWrklJ7JPqNMi4jkC6GYKBGtuAvHTt4Fd8DypjOYzhIcgbxO/8gOhULedrtGNKMHIFg/lfWXuBspPaW97xJff9o3U07Ggx2wrPY56OsWhe3dnX6qhRHj9CdDm+USu+DH2A18OIL+8JB7LT/FHvkgETOwYRUdi8McgwHeIbTY+kknTS4PhA+TfSk94ITqYjjkR8gZuvQeaaJDwqhpzsbCqVnYAQUCaqo+SJ6hIOvHrt6hEMUPR89dtUIOuzNAvPuLONX9Cf7NS3i18+BwRx++7c3z5FRXeS33cN+9MjzXF5b/9XJ0z8/f+O1sAbPgrHSvKk45Sw7tzjbyZhTzNH6lv/lU9jmY2X9LmoVjs3uEXCpFL8sWs059R3Wqu/WujBuXnfuIZZbhU74UaMqO0biRzlMhBVIqsJBtDP41i5oFQajr6NE+azFNsGd7Ghu/tiborm1CArlRWDzY0DJOfZWc+R98onsszqxT/muUtSW9ir2nHIeLfJFt1dP5jQUHIfw4IoYV2ig4tElH5RrXIUf6yhThSuNyInaqg0uG9t9q5ybarii4KCCbPPmDo6EiNqhsYjRqlNU9L6Gxb1aOyiG6YzHFn1qh+XIR636Zte4r0E6XV07VqcHzpBFxgzwc44I2qp7Uvi+hqi6tnZ0Cq4zME9ArhugL0e36qRX6b4G7Hd57cD9fjgTYHk9N3CCRwVFsdPHerhUlfVtNzgrwN49zVEwZKlz7+3PgQ4kWYMBjpTdelSqxn2Oywp52WpkOoBlw8i0eqD1yJzIEPc0MivUZquRKR6naWRardJ6ZKrGfY5M97TFyFRyk5pBualSWvXFqmFJs8ZGGptBF2pMoteXtJ+jq3R0HcmQrRQtVgN2PkU2VEymt8ldyapnugjnoCq+eXyAqyGjI1SlfMhMq5lbSxHVWNwzjpV7wuwhW7/MlSWL3XV+U7Vxx4+8O8ZAr/pFcnlZpBivIzqWArYqzArci+bS+cIEnEkoEox0WoJllHZXyHrETILXFwVWpeGpHIjcoJUd0vGP5O69VHmu+9Y4yEAWzVeXaLEjStECLyPK7Cad3tVpc2J7nmWGdatOoiHU9BXZxUry3SsYJhUs3511JewJmW2iKuySTRpV58qqOkGaWjNXdtFqEAFd3MUgpdG
u27qexrsVert17osm6Z6u3SqqpEd9PVvQOgLs6e/b0WC30r2RYa/La0fu98MlFiv4b0KibTPf5Dsdze+UFlsli5gm2awUXSLdPEcY0kLXYDNhqdOTIFSYI7HAJKNk5yuqe32hXGLW4qxIx8rErK9jeInVrhsHXwfD4b338xydjKKZFfxKYuuUWrpXD0htZl2DkX4M8y7nslpWu29w51J25VJnUyj3dGQvvMJb4DU/WuCRHaCCdTHN2SpJJdVA9yyOaW7aH/kRvPWbeX4L79TFTrxajuCJ9Z6jWI7qMQpPyuoFkVWcTi4E802gYF0MQnGkGTmvaRSCtj6CisF1lWOYZfNhkc6SbE73YBxqcCO0JKfm1eyCjdZUaNBEhzRDq6lSQlxnFGctKpNlVlrbUxuceejI2UwRgIv3bEpMlonwDoaQzVaz/4yyOI0j6PsIcE0hvVnkE1UQu3kgHaUEAsx5aItq3CaVjUFdLGVT1Lw1+q8uWbXSTuFsT9x0/P+WbYNYhevg36qt204UVwwRUY3Nq//h2y2Ypbz99rO21IG//wiTp3i7p0Yf5PMCc9QN9HmAverFy3yZTPW+rGaHqJm1mp61BxT9UXcw3P1WQO9nnA5hs890e6Ideqpf/DE6CJ7wsHj+ACqcmmqkrqgioWzF1YpmvnB5UJUQDpNeaPJzcRf9gx3Kxeig2/uHPoR1k0j42GAskuCWTBqvoCmqmHL6+UxMuT0Dnthizr1xM87WkqqnJNypfEycoUC5Hmcq9zqcNkQ6ErotQF8U6Atq7B1BzkrrFDbH6MsQe/VcWqVOg0RSEZsfRCdjzLpALqooI+D1H3BIqlEppE+jdG4l/Asb01OTVtjDtAA0IisEOlZsWxKb55rDvOFKqTOC71ovM7LJ5d4p4ZJ+2DcHcr+xIZG0erI+0rGuVZnuID21FjZQpWZV9XiqNTCxk5oFGX+AvvnL3dCgbmUNnVEOLkK1ZAlUoPPwZtfeKnrrqUOVDfYtEdT1HdNMlG5KMVNijc6WrMaBkM82QqAR3ytrzw1iEvL59M66SZfAsRJNmkPmVvg2ahzPQwoim7BFK2IrB4V3aYgKl+sERVWfymjxK/oX2HyQHaY+OJuZxVKSioe/h+0+AoZYFV5DAmuTjW5XZTRxQteHsanR0su4Qnk3Bl38p86dSWlnJiuMzJosgXxLyAiY6IuMQppaAYH1hLJRMU8i86MS6pJ1+vaEs4FJn+vR9YOiTMjVuV52NZcedDePqv/KGP3bBGusa+4R6ttUN4bwX01TxuYQLyqrLShNHnkyhJtQikfK8BdaPb7gu6mpriwbbwI1tXakrrIqUFPfVzWMc3pU01jVyFHKS8sX02R+XRuazgpnQgU14lRTUaLVTop2KkmRTe8wkPY0QYsy+IaO6yZ+VcJ6W/JQmukAIT+CYMPOZIDrhMjj9GJ1eUnmMUV+CYSp9M1mEGRysQfHYnKB8cqS4hIAzvCcxG2CZn8e/sLwnzqiiNLOKJNbExDWWD0wASNDB9+s3LJ1CBmy99cFo3eD4YYMzPtR0M7kuNH8uwpoVI1J74cwwF3PUc69U4eMMskHWUIxY3SdFca2UxI8E3r/MsR4HhIAbDQN2WNv4nHgEifJ8hduVtzisCewtNmcPSCt04aaIjNQoI8peU1qbYLi3ExcbORVy3I10yFvVEZnKpLPZUvApKjiMEeO+6Q2M6ODVc8WoPqLVyrIcBw9Uw4dSNc/eLqyicery4mwtHuQ6cya6OoVdRCn/5viTrDPaMnh49W4KGWnnikszK+WyTXl5hylaBCXUtBwVw+OycuKyxWNZy88OgpQX4nyXQlE785TWy8QGs7cmpE2oew3A2W2fhjURsHwtwxUv/nkGGpDvSZnC6fbYRs53SPf2M7tQTOtYpDk9WpDHLWNsG+bf4YJPT3KXALlyjyWeXxoG1VbQOzv/HcHaaKxHtXE1rV5k98WDPVVZb/9b9hoB3wguWxlbc0aWMjyKKOAHfNabNx0BoeRb7XvIN/IM9n3VsIq49vtO7+tcnWG+8HnOw0HMzGRwnX6PLOcNqUIQsww6+PM8MvKol1xSkgOV3PLTZ4sdIkzRgdelP55XoDolWzMSVfQ0f95dfLmewdnpWdH0XcrdClS8Jk9J/Kmu7ER7o4sNxzbwMmbJGMi1xUWWHgX2/RKHnnsdiiownNlbp3aWcYzddJlgBUFC3wgUYp+hCrKaVBgJDiXlXi5SOen5Ft18uqF5B6MuoOBfFOHQylK+adKrSKM5jwFVBlzdAXShGczOI8pl6++MFZWB0oRpzvETAB1yALO3WSlmlbz+DllcCqZYMNwK71V54RdyZteq7JEFAhXtkOoqAln5V0cPeU5EEd5Dl9vyWc1DO4o4KUVMh0bWZ5atonnyPfFqhgSekgoglESjNPxYSglHoceTrGsJblw1JXSFJea5IzBgAo4qwO9s1aEG/h0C6KFw2QamsCAdHpP0zdbJKOlDjFs70aBGWuDG6XQ5C63331e/eDuU2VIbKzdduGsTbqjWrEkF/ZqMlTjVSXGh6xleCnFRsY9/M1q4q8Hwi1nJavLimwR/bJKpoi6qFWbkPpK43K3I/3fAzZu2YkG30Qd+qZwx/N7tJiI3b3dnvZf8vekuifgeR5esZW6HAV0d7HFbuXLA0Q5pRzHSaealM6Hrxi6kkIokbIh/TYIDTdI9llOlLcYRYfaKDz6D1uWYNsIvkgLlF0k0i5pGbgDdLVgC3xW17hnSjhVueLHaTlK52OMxRfqK4+dbj9L9xqAFKMqfZLST9D1quyOSsxLsxR08WPxIJTfLEoTlii5rJJo9VhFnuVbXz06VvSaiMMnXFkw3agEndBAOJ8n1ipa5lgYsIhWT0X8oSb4YmocvGGS1ULxhK7gqSvmHswVLBD5/NGjeMlX0rhpxFU74bJC6Vm218pphYE4zUJVosvshgQLdwGV1Qo1YHYe0C4un0hIF0LaJSxPtE8dolmh5hFvJLuV6UDOV5s25e1rGZk7zu1qKZp7Kop7ZxoD9EqQ2bsWpLG4iCTKF3d0fUIKawk4QDLldIUBe2uIq5XIRlZbxUGisgVh9NDb1tZT9Ca57fMokn+NR7nQ/4LzxunQO8/fotcoMuXuqFklo1q1svoErnaQw3FqBwWvUDWeqG5HvesI8ewFZRaLevgK1ldM8PDKBDvvLq5csajLPUVeNB1Qtz2ZqanWhc05slL0qHz8XhFW+9pRFfztMhdSxYRxXk7SopAoRpSnkbIY7UmGpZevuT1kATG6merRDHU/o6QQwYa4yzFm38P0awiJA1Cz+IQIwJ7GNBIh8h/rlB557vnkLR5YLmU5+gVGT9qa8dJ3X3qjo2wt84mcTnGRQWcKDvelUxpaS0eslejBMwzMXa4KMp0Gos+Y+jaZLaZ0ZYMu7XothJPF4B1J2WdRg0190+WoH2WldIVEUjhZFFaZTseMx39G6dyIuhQVgzSD3FfOatDX2nhW+eHkXt0tADlKToVATb0D/gU71TmKOpj3IO5E7yMBY5s0SUxU2QD9SAdDwONzPKbg2E
Tt8rnbZXUopkCIx2Nz/i9vc92Iod9KgrjI8QYiJXMtXruFDqugdyHV0CbAJHQg14UpJ825kahNTFc20N5JZGxpc2ZSVWJF3vPG4pj3OUf+46WRmXkDXCKFF5gml+VR9BzPJfpOAdqbgrPw0eudbUmJnT6mfh1FDdWPlzqSguJSCuwAjglw9lKnhMEQlKVRKhPCZJyeyEJ8AEDe0cIBdAYd4nY5Pew0XXKge91KZ8AFZkBjs4UpQuRiihead0q9LC0mgnEybR9FWjNROZTVPV1O0U0WTo51ZKitLFbe4+qx8CB6zpv3iAeg7wLfAsF5ED2lcBRIYubprcW2CidnVhTqMcLFRLoqV4r3IzCqa+VwBxi+/ra1yOg3EJQZdaFmofGG116XFs/77v7gD72Pd75UYs3cBIUwc2XL4e051t19LhglNL3Kodt73HkEGhHQTFF4a7a4G8R8S3TnFmGfzBniBXz6WLNbvcFGgsB9D86ztsSxbXA+eILJBUkbaxkx0+wJkMuyHJDdN1GubggSDEi4k9/T5M6KgZ7OgcSMUmO/obgsvHXF0xh5RpRMPEWY5EJ1I1Np5KAryyK/dXFgjbGPinWlxCtmCsiwBf0IPt6iOyZVlumNtdgc8eoNmbr8gLE1VCMUaKNt7KuqLcsGwddePh3iJQBGHMtHuz0y7qNLYkKhUt8SIMUSQGHoVSXtBr14+vKn7178SfXE9b5woJgLgs0AnLx6MTx9/vovz183t28H9MK5eJanYgqwWtAdVPiifv5PE/llnJWjfFWAdMvKDR1Ff0G7cZSGIXtK6k0m72T49PnrNy1mz9gEbdD8q9cv/+f50zfNLTs8RrCEZRO0GXr+8PJPPzz/y/MfmjtQpbGbxEOsROuzQo/Tns3zpWUE7u51v4Cz9d3weI4P33OUi1CNgjGFxUEDTZuBIZ+mCZlkAM+REkFDLVIGbBdH/ONgx0A8Ku6mTvwndhej5o+jM7uL58TJnVBaRCQ+pWVKkniKQhBPHmhlGamluEmgsU8dUomiA5ZNpNHIRDtiAYhSmavGV8hFQ+kO9rtjjKlKZbGBMJgnWJVXe5dA96V9qSEKTGDPqzXyxV55BTLndbxjVi1AZz9ZnN6P9THxf9URcf8BgJvj/+4fHB48rsT/ffzoc/zfT/FpG/+XY/FqZpFLfSGPMXI40s2L0n9D0cbVQ33Z9opS+T7XDFqgLKWM7Uc/cjgnKaG8s3wo2lfLf6F+z5OZaUU95BTm3kPmQFS78zKj41kmhX8i5TtlCV5iBk50/EGUNDDM0c4Qj4LTF3BGDIFydA7i/fjgYWdnZ45WGYY9oIBpUTkqssXSUfVTuUypm1kOzJgVF8qLdwMMtOSA8tklcsWc3J3EAPranZWX/eiLL65vLaUea5Xr1sOrQgCB4N2k84ys2uS2BUfNFxzK2cObA+0+SY+H8tgL5GpN1L87Lf13/Bj6T9vvY6R/X0P/Dw/3D77y878/Otz/TP8/xWfD+O9l9lbOAvgWS55EflUkIJVrmbSWtuhvKupHRTztR0R/+JpOLug9+kXM6WqRFuwTNxwS0OGwTyxrL9btQUsevw9PgP7Cv+5juRQ8FrDuS4aNZkP0xfQ7WUoQjYo3uFA3VAd3rTZ6W3tF2p7k7m2ll2PXXB+S9KjVIBLZIIYz9fKKct5VXRi93lszY425HOpRWZ1GG8rqaPXbs12Y8t1zFWbQnn8Vd0IF+q/zyIM2uIhpxlssNek2YpXLcJxcD2RzmAu7u/J892w3+tKg1JfR7vnf5/ioSBeCltx6T+8J4mn4Frf1LvjoiF/BbdVZ5ryoz11c2Nqebto10y+KrGs1ClsCd5SeAfxR3VPkx2mvsFfrOr2TKJJ2XeJ4rBXYfSF5gqBuOluQPQDeR0Lt7n+UFF1fYolGF0Da2O2Mzsjd6D+wlBPHEh77fdgUOLxMl+mW8GHmMjQh3njmnFr/opnz+/CJZg5vohboccKq/8Gd1auLPJ9W0duOE7K+jWk6r21ify2JUmSma2/Eb2k8vV27NtKbD6g+zm703NMV9Hq0QfP2DauUq4tNq8xW082qjJL5cIYOolLpC0CUsnb1tBC3zM3Nx5i9R1QQIJZI9SFPIQgsZ6odmVsgng617Gk7YM62NcwLvBfr3qARUplSxk8QJfFC/5h6I510ncKh9A9ZuaQme2ic5b7j55XR3YRbwlvramHdfrBr/gPlnVFpXE4Hr/HNGy6rNqTK+3MIW3xGeev0Is8SNA3qR2gcun5iuTSe9LJUTbvaKf+TZExEOFVOQYXB0MWx2E5gEVR3S4cwS71jYqdRtTLEWBFSmC5T3hCiLws2QlzNM/TptBkZf+QuF6MB8BeVPCbUQ55It380aL93+PA++iaN4x+3XxwOMZshKT9DUtyPKEu0041+NJnmyfLc4eoCE44NhWNjqLnhLuAaY85tNffwGH/SYJsW1MP+0Jx6RawuECs9zEokkHCGrkPq8M6p7p5atA703yMNof57Rez+46tP1f8qM/CMQmmi1x0lL8IUnPkMzeGj/+BQCP9R4plPudTtjdXrR/oRLXBPpxzSlCab0+D0kNI1RIaUgMR8UAVnRkPkK9xiZYaCk+Ib8KkzjQaw2ZL4Q+C0jqkdb18MVjeamXbjMFlu7IHJOGq2Bv9sMZJr4GVxMFwhRi6z7LqY3kyf8NbrWlMm9XRmiClmIL7WSupe7YQ5q97qRFy7FZzpYonNnOayX+sES0IQxZysmUj8sFSnm28Q6JTk57VIPlPuI3uunP7Uy//UOFB/TELjVOltLNM0y6mmCatye5mluXFTd50MB+xo+tZTWjQ2zQIH5vuhqu7s3eAx3ljdwGfITvUK0+fznx6u3lSAt+GGa/CnV8UDe5Wou7BOvdCmubFvzK1gu+q+hEL2TKILDPZZekZDVjMbTp299YKZmpBAZUidSG3a3e+3W1tvwu8wDTyVOcvOLaDI89DMbaxa1LscTWaTaEHaW9qhVj08N4/ch3bDeMF1du4MlU4V7EslyJaFVanCKk/aSW1JqIpxSF7S2BqziwdTlbjKQiNZ1Gm98EgEBjCV0l3XTODJvLxF16xbFZQA9a1kH0xeVHzuKLc9y3AUSui4HdrnUsheQq9N9JJsGVEUGc6aThYdOJ/M5mQlrM1iVSzyMrVskZ/zsaNDWmgY1Dcxw9PTVfJg4Q8a9ZJVc/RCpZ8uye1wns8Hdnl26CGWWQNlw2cyEUTjZZhLtHZDuzeaBuqMY5gih6Ny/xFfnIKnxyCb8pGylkMsXDOOyaI4PryNNKPaw/6wewD1iYKP+rA6pQ+tsiUKjCM94eyeEmlFza66qsUJJdSJuhRcFTtjBt0zFVRwbRemF8gLkcDCfWfg/Yh+wwKsyPvB7ApvW6io57fo0octismqYFdE2I8+mYJT7oaxQeJWdh6cV0JA+gxnYANtQuodbXmz5pklK80HaFf/VvxMHS+zDTNjL6DTpXpuxuZQnSpVRtVqvKZP4Zh+Yf7VHt7Z9XnLkzmwTmZJavT2NTIlvmVZnqPqkDRf9ionA5Zz0jgOacmGw
07vCFEcPWwi9ayWv9TISsLjidLlkQBZid3Yenqxb+hEh110VBeqoRCKofVMAs3CPHn6efysY9BQ2KBqbfiyqr6w6QTdhGfbkEUTvOA1r+PQ1INaJYAyA3L5/PW3PjA78Px+kMhhBL2VNZ2pSAybYBb6VR1Tjys4VWF6a0QTX4jwrlOsO0172/qiyAZCVHUm7A5WEIiTu7U+DTYQu8wkw1FEDelmkK6vm+nGUdUKbdsz2lUmG0+BCpPtPLQbHbum0o7CA0grSh18kHgdk6n6cKKB9OGmnu0e89ly4+PeuJ7nvkmmm3DaPIPUQEoB3D1OyzC+FXaSop+c0OTioLklsn823F1qs48XyHaDhOgnBEb3s/plbmAe75FprFEl6zndhPn6V5sjff584o+x/2Mb2Y9hANhs//fVo8df7fv2f0/2P9t/f5LPNvZ/DzTpnHBkosE3GE/sdcrm0DEUuGZ3aHY1L6+gqnjMU1Q6+b5awFE7gu/T6M8ZemFQ2Co8j0nxcA3PoCkmtcmCAgMYLxIT+H2RoR0dgcbi+Tg/4mAM4kXy/9EuI+iC9187To3hNM+vVwvxbcaRKQdva0SlzhsA4t9Fms419P/cFB7F/MPLEyPoWoDqBV0Oo3hZ5KtFn2aGH5UL1A31rSkWIZjjeiBoMbuuSsF4BlhBlNwXCAI5pGw+dl9QF5T9HTqEoQE4DZF4u650sQK/VwXMnRfw/MPlmfWgwhZ61ntSOLoQnLfmhwOBOhu4ZUAsVQGmOu/ex+/ed2IM/5BwTDJZiLWR0O1mTEDK/0bHg2zEVvQ2K+lPpTDL1al0mF6OaeDzd5VKQXVCZ4/idwaaUB+13FREBevc6/TO9s+D5YUZoWpVTW1t+gE760TguZubzUlRseNmgbC2Wtf+IXPm7kWlGXYKVssJjTizn5m1PZdwgZoGtauPO+sDqiq5JdgECgMa4ylMg13d7AWzFkGgumAFyI727XDmc4g9K/1IPyaAuiKtHfRApChFwv3mI2yMqA074N3mxXWPfpccVJe87RWvMuBYqAMWnOLo9YrvazjqhEdvJ1mBml9ho3WiLXQyf4vBAMmNxc2hqTl57PAfWUv1DQofwKSTzzqrRknLT33uqoB2dDTxnkHG/ybJKBpTT08IfeFxIQ1VLqDUCavbuGjuOMxaUW3M9tENI6Q2b1zNl9k0ulouF+XR3t7F6vJXmNokLtIxyDXxKJ/twQrfDuFFPLrM/isbHx88+frRw4MDmLO3Eq9nNVdr5fTXPFahPnBwdJeFYctU8gtyXM6Wd7FeujjLdzA+L4dGWt7Bwb6A2S1jfOgWgxZJQ7jo9oTw9NxwIjwR42wygYVBidX0qqcDUAHRnma/poSaXfzHw86fVAkWy2k5Jdq3Fh8pAIG1o9A8aboqULTiyBwOy4Kt2vnMou/wAoyDnkSdRc4xDRf5uBNHnefz8SLPQNDjp6n+6cZShXb51sPuH2CxHp+Lt07aOVVkbFeOqxgpx72ec0cH9YBT80woqq/BdqoHg1Uxy1V2W44Tj1cQZJxzQWFseGZg+8d06tCgfKJOM2gg6iXJdIIX5Dz7KqXrWB0YqOmjcvMgLTNwHkQ/MMfHUfNxHyPbKeD5diiZcmMOW6TE7BClJGLu0OUdu1cxLCurH3fLXZsFg4VYTZNiaE3/2ZHtgS3BgkyhteNb11WnuVCfad5vZdonBBPm+5+UfphXVce8x7VF3jihkE53siIqWpuzooHdWNJ29KOT/FxSwAS7oIPouBHZ+VIvEV/9qcNFPy4rW4jiOElt2UNOVX6mjgezceLw7jIVw3vMCxKoJtZEHSSPfNWVPloVlRj1QcJNUuMSfUSYQzW1rgKPhtZ4o6IOHELUUjhlq9P6vp7UiGpzWmeONRfCM/mU1aPNpoKsPZYairKue30AQswhKoUt08LrAwps4zZ7oLGSabCzeBjPrchv4CwZe3eaibFO8OZKQ22eMF2MZk3/OjcGqteHdEqbd3btNDCUw9SxEMPhHlNRbOzAvqTAh18imSSVLPxyXh7Iu3rLQHshbNPAX3Bb6EX4ZQ6r8MvcXwZ4XO08PLznhbAAN66EVQ6XwvopazFkefiABWLsOzELw8kvc6vbuGJQl5bMasPigLmZQ27m0Gvm0LvOEVDHUhhPEBfBuUfU4jYLRQfsHbFzJHFif39h0XOZDwNszHcknVLcNKGI5pCmJ9SlPeIP1croEkwOrBjTVkQljmFPx4RNTp1OWUGqyRMdQ2hCf+zy0muLicEuYP51l7ae+P3GcH9Hke68y7YYEdbtj6Fb9mORwvVvW5z1wkaD0L9nCf3S+75bXzGZBlHgv76KcjS0MifpGEfQiPdI6z5C4adPseUywHfxknZ05SM9QR1ZXqyHpjHQ6HzpRxbFgJBmQJEwuLo4qdr4ypU5XMvFVOVj00ZFbkxfkyCYE+66RyAnUR6TQwLMFeMdHkyV2dDLe0TLqytaS1sO2aTZblR/V2t7ZPKKqK3L9dxNSZ2QNw5HbK+ch3KBPhENaNGrvUqvdE3vntIwhlbfBJntSVN9C00Ul9Zv3HPawQQTf1+rooxwn6+Wi9Wyy3+UeYda2ReUKcSLxi0xE7kG0pZ/YATfRTbQjf6DSKeKTcqiZJ2c3+Ewcjpj5AoHg6cJBV7NKflHpAVo1vztiZBY1ybifFqqUJoXqwy9gRkvxXbPUiNiVzG7Qj5XsgaTBxB14QiTSOeUSWqZzjkuforzmBR3bo4Umg9Jv9AwN+6e/WnvxBPaHqAjuCQYuZzmFwnehi5K7grw5LcpXTMW6SJfAMu/JM6dqbIUD0kItUooc+Ecql0G9GuWrhtfYkYKfMZjdmV7eqdiZ+NJjxfYXXrKD6+AmNBmmmPcpWzZc27uR/l0NZsPFzlHEMO5oang5+poUtNGUfrwVhboZAAlR1eoxxirGNQ4/YhhJ69e/On1y59fYS34LmFLJGrxKKGwnNAEht7MRgiNgVHEQVqjFy/J6gjIJAZ9oQCIbzMrIOj0Lo6ib1NqiiM83qTFHQXxSu44k4OKtIKrWS73Ljla6lJAiapXsOkfaIglSS1Rp0ACdIYsxIvOjCJqQWMqLB2MiVXG9o6lCfj25ZvvVctyjlSUyzDpxpUDUUDWQ6nQ4REpxTrm2DaVOYUULa8hWiEQ2uulHopqcsfGCn1FQOclkN6O2dTwy2sJXpv7CCyMe71jMf421GzuADH9Z0zkUcUoRXetcg7lp5LH0cDPwU32aoIy3c7PFOwTl4lEcm7sCFGd1bsaf2XtNOtigY3pwq2Lvk4Kitk3Z1bBc8qUZbnmo6ZX3vPVBG7Mvj30nq1h+edqtsCussZUUk3RRmQ5Rijp4i5aLVR2KpApFfeJjRsdCeXXPaaHZzXdPTq3gLPWSKWyoGXU+0B6AfUzptqcMIceXpFzr3fVgPoX7IBjQhbVyF6djtWN3LgacP4TdjNeptIjGC5uc8rURRpq4aNSWmDdEDxFUYT7wErQs/1zRTt95RTm09FHQ8LJzZnqcjYNynlIOEmlOAk2bivL47Pw0lVTHiGL7PqgZZqQXmHR
6v0O3oZm85V/sYP7w7r67FbqEf9cg3m8k3vVLJ9EZupqeTs9UJ0Y9DqYRAgClQy5OO4siZo19VoIiz6/qu1Zl7l1DVkETCnHO/1Q36r3wJUnbiXfRDN0sVakdhZocpmJTpf54gUGDF5WLhsv4JS6Jo970iIu0HluHWMYDQYUUA5n4B86YZ0OmNpl2wKKuE/4/Psemh28WFqh13PJvZIL2VktYkUFMIIvhaJfYTYXllvEzYJC6Dwg9oAUgYNBkdzS2bJ3c4DHAXwr9zCOP/6OY84scLFC1wZK/j1meoY3WHRcrODQdDI32+wR8nKUpbTxc/r9y9eUhfFUHmhWpPlDVV6dPH3+DH78+cVPz3aMREjB76AP/mdUfdTqQ/Ev1I+nCswpx9jjqHrEkdV8RrPtwJKNnQGLYH5MFjv6HqW2Yrq4B4D69maHszc1VUxv7gMggtmhjCt02VYHkUpssZIOsB+wkdcU1ksTrto25/eBNz9p8X+ej+thMcD8PgBiwIIF0kGgSfPlDZLadDRNspkDfHEz2gqWM52vNJi/EJinCKYCvDroxZaI4wzUB76zyMdrVmyx5QS7g87HCApj9ZDMsl2TbUG9ETA7SM4l3j+yH0U+naL9sPUptltRF+RrA+apBrOjCPwvq3yZBEdMbz4cOIP5/2JjO2U6KtJttvxmQE8JzA6GUM9GaTIaYTKlENRym/FVoRGYEwajoDYMsryPjSpQd2arJWWxu00vrvL8mg+xVSHJjYKfZEx603zOjBOXja+/LuPM3k3O1vxRwPyVwTy1wexQsKvNerFNH/6iwQR7MQK+LZ8pvLaC0erWRsW4D/+ZB8DpkK8+qw0q0IO9eEpgFF4/02B2kOdau/beLCyy5vEH4ANbpdbeUI0ivcnK+hW3IdazN9bH51gEzGsBszNO0hlAa9jLFcK9DeRnBOYU9vIYiFh+N2vgX7jEhwLUYBR1bhhjcR9DFOqMY6To8JPVtB5k6b/YBuKpgHGnlbdstbGxT6sQpGMD1QKkmVbesjtiyQZYlV5kpBtfj7mrJciT2a+8U2o74O5VBvMawHzLYGzIbbfpB0LesUYJ1GlZZKMG4rwRSBeTzChfGzA7m8zx/UDf2WRyPxDkDtVCn9UpcHCY2L4Eqdhho64W3imvSqFg3vBxYH2vwQAHd6LBKHQKAubPyOcxLPBN+yiEThbgWTK6yuZpA+DZloCdcf/IYOwRF/n8n/lFw/qO/uk9uCAfsrUf99ABMP+TX+w0gwp+tgGHoPBmbVxHB7llnxhSlfXU0AH1LdYROkj1Nxvf1iB3MPMOZxhLy+xyTuTol1VqHSqjsnBbsavU80cerpo6pwzmNYMB1Ekl2WjpQ677jKb5amzq1Y3bQx0NRkFG3mx0lYYk10bITLLLWHx6AuAdyMibEZidxYpywiDR5by02VrQ2wB8pcGcSvrbnbK82hCjtoJ8evr9M8XyBohPPSxC/LX4G2J5gfjgMYFLO2qSb+4BoANmm126DVTZpUzqoSYsaB6a2ZHf1DbA5ER5KWAUWHWH+rHGKGD/Itk3oIlyQ2ZhK6gMZmc8D2mT7x3cs59OdyZpgt6Klxvok7YB9R2D+RPqk7JZUq9svT+QLxDMTjafFAnQttUI4X80fHnhgNkRIrXJKLeDSmB25ukSr0g+Nnr+xGB2ciRvH38BXyJ521EE5mrVmsHaCpiA+X51sSOZ+T72dL5iMAju7fpj9z7AvYVjF859zCD0kY/CUwVGkc7RNBvnt/Npnqw/oaTKOogh0vn0hxfPBIyCjJoztHHDnIfZvHmTfADk5wLmh/zyBwCjoK8FeT/QbZDzfKkT9K5l5rYH+ZMFRoG+S2bTkpyz1o/5A0D/7eTHH04JzA7GwNv4MMmLMabdbVBlRv71HYIxN6HT1tpTSkCKpmcNkHxg6ib0FMFscxvKVZohVoDSbej6g6uijTCa6aY+usDUwTXBzGsoiYSvdkIfrCIK5Vjz9Ws08t9BHRJ4EiC2WY5elJT+tM0txHYQXwmYHxCMq/+/Sqez0VVSLNGWtMzgzFlL/LFKC12m04Pvoc5TBPNagbnbhvOiKuthhzgvAgajTGY1cLOSXmwMzEUlrHNKYCpwa4aa+c8/EK4NcpkElTJZCS/uEeSb5JKh1sATIB7Q7aEivEaVU00HsK5cHZHjMvFYlQ74NB5VTlR1Uazm7VmVbaDR6F4RGBS7inSUFxvI0EobUQuMP+4tw0+nrwmM4tsHyXKZjK7w3mHg3QdCCXmLb+ABkp3RPIP/RpMG4u4AFL79REOxbgNFP3uVJtPlFZDIUYVbmV2N+vCfeSxV1qFRSD/7PYF5imAU5E3w6QMga3Ab3Ot/ADi8tJIQlWlRYPCAWVrdP6NlMZrYO1QgOudSALy/XxjMawajrq70bWjrfbstdAYjcDH1/DStvae7L7h/ZjAC1GlhkefT6i3DyLNR2waoLC4DfQVgXMDB0c58XfwHAxYLgnGOPkgtkbleW+p8AhYEzwjMDkbuu0zQDnuD7dMKprt9AMyfCMxOuaKUjEBYxQRjPeBtIJ4ymFcazA5wbGmRk3PttIpI9wb4LwTmBYPZuUiKdAan+fQqr7luuJhd9eE/eK2AYvGHjRx+Bei3AOZHrPc9tLNDfsY45vaXri2B+poHDaaFMWIIZpGN1sozFWPEH7leC/O8bUH6NnMKImycgljC9mrVWXbZbF3DH1fLwmDIUkLBFM24bq4J/AfAFM34j6qFHbx0XfI+aEsiZoARKP7ML2Ngh9K8xDg61WLu1ZUFBhdW2mhPlLaAiQvL1XDHXGyIvltBRDAIDN0/06Bt+b0CEzAWxGLV8rrjwyC+BjDKFHGTpdwGqpijqaVcXiXznMbZGnu2gfqGwLxeWVYYm6jrpexmBzeDUer6lEQUaWiRT7PRWoVDS6iuAonACNBXCOZuhw6z1cV8A6Z7mwHjYXZKYFCkWm9f/+EQYZjGvl4Jf4vrimahBczWcqO6onj15xdKT0dBL1uOU0Ai8jZR+9AF01MEs41usCXMoG5wPcJCCXj7geBchFWi2ibzCpzGWlWrN69KVKN5peu0ZIQeVsv8Om3D0lOVDbVydJ12QmDeIBiBKwZtaSvQ28NVYCzQHDzAsahrgL816KcE5sQGY8Nvu2M/EP42JiFriEMQpGcS4tkltQG7DdSnCMbYM6mDblRm4yJrZ3ezHVgC8/T0xTMCs42Kciu4WsWyqVXIltDIKgRms5wni/IqX9b44dwPuNMXpwLG8sPZwiRlG+BokpIuR5vKa9uAeg5gWMtMIaaBEpCnxFoGaRtYpGV+ymBOCYycN+ocb7meG8APnanWeqKabkOzuG0Gjmo6YxaHQM1QWwmL2wI1Q/1RhEUEvqH1w7bAjfUDAQ0I5g1iztZAA4I5iDlb2AZt0wNtG6SqtEeubcABLzw/xaIGuXTt9hj2QZCrGCZy7Kj9ZfZWPRA59umJVjO1BrclQAazAzxFMs0vlcd94AMlSscZVJuHNgrMnl00g2E6qVgJmduQ3eeovOnDf+bZVkAZjMytsvsUJfBimtRxbJn
v+r4NcFECvwIwWpik4BtBoLnvDrINSCVM/omCyzdY8dZ9WgINGtmhbr8cFdminhWGEv3SNvnbZpCnFpgdENWvAYuBJmSTNib98JEq5no3DNzVbXGdHwUM6ijHWVmsqBMXq/FlRReyGF94D4g3WNs7X0f5TIP5lsCQ3luChtdwNYvSQ9+WoF31fj4+FTDC1WxhQClVNlLCKAPKhONlK5e5tR7jkTiNb2gIccJglMuc4zG+HraUMN7qLbsQdNcLwt7Aga64SEax68sW0mB8BCfFD4S8jZtgS5D36Sa4LUiAtdo0sANV2RCVX2OdbaRlMe5cp7cMScvK8q5et7bwr8qFXd9IZaks71i35mdRaDNElWdBBccMj9HV5yGYEw3mg8BuwJj5YFUbkjMJ9RPLIrFD95Qjb463Aauo/VMG89SA2ZG4K0pN0UaNqgrHws3WLLUbYoHAaDUFLbUHWmT6jw2awXjAW22plsBdmwEH+E65mK7m15O8uE2KcTuh1qtS50bnslFU5ztVZ2cTtZ4C2zhG/fEVUUqtV2abmg1sCZBiGEndBuwtK5RqC3AiVtnYawwBW4x1G5iMQMYScOeiyK/TQoUUUhHlG6GrwpuQjG8JjAop9ELA7GwEdyPo7h1sHdwNMOpD4CJDPMk24yKWq/l6g6LqvTqC2aG6m3ER24B7g3V2VovLIhk3mcBJicqDBh/eKrCfuY4wE/XSce2nkhQpXMxBW5aOJb/SJj5a2wB7IWmcgjmdPgK8n6GOgvkjgyHQG/K624LeAR75esNFZNO8TU3eEIyCxjQzyJhdlFO3me2hCWn/QTFmY7RVTbkLLUMKbAP8GYHhLqiQAsrrrm0kg21BCxgFdpGP+Uhrv8rbgH2Vj/lIk1XWYAtSu7fZsR8E9jWDwTB1y2zU3qNnS7ivCYzl0dN+mB8MF4ep7k82gLcNOHV/siM3CBT4NIzApR8KZCt4BIbjqyoEdjn6VpLcNqBdjl4TjGwRtGJv+lCA8OQCJPtGDxFX8fyKrNjRQXHKJwBJrhkQKZz6hiFvA+2lAUOS64tXrw2YHUrF0JgtIxz597eTaF0fgnvQOejnp8//vf/k0aNq/m949Dn/9yf4bJj/+58lYP0OVYpnIHdOVQV9xfiK8qo8p6DguEVUluvXhF7dnDwXJH5+IM/1VXZ5NSRH1yHr6zhMfwFHJWYHo7jGdqYk/FCu5VBFzMAQeOxWHA4TCRqns+DhhxMJOHDRCRPOEUyrmcyjfU6yAgIwVF3NqQQmzHKq2IkEqo1hzHwcC2UD8F5/cxztB1JK08uhLmu6XgO3JjN1bTOcVFktjjyntbEmfHan87Elkj83m/uzKTkBEq2TPPezFUgzJmOBSqt4kS0xc0L08jWl9ZlKrp/pHWVlpfYjHcC8C4D2+1RKgdZJqPZ7eihc3h8JDnlf/6KhcPOV8biTSFnwcHicjAVzYy5R2qbETzQ4hheN85TXeYYFiv+M8OoBxJXscg7MiaTHspCEocXY5FCarGZUKKP/e4x5kLpSmmFVskGUlZm9vOSwOdC7ca4SvATnt9QTB+VCs7aysl20nzczQmg2NLAvj60ClfeCU6WVUPXv891etSHd2Krz93mnZmJUZhM0aw2MUqW2wo/OwsQpDihZMMjZl1cO3qk0BUzrrLyypZXn44QnigmhoQ9LyvtRMlHBWxwKDIfPsXuYWYjzEM9zNdX6eV9Dwd0bBwewzRLJvFRnV6aRi/mTayhIDeLBDmiFeFDuYyAeNNuMeFDgEyBeUg4xk7Ece8BzznHBh7wxjwcHuKhj6NGQjZyPMUuLfiahAtyHRSp5sPm5ytCGnxlmxXL6uqtPw92j+gPUTQ2yy3RGVRCK6iUd2ZXph1JnMqFqqP4gvd/+iJ1f1cwmNZ/qZFSebNqWmm33Z68R+8xh937Hx4KZgwXIUgkWYHq++fL40cfCB68fCDker2aL0k2+QyP5yIvWZpnWzL/buswd/+lZMzwey5rIJPMPn33UC6fyG0sxJxNUA+fkc3PTlOGZhnvRN00tBJg006dFvujue4NieU0GxT/qB0UBb8ZdkfFMn0yTQ2hwUQgfXsUVhROErVa1SZJNh9lEujErL62qasrsM9YjmVABieVu1H3DBaLxCn3VMLNZb1flMKu2otGzd9YZzcadczc9r0Ofot9XWGlOrlYns3ShV31q47cjp3+sj5H/YbzAkOTF/WsAGuX/g/2DR08OPfn/8ODR4Wf5/1N8tpH/5TtuRvW9vCuVWoBpjGqIWWF5NU9mlC2cX7kpxoHcuw/6brrykNLhC3m4Wmb6GSdjJn9KOFBLzJ6GeA3bmlKLSw11bHOdfCQEUd6KLYp6PVoVQ3kkBdQbBCw5c4em98zak2zbzadC8ebp7TCfGh0DJySXHJL51M0h3FdJe+Gs63Kuacnytru323OKBhLNY3ZwXYbhqlPNy2ttYElOSJUVmCpp3c2pkIbua3XMbKS+qXIBlM84L3jhdT5t/aXkHNnVetPkIp3Wvp1k6XQ8VHSstpi1PGuKMFLUg5tiZrwJ8EN1RUDCMSmty2PKkRoohidVNlI4ptRb5ixfYdpOOdFpRYZDOZ9iPf+hmbcORaotAIaoO8fbWs4saYF2i1sThVl7zS+3GC8KpuikL+5Lb00w77f7xC+u55SK6l9uMXdeoaT7wOEFnLUMcFkWg+gUdRMh8jPR6pvZqNFVBKbPacDNfdwNMC92ZYu3rAi1MvlATNbtqYAYT1zQXzCvJGeZ3f0z0o+ytyetjpI5Ar4AsX2RjjjzPGW8nf8TloQoKGXOtjq76+ePRHpSHZ4AqB3Zg+hv+Qrhd5bRL6u0uJPcx1IPoSKzyN1i7QFNQIzX9FEH0KFTUa2tmyDsTbUn+FlbEzhYALlbxQGqUSHRZXddixZGpNOt+/4gOs2BYQaR5DaZU9rdCBWKd5Haff8VvRyNEnQboTyYqxKT19DRtKT4+tE0u04xiy5eMM4xufAkWs0znQi6Ad/Pzu0h+EroB9FLkx8XjU5vo6S4LM/2zwGEVzLaxVHv+k/P5Hk/iuM4Og9W2sPJqavJL1V1pxBr6o7XTvuO1/QPOHaYo19WsNi0XbiYXQpWs8qWdAliQJ+kekLMBBfaqRRCOgDyJr/GZNL7YcXUg+h/MAHriLKGV7EhrMwKLm2opL/Xrd519igtLnUQ1vgIu2JmAGdML0gEx9tqlo7bdqaG75Kp8hsgTEToJwQl0ImyHnDdZg4tSxXlZSqAgSOap6aDevM0QZ88SkFtKG2X8bs6iLYdqh28gLNA2TsiaoDYau7PCP65o1zI8CBvVC5wCyCvx7q0tPDfC0rZsLzT7Qm70tRchddRR0/lBU6LzePb6g3hU3CbIn0SLneepkAOYBKZo7N5NefSYJEUycyUPgJGeok5hVEdh385iXEaDQYYTk1XU3cN5kJsKnQF+rBiS1uk0fkIM3iPrvpo2QCb+W06ApENGPryijIcc9/p6sFSs8s82wwV/AYkuLjj2wutCHAGpX/gPHj0nW7DqC7nXCeeUrfT57TPsjj5qtDDIeJowGzE+1
RRx7pMVP3UYk+/E/8zz0QlR5ukF2C6PNa0wizCsDuDAZUaqFLHHafUIsmKskojcbmu+9ENKYqztzEiOPw3Y4zyAQfOgTlfnxxHhHEhwnLt5Kj/vZ+j3qzVKfKehBVQKljmGo++s4OjMJ3XXUE0DpL7rFTG0d2bfnSRoxCMkPEbXZdnrHzew9jrtD9wNwCbSSnRMTFgEPANzr8lvd7oHOrVXjyI3GVCIQpFdhIWQGr+/XGlCi2dJSjj/4yk3IfZOu7QRRCPn8hG57gDq9rzOoCo8iX0VWEdtezJExZ2QvEAMq5jkTU+3ismMlSboqnPZwzcDAOhF6geigPM3g13Bu8khRYCw71aTJFPLwlT0WkTOCGaoxsU0PrInEOv9zBTwrxmXtP4Mo46GE2ar6S6q3KQJuUSb4/U18NeeMZ97I/g/91373vODiDg1U0Az/oG22E7hHgPqHRTj8113dhoE1aBhpgwnqsXclePay0nVJkmxeiKdkj6lpJlj9I4+iG/pMMMtggcOmjaiUfsxQrPJhBP45r2o3fQV9pNeMp1oiO+G3qPb+V8ptDVLrj/BBAJrTzOdaTrI1OP9cNrx9DaAksuSg9USzDVtdEr06kuyu9hUa7vgzKa1SQQipnyRC/C/V1gidhuPUJH8t3oNptO6Y4KLbTQdItYLURDzfzWAt+lsrvWCJTdAxQyLCJLgq2MR15bxiBGRGTuhZmhqKv58F4DZ8aC5iy5Q5JFQlwNs7YNS0U8vF+rOgusjyTlDKvebL55uzlZzetmJYthJ87z6Ayn53wvWhTpJHvbNEUVDG47ZUqjY52buGuUqp6GLWvuHXDMzgr24A9R74H02zsbHJxX59BSImhUUrP2gajUFpN+TJdX+Rgo0MKRCtK3yWg5vYuQjCB7IMejaqgf5Up/gyZKtM3QIuqWtArqIrVmhrXSzJlLB2lRnSGasIA+Y8317e5zGgwMF3WGcKpiQ0yyNTuoTQQBqbRZ9m59J76JDj5eH2ar6TLD4z/YE8EV7snZ/rnBlyK5dW0b4HS66EdfIIGCP19c3+K3qoCs77u6tkSsqnuXFgGFe4Ndx2g2Jkn5+Izpgyc/9/pEPc+t3lnCtk1MmnbBi/lNfg0YBNQe0JL1w7hHQYgvQDpWIm4b6tAgbHtKvKg79SlIrwbB5Qr0WBk971oD2+15xZwLiJqF6Vymy84HrkxwkTqD/JhYhn4UXq9z+wR/EL15+ezlUfSGGCdiKPAwSdH9IUEd/RwvYgcDNm4dwCkzmOQrkLZQM4BBGFB3gYwtPAIKDowtuvOq1mHHyYw4lhtUerf7U778Dtvq7SIVloJoItgLHmKuesJSNov9bVKkwBcsk6m/GsqgZffneXIxJTVNgRHSgfNzzqTdqrmtuXlWhjZkq+IciwD9VjarUi7j9VbNKfmUNLWIlPP01hAOQGe+GiN9zsogPN1O45FQwJqgG3oA47/TNUdAqwGVx0jjoUm6TebWE/KQtBSEI0AznA7TzPO3FFajIpPGrF3uxnHci2W8u8CM7faigWwyMxDYZRToHrs7x+OGK4/9Rvn2eVfF6NnVLU+T2cU4UZF4jnRIHgz5P4bljekGi32Cznbf3v26e674nx8xdcL80iyjqOvsZTmKTvAKHifLumrCCabJslR4ZPqbChKhV2q1/2jOnuiZZNWd4s4FhUgxSKo9WVA+dQWIPS0+POtEfwPPDRQ8mS/4ATIIeIlDEkcyurKaE8x2CaZBI25UaR+xV8sQEMCnk1cvXlJbARJ7wljsagfVPNBP0h6uQD4hAmz3oKOIdz1b4XtwAD1RPezW7DX8yHRr3k7roYPqe7shvC+u0XEYO0IowyxOt+eJMpaOwm61T2oZxjpSF/jqwHqrDz1Cly2leZkzF1thYXFQ3bmtuaH2v8SLih6Sg+C7eLfXO7eG48vZ1avlZzne5V7PgUW8gv8AhXgL8w0yjvMo2oWmSUHiDMa2YoCRaGOUXW5gt+bMs808ak1RPcsL59QN11h/Btf5YZC5ZJGmv6YbCUaJvWtO9a6hWbtCjpz2iiIYwgJN76rMj97MGtYbUdnLvS9gwgVes6OcPCWtB0nbNRcIsOEN3WH9GZLV0RW6ScLL5W2aGkUVckCZluQ08hHlSKa3yZ3y2uFBoQBDV9CUF9KYcfDBzSL+2Bx37HML7JhhESfRXb6KlsCVQMd42pE+eZYj5o7Clls0QSM0HsfRCfzUtdRYNCzLTIJX6SIdJaiuQ22E8v8R/bOeCLxZP8ISwNqxVmJg9QtOxT04rZqFencsazZgRwwqsC9qOsJzUcadqvRhdh1Xrtt1kbvxHFJTX2XzbRi124l645HVgjBeX3hSUeDODkscRS+BMYK9NgNm1swO4hO1RnuwUZIIH3WEXNbdXFhasXg4ZICT+Z0ciUHeCDAGA/+QeU5SmnvCmqPyQUQ03VCRi6TUpMQ1WUBTQHzhS+vq/USIkb69oi7g9DkdXeZDnrRjt3S1QdWoNo5UVasnLTD2c6B0IEBpb8MpKjXvFHtE9ULHOJZ1zEtZ0O+r4dYd6/xWne1smdmwVwh+y60ibf8rNwl5apap9kr68I2iW7zfzSISubdjPDlQds/JT8+ikx9+WLsnDKKTAnQbDDej3RzLG/SK+MkmYXy1od4T0ur2/n0QF2QG9KdReOvMfTP+OkVJ3FOLDNirmt0cbZU4w3iaT1wOLJpl81UZRmKNp8Few3KrTq3HsUaMboN4TYTS6knQXGEznFNN/VuhHPRAEI5OUzYCj2C8foCCNfz9dhKxLRBHp1fAW14hozCx7PjyUayVJ5r7Fz0ydfkIOnveC3sM+2vU4W50tmD8LGD/yiUb5av5ckh3rSBZbHw7NV/NLtLCFrbcTa0WjDy4AZdXZFNL4KJ8HmDdCkcYC8y9cSJ0rtmsIcmU2z6kpHsdwr4dku5Vm6iF4GgchEN2PRuKuPg/py9/EpUYuaPrqwtLPjPiur4tAulKX5XwTRxpErScJTchsU+e/dEcIUz2cp3nrMvlRpTX0cT0d0lyFpJxZh9UD3l7oRyJxI3CBs2bSLxmQPQFWxnNkqK8YpVaUrqjoxlCX0pqN2lQWInZnGGZ6F6aiQFuZbbOVZfOylqXrye86AHN17yuR6p9qxu6XKs4B7+rbL/dZJFJvoHdo2j35iBAt9l6Fd7icofeKx3tLjp7Bt6TVRK6kbtGQO+tcwQvrND4HmRoa17V9QaqvXYH+TEOpeZgqbn0cGTtCg46LSlgltWAf/3hXOaZWyHsvKUranMP1PqCzlzxqC89G1TwkiNR6L3rXPkErnNUuAljQc46ctRnUjMTekW7k3kIUuVkaLYK9e3JRRQt+J5EOcX8+6Kj9FXG41PpRgINKyaObetItbpQ1pp2PnA0xSktkpKO9ZWyQ62tHqvMKEgPPY1eUCVGt/xkWqZpPurxrixdH162pWwKvMjLMrvIptmS1Bey0QCIf2702pN+omZA/UUUpAMWMJ4InX+g+ChZtUwQaDD7R5I93JpZdn/kK0D0bdCVzJXGj+SQmk9qZj24ltBzbf6g5hCXH
/5zAg4gycayITK9ljBZCEkhWZznbY0XNLZxN20DBgyvtOtdaZgOb28v0QySVAwUfwvGFLCSQOiOjYSa35rNx7xE6723oG5rEwGDKYYltPdiGCnKODoxuMQUNLOiVrkkU1Vvt0GWNJ4Z322ShGnB5e1BJJdW4+PsCDU5sjNkYvScqBvUys2eM0PBdRDf7UUmW1+8sHXvduwdBhuMDg2M8Fo6nA9xyxVM8B94DgKjJltsDO6oBnqMRelqME7Z4oE2t71T/Nsyt7oejtdIwBDR1LMEU5S1hherbDoWpNe8gIfihX2b1rHqdRwG4PvkJuVWB/Q2EidYYymj+p4p3zKjPkbFsmLGPC2EfmVYJzZGsbd1S9O7VtzTrjWIXWtazsQ2S/XHvHKsXyy2ia5UomI1n9OBac8NrMoS6GBCQh/bzvHFJrryOyNwODLXscoxJVFWJN7iF5YdWIr7QIWSyqDWMgFOSYgbpj8bAjkYpaKhWIIoOc3nl5T2oVE+/BbHhHQPY63gSmdzNvhFjoAiqdbbWtm2AyTXFKsRBplQuwjVjPNx6mosFkV+CcRnRjLQquSz/jq9K/fICr4k8yu5YsxJyIPRAmtD4s6iyEAuy6ZkoIWeqiVxttDgBZ4qEq/WdjBFfo9QDPiU5QDOFrR0LDMQ3TlsTnSC5vjoYYWG3yDMTyZIcDLsx5gaT1ZAH7G3JOwv8ymFh9NWRtB7IIKUEEJvdqgMTbpTyZ5TN4BfrpWNJqtVphc/f5wQMn1zFH6NH16fME+rpyHFlGMXeAMfx/FubTnEJCkTbux9FH6+pp/QYLi56tP3QTxFe4h37x2i0cKkA/CKPfnjiVhoOK/hBcUYgjIcaij49qzDrXfOpS0dmaimuJpsXUE96CIuDelklk3puxefyE5HKlPmILAQjyA+2h3OZjBLFh1lVUT38UD9ErywoNXzGoRzpbiLo9McjcA6S7ou15SBsZeeKysnZBiwCnGfS7rcxtfqbhshuCtJtEcmEceKv7sWhTJf0ewFSZShVj6hcn65s4usp4JU5YLN3GMhmndd2ik8PgOUkGXhtz7RHRuiC9RmjuHKLNLLkcmOy7sy5jiH90uNX0EngFUkwEiUBRmQCs3VocxdiKBDqToWetGMbLyrIhYXPopeOnVJY8YWmhoc8WYErSsBrVmzpkfaiwNEa76XBAdCflHu/BFoMUKVZzVoYk1gW1yxRHFCwMpBKSI42qRm+Ur/tpeJo7Zd3C1VdJoIttWUAvM5/NUHHa/YO6JeIZnhz3AMst3qCs8a3zyZozYAp0blfWMBarnLJ46+jwKyRxLGSvQIuRytvTh6nsARRWculqAot/klSAt0S2Ob3hhJV8N6QbK28OhCncbpYprfkfUg9HOcpDM4X1M6SpV1I8fwFXtQTEEzZkPDRDEZWrpXNeSa3x5kdJmg+oHslFQYHKtrSraRFQNxBTpK8k8yNV7W43Emj4zDNXQFTSxEkw9kFZgU1KpmlmgtrTsIcBQ9szYMYYNwRBT2oowuChBQ7FtzHLC+8aYoHCrcCS3iMtfwOAAxGtEJv8HB83hxedO3ZyVg4iM+B7WBqDo80uKbo+iPAH3Iir5v6o555oHVYe9UCdcInvz3ecaP49VijPbI604fvffVF7X9hb7Y+9/6LmQA/2mkQWGN7NrjxSJXzYfLh1GwAMmqFxvrjyQmX0gZKhcmtYcUjfCejigmci0OqI9LBTY4/nh97cPvt4mjiJGaT2V8tLhVUaK1wJfXqzm57ai2ouQyQRWtqyjTd4leAG25ZuALPL7xKyuravplXRY2WHeSG7a6rchHo1VRkq7cC9bGZ51o6FRHsCHFYDOafXIk0w4K0g+lXrMnWYKOBVfEupDaVRV2N7yWMhXvxUdpvQ+ZrZ9RtXAZzdJ72iGluemw5kbOSS2KVSmxUrugsfvf56i/KdjbSF9T2RsDzZHXaJhbbI4Wd96+Pjd0lR3S8uKoK6ipFV6ARQYPxX9ao6Nf6zbJsGMFdMtIhf/AWDTwIhoM4NWxtAGn+z8MM6G6rcy3P/VOafLwc8wKxfIwNZbm4a2zxo3V3VnYTsC/tN7HterH1qBLXXsbFFC2hu6pG4pXdbNtaQMN/VNQBhOXiQKeb6HSpc5aTg5rvPwqDn4USUEoAYdS6ZOe8hZjrNSQgiBuceJ2uu5l0y8JB6PNWcwxWdnXXBSPBpQrcc8Ud24TmH3+rlJPd9S6SUK7c2UxZaIv8vmpyi/TOZ2YE+Ybg8XJIcLoQCkmw6pIP9Vx6brnWhuT+hhCfLPH7E1mpmjt7tJF/euMaTD6jg68Y7erYqaE46WI+cTUSkoxqMtJQWE34DWHPuHsIoDSsxxkfJw5e9aU55xEpCO7eglJgy2Uq8kke1uBUpmDqaujCwdjqdTqvHt/bAUWmXrhldrRHF7Vfw+So26RfEy0wmxYgf/ZnVXz4Ma7dT2dUQ03khmrxVa0xipfJTheY1tSHbuVtqSnUue3Sn/Ugv4WSJA1aWFD8JvPNIjWrxUN0iv770WGAggZpEQwN0Nk64USsedyfeQRJ6oB9FO8SsNRBlw39bXO6brhoOe7OMYlYo+i1OTSKPm1RiAGroBuVImTCgLgee1j5IRkhPoE3S5e4WIEBorXS5WB6y1inhLSZKhZcZZaHGDL3O4qamSC/vO+wYVWSMiUVMM0VAiPF9aohQq1UNtC+bxX17nJOKGETqnzqkhhC4+SstkmRdFGqtlMGCuv9BYeDBQweyurZ71NNRzcl0+7lfnq6EMMU7wJDG5kCtoyhBNX5fDJ5kN07oGDoJzcHR+ExlOuRpTy1uQ+0KYPQzlZy+P9vjplG3MkRP61d6hMO8LyKleEhZ0rJCCNNshxh2bpUfgJM0IuTdJty5BRzcY8AN5jcfvGJrEyCeKuvhQLPkOpMIsfnHeAjeUV3df0zUJECziR87HYlGg/oICPhsH/gMtw5qcqhBKzOPquhqy6qs4aytwte9YtHEZhkpgi2pPY80wgpxGmkN0qKawnf7Yvv3iusz0P3Vuq7APAepHWSRF1HUeRuc87Wdg7OyCLMsbuSXAVn4aevHqBX8ln30ovgI0JCgAwWV/d6KggNiqxl9hcwvEwf3r5hpS02XyF4QmnUzLwnFjm0rY5pj7M0F1wZeXcQC8vNv+WW7/q9aaL50fRXzO8SHSuaw0O9XnDWDvahYcfPzwNrYLxhzKNXqQT3gyptVEys8GqVqw24CMdR1qvsFL/E2i0lEI/eDyCGLu1Or9c5guny4hmZkeJjxCrdlk8YjtlygnOBtmJdFxvsiruqN2hx6ZScjrTZ0eX5NkVNLhIr5IbJAolXzrcOQ3LfrRHXJmtCoE5IjwxS0GcvMWlUOxLSj5qkWJZJndYaQFok5D0JMgZR9/eRXKb1ud10RYBJUcyBxpf5rQo0j7NdLDhVKwI8kWVy7L69olwQO44ceHpXwcTMtfrK7BKTMaQecK+o6UCjA1o7fIDrqPEs2JiyEtpuy9wz3F52Iigyh8qzDaIO2Rb6ZL5FvlBUX3UBCCk2oLBQxaRY8gnFPBhB/o54ByQ
JezBkT9hFWcHVyrTxGF4cRcIWE0THH5lGFiEEpROHbz3feLsl2HONizJ6j5ZUZ4a8nKo0ugWUd3BQQgNt1n2Z41XRec7BoKnndx56f3QCXSYmuQdYuEH9Z33SnCMPvGj6aWQ6+b5JrNr40OrCbYr9DBzuncANo3T3jJZ7UjJzLicpumia+G/i8buxphhdGPryZfRQT86eNyrMN7TqcV4EwMDPPcHcNhVfrol9/zXBAVYZmkVI6Q9/WwWml2aLdexOdG/uUMiAnw4RuGoGrYpcrUh633y09+imyyfUiYbn11Vh5E5p1ux3mh+9pn19sJo+bvbMMbE3pips89XvZ84NUkVfTx2nLkv4+r0W+Giqa9H0Y/ZPJutZoEQAMxpYdPi6S8csMzansZEjJZq9NeAZqiRqF6Cb8gO45byMXMjvshbN4s3DnDImZOLxvaL1z5Lnv33i4mPL4klvVGgBDv4mwxMzdKqDAgMn1lg+G2pHJDxrKgdOGdE5XjSRkuIEk1o4J7+hjO0UcJ6SZjhQCMs8U2SnD0jjDMFN5OWeZVWU75woWv4iTpClJ+bDaTWNfITMt1dTCmidsxFMlazg49xWmjFsoktZFsHGeX7ULSyskcJpqpcOdMwwMXlJcAax3p7En7JBvXv7ly9s7FxIdLEQTrGOAI7nOs8nw+InTMeoJlt3c2LWJWPscn6lAQcD9EidO6q2gcOZhWlEL9vj7xsL/cvh7gOzMfWAVDlPJtlFvx4ckv1/TrhRbq0LRft97KZkxZgddJSPQiv0qY9dGbK6WJTjYsiTa7/reSt6bQTHtLm4lZV4jk+Noj7GxN2KDe2DkLVLu9zQ5rnYIrncJrlmqzK2IqSenS/JDqIZD70H+sOvYs6152jqHPTid5Xyp6pwvT3kBOLnm9Sl5J1HKgW6JdqJtwzJ1+jF768Ng1wdMKxl7qdRT7u9PqeIxE/30P3QprTDqqLjExRmx3hjKolFyNMBYVfYeE757HdNWWXZuzQivSXVVakxndJSz3cRYzdyJGIkcHg+/9eHD3jUMOcEauSjlh39hcvv+AAeQECziPluNYsP1G6IrSETHFgAlQfhGJ5wEZavz8m7qrnNUw+XJo55InawyB1ESY6WLE4mFOyLJRfVS4tHfspfYv2u6UTimEQvbPSOnFWp/dR1FWZm0ziJlG79CIlWpRRZzCN7AxUnN7Kafv31cbDbatzoOc2b6rb6+ynUfRS8aHz9IIU5+iAG16J3x+Hl8DBJy/+cfTXqxSlQDeklpsLk5xTEjuac+k06dKMo+jnkqK49clASofeNiFJqb3MxN1WmUlF1uccCYaG5XBwlHhwmLh8uFfKTs9plXzgtIedqSTZDbpx3EPEZx4uI1VMNkYXQRXNwYlNrTPbIZdHdlhqn41Ud1GJYV12QZPIw8rMGPMD1bytaTgqOHa7eucQ2GqcR/m2219LqDTlFDvasCLWOw+83+FK3nHhh87yAxE6P3s7v/tf9dHItcdJV+LF3b3D2IfPkyeP6C98/L+PDh7u/+7g4aODw8eHT57g84ODJw+/+l20f+89CXxWSHui6FOA+i1+iMoMgXXH0BzDoQqpk1yU+XQFXDP/3tnh0Dsz0iZKGUmgIjxeVg6LHGsk4xnaZY67fIskLJeJWIOOmNSOpPmJgURxcFkrshuckpd4GsKxdObs4arzakcLxNXS9bV0baReyJedSLcb4pt2uMNYGhnpmpLVoBVewLgd9/n7nplBIPxDMd5pPX2LK4zdDiJA5zXX7LjtkfyYjjdYENPiqaprtTmHokOKst5+hfUKhRdbrcFrbNWa1vCE2xPGPt5wZnCH9E/dM6rxACU2esFuURRfQHzmpGPRXbr8Lyos+gmvqboBVXKrcXZcenoFTMudylKLDyhHmTGLmLcHYmtPzKUGzhuu0+4rDJuT0nbcJTWOKcONUSnsx64rH6ouWr2uh/Id3xXfC4QH0bP0skjGHFhiCifvOOoIgA5OzaP4YE13VAMfccQnKiJPAwxqaD0QQQ96Y/B3VMDumi8zKIeyUFouh8Dk3WQoBvg7FlUUod1qynuprSq4ua46JnjTWdwlVbLdeRmOpi43o+EFOo5tQVm+xXoWVclmyWXKjtBy5vjDVzylcuPTylHgcK3asrvpHgimQXYeN0m8LRWNoRk0N+cIVHQRgeligPZK0QGV48Qs1ILtqGyBi0P8LlurV4e/TC5tCvhpDrYXNJ5T1iK2O91oasM5gNudbv/buOT/vR/D/6P10lDHVrxPOWAN/7//+Mljj/9/+PDw4Wf+/1N8hJdfzZH5LTWj3xRjc4djgL6B4mRd0FWVY3z0NCm1UyvSdkIrinHqB9znGJ0m2qVRMLNVfVmmxfJF2RXVNpPSXj/aDxZ8jnq7rhOArR+9e7++LEUDhbKdd+87NV1AzRcVV9odUcQMJa3FDpxCQ/oBEhSebkOg8Nl8OOzwaPUE4dPub4o4evufj6t71gE07/+H+w8fPfL2/+GjJwef9/+n+LSV/2vohKMQ+OL+aAMHRT5m86VaysBp3FXg9n41pWug+JkJ9H7eroaVTHbDCme7LhB34OSzWhk5hm42A491mMdYYjqHAKOY06WKwLlXegjECd9VMvVOyG3IirzNHeI7jECoCmyEUzKvb+rXtMix0uXyatcfN4ZpBbb2ppIGkyJ8OU3vYjR+zwNp9wKeHXrPZsniYDcU2mx3BI8fBsL5Yy6ARy7r6rWJFzjYaJXzflxt70n10Vfuo/NA84fB5sNM/S4ImSHI9G4C755UXgVCvta0fdnQ9lWbtq3RGRFhphEZD2S0yCV6cWzdJY8P8fo4tpDCQVx4DQg3rkO4zitVLRpnY1HulEsyjsrxRo2xESUbJ8iPv3esVLyzPne5F9ziJBnZxceHZx1Evs755vUIA7AipqTYsvLZvgXY3WcJSXwV+hKgqjNNsT7vQPr8m+9AqpTMV9PpLl/r+q84ZBhmQ1lepUzvre1bxwQvu7PWh61busXJaRU+2aTwt82FmdW3yl/0o8MwJah0GlF6s3Ge8TaonPlNI6A6GLaJLJhbMCVuD3XNNTArE4E9jUf96GG7eaPi4370qF23DuJ2HSLGwtSkHd9r7pRb5UyoxPmaWj4QIpyPN6jQj84e95/0vzpvVUd3y6qm6y0Lzyel0rmHrlbNYrGe6XOO/A3MIfdiPk7fkv+3nZWFTNOsdy5cNPtvtxeIVm646akOH1CbVzvYtlp8uW3Fq022nq4V2H81UOepQxKRPrfcq0Kw+xa9bomFitSfu3Wtox8H8hBtVDsJmm5ddM6bZo1Kt5pg3gEP1xE06bIuTfgCXIfTR+RLHlu/KenbVTqd5rvW05F918aPxvoSxjxL4RllRGqawWQdcYBzRHrQWAwILEVkaSoDVFXi/TcUSjnDVZjJG44wRATwh+m8DIpWhuvzSx673WtiBU/asoInm7KCz9azgi8+Niv45t5Ywe9+c6zgT/Ws4OlvjBWsYdjatLyGb6y0/O0nZgXbAPrx5NXBZ+6xLffoTVz8tHWHqPizdh2iss8/s7N17Gz
NkH58/ubk2cmbk/iHF6dv2s2CqnK2S3VazIILpMUsuBXazUKlW5+Z+s9M/aZMveqig4AtZ9+t03r2vWptZ79SLf7TthW/bzH71Vobzr5uQM3+Z5nqs0z1ATIVXVawiZaIUbqRsZajXN4d1rkiG8GyV2SjzqgTEow6b6vV6fldtQl6/itahHV+PUDs+vWQ/n3YqbMzft9w8zG2LXJ5FO+bbiGq5R++723Wfl9mZkM4qt6jLeoVqt42/ezLqr2vAdy2Pi9yXSvrhu02c3g/nekLht1Pp1RzD99/6FS7vesLvp/f76D71jb6iE3zDq0FsN0UVyA8FgguJUP6X0PJpgdAyvTFYxdNFfqozumjlibMDtNcTA+srk4PGmbNKXl23rqk6UTrGsmGNaBs66LJro3M85weD6czb/oOaiaN19dp0rTRhBJuLw5qOhwoy33ZpDRM3uEGVejwDW/wQOl3dIV86JCE6aE7e9hiE84dVqHXA7cLMw9gQ37oQnaOzIDGMnD9Tc8Dek96ThrOJk1mAMa4BkZaA2OyJQzy/g+DOQOC8pDZqvN2EPWvGk0AL9zD9ubuoanTv5p268YwAgOkZf5Xwq5DsY36JBtgk041Afc6+xGgbwFl83lvQP3JJxll+ilGSbv44fknAYPE4tNBCpKlj7BM7UgkNfzRB781uXa6Yk69R/d36tVcWiUHDbdWF+H7QFMZ794Ct3amgcPGBriRh1Dmq/pGuCEs9HVtmUDG+eDTwGVe89HfeEo++jCC3Q79PhAIPV+7/tWJ+q329IMwtS2W/C8bfJtdeF8brIGq/Ru72/zmPp7/TymRU+7VBWhd/I+Hjw98/79D+PPZ/+cTfD7Q/0fhi6qnfstr2KPsc00vOcAPkba0dHyFTlWAoPXuQk4jvpmTT03dwp1lOltgwOi9t6gzs341EOSmJs7sn03Kp5aN3HndalTJbNfmYcdJHfQgOk0xqRUHzpqsfv31LqImJfgqBWdCr6LbvLimYC+bDnH8z2R+mTt9ivUXE9oqy1XJrQa9KZT4cKv1agLz0frvdnnjtg8/6RIcbgjG3dxlupSQLhj1oLK7Dzh+K5OKM1theOi88UMRvr37tdNk4VEexOXqYlkkmBfosKdzOcNQ1lQDeoW5wtw6NeBNQw+DvT1s3d1Dq7sPw6DX1M/my7Qo05oG/A5vTyWW7pJVEN7q5vIwXLQZhyzi1ljwsHFG0PpoqVZzaa1mr8740KlnZtOre7C+rl5Jr+p+7xPO4rZz02J8H2tuPgsB9/Lx+P/VMrt39/91/P/hk8OK///Dh08+8/+f4vOB/D/ii6qTvqXNOpyvZmmBGVSGZBnVJnTgaJoDZ5H0owvMETodLvPp8UE62P9DH3tCP/fjfS++1EXZTQYXveiPx3AYve1KvegL+kVve1S9e9Hr6XZ628odNaPzeRRtQFRxn6Aw6hEMw7MI6oQeZvjUe/ZjsOSPWfDxfvhpTeF5zeNw8YP0EE0R9oNvqD8H+4++fvzVk3CJrKEyv+JPpUD88OARdekw/sPho4ePD7/++tHhHw6fAAVJBwd+6YcxlEZrFPi7b38eEmJVC6+kMLx+Eng947nYh04E3krdwJus/tWf8RX0rjJSfHnd9PJHeRmaJnz/J/O+rsgbp0hdqVd+qbqCz9XsfXnwtf/64FH8nKfhyeH+wVewco8ePXn86A9/eJx+ebivC1vuJhgpMcMwgHo/xdfpXSU1sS+3KUqia51l5/06wtTNepac84BARfkoVsRpHZC6ZpVlZh/R5beurzTn/0c5+unTfP7DYb/v6/8ODr/6rP/7JJ8Nz3+UIibZNFW/y7tSfc1y9S3Xz9KimOvHGGhL18vewr54ED2V+OOzZJ5cUuj2ZCmh0G+TRVQux/kKQ1OPoSUOD3+xmkxSTNf1IDqZ31FoHI7JOJ9D/QuMch4tCpA6TLKtUbLA0Y0jzoFbplBXmuH0QzecikhS5CblkQQ4j4VVeLlaLlbAFlAzXX5nMwjDYQZbeDgMqiNhAMALZHn87R1s8hcv/WhGODLvvdVwiuJToOW7MubJ0SltVkv/LbesgOi3wSzgQ8x0o4aAJHM0xACS/I2zSUVI7tKLZHRd2xH4MRzyz+GwpjuqDPyEMoYbmy2+g1Uws4s1VVzNp0VK+RMSQsC8SIq7CNGwT8Hx6bTAAMCAC3u3RbbkNFc657GK2kZNvZjw7/nSy9Ob0QOsDi8BF5Y5p13CbEKUxG5EnaCMVdSSfrvIS53ouEhVWjATGTSMJ33VD0liUgJGZm+Pd+PlbLHrIxG/xOmjL+5L6oTY3zsv1ECPFag2mGW1qHZ7/EbNOi2R9NTqWB+2XEq91+fJYDTNAMRgQSlyvHBSdu/cE73iReX0KcblYZ5bVY/T+QhEi25ntZwMvrY1+W7t6aq86ta9xKbLNL3u7gNGlfHp8+d/Hp4+f9NDtgCfR4IMF+llRjG2Ueul1t9pU3yzdEqgmsGMMXxsfhfoEGUY2mnerDyYuoXTY9XlcW+Ei3uz0jwboU5xaQJgAVTD82CqlSf0qs0utQYFmPnqbTjr5BRXaczxWgmQQSaBTeTPXUxJd7SYZhwbsuzmdHR45OxZWmaXc07GQYWj1SLikpz2Y5Afc+YVpEKcWK/MMJVIIP2OxOlUWXh0XiBUjdo/MI0QtSSpThgepvVAbnWKSVgKiYmukvZR3lNGdOkddAdqi+uPlRmkuVvO4AEJpDHJWVMJUy3J1NSvtzH2bNHtEZ1/S3HTqQH1PKY57O7+HcgMNm8q/P4YwJ5b4bbzKQr8gCF0unUrQautIFuUyKyLA4OlX2EaF1jtZU8Hn59j4O7hNJvDKuPxAjQwGeMRdNwBmcdtttOJ/5ljjiwu8SXWotHQFxJz4HyiYXCDysSaQBE7M5zml2WXg073MTkORzmFcwuOlAyED+nQKF/BkQI7l+gidUcoo0QRp2KA+B343xfB2lRuDj0aeoW7odLRl0pFLuHtEXvgZJn8Mqd+4vCA3YsxKSml5uzqztsbiaNpM31fdd69P8NSR0Rwz9+9P5bP3+edGNMPJcsu96HvQrTIP+3aoSA1pRPS+CIR57udvxfQIOwT+LdX35UKUFl1G0LfnjBbrqwZWDofbzYsGxU4xDme4MWdRonxbw0VqHt9lRLLR4Jx0+J/8e79UWBOqMWey0rQM5KqcV47nit0decIMa/rvjVZ6ovleD31cwNmE5tkSNuM2sg74DR0mBJWw6l7CIriUTxezRZlV/cR+3Z86J110yBQnF1ui0hbaXUAzqdRkV0EIrp7XaCfLXdJeD4CjeJ66pXkzGluusH72W/V/aGXXLbLRyWXlEAWKd44hW07LasIb/WiHvX/PgcCwSXb0D6X4jURB+7Vlnjf0Mm1dMymXrPrcVYMF3AMLq9kDhxmD5jNWXKdQqGSy9AbYe5enlKsA0xNB08Iuzn9ZvTNYfzYIQlvRzFpHpAs0Jf4+fP/34vTNyTtAxBsO85KgGN3RU+jzSpWUZwZdPrGhzs2QcoM1ONF5SiZcuZqTKp2vbpIi3kKsuvOEF8P6TUG+H
3XQeX04CHsK1Q8D9CjD1XMA3TxuxZXvz/L3x/F5Y/UuvAXdbeP4C9qZx/DX1S+Pnkvs1ynl4R/DR/6nAtFuhBvW0k7p/ZYkc7yG+IE53c0vBIFvTKda11DipEZDvb/HB1/w5r7/8RfGf88fLRvs33I7gOUCr/HGuALSg0CtXY0EYGf/GuR36aF+4tnEp491DrUIdEreDQ4UCChmbPBwTmdD1nHkUTwjhmnJPpj9NBbYbtbVteMM47bgwNT0u3Goe4aE9cz8/pc9ZDeArGw0ePIBQSV7bfUynlgKGdHVvuUFdaL1OxOahi9VZnJNGd67bbKO3IqHVdDxbGaZgJNwBqoqjbASsmevbH47Rc4CV1cgj5PxxfW7H++1//8+fz5/Pn8+fz5/Pn8+fz5/Pn8+fz5/Pn8+fz5/Pn8+fz5/Pn8+fz5/Pn8+fz5/PE+/3+m+t9PABAEAA=='
OPENSHIFT_CLIENT_PYTHON_TGZ = six.BytesIO(base64.b64decode(REPLACED_BY_REBUILD_MODULE))
module = AnsibleModule(
argument_spec=dict(
script=dict(required=True),
vars=dict(required=False, default={}, type='dict'),
project=dict(required=False, default=None),
timeout=dict(required=False, default=None, type='int'),
changes=dict(required=False, default=False, type='bool')
)
)
client_python_extract_dir = tempfile.mkdtemp()
module.debug('Extracting openshift-client-python module to: {}'.format(client_python_extract_dir))
try:
tf = tarfile.open(fileobj=OPENSHIFT_CLIENT_PYTHON_TGZ, mode='r:gz')
tf.extractall(client_python_extract_dir)
# Add the newly extracted directory to the python path to resolve the openshift package
sys.path.append(client_python_extract_dir)
# Import openshift as oc so that we can delete the extract directory. The module.exit_* type
# methods (e.g. exit_json) call sys.exit, so this is our only chance to leave no trace.
import openshift as oc
shutil.rmtree(client_python_extract_dir)
main()
finally:
if os.path.exists(client_python_extract_dir):
shutil.rmtree(client_python_extract_dir)
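# --- Illustrative sketch (not part of the module above): the same embed-extract-import
# pattern, distilled into a self-contained example. The names build_embedded_pkg and
# demo_pkg are hypothetical stand-ins for the embedded openshift-client-python payload.
import base64, io, os, shutil, sys, tarfile, tempfile

def build_embedded_pkg():
    # Build a tiny package on disk and return it as a base64-encoded .tgz,
    # playing the role of the OPENSHIFT_CLIENT_PYTHON_TGZ payload.
    src = tempfile.mkdtemp()
    pkg = os.path.join(src, 'demo_pkg')
    os.mkdir(pkg)
    with open(os.path.join(pkg, '__init__.py'), 'w') as fh:
        fh.write('VALUE = 42\n')
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w:gz') as tf:
        tf.add(pkg, arcname='demo_pkg')
    shutil.rmtree(src)
    return base64.b64encode(buf.getvalue())

encoded = build_embedded_pkg()
extract_dir = tempfile.mkdtemp()
try:
    with tarfile.open(fileobj=io.BytesIO(base64.b64decode(encoded)), mode='r:gz') as tf:
        tf.extractall(extract_dir)
    sys.path.append(extract_dir)       # make the extracted package importable
    import demo_pkg                    # import while the files still exist on disk
    assert demo_pkg.VALUE == 42        # the package is now loaded and usable
finally:
    shutil.rmtree(extract_dir, ignore_errors=True)  # leave no trace, as above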
| 722.990476 | 71,563 | 0.948784 | 2,639 | 75,914 | 27.261084 | 0.838954 | 0.002669 | 0.002335 | 0.002446 | 0.008924 | 0.007909 | 0.006964 | 0.006964 | 0.006213 | 0.006213 | 0 | 0.151887 | 0.016598 | 75,914 | 104 | 71,564 | 729.942308 | 0.811785 | 0.009866 | 0 | 0.220779 | 1 | 0.012987 | 0.956169 | 0.952989 | 0 | 1 | 0 | 0 | 0 | 1 | 0.025974 | false | 0 | 0.181818 | 0 | 0.207792 | 0.051948 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f77ee7566b0ef09501ad14771d66ff3c4ac5735c | 40 | py | Python | tests/test_cipher_jq2334.py | QMSS-G5072-2021/cipher_qiu_jade | 7ac5b8fc34724abe381fc2415ddc8bd51b7b67e9 | [
"MIT"
] | null | null | null | tests/test_cipher_jq2334.py | QMSS-G5072-2021/cipher_qiu_jade | 7ac5b8fc34724abe381fc2415ddc8bd51b7b67e9 | [
"MIT"
] | null | null | null | tests/test_cipher_jq2334.py | QMSS-G5072-2021/cipher_qiu_jade | 7ac5b8fc34724abe381fc2415ddc8bd51b7b67e9 | [
"MIT"
] | null | null | null | from cipher_jq2334 import cipher_jq2334
| 20 | 39 | 0.9 | 6 | 40 | 5.666667 | 0.666667 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 0.1 | 40 | 1 | 40 | 40 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
e3947f86a8933a90dcc78e3795b4033146954d1d | 9,536 | py | Python | pkgs/sdk-pkg/src/genie/libs/sdk/triggers/clear/mcast/clear.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 94 | 2018-04-30T20:29:15.000Z | 2022-03-29T13:40:31.000Z | pkgs/sdk-pkg/src/genie/libs/sdk/triggers/clear/mcast/clear.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 67 | 2018-12-06T21:08:09.000Z | 2022-03-29T18:00:46.000Z | pkgs/sdk-pkg/src/genie/libs/sdk/triggers/clear/mcast/clear.py | miott/genielibs | 6464642cdd67aa2367bdbb12561af4bb060e5e62 | [
"Apache-2.0"
] | 49 | 2018-06-29T18:59:03.000Z | 2022-03-10T02:07:59.000Z | '''Common implementation for routing clear triggers'''
# python
from functools import partial
# genie libs
from genie.libs.sdk.libs.utils.mapping import Mapping
from genie.libs.sdk.triggers.clear.clear import TriggerClear, verify_clear_callable
from genie.libs.sdk.libs.utils.triggeractions import CompareUptime
# Keys to ignore when doing the diff with Ops objects for save_snapshot and
# verify_clear; this list will be used by the LearnPollDiff.ops_diff callable
exclude = ['maker', 'uptime']
class TriggerClearIpMroute(TriggerClear):
# Argument with dynamic value for verify callable
# As verify callable can be re-used in multiple triggers
# with different variable names. This dictionary is used to map
# dynamic argument name to actual script argument name
# <expected argument_name for callable>: <script argument name>
verify_func_args={'r_obj': [['table', 'vrf', '(?P<vrf>^default$)',
'address_family', '(?P<af>ipv4)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)',
'uptime', '(.*)']],
'relation': '<',
'threshold_time': 'compare_time',
'ops': 'ops'}
mapping = Mapping(requirements={'ops.mcast.mcast.Mcast':{
'requirements':[\
['table', 'vrf', '(?P<vrf>^default$)',
'address_family', '(?P<af>ipv4)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)', 'uptime', '(?P<uptime>.*)']],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv4][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
verify_ops={'ops.mcast.mcast.Mcast':{
'requirements':[[partial(verify_clear_callable,
verify_func=CompareUptime.compare_uptime,
verify_func_args=verify_func_args)]],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv4][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
num_values={'vrf': 'all', 'route':'all',
'af':'all'})
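# --- Illustrative sketch (not genielibs API): with relation '<' and
# threshold_time 'compare_time', the verification above reduces to a
# comparison like the hypothetical helper below.
def uptime_reset(uptime_after_clear, seconds_since_clear):
    # After 'clear ip mroute' each route's uptime restarts, so a freshly
    # polled uptime must be smaller than the time elapsed since the clear.
    return uptime_after_clear < seconds_since_clear

assert uptime_reset(5, 30)         # uptime restarted -> clear verified
assert not uptime_reset(300, 30)   # uptime kept counting -> clear failed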
class TriggerClearIpv6Mroute(TriggerClear):
# Argument with dynamic value for verify callable
# As verify callable can be re-used in multiple triggers
# with different variable names. This dictionary is used to map
# dynamic argument name to actual script argument name
# <expected argument_name for callable>: <script argument name>
verify_func_args={'r_obj': [['table', 'vrf', '(?P<vrf>^default$)',
'address_family', '(?P<af>ipv6)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)',
'uptime', '(.*)']],
'relation': '<',
'threshold_time': 'compare_time',
'ops': 'ops'}
mapping = Mapping(requirements={'ops.mcast.mcast.Mcast':{
'requirements':[\
['table', 'vrf', '(?P<vrf>^default$)',
'address_family', '(?P<af>ipv6)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)', 'uptime', '(?P<uptime>.*)']],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv6][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
verify_ops={'ops.mcast.mcast.Mcast':{
'requirements':[[partial(verify_clear_callable,
verify_func=CompareUptime.compare_uptime,
verify_func_args=verify_func_args)]],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv6][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
num_values={'vrf': 'all', 'route':'all',
'af':'all'})
class TriggerClearIpMrouteVrfAll(TriggerClear):
# Argument with dynamic value for verify callable
# As verify callable can be re-used in multiple triggers
# with different variable names. This dictionary is used to map
# dynamic argument name to actual script argument name
# <expected argument_name for callable>: <script argument name>
verify_func_args={'r_obj': [['table', 'vrf', '(?P<vrf>.*)',
'address_family', '(?P<af>ipv4)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)',
'uptime', '(.*)']],
'relation': '<',
'threshold_time': 'compare_time',
'ops': 'ops'}
mapping = Mapping(requirements={'ops.mcast.mcast.Mcast':{
'requirements':[\
['table', 'vrf', '(?P<vrf>.*)',
'address_family', '(?P<af>ipv4)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)', 'uptime', '(?P<uptime>.*)']],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv4][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
verify_ops={'ops.mcast.mcast.Mcast':{
'requirements':[[partial(verify_clear_callable,
verify_func=CompareUptime.compare_uptime,
verify_func_args=verify_func_args)]],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv4][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
num_values={'vrf': 'all', 'route':'all',
'af':'all'})
class TriggerClearIpv6MrouteVrfAll(TriggerClear):
# Argument with dynamic value for verify callable
# As verify callable can be re-used in multiple triggers
# with different variable names. This dictionary is used to map
# dynamic argument name to actual script argument name
# <expected argument_name for callable>: <script argument name>
verify_func_args={'r_obj': [['table', 'vrf', '(?P<vrf>.*)',
'address_family', '(?P<af>ipv6)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)',
'uptime', '(.*)']],
'relation': '<',
'threshold_time': 'compare_time',
'ops': 'ops'}
mapping = Mapping(requirements={'ops.mcast.mcast.Mcast':{
'requirements':[\
['table', 'vrf', '(?P<vrf>.*)',
'address_family', '(?P<af>ipv6)',
'multicast_group', '(?P<group>.*)',
'source_address', '(?P<source>.*)', 'uptime', '(?P<uptime>.*)']],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv6][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
verify_ops={'ops.mcast.mcast.Mcast':{
'requirements':[[partial(verify_clear_callable,
verify_func=CompareUptime.compare_uptime,
verify_func_args=verify_func_args)]],
'kwargs':{'attributes': [
'table[vrf][(.*)][address_family][ipv6][multicast_group][(.*)][source_address][(.*)]']},
'exclude': exclude}},
num_values={'vrf': 'all', 'route':'all',
'af':'all'})
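# --- Observation: the four trigger classes above differ only in the address
# family ('ipv4' vs 'ipv6') and the vrf pattern ('^default$' vs '.*'). A
# hypothetical helper (not genielibs API) could derive the shared requirement
# path from those two knobs:
def mroute_path(af, vrf_pattern):
    # Returns the 'requirements' path repeated in every class above.
    return ['table', 'vrf', '(?P<vrf>{})'.format(vrf_pattern),
            'address_family', '(?P<af>{})'.format(af),
            'multicast_group', '(?P<group>.*)',
            'source_address', '(?P<source>.*)',
            'uptime', '(?P<uptime>.*)']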
| 60.35443 | 135 | 0.419987 | 714 | 9,536 | 5.445378 | 0.131653 | 0.049383 | 0.074074 | 0.024691 | 0.887088 | 0.887088 | 0.874228 | 0.874228 | 0.874228 | 0.874228 | 0 | 0.003358 | 0.437815 | 9,536 | 157 | 136 | 60.738854 | 0.72188 | 0.138842 | 0 | 0.920354 | 0 | 0 | 0.285836 | 0.101674 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035398 | 0 | 0.141593 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e3a872863e9455ea613640165810a9c6d88e295a | 60,121 | py | Python | parser/fase2/team09/Instrucciones/Expresiones/Aritmetica.py | Gabriel-15/tytus | fb00718bf3fcc5211a3604fba1a551f44bdc6deb | [
"MIT"
] | 35 | 2020-12-07T03:11:43.000Z | 2021-04-15T17:38:16.000Z | parser/fase2/team09/Instrucciones/Expresiones/Aritmetica.py | Gabriel-15/tytus | fb00718bf3fcc5211a3604fba1a551f44bdc6deb | [
"MIT"
] | 47 | 2020-12-09T01:29:09.000Z | 2021-01-13T05:37:50.000Z | parser/fase2/team09/Instrucciones/Expresiones/Aritmetica.py | Gabriel-15/tytus | fb00718bf3fcc5211a3604fba1a551f44bdc6deb | [
"MIT"
] | 556 | 2020-12-07T03:13:31.000Z | 2021-06-17T17:41:10.000Z | from Instrucciones.TablaSimbolos.Instruccion import Instruccion
from Instrucciones.TablaSimbolos.Tipo import Tipo_Dato, Tipo
from Instrucciones.Excepcion import Excepcion
from Instrucciones.C3D.temporal import temporal
class Aritmetica(Instruccion):
def __init__(self, opIzq, opDer, operador, strGram, linea, columna):
Instruccion.__init__(self,None,linea,columna,strGram)
self.opIzq = opIzq
self.opDer = opDer
self.operador = operador
def ejecutar(self, tabla, arbol):
super().ejecutar(tabla,arbol)
# Operation with two operands
if(self.opDer != None):
# If evaluating the left operand produced an error, return that error.
resultadoIzq = self.opIzq.ejecutar(tabla, arbol)
if isinstance(resultadoIzq, Excepcion):
return resultadoIzq
# If evaluating the right operand produced an error, return that error.
resultadoDer = self.opDer.ejecutar(tabla, arbol)
if isinstance(resultadoDer, Excepcion):
return resultadoDer
# Check which operator we have
if self.operador == '+':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq + resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.CHAR and self.opDer.tipo.tipo == Tipo_Dato.CHAR:
self.tipo = Tipo(Tipo_Dato.CHAR)
return resultadoIzq + resultadoDer
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" + "+self.opDer.tipo.toString(),self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
elif self.operador == '-':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq - resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq - resultadoDer
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" - "+self.opDer.tipo.toString(),self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
elif self.operador == '*':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq * resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq * resultadoDer
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" - "+self.opDer.tipo.toString(),self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
elif self.operador == '/':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.INTEGER)
return resultadoIzq // resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq / resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq / resultadoDer
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" / "+self.opDer.tipo.toString(),self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
elif self.operador == '^':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq ** resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return resultadoIzq ** resultadoDer
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" ^ "+self.opDer.tipo.toString(),self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
elif self.operador == '%':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","División entera o módulo por cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.INTEGER)
return resultadoIzq % resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","División entera o módulo por cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq % resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","División entera o módulo por cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq % resultadoDer
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","División entera o módulo por cero",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return resultadoIzq % resultadoDer
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" % "+self.opDer.tipo.toString(),self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
else:
error = Excepcion('42883',"Semántico","Operador desconocido.",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
# Unary operation
else:
# If evaluating the left operand produced an error, return that error.
resultadoIzq = self.opIzq.ejecutar(tabla, arbol)
if isinstance(resultadoIzq, Excepcion):
return resultadoIzq
if self.operador == '-':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
return -1 * resultadoIzq
if self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
return -1.0 * resultadoIzq
if self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
return -1.0 * resultadoIzq
else:
error = Excepcion('42883',"Semántico","Tipo de datos incorrectos en la operación negativo",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
else:
error = Excepcion('42883',"Semántico","Operador desconocido.",self.linea,self.columna)
arbol.excepciones.append(error)
arbol.consola.append(error.toString())
return error
return None
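# --- Illustrative sketch (not part of the interpreter): the branches above
# implement an implicit type-promotion table. A hypothetical helper with the
# same rules (CHAR + CHAR concatenation and INTEGER ^ INTEGER, which promotes
# to DOUBLE_PRECISION, are handled as special cases in the code above):
def promote(t_left, t_right):
    # DOUBLE_PRECISION absorbs NUMERIC and INTEGER, NUMERIC absorbs INTEGER,
    # and two INTEGERs stay INTEGER -- the pattern repeated branch by branch.
    order = [Tipo_Dato.INTEGER, Tipo_Dato.NUMERIC, Tipo_Dato.DOUBLE_PRECISION]
    return order[max(order.index(t_left), order.index(t_right))]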
#******************** phase 2 translation *****************
def traducir(self, tabla, controlador):
codigo =''
if(self.opDer != None):
# If evaluating the left operand produced an error, return that error.
resultadoIzq = self.opIzq.traducir(tabla, controlador)
if isinstance(resultadoIzq, Excepcion):
return resultadoIzq
# If evaluating the right operand produced an error, return that error.
resultadoDer = self.opDer.traducir(tabla, controlador)
if isinstance(resultadoDer, Excepcion):
return resultadoDer
temp_izq = resultadoIzq.get_temp()
temp_der = resultadoDer.get_temp()
controlador.cont_temp = controlador.cont_temp + 1
temp_resultado = temporal(controlador.cont_temp,None)
# Check which operator we have
if self.operador == '+':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
temp_resultado.Tipo = Tipo(Tipo_Dato.INTEGER)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.CHAR and self.opDer.tipo.tipo == Tipo_Dato.CHAR:
self.tipo = Tipo(Tipo_Dato.CHAR)
temp_resultado.Tipo = Tipo(Tipo_Dato.CHAR)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' + ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" + "+self.opDer.tipo.toString(),self.linea,self.columna)
#arbol.excepciones.append(error)
#arbol.consola.append(error.toString())
return error
elif self.operador == '-':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
temp_resultado.Tipo = Tipo(Tipo_Dato.INTEGER)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' - ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" - "+self.opDer.tipo.toString(),self.linea,self.columna)
#arbol.excepciones.append(error)
#arbol.consola.append(error.toString())
return error
elif self.operador == '*':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
temp_resultado.Tipo = Tipo(Tipo_Dato.INTEGER)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' * ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" - "+self.opDer.tipo.toString(),self.linea,self.columna)
#arbol.excepciones.append(error)
#arbol.consola.append(error.toString())
return error
elif self.operador == '/':
# Check that the divisor is not 0; division by zero is an error
if resultadoDer == 0:
error = Excepcion('42883',"Semántico","No se puede dividir entre cero",self.linea,self.columna)
return error
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.INTEGER)
temp_resultado.Tipo = Tipo(Tipo_Dato.INTEGER)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' / ' + str(temp_der)
controlador.append_3d(codigo)
return temp_resultado
else:
error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" / "+self.opDer.tipo.toString(),self.linea,self.columna)
#arbol.excepciones.append(error)
#arbol.consola.append(error.toString())
return error
elif self.operador == '^':
if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
lv = controlador.get_etiqueta()
lf = controlador.get_etiqueta()
lr = controlador.get_etiqueta()
controlador.cont_temp = controlador.cont_temp +1
temp = temporal(controlador.cont_temp,None)
codigo += ' # operacion de potencia \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
codigo += ' '+'label .'+str(lr) + ' \n'
codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
codigo += ' '+' goto .'+ str(lv) + ' \n \n'
codigo += ' '+'goto .'+ str(lf) +'\n'
codigo += ' '+'label .'+ str(lv) + ' \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
codigo += ' '+'goto .'+str(lr) + '\n'
codigo += ' '+'label .' + str(lf) + '\n'
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
lv = controlador.get_etiqueta()
lf = controlador.get_etiqueta()
lr = controlador.get_etiqueta()
controlador.cont_temp = controlador.cont_temp +1
temp = temporal(controlador.cont_temp,None)
codigo += ' # operacion de potencia \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
codigo += ' '+'label .'+str(lr) + ' \n'
codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
codigo += ' '+' goto .'+ str(lv) + ' \n \n'
codigo += ' '+'goto .'+ str(lf) +'\n'
codigo += ' '+'label .'+ str(lv) + ' \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
codigo += ' '+'goto .'+str(lr) + '\n'
codigo += ' '+'label .' + str(lf) + '\n'
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
lv = controlador.get_etiqueta()
lf = controlador.get_etiqueta()
lr = controlador.get_etiqueta()
controlador.cont_temp = controlador.cont_temp +1
temp = temporal(controlador.cont_temp,None)
codigo += ' # operacion de potencia \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
codigo += ' '+'label .'+str(lr) + ' \n'
codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
codigo += ' '+' goto .'+ str(lv) + ' \n \n'
codigo += ' '+'goto .'+ str(lf) +'\n'
codigo += ' '+'label .'+ str(lv) + ' \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
codigo += ' '+'goto .'+str(lr) + '\n'
codigo += ' '+'label .' + str(lf) + '\n'
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
self.tipo = Tipo(Tipo_Dato.NUMERIC)
temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
lv = controlador.get_etiqueta()
lf = controlador.get_etiqueta()
lr = controlador.get_etiqueta()
controlador.cont_temp = controlador.cont_temp +1
temp = temporal(controlador.cont_temp,None)
codigo += ' # operacion de potencia \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
codigo += ' '+'label .'+str(lr) + ' \n'
codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
codigo += ' '+' goto .'+ str(lv) + ' \n \n'
codigo += ' '+'goto .'+ str(lf) +'\n'
codigo += ' '+'label .'+ str(lv) + ' \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
codigo += ' '+'goto .'+str(lr) + '\n'
codigo += ' '+'label .' + str(lf) + '\n'
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
lv = controlador.get_etiqueta()
lf = controlador.get_etiqueta()
lr = controlador.get_etiqueta()
controlador.cont_temp = controlador.cont_temp +1
temp = temporal(controlador.cont_temp,None)
codigo += ' # operacion de potencia \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
codigo += ' '+'label .'+str(lr) + ' \n'
codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
codigo += ' '+' goto .'+ str(lv) + ' \n \n'
codigo += ' '+'goto .'+ str(lf) +'\n'
codigo += ' '+'label .'+ str(lv) + ' \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
codigo += ' '+'goto .'+str(lr) + '\n'
codigo += ' '+'label .' + str(lf) + '\n'
controlador.append_3d(codigo)
return temp_resultado
elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
lv = controlador.get_etiqueta()
lf = controlador.get_etiqueta()
lr = controlador.get_etiqueta()
controlador.cont_temp = controlador.cont_temp +1
temp = temporal(controlador.cont_temp,None)
codigo += ' # operacion de potencia \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
codigo += ' '+'label .'+str(lr) + ' \n'
codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
codigo += ' '+' goto .'+ str(lv) + ' \n \n'
codigo += ' '+'goto .'+ str(lf) +'\n'
codigo += ' '+'label .'+ str(lv) + ' \n'
codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
codigo += ' '+'goto .'+str(lr) + '\n'
codigo += ' '+'label .' + str(lf) + '\n'
controlador.append_3d(codigo)
return temp_resultado
                elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
                    self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    lv = controlador.get_etiqueta()
                    lf = controlador.get_etiqueta()
                    lr = controlador.get_etiqueta()
                    controlador.cont_temp = controlador.cont_temp +1
                    temp = temporal(controlador.cont_temp,None)
                    codigo += ' # operacion de potencia \n'
                    codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
                    codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
                    codigo += ' '+'label .'+str(lr) + ' \n'
                    codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
                    codigo += ' '+' goto .'+ str(lv) + ' \n \n'
                    codigo += ' '+'goto .'+ str(lf) +'\n'
                    codigo += ' '+'label .'+ str(lv) + ' \n'
                    codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
                    codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
                    codigo += ' '+'goto .'+str(lr) + '\n'
                    codigo += ' '+'label .' + str(lf) + '\n'
                    controlador.append_3d(codigo)
                    return temp_resultado
                elif self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
                    self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    lv = controlador.get_etiqueta()
                    lf = controlador.get_etiqueta()
                    lr = controlador.get_etiqueta()
                    controlador.cont_temp = controlador.cont_temp +1
                    temp = temporal(controlador.cont_temp,None)
                    codigo += ' # operacion de potencia \n'
                    codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
                    codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
                    codigo += ' '+'label .'+str(lr) + ' \n'
                    codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
                    codigo += ' '+' goto .'+ str(lv) + ' \n \n'
                    codigo += ' '+'goto .'+ str(lf) +'\n'
                    codigo += ' '+'label .'+ str(lv) + ' \n'
                    codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
                    codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
                    codigo += ' '+'goto .'+str(lr) + '\n'
                    codigo += ' '+'label .' + str(lf) + '\n'
                    controlador.append_3d(codigo)
                    return temp_resultado
                elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
                    self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    lv = controlador.get_etiqueta()
                    lf = controlador.get_etiqueta()
                    lr = controlador.get_etiqueta()
                    controlador.cont_temp = controlador.cont_temp +1
                    temp = temporal(controlador.cont_temp,None)
                    codigo += ' # operacion de potencia \n'
                    codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' #base potencia \n'
                    codigo += ' '+str(temp.get_temp()) + ' = 1 #contador de potencia \n'
                    codigo += ' '+'label .'+str(lr) + ' \n'
                    codigo += ' '+'if('+str(temp.get_temp()) + ' < ' + str(temp_der) +'): \n'
                    codigo += ' '+' goto .'+ str(lv) + ' \n \n'
                    codigo += ' '+'goto .'+ str(lf) +'\n'
                    codigo += ' '+'label .'+ str(lv) + ' \n'
                    codigo += ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_resultado.get_temp()) + ' * ' + str(temp_izq) + '\n'
                    codigo += ' '+str(temp.get_temp()) + ' = ' + str(temp.get_temp()) + '+ 1 \n'
                    codigo += ' '+'goto .'+str(lr) + '\n'
                    codigo += ' '+'label .' + str(lf) + '\n'
                    controlador.append_3d(codigo)
                    return temp_resultado
                else:
                    error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" ^ "+self.opDer.tipo.toString(),self.linea,self.columna)
                    #arbol.excepciones.append(error)
                    #arbol.consola.append(error.toString())
                    return error
            elif self.operador == '%':
                # Check that the right operand (opDer) is not equal to 0.
                if resultadoDer == 0:
                    error = Excepcion('42883',"Semántico","División entera o módulo por cero",self.linea,self.columna)
                    return error
                if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
                    self.tipo = Tipo(Tipo_Dato.INTEGER)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.INTEGER)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' % ' + str(temp_der)
                    controlador.append_3d(codigo)
                    return temp_resultado
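                # Unlike '^', which has no single three-address instruction and
                # is therefore lowered to the counting loop above, '%' maps
                # one-to-one onto the target language: a single emitted line of
                # the form t3 = t1 % t2 (temporary names hypothetical).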
                elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
                    self.tipo = Tipo(Tipo_Dato.NUMERIC)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' % ' + str(temp_der)
                    controlador.append_3d(codigo)
                    return temp_resultado
                elif self.opIzq.tipo.tipo == Tipo_Dato.INTEGER and self.opDer.tipo.tipo == Tipo_Dato.NUMERIC:
                    self.tipo = Tipo(Tipo_Dato.NUMERIC)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' % ' + str(temp_der)
                    controlador.append_3d(codigo)
                    return temp_resultado
                elif self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC and self.opDer.tipo.tipo == Tipo_Dato.INTEGER:
                    self.tipo = Tipo(Tipo_Dato.NUMERIC)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = ' + str(temp_izq) + ' % ' + str(temp_der)
                    controlador.append_3d(codigo)
                    return temp_resultado
                else:
                    error = Excepcion('42883',"Semántico","el operador no existe: "+self.opIzq.tipo.toString()+" % "+self.opDer.tipo.toString(),self.linea,self.columna)
                    #arbol.excepciones.append(error)
                    #arbol.consola.append(error.toString())
                    return error
            else:
                error = Excepcion('42883',"Semántico","Operador desconocido.",self.linea,self.columna)
                #arbol.excepciones.append(error)
                #arbol.consola.append(error.toString())
                return error
        # Unary operation
        else:
            # If there is any error in the left operand, return the error.
            resultadoIzq = self.opIzq.traducir(tabla, controlador)
            if isinstance(resultadoIzq, Excepcion):
                return resultadoIzq
            temp_izq = resultadoIzq.get_temp()
            controlador.cont_temp = controlador.cont_temp + 1
            temp_resultado = temporal(controlador.cont_temp,None)
            if self.operador == '-':
                if self.opIzq.tipo.tipo == Tipo_Dato.INTEGER:
                    self.tipo = Tipo(Tipo_Dato.INTEGER)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.INTEGER)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = 0 -' + str(temp_izq)
                    controlador.append_3d(codigo)
                    return temp_resultado
                if self.opIzq.tipo.tipo == Tipo_Dato.NUMERIC:
                    self.tipo = Tipo(Tipo_Dato.NUMERIC)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.NUMERIC)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = 0 -' + str(temp_izq)
                    controlador.append_3d(codigo)
                    return temp_resultado
                if self.opIzq.tipo.tipo == Tipo_Dato.DOUBLE_PRECISION:
                    self.tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    temp_resultado.Tipo = Tipo(Tipo_Dato.DOUBLE_PRECISION)
                    codigo = ' '+str(temp_resultado.get_temp()) + ' = 0 -' + str(temp_izq)
                    controlador.append_3d(codigo)
                    return temp_resultado
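                # Negation is emitted as a subtraction from zero, producing a
                # single instruction such as t3 = 0 -t1 (temporary names
                # hypothetical), since the three-address form used here has no
                # dedicated unary minus operator.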
                else:
                    error = Excepcion('42883',"Semántico","Tipo de datos incorrectos en la operación negativo",self.linea,self.columna)
                    return error
            else:
                error = Excepcion('42883',"Semántico","Operador desconocido.",self.linea,self.columna)
                return error
return None | 68.010181 | 168 | 0.531728 | 6,191 | 60,121 | 4.996608 | 0.018414 | 0.188789 | 0.141592 | 0.188789 | 0.980345 | 0.977404 | 0.972813 | 0.972813 | 0.972037 | 0.972037 | 0 | 0.007044 | 0.353038 | 60,121 | 884 | 169 | 68.010181 | 0.788256 | 0.020143 | 0 | 0.975699 | 0 | 0 | 0.064894 | 0 | 0.076549 | 0 | 0 | 0 | 0 | 1 | 0.003645 | false | 0 | 0.00486 | 0 | 0.188335 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e3c228a6df853df4520cccf6695151c8d2ad8532 | 8,546 | py | Python | tests/test_concentric_neutral_cable.py | opusonesolutions/carsons | b1d6254f5becc093855802b17fecf608fd8e49b9 | [
"MIT"
] | 13 | 2018-10-12T18:31:32.000Z | 2021-05-12T02:29:57.000Z | tests/test_concentric_neutral_cable.py | opusonesolutions/carsons | b1d6254f5becc093855802b17fecf608fd8e49b9 | [
"MIT"
] | 36 | 2018-10-12T20:11:02.000Z | 2020-03-12T14:49:54.000Z | tests/test_concentric_neutral_cable.py | opusonesolutions/carsons | b1d6254f5becc093855802b17fecf608fd8e49b9 | [
"MIT"
] | 6 | 2018-12-24T00:57:49.000Z | 2021-04-10T02:03:12.000Z | import pint
from numpy import array
from numpy.testing import assert_array_almost_equal
from carsons import ConcentricNeutralCarsonsEquations, calculate_impedance
from tests.helpers import ConcentricLineModel
from tests.test_overhead_line import OHM_PER_MILE_TO_OHM_PER_METER
ureg = pint.UnitRegistry()
feet = ureg.feet
inches = ureg.inches
miles = ureg.miles
ohms = ureg.ohms
kft = ureg.feet * 1000
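
# Every test below uses the same pint idiom to turn a catalogue value into a
# plain SI float for numpy: build a quantity, convert it with .to(), and
# unwrap it with .magnitude. A minimal sketch (the 0.41 ohm/mile resistance is
# taken from the first test; the variable name is illustrative only):
#
#     resistance_si = (0.4100 * (ohms / miles)).to('ohm / meters').magnitude

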
def test_concentric_neutral_cable():
    """
    Validation test against example in Kersting's book.
    """
    model = ConcentricNeutralCarsonsEquations(ConcentricLineModel({
        "A": {
            'resistance': (0.4100*(ohms / miles)).to('ohm / meters').magnitude,
            'gmr': (0.0171*feet).to('meters').magnitude,
            'wire_positions': (0, 0)
        },
        "B": {
            'resistance': (0.4100*(ohms / miles)).to('ohm / meters').magnitude,
            'gmr': (0.0171*feet).to('meters').magnitude,
            'wire_positions': ((6*inches).to('meters').magnitude, 0)
        },
        "C": {
            'resistance': (0.4100*(ohms / miles)).to('ohm / meters').magnitude,
            'gmr': (0.0171*feet).to('meters').magnitude,
            'wire_positions': ((12*inches).to('meters').magnitude, 0)
        },
        "NA": {
            'neutral_strand_gmr': (0.00208*feet).to('meters').magnitude,
            'neutral_strand_resistance':
                (14.87*ohms / miles).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.0641*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.29*inches).to('meters').magnitude,
            'neutral_strand_count': 13,
        },
        "NB": {
            'neutral_strand_gmr': (0.00208*feet).to('meters').magnitude,
            'neutral_strand_resistance':
                (14.87*ohms / miles).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.0641*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.29*inches).to('meters').magnitude,
            'neutral_strand_count': 13,
        },
        "NC": {
            'neutral_strand_gmr': (0.00208*feet).to('meters').magnitude,
            'neutral_strand_resistance':
                (14.87*ohms / miles).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.0641*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.29*inches).to('meters').magnitude,
            'neutral_strand_count': 13,
        },
    }))
    assert_array_almost_equal(
        calculate_impedance(model),
        array([
            [0.7981 + 1j*0.4467, 0.3188 + 1j*0.0334, 0.2848 + 1j*0.0138],
            [0.3188 + 1j*0.0334, 0.7890 + 1j*0.4048, 0.3188 + 1j*0.0334],
            [0.2848 + 1j*0.0138, 0.3188 + 1j*0.0334, 0.7981 + 1j*0.4467],
        ]) * OHM_PER_MILE_TO_OHM_PER_METER,
        decimal=4
    )


def test_concentric_neutral_cable_IEEE37():
    """
    Validation test against IEEE37 network underground cable configuration 723.
    """
    model = ConcentricNeutralCarsonsEquations(ConcentricLineModel({
        "A": {
            'resistance': (0.7690 * (ohms/miles)).to('ohm / meters').magnitude,
            'gmr': (0.0125 * feet).to('meters').magnitude,
            'wire_positions': (0, 0)
        },
        "B": {
            'resistance': (0.7690 * (ohms/miles)).to('ohm / meters').magnitude,
            'gmr': (0.0125 * feet).to('meters').magnitude,
            'wire_positions': ((6 * inches).to('meters').magnitude, 0)
        },
        "C": {
            'resistance': (0.7690 * (ohms/miles)).to('ohm / meters').magnitude,
            'gmr': (0.0125 * feet).to('meters').magnitude,
            'wire_positions': ((12 * inches).to('meters').magnitude, 0)
        },
        "NA": {
            'neutral_strand_gmr': (0.00208 * feet).to('meters').magnitude,
            'neutral_strand_resistance':
                (14.87 * ohms / miles).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.0641*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.10 * inches).to('meters').magnitude,
            'neutral_strand_count': 7,
        },
        "NB": {
            'neutral_strand_gmr': (0.00208 * feet).to('meters').magnitude,
            'neutral_strand_resistance':
                (14.87 * ohms / miles).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.0641*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.10 * inches).to('meters').magnitude,
            'neutral_strand_count': 7,
        },
        "NC": {
            'neutral_strand_gmr': (0.00208 * feet).to('meters').magnitude,
            'neutral_strand_resistance':
                (14.87 * ohms / miles).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.0641*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.10 * inches).to('meters').magnitude,
            'neutral_strand_count': 7,
        },
    }))
    assert_array_almost_equal(
        calculate_impedance(model),
        array([
            [1.2936 + 1j*0.6713, 0.4871 + 1j*0.2111, 0.4585 + 1j*0.1521],
            [0.4871 + 1j*0.2111, 1.3022 + 1j*0.6326, 0.4871 + 1j*0.2111],
            [0.4585 + 1j*0.1521, 0.4871 + 1j*0.2111, 1.2936 + 1j*0.6713],
        ]) * OHM_PER_MILE_TO_OHM_PER_METER,
        decimal=4
    )


def test_2ph_concentric_neutral_cable():
    """
    Validation test against OpenDSS example found in documentation
    http://svn.code.sf.net/p/electricdss/code/trunk/Distrib/Doc/
    'TechNote CableModelling.pdf' - Practical Example: Concentric Neutral Cable
    """
    model = ConcentricNeutralCarsonsEquations(ConcentricLineModel({
        "A": {
            'resistance': (0.0776 * (ohms/kft)).to('ohm / meters').magnitude,
            'gmr': (0.205 * inches).to('meters').magnitude,
            'wire_positions': (0, 0)
        },
        "B": {
            'resistance': (0.0776 * (ohms/kft)).to('ohm / meters').magnitude,
            'gmr': (0.205 * inches).to('meters').magnitude,
            'wire_positions': ((6 * inches).to('meters').magnitude, 0)
        },
        "NA": {
            'neutral_strand_gmr': (0.02496 * inches).to('meters').magnitude,
            'neutral_strand_resistance':
                (2.55 * (ohms/kft)).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.064*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.29 * inches).to('meters').magnitude,
            'neutral_strand_count': 13,
        },
        "NB": {
            'neutral_strand_gmr': (0.02496 * inches).to('meters').magnitude,
            'neutral_strand_resistance':
                (2.55 * (ohms/kft)).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.064*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.29 * inches).to('meters').magnitude,
            'neutral_strand_count': 13,
        },
    }))
    assert_array_almost_equal(
        calculate_impedance(model),
        array([
            [0.867953 + 1j*0.442045, 0.389392 + 1j*0.0511399, 0 + 1j * 0],
            [0.389392 + 1j*0.0511399, 0.867953 + 1j*0.442045, 0 + 1j * 0],
            [0 + 1j * 0, 0 + 1j * 0, 0 + 1j * 0],
        ]) * OHM_PER_MILE_TO_OHM_PER_METER,
        decimal=4
    )


def test_1ph_concentric_neutral_cable():
    """
    Validation test against OpenDSS example found in documentation
    http://svn.code.sf.net/p/electricdss/code/trunk/Distrib/Doc/
    'TechNote CableModelling.pdf' - Practical Example: Concentric Neutral Cable
    """
    model = ConcentricNeutralCarsonsEquations(ConcentricLineModel({
        "A": {
            'resistance': (0.0776 * (ohms/kft)).to('ohm / meters').magnitude,
            'gmr': (0.205 * inches).to('meters').magnitude,
            'wire_positions': (0, 0)
        },
        "NA": {
            'neutral_strand_gmr': (0.02496 * inches).to('meters').magnitude,
            'neutral_strand_resistance':
                (2.55 * (ohms/kft)).to('ohm / meters').magnitude,
            'neutral_strand_diameter': (0.064*inches).to('meters').magnitude,
            'diameter_over_neutral': (1.29 * inches).to('meters').magnitude,
            'neutral_strand_count': 13,
        },
    }))
    assert_array_almost_equal(
        calculate_impedance(model),
        array([
            [1.04185 + 1j*0.602329, 0 + 1j * 0, 0 + 1j * 0],
            [0 + 1j * 0, 0 + 1j * 0, 0 + 1j * 0],
            [0 + 1j * 0, 0 + 1j * 0, 0 + 1j * 0],
        ]) * OHM_PER_MILE_TO_OHM_PER_METER,
        decimal=4
    )
| 39.201835 | 79 | 0.561783 | 981 | 8,546 | 4.724771 | 0.133537 | 0.190939 | 0.150378 | 0.143905 | 0.904639 | 0.888889 | 0.840345 | 0.835383 | 0.835383 | 0.824164 | 0 | 0.097938 | 0.273578 | 8,546 | 217 | 80 | 39.382488 | 0.648679 | 0.061666 | 0 | 0.734463 | 0 | 0 | 0.213422 | 0.078192 | 0 | 0 | 0 | 0 | 0.028249 | 1 | 0.022599 | false | 0 | 0.033898 | 0 | 0.056497 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e3d5feb61feb5de86cb9387c7a55eb3f75afc824 | 121,312 | py | Python | Tests/test_Align_psl.py | singing-scientist/biopython | 27b7663ddf89a100427f5ff6939848b87cb1195d | [
"BSD-3-Clause"
] | 1 | 2022-02-08T23:44:30.000Z | 2022-02-08T23:44:30.000Z | Tests/test_Align_psl.py | singing-scientist/biopython | 27b7663ddf89a100427f5ff6939848b87cb1195d | [
"BSD-3-Clause"
] | null | null | null | Tests/test_Align_psl.py | singing-scientist/biopython | 27b7663ddf89a100427f5ff6939848b87cb1195d | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2022 by Michiel de Hoon. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Tests for Align.psl module."""
import unittest
from io import StringIO
from Bio.Align import Alignment, psl
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import SeqIO
try:
    import numpy
except ImportError:
    from Bio import MissingPythonDependencyError

    raise MissingPythonDependencyError(
        "Install numpy if you want to use Bio.Align.psl."
    ) from None
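

# A minimal usage sketch of the psl API exercised throughout this module.
# The helper below is illustrative only and is never called by the tests;
# the default path is one of the test files parsed further down.
def _example_psl_usage(path="Blat/psl_34_003.psl"):
    """Print basic statistics for each alignment in a PSL file (sketch)."""
    alignments = psl.AlignmentIterator(path)
    for alignment in alignments:
        print(alignment.matches, alignment.misMatches, alignment.coordinates)

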
class TestAlign_dna_rna(unittest.TestCase):
    # The PSL file dna_rna.psl was generated using this command:
    # blat -mask=lower dna.fa rna.fa dna_rna.psl
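    # (-mask=lower makes blat treat lower-case, i.e. soft-masked, bases as
    # repeats; assuming that behaviour, the checks below recover repMatches
    # from the case-swapped entries of the substitution matrix.)
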
    def setUp(self):
        records = SeqIO.parse("Blat/dna.fa", "fasta")
        self.dna = {record.id: record.seq for record in records}
        records = SeqIO.parse("Blat/rna.fa", "fasta")
        self.rna = {record.id: record.seq for record in records}

    def test_reading(self):
        """Test parsing dna_rna.psl."""
        path = "Blat/dna_rna.psl"
        alignments = psl.AlignmentIterator(path)
        self.assertEqual(alignments.metadata["version"], "3")
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 207)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 3707))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chrY:22316600-22320400")
        self.assertEqual(alignment.query.id, "NR_104151.1")
        self.assertEqual(len(alignment.target.seq), 3800)
        self.assertEqual(len(alignment.query.seq), 207)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[ 77, 147, 2182, 2297, 3762, 3784],
                             [ 0, 70, 70, 185, 185, 207]]),
                # fmt: on
            )
        )
        alignment.target.seq = self.dna[alignment.target.id]
        alignment.query.seq = self.rna[alignment.query.id]
        self.assertTrue(
            numpy.array_equal(
                alignment.substitutions,
                # fmt: off
                # flake8: noqa
                numpy.array([[64., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 44., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 52., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 47., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0.],
                             ])
            )
        )
        self.assertEqual(alignment.substitutions.alphabet, "ACGTacgt")
        matches = sum(
            alignment.substitutions[c, c] for c in alignment.substitutions.alphabet
        )
        repMatches = sum(
            alignment.substitutions[c, c.swapcase()]
            for c in alignment.substitutions.alphabet
        )
        self.assertEqual(matches, alignment.matches)
        self.assertEqual(repMatches, alignment.repMatches)
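        # Sanity check on the numbers above, worked out by hand: the diagonal
        # of the substitution matrix sums to 64 + 44 + 52 + 47 = 207, which is
        # exactly the `matches` field of the PSL record, and there are no
        # case-swapped (soft-masked) pairs, so repMatches is 0.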
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 175)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 6)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 1711))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr3:42530800-42532700")
        self.assertEqual(alignment.query.id, "NR_046654.1")
        self.assertEqual(len(alignment.target.seq), 1900)
        self.assertEqual(len(alignment.query.seq), 181)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[ 95, 158, 1220, 1295, 1763, 1806],
                             [ 181, 118, 118, 43, 43, 0]]),
                # fmt: on
            )
        )
        alignment.target.seq = self.dna[alignment.target.id]
        alignment.query.seq = self.rna[alignment.query.id]
        self.assertTrue(
            numpy.array_equal(
                alignment.substitutions,
                # fmt: off
                # flake8: noqa
                numpy.array([[36., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 40., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 57., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 42., 0., 0., 0., 0.],
                             [ 2., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 1., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 3., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0.],
                             ])
            )
        )
        self.assertEqual(alignment.substitutions.alphabet, "ACGTacgt")
        matches = sum(
            alignment.substitutions[c, c] for c in alignment.substitutions.alphabet
        )
        repMatches = sum(
            alignment.substitutions[c, c.swapcase()]
            for c in alignment.substitutions.alphabet
        )
        self.assertEqual(matches, alignment.matches)
        self.assertEqual(repMatches, alignment.repMatches)
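        # Same bookkeeping for this alignment: the diagonal gives
        # 36 + 40 + 57 + 42 = 175 matches, and the case-swapped entries
        # (soft-masked lower-case bases aligned to their upper-case partners)
        # give 2 + 1 + 3 = 6, matching the repMatches field.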
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 207)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 3707))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chrY:22316600-22320400")
        self.assertEqual(alignment.query.id, "NR_104151.1_extended")
        self.assertEqual(len(alignment.target.seq), 3800)
        self.assertEqual(len(alignment.query.seq), 215)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[ 77, 147, 2182, 2297, 3762, 3784],
                             [ 3, 73, 73, 188, 188, 210]]),
                # fmt: on
            )
        )
        alignment.target.seq = self.dna[alignment.target.id]
        alignment.query.seq = self.rna[alignment.query.id]
        self.assertTrue(
            numpy.array_equal(
                alignment.substitutions,
                # fmt: off
                # flake8: noqa
                numpy.array([[64., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 44., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 52., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 47., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             ]),
            )
        )
        self.assertEqual(alignment.substitutions.alphabet, "ACGTXacgt")
        matches = sum(
            alignment.substitutions[c, c] for c in alignment.substitutions.alphabet
        )
        repMatches = sum(
            alignment.substitutions[c, c.swapcase()]
            for c in alignment.substitutions.alphabet
            if c != "X"
        )
        self.assertEqual(matches, alignment.matches)
        self.assertEqual(repMatches, alignment.repMatches)
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 175)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 6)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 1711))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr3:42530800-42532700")
        self.assertEqual(alignment.query.id, "NR_046654.1_extended")
        self.assertEqual(len(alignment.target.seq), 1900)
        self.assertEqual(len(alignment.query.seq), 194)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[ 95, 158, 1220, 1295, 1763, 1806],
                             [ 184, 121, 121, 46, 46, 3]]),
                # fmt: on
            )
        )
        alignment.target.seq = self.dna[alignment.target.id]
        alignment.query.seq = self.rna[alignment.query.id]
        self.assertTrue(
            numpy.array_equal(
                alignment.substitutions,
                # fmt: off
                # flake8: noqa
                numpy.array([[36., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 40., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 57., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 42., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 2., 0., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 1., 0., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 3., 0., 0., 0., 0., 0., 0.],
                             [ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                             ]),
            )
        )
        self.assertEqual(alignment.substitutions.alphabet, "ACGTXacgt")
        matches = sum(
            alignment.substitutions[c, c] for c in alignment.substitutions.alphabet
        )
        repMatches = sum(
            alignment.substitutions[c, c.swapcase()]
            for c in alignment.substitutions.alphabet
            if c != "X"
        )
        self.assertEqual(matches, alignment.matches)
        self.assertEqual(repMatches, alignment.repMatches)
        self.assertRaises(StopIteration, next, alignments)
    def test_writing(self):
        """Test writing the alignments in dna_rna.psl."""
        path = "Blat/dna_rna.psl"
        with open(path) as stream:
            original_data = stream.read()
        alignments = psl.AlignmentIterator(path)
        stream = StringIO()
        writer = psl.AlignmentWriter(stream)
        n = writer.write_file(alignments, mincount=4, maxcount=4)
        self.assertEqual(n, 4)
        stream.seek(0)
        written_data = stream.read()
        stream.close()
        self.assertEqual(original_data, written_data)
        # Try this again. This time, we first strip the matches, misMatches,
        # repMatches, and nCount attributes from each alignment, and insert the
        # appropriate sequence data in each alignment. The writer will then
        # recalculate the matches, misMatches, repMatches, and nCount values
        # from the sequence data and the alignment, and store those values in
        # the PSL file.
        alignments = []
        for alignment in psl.AlignmentIterator(path):
            del alignment.matches
            del alignment.misMatches
            del alignment.repMatches
            del alignment.nCount
            alignment.sequences[0].seq = self.dna[alignment.sequences[0].id]
            alignment.sequences[1].seq = self.rna[alignment.sequences[1].id]
            alignments.append(alignment)
        stream = StringIO()
        writer = psl.AlignmentWriter(stream, mask="lower")
        n = writer.write_file(alignments, mincount=4, maxcount=4)
        self.assertEqual(n, 4)
        stream.seek(0)
        written_data = stream.read()
        stream.close()
        self.assertEqual(original_data, written_data)

class TestAlign_dna(unittest.TestCase):
    def test_reading_psl_34_001(self):
        """Test parsing psl_34_001.psl."""
        path = "Blat/psl_34_001.psl"
        alignments = psl.AlignmentIterator(path)
        self.assertEqual(alignments.metadata["version"], "3")
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 16)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 16))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr4")
        self.assertEqual(alignment.query.id, "hg18_dna")
        self.assertEqual(len(alignment.target.seq), 191154276)
        self.assertEqual(len(alignment.query.seq), 33)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[61646095, 61646111],
                             [ 11, 27]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 33)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 33))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg18_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 33)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[10271783, 10271816],
                             [ 0, 33]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 17)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 17))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr2")
        self.assertEqual(alignment.query.id, "hg18_dna")
        self.assertEqual(len(alignment.target.seq), 243199373)
        self.assertEqual(len(alignment.query.seq), 33)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[53575980, 53575997],
                             [ 25, 8]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 38)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 41))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr9")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 141213431)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[85737865, 85737906],
                             [ 9, 50]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 41)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 41))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr8")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 146364022)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[95160479, 95160520],
                             [ 8, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 33)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 36))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr22")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 51304566)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[42144400, 42144436],
                             [ 11, 47]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 43)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 48))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr2")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 243199373)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[183925984, 183925990, 183925990, 183926028],
                             [ 1, 7, 11, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 34)
        self.assertEqual(alignment.misMatches, 2)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 170))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr19")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 59128983)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[35483340, 35483365, 35483499, 35483510],
                             [ 10, 35, 35, 46]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 39)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 39))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr18")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 78077248)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[23891310, 23891349],
                             [ 10, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 27)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 28))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr18")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 78077248)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[43252217, 43252245],
                             [ 21, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 44)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 54))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr13")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 115169878)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[52759147, 52759154, 52759160, 52759160, 52759198],
                             [ 1, 8, 8, 11, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 50)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 50))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[1207056, 1207106],
                             [ 0, 50]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 31)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 34))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[61700837, 61700871],
                             [ 1, 35]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 28)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 44))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr4")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 191154276)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[37558157, 37558167, 37558173, 37558173, 37558191],
                             [ 49, 39, 39, 29, 11]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 35)
        self.assertEqual(alignment.misMatches, 2)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 37))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr22")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 51304566)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[48997405, 48997442],
                             [ 49, 12]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 35)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 36))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr2")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 243199373)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[120641740, 120641776],
                             [ 49, 13]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 39)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 39))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr19")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 59128983)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[54017130, 54017169],
                             [ 49, 10]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 36)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 39))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr19")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 59128983)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[553742, 553781],
                             [ 49, 10]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 33)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 36))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr10")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 135534747)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[99388555, 99388591],
                             [ 49, 13]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 24)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 25))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr10")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 135534747)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[112178171, 112178196],
                             [ 35, 10]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 35)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 36))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[39368490, 39368526],
                             [ 49, 13]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 33)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 34))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[220325687, 220325721],
                             [ 47, 13]]),
                # fmt: on
            )
        )
        self.assertRaises(StopIteration, next, alignments)

    def test_writing_psl_34_001(self):
        """Test writing the alignments in psl_34_001.psl."""
        path = "Blat/psl_34_001.psl"
        with open(path) as stream:
            original_data = stream.read()
        alignments = psl.AlignmentIterator(path)
        stream = StringIO()
        writer = psl.AlignmentWriter(stream)
        n = writer.write_file(alignments, mincount=22, maxcount=22)
        self.assertEqual(n, 22)
        stream.seek(0)
        written_data = stream.read()
        stream.close()
        self.assertEqual(original_data, written_data)

    def test_reading_psl_34_002(self):
        """Test parsing psl_34_002.psl."""
        path = "Blat/psl_34_002.psl"
        alignments = psl.AlignmentIterator(path)
        self.assertEqual(alignments.metadata["version"], "3")
        self.assertRaises(StopIteration, next, alignments)

    def test_writing_psl_34_002(self):
        """Test writing the alignments in psl_34_002.psl."""
        path = "Blat/psl_34_002.psl"
        with open(path) as stream:
            original_data = stream.read()
        alignments = psl.AlignmentIterator(path)
        stream = StringIO()
        writer = psl.AlignmentWriter(stream)
        n = writer.write_file(alignments, mincount=0, maxcount=0)
        self.assertEqual(n, 0)
        stream.seek(0)
        written_data = stream.read()
        stream.close()
        self.assertEqual(original_data, written_data)

    def test_reading_psl_34_003(self):
        """Test parsing psl_34_003.psl."""
        path = "Blat/psl_34_003.psl"
        alignments = psl.AlignmentIterator(path)
        self.assertEqual(alignments.metadata["version"], "3")
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 16)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 16))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr4")
        self.assertEqual(alignment.query.id, "hg18_dna")
        self.assertEqual(len(alignment.target.seq), 191154276)
        self.assertEqual(len(alignment.query.seq), 33)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[61646095, 61646111],
                             [ 11, 27]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 33)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 33))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg18_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 33)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[10271783, 10271816],
                             [ 0, 33]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 17)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 17))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr2")
        self.assertEqual(alignment.query.id, "hg18_dna")
        self.assertEqual(len(alignment.target.seq), 243199373)
        self.assertEqual(len(alignment.query.seq), 33)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[53575980, 53575997],
                             [ 25, 8]]),
                # fmt: on
            )
        )
        self.assertRaises(StopIteration, next, alignments)

    def test_writing_psl_34_003(self):
        """Test writing the alignments in psl_34_003.psl."""
        path = "Blat/psl_34_003.psl"
        with open(path) as stream:
            original_data = stream.read()
        alignments = psl.AlignmentIterator(path)
        stream = StringIO()
        writer = psl.AlignmentWriter(stream)
        n = writer.write_file(alignments, mincount=3, maxcount=3)
        self.assertEqual(n, 3)
        stream.seek(0)
        written_data = stream.read()
        stream.close()
        self.assertEqual(original_data, written_data)

    def test_reading_psl_34_004(self):
        """Test parsing psl_34_004.psl."""
        path = "Blat/psl_34_004.psl"
        alignments = psl.AlignmentIterator(path)
        self.assertEqual(alignments.metadata["version"], "3")
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 38)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 41))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr9")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 141213431)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[85737865, 85737906],
                             [ 9, 50]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 41)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 41))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr8")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 146364022)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[95160479, 95160520],
                             [ 8, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 33)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 36))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr22")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 51304566)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[42144400, 42144436],
                             [ 11, 47]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 43)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 48))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr2")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 243199373)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[183925984, 183925990, 183925990, 183926028],
                             [ 1, 7, 11, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 34)
        self.assertEqual(alignment.misMatches, 2)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 170))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr19")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 59128983)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[35483340, 35483365, 35483499, 35483510],
                             [ 10, 35, 35, 46]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 39)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 39))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr18")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 78077248)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[23891310, 23891349],
                             [ 10, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 27)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 28))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr18")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 78077248)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[43252217, 43252245],
                             [ 21, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 44)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 54))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr13")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 115169878)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[52759147, 52759154, 52759160, 52759160, 52759198],
                             [ 1, 8, 8, 11, 49]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 50)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 50))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[1207056, 1207106],
                             [ 0, 50]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 31)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 34))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr1")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 249250621)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[61700837, 61700871],
                             [ 1, 35]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 28)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 44))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr4")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 191154276)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[37558157, 37558167, 37558173, 37558173, 37558191],
                             [ 49, 39, 39, 29, 11]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 35)
        self.assertEqual(alignment.misMatches, 2)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 37))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr22")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 51304566)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[48997405, 48997442],
                             [ 49, 12]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 35)
        self.assertEqual(alignment.misMatches, 1)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 36))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr2")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 243199373)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[120641740, 120641776],
                             [ 49, 13]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 39)
        self.assertEqual(alignment.misMatches, 0)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 39))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
        self.assertEqual(alignment.target.id, "chr19")
        self.assertEqual(alignment.query.id, "hg19_dna")
        self.assertEqual(len(alignment.target.seq), 59128983)
        self.assertEqual(len(alignment.query.seq), 50)
        self.assertTrue(
            numpy.array_equal(
                alignment.coordinates,
                # fmt: off
                # flake8: noqa
                numpy.array([[54017130, 54017169],
                             [ 49, 10]]),
                # fmt: on
            )
        )
        alignment = next(alignments)
        self.assertEqual(alignment.matches, 36)
        self.assertEqual(alignment.misMatches, 3)
        self.assertEqual(alignment.repMatches, 0)
        self.assertEqual(alignment.nCount, 0)
        self.assertEqual(alignment.shape, (2, 39))
        self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
        self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
        self.assertEqual(len(alignment), 2)
        self.assertIs(alignment.sequences[0], alignment.target)
        self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr19")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 59128983)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[553742, 553781],
[ 49, 10]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 33)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 36))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr10")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 135534747)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[99388555, 99388591],
[ 49, 13]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 24)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 25))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr10")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 135534747)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[112178171, 112178196],
[ 35, 10]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 35)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 36))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[39368490, 39368526],
[ 49, 13]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 33)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 34))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[220325687, 220325721],
[ 47, 13]]),
# fmt: on
)
)
self.assertRaises(StopIteration, next, alignments)
def test_writing_psl_34_004(self):
"""Test writing the alignments in psl_34_004.psl."""
path = "Blat/psl_34_004.psl"
with open(path) as stream:
original_data = stream.read()
alignments = psl.AlignmentIterator(path)
stream = StringIO()
writer = psl.AlignmentWriter(stream)
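# write_file consumes the iterator and returns the number of alignments
# written; mincount/maxcount make it fail fast on an unexpected count.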
n = writer.write_file(alignments, mincount=19, maxcount=19)
self.assertEqual(n, 19)
stream.seek(0)
written_data = stream.read()
stream.close()
self.assertEqual(original_data, written_data)
def test_reading_psl_34_005(self):
"""Test parsing psl_34_005.psl."""
path = "Blat/psl_34_005.psl"
alignments = psl.AlignmentIterator(path)
alignment = next(alignments)
self.assertEqual(alignment.matches, 16)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 16))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr4")
self.assertEqual(alignment.query.id, "hg18_dna")
self.assertEqual(len(alignment.target.seq), 191154276)
self.assertEqual(len(alignment.query.seq), 33)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[61646095, 61646111],
[ 11, 27]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 33)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 33))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg18_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 33)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[10271783, 10271816],
[ 0, 33]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 17)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 17))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr2")
self.assertEqual(alignment.query.id, "hg18_dna")
self.assertEqual(len(alignment.target.seq), 243199373)
self.assertEqual(len(alignment.query.seq), 33)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[53575980, 53575997],
[ 25, 8]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 38)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 41))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr9")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 141213431)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[85737865, 85737906],
[ 9, 50]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 41)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 41))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr8")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 146364022)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[95160479, 95160520],
[ 8, 49]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 33)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 36))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr22")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 51304566)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[42144400, 42144436],
[ 11, 47]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 43)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 48))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr2")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 243199373)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[183925984, 183925990, 183925990, 183926028],
[ 1, 7, 11, 49]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 34)
self.assertEqual(alignment.misMatches, 2)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 170))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr19")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 59128983)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[35483340, 35483365, 35483499, 35483510],
[ 10, 35, 35, 46]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 39)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 39))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr18")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 78077248)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[23891310, 23891349],
[ 10, 49]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 27)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 28))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr18")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 78077248)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[43252217, 43252245],
[ 21, 49]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 44)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 54))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 115169878)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[52759147, 52759154, 52759160, 52759160, 52759198],
[ 1, 8, 8, 11, 49]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 50)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 50))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[1207056, 1207106],
[ 0, 50]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 31)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 34))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[61700837, 61700871],
[ 1, 35]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 28)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 44))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr4")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 191154276)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[37558157, 37558167, 37558173, 37558173, 37558191],
[ 49, 39, 39, 29, 11]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 35)
self.assertEqual(alignment.misMatches, 2)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 37))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr22")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 51304566)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[48997405, 48997442],
[ 49, 12]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 35)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 36))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr2")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 243199373)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[120641740, 120641776],
[ 49, 13]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 39)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 39))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr19")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 59128983)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[54017130, 54017169],
[ 49, 10]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 36)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 39))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr19")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 59128983)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[553742, 553781],
[ 49, 10]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 33)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 36))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr10")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 135534747)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[99388555, 99388591],
[ 49, 13]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 24)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 25))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr10")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 135534747)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[112178171, 112178196],
[ 35, 10]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 35)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 36))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[39368490, 39368526],
[ 49, 13]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 33)
self.assertEqual(alignment.misMatches, 1)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertEqual(alignment.shape, (2, 34))
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertGreater(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr1")
self.assertEqual(alignment.query.id, "hg19_dna")
self.assertEqual(len(alignment.target.seq), 249250621)
self.assertEqual(len(alignment.query.seq), 50)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[220325687, 220325721],
[ 47, 13]]),
# fmt: on
)
)
self.assertRaises(StopIteration, next, alignments)
def test_writing_psl_34_005(self):
"""Test writing the alignments in psl_34_005.psl."""
path = "Blat/psl_34_005.psl"
with open(path) as stream:
original_data = stream.read()
alignments = psl.AlignmentIterator(path)
stream = StringIO()
writer = psl.AlignmentWriter(stream, header=False)
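# psl_34_005.psl was saved without the psLayout header, so suppress it
# here to round-trip the file byte for byte.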
n = writer.write_file(alignments, mincount=22, maxcount=22)
self.assertEqual(n, 22)
stream.seek(0)
written_data = stream.read()
stream.close()
self.assertEqual(original_data, written_data)
class TestAlign_dnax_prot(unittest.TestCase):
def test_reading_psl_35_001(self):
"""Test parsing psl_35_001.psl."""
path = "Blat/psl_35_001.psl"
alignments = psl.AlignmentIterator(path)
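# The "psLayout version 3" header line is exposed through the iterator's metadata.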
self.assertEqual(alignments.metadata["version"], "3")
alignment = next(alignments)
self.assertEqual(alignment.matches, 52)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 114364328)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[75566694, 75566850],
[ 61, 113]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 44)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 114364328)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[75560749, 75560881],
[ 17, 61]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 44)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 114364328)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[75549820, 75549865, 75567225, 75567225, 75567312],
[ 0, 15, 15, 113, 142]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 47)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 114364328)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[75604767, 75604827, 75605728, 75605809],
[ 183, 203, 203, 230]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 25)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 114364328)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[75594914, 75594989],
[ 158, 183]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 16)
self.assertEqual(alignment.misMatches, 0)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr13")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 114364328)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[75569459, 75569507],
[ 142, 158]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 26)
self.assertEqual(alignment.misMatches, 8)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr4")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 190214555)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[41260685, 41260787],
[ 76, 110]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 37)
self.assertEqual(alignment.misMatches, 26)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "chr4")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 190214555)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[41257605, 41257731, 41263227, 41263227, 41263290],
[ 17, 59, 59, 162, 183]]),
# fmt: on
)
)
self.assertRaises(StopIteration, next, alignments)
def test_writing_psl_35_001(self):
"""Test writing the alignments in psl_35_001.psl."""
path = "Blat/psl_35_001.psl"
with open(path) as stream:
original_data = stream.read()
alignments = psl.AlignmentIterator(path)
stream = StringIO()
writer = psl.AlignmentWriter(stream)
n = writer.write_file(alignments, mincount=8, maxcount=8)
self.assertEqual(n, 8)
stream.seek(0)
written_data = stream.read()
stream.close()
self.assertEqual(original_data, written_data)
# Convert each alignment to a protein alignment and insert the
# appropriate sequence data. Write these alignments to a PSL file;
# the writer will recalculate the values for matches, misMatches,
# repMatches, and nCount from the sequence data and the alignments.
#
# The alignments were generated using
# blat -t=dnax -q=prot hg38.2bit CAG33136.1.fasta psl_35_001.psl
#
# To save disk space, we extracted the necessary sequence data using
#
# twoBitToFa hg38.2bit:chr13:75549820-75605809 stdout
# twoBitToFa hg38.2bit:chr4:41257605-41263290 stdout
#
# and concatenated the results into the file hg38.fa. We use this
# file below to create partially defined Seq objects.
#
# Load the protein sequence:
protein = SeqIO.read("Blat/CAG33136.1.fasta", "fasta")
protein_alignments = []
alignments = psl.AlignmentIterator(path)
for i, alignment in enumerate(alignments):
records = SeqIO.parse("Blat/hg38.fa", "fasta")
for record in records:
name, start_end = record.id.split(":")
if name == alignment.sequences[0].id:
break
else:
raise Exception("Failed to find DNA sequence")
start, end = start_end.split("-")
start = int(start)
end = int(end)
length = len(alignment.sequences[0])
sequence = str(record.seq)
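# Build a partially defined Seq: only the slice starting at 'start' is
# known; 'length' declares the full chromosome length.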
dna = Seq({start: sequence}, length=length)
alignment.sequences[0].seq = dna
self.assertEqual(alignment.sequences[1].id, protein.id)
alignment.sequences[1].seq = protein.seq
# The alignment is on the forward strand of the DNA sequence:
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
# The protein alignment is also in the forward orientation:
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
# Now extract the aligned sequences:
aligned_dna = ""
aligned_protein = ""
for start, end in alignment.aligned[0]:
aligned_dna += alignment.sequences[0].seq[start:end]
for start, end in alignment.aligned[1]:
aligned_protein += alignment.sequences[1].seq[start:end]
# Translate the aligned DNA sequence:
aligned_dna = Seq(aligned_dna)
aligned_dna_translated = Seq(aligned_dna.translate())
aligned_protein = Seq(aligned_protein)
# Create a new alignment including the aligned sequences only:
records = [
SeqRecord(aligned_dna_translated, id=alignment.sequences[0].id),
SeqRecord(aligned_protein, id=alignment.sequences[1].id),
]
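# A single block spanning both full sequences: the extracted pair is gap-free by construction.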
coordinates = numpy.array(
[[0, len(aligned_dna_translated)], [0, len(aligned_protein)]]
)
protein_alignment = Alignment(records, coordinates)
protein_alignments.append(protein_alignment)
if i == 0:
self.assertEqual(
str(protein_alignment),
"""\
YEVFRTEEEEKIKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHF
||||||||||||||||||||||||||||||||||||||||||||||||||||
YEVFRTEEEEKIKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHF
""",
)
elif i == 1:
self.assertEqual(
str(protein_alignment),
"""\
QFLKQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEK
||||||||||||||||||||||||||||||||||||||||||||
QFLKQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEK
""",
)
elif i == 2:
self.assertEqual(
str(protein_alignment),
"""\
MEGQRWLPLEANPEVESGSTLKKFLEESVSMSPEERARYLENYD
||||||||||||||||||||||||||||||||||||||||||||
MEGQRWLPLEANPEVESGSTLKKFLEESVSMSPEERARYLENYD
""",
)
elif i == 3:
self.assertEqual(
str(protein_alignment),
"""\
DGRKPFPINHGETSDETLLEDAIEVCKKFMERDPDELRFNAIALSAA
|||||||||||||||||||||||||||||||||||||||||||||||
DGRKPFPINHGETSDETLLEDAIEVCKKFMERDPDELRFNAIALSAA
""",
)
elif i == 4:
self.assertEqual(
str(protein_alignment),
"""\
APSIDEKVDLHFIALVHVDGHLYEL
|||||||||||||||||||||||||
APSIDEKVDLHFIALVHVDGHLYEL
""",
)
elif i == 5:
self.assertEqual(
str(protein_alignment),
"""\
AIRVTHETSAHEGQTE
||||||||||||||||
AIRVTHETSAHEGQTE
""",
)
elif i == 6:
self.assertEqual(
str(protein_alignment),
"""\
GQEVSPKVYFMKQTIGNSCGTIGLIHAVANNQDK
||.|...||||||||.|.|||||||||.|||.||
GQDVTSSVYFMKQTISNACGTIGLIHAIANNKDK
""",
)
elif i == 7:
self.assertEqual(
str(protein_alignment),
"""\
QVLSRLGVAGQWRFVDVLGLEEESLGSVPAPACALLLLFPLTDDKVNFHFILFNNVDGHLYEL
|.|..||....|.||||.|...|.|..||.|.||.|||||.||.||..|||....||||||||
QFLKQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITDEKVDLHFIALVHVDGHLYEL
""",
)
# Write the protein alignments to a PSL file:
stream = StringIO()
writer = psl.AlignmentWriter(stream, wildcard="X")
n = writer.write_file(protein_alignments, mincount=8, maxcount=8)
self.assertEqual(n, 8)
# Read the alignments back in:
alignments = psl.AlignmentIterator(path)
stream.seek(0)
protein_alignments = psl.AlignmentIterator(stream)
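# Reparse the freshly written PSL records so their recalculated
# statistics can be compared against the original file below.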
for alignment, protein_alignment in zip(alignments, protein_alignments):
# Confirm that the recalculated values for matches, misMatches,
# repMatches, and nCount are correct:
self.assertEqual(alignment.matches, protein_alignment.matches)
self.assertEqual(alignment.misMatches, protein_alignment.misMatches)
self.assertEqual(alignment.repMatches, protein_alignment.repMatches)
self.assertEqual(alignment.nCount, protein_alignment.nCount)
def test_reading_psl_35_002(self):
"""Test parsing psl_35_002.psl."""
path = "Blat/psl_35_002.psl"
alignments = psl.AlignmentIterator(path)
self.assertEqual(alignments.metadata["version"], "3")
alignment = next(alignments)
self.assertEqual(alignment.matches, 210)
self.assertEqual(alignment.misMatches, 3)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "KI537979")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 14052872)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[9712654, 9712786, 9715941, 9716097, 9716445, 9716532, 9718374,
9718422, 9739264, 9739339, 9743706, 9743766, 9744511, 9744592],
[ 17, 61, 61, 113, 113, 142, 142,
158, 158, 183, 183, 203, 203, 230]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 207)
self.assertEqual(alignment.misMatches, 22)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertLess(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "KI538594")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 7819582)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[2103463, 2103523, 2103522, 2103522, 2104149],
[ 0, 20, 20, 21, 230]]),
# fmt: on
)
)
alignment = next(alignments)
self.assertEqual(alignment.matches, 204)
self.assertEqual(alignment.misMatches, 6)
self.assertEqual(alignment.repMatches, 0)
self.assertEqual(alignment.nCount, 0)
self.assertGreater(alignment.coordinates[0, 0], alignment.coordinates[0, -1])
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
self.assertEqual(len(alignment), 2)
self.assertIs(alignment.sequences[0], alignment.target)
self.assertIs(alignment.sequences[1], alignment.query)
self.assertEqual(alignment.target.id, "KI537194")
self.assertEqual(alignment.query.id, "CAG33136.1")
self.assertEqual(len(alignment.target.seq), 37111980)
self.assertEqual(len(alignment.query.seq), 230)
self.assertTrue(
numpy.array_equal(
alignment.coordinates,
# fmt: off
# flake8: noqa
numpy.array([[20873021, 20872472, 20872471, 20872471, 20872390],
[ 0, 183, 183, 203, 230]]),
# fmt: on
)
)
self.assertRaises(StopIteration, next, alignments)
def test_writing_psl_35_002(self):
"""Test writing the alignments in psl_35_002.psl."""
path = "Blat/psl_35_002.psl"
with open(path) as stream:
original_data = stream.read()
alignments = psl.AlignmentIterator(path)
stream = StringIO()
writer = psl.AlignmentWriter(stream)
n = writer.write_file(alignments, mincount=3, maxcount=3)
self.assertEqual(n, 3)
stream.seek(0)
written_data = stream.read()
stream.close()
self.assertEqual(original_data, written_data)
# Convert each alignment to a protein alignment and insert the
# appropriate sequence data. Write these alignments to a PSL file;
# the writer will recalculate the values for matches, misMatches,
# repMatches, and nCount from the sequence data and the alignments.
#
# The alignments were generated using
# blat -t=dnax -q=prot balAcu1.2bit CAG33136.1.fasta psl_35_002.psl
#
# To save disk space, we extracted the necessary sequence data using
#
# twoBitToFa balAcu1.2bit:KI537979:9712654-9744592 stdout
# twoBitToFa balAcu1.2bit:KI538594:2103463-2104149 stdout
# twoBitToFa balAcu1.2bit:KI537194:20872390-20873021 stdout
#
# and concatenated the results into the file balAcu1.fa. We use this
# file below to create partially defined Seq objects.
#
# Load the protein sequence:
protein = SeqIO.read("Blat/CAG33136.1.fasta", "fasta")
protein_alignments = []
alignments = psl.AlignmentIterator(path)
for i, alignment in enumerate(alignments):
records = SeqIO.parse("Blat/balAcu1.fa", "fasta")
for record in records:
name, start_end = record.id.split(":")
if name == alignment.sequences[0].id:
break
else:
raise Exception("Failed to find DNA sequence")
start, end = start_end.split("-")
start = int(start)
end = int(end)
length = len(alignment.sequences[0])
sequence = str(record.seq)
dna = Seq({start: sequence}, length=length)
alignment.sequences[0].seq = dna
self.assertEqual(alignment.sequences[1].id, protein.id)
alignment.sequences[1].seq = protein.seq
if i == 0 or i == 1:
# The alignment is on the forward strand of the DNA sequence:
self.assertLess(
alignment.coordinates[0, 0], alignment.coordinates[0, -1]
)
elif i == 2:
# The alignment is on the reverse strand of the DNA sequence:
self.assertGreater(
alignment.coordinates[0, 0], alignment.coordinates[0, -1]
)
# so we take the reverse complement:
alignment.coordinates[0, :] = len(dna) - alignment.coordinates[0, :]
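# After this flip, the coordinates index into the reverse-complemented
# sequence assigned on the next line.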
alignment.sequences[0].seq = dna.reverse_complement()
# The protein alignment is always in the forward orientation:
self.assertLess(alignment.coordinates[1, 0], alignment.coordinates[1, -1])
# Now extract the aligned sequences:
aligned_dna = ""
aligned_protein = ""
for start, end in alignment.aligned[0]:
aligned_dna += alignment.sequences[0].seq[start:end]
for start, end in alignment.aligned[1]:
aligned_protein += alignment.sequences[1].seq[start:end]
# Translate the aligned DNA sequence:
aligned_dna = Seq(aligned_dna)
aligned_dna_translated = Seq(aligned_dna.translate())
aligned_protein = Seq(aligned_protein)
# Create a new alignment including the aligned sequences only:
records = [
SeqRecord(aligned_dna_translated, id=alignment.sequences[0].id),
SeqRecord(aligned_protein, id=alignment.sequences[1].id),
]
coordinates = numpy.array(
[[0, len(aligned_dna_translated)], [0, len(aligned_protein)]]
)
protein_alignment = Alignment(records, coordinates)
protein_alignments.append(protein_alignment)
if i == 0:
self.assertEqual(
str(protein_alignment),
"""\
QFLKQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEKYEIFRTEEEEKIKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHFESGSTLKKFLEESASMSPEERARYLENYDAIRVTHETSAHEGQTEAPNIDEKVDLHFIALVHVDGHLYELDGRKPFPINHGETSDETLLEDAIEVCKKFMERDPDELRFNAIALSAA
||||||||||||||||||||||||||||||||||||||||||||||.||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||.|||||||||||||||||||||||||||||||||.|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
QFLKQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEKYEVFRTEEEEKIKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHFESGSTLKKFLEESVSMSPEERARYLENYDAIRVTHETSAHEGQTEAPSIDEKVDLHFIALVHVDGHLYELDGRKPFPINHGETSDETLLEDAIEVCKKFMERDPDELRFNAIALSAA
""",
)
elif i == 1:
self.assertEqual(
str(protein_alignment),
"""\
MEGQCWLPLEANPEVTNQLLQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEKYEVFRTEEEEKIKSQGQNITSSGYFMRQTISSACGTIGLIHAIANNKDKMHFESGSTLKKFLEESASLSPEERAIYLENYDSIRVTHKTSDHEGQTEAQNIDEKVDLHFIALVHVDGHLYELDGWKPFPINHGETSDATLLRDAIEVFKKFRERDPDERRFNVIALSAA
||||.|||||||||||||.||||||||||||||||||||||||||||||||||||||||||||||||||||||||||..|||.|||.||||.|||||||||||||||||||||||||||||||||.|.||||||.||||||.|||||.||.|||||||..||||||||||||||||||||||||.||||||||||||.|||.|||||.|||.||||||.|||.||||||
MEGQRWLPLEANPEVTNQFLQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEKYEVFRTEEEEKIKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHFESGSTLKKFLEESVSMSPEERARYLENYDAIRVTHETSAHEGQTEAPSIDEKVDLHFIALVHVDGHLYELDGRKPFPINHGETSDETLLEDAIEVCKKFMERDPDELRFNAIALSAA
""",
)
elif i == 2:
self.assertEqual(
str(protein_alignment),
"""\
MESQRWLPLEANPEVTNQFLKQLGLHPNWQCVDVYGMDPELLSMVPRPVCAVLLLFPITEKYEIFRTEEEEKTKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHFESGSTLKKFLEESASMSPEERARYLENYDAIRVTHETSAHEGQTEAPNIDEKVDLHFIALVHVDGHLYELDAIEVCKKFMERDPDELRFNAIALSAA
||.|||||||||||||||||||||||||||.||||||||||||||||||||||||||||||||.||||||||.|||||||||||||||||||||||||||||||||||||||||||||||||||||.|||||||||||||||||||||||||||||||||.|||||||||||||||||||||||||||||||||||||||||||||||||
MEGQRWLPLEANPEVTNQFLKQLGLHPNWQFVDVYGMDPELLSMVPRPVCAVLLLFPITEKYEVFRTEEEEKIKSQGQDVTSSVYFMKQTISNACGTIGLIHAIANNKDKMHFESGSTLKKFLEESVSMSPEERARYLENYDAIRVTHETSAHEGQTEAPSIDEKVDLHFIALVHVDGHLYELDAIEVCKKFMERDPDELRFNAIALSAA
""",
)
# Write the protein alignments to a PSL file:
stream = StringIO()
writer = psl.AlignmentWriter(stream, wildcard="X")
n = writer.write_file(protein_alignments, mincount=3, maxcount=3)
self.assertEqual(n, 3)
# Read the alignments back in:
alignments = psl.AlignmentIterator(path)
stream.seek(0)
protein_alignments = psl.AlignmentIterator(stream)
for alignment, protein_alignment in zip(alignments, protein_alignments):
# Confirm that the recalculated values for matches, misMatches,
# repMatches, and nCount are correct:
self.assertEqual(alignment.matches, protein_alignment.matches)
self.assertEqual(alignment.misMatches, protein_alignment.misMatches)
self.assertEqual(alignment.repMatches, protein_alignment.repMatches)
self.assertEqual(alignment.nCount, protein_alignment.nCount)
if __name__ == "__main__":
runner = unittest.TextTestRunner(verbosity=2)
unittest.main(testRunner=runner)
| 45.934116 | 229 | 0.590494 | 12,090 | 121,312 | 5.890819 | 0.039454 | 0.180918 | 0.192081 | 0.092123 | 0.939273 | 0.933235 | 0.929865 | 0.927647 | 0.919629 | 0.915937 | 0 | 0.068272 | 0.285215 | 121,312 | 2,640 | 230 | 45.951515 | 0.753068 | 0.050745 | 0 | 0.842365 | 0 | 0 | 0.01589 | 0.001164 | 0 | 0 | 0 | 0 | 0.573668 | 1 | 0.007613 | false | 0 | 0.00403 | 0 | 0.012987 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
e3fa699bff639098ba30d9e51677a05ba2f43e81 | 129 | py | Python | SecuriTree/tests.py | davymaish/django-SecuriTree | 01cf925e591877ae2669ca8430845abe278832bf | [
"BSD-2-Clause"
] | null | null | null | SecuriTree/tests.py | davymaish/django-SecuriTree | 01cf925e591877ae2669ca8430845abe278832bf | [
"BSD-2-Clause"
] | null | null | null | SecuriTree/tests.py | davymaish/django-SecuriTree | 01cf925e591877ae2669ca8430845abe278832bf | [
"BSD-2-Clause"
] | null | null | null | from django.test import TestCase
# Create your tests here.
# python3 manage.py test SecuriTree
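# A minimal sketch of a first test, assuming nothing about this app's models
# (the class and assertion below are placeholders, not part of the original file):
class SmokeTest(TestCase):
    def test_suite_runs(self):
        # Passes whenever the test runner can import and execute this module.
        self.assertTrue(True)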
| 21.5 | 57 | 0.806202 | 19 | 129 | 5.473684 | 0.684211 | 0.192308 | 0.269231 | 0.384615 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009091 | 0.147287 | 129 | 5 | 58 | 25.8 | 0.936364 | 0.689922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
540abb65f0173de703b6a3ffefc3ab993e650af6 | 111 | py | Python | tests/test_firestore_imports.py | Paul-Kim/firestore | 4933f1a9f307539a8cef4ea80869c8821a92bc8d | [
"MIT"
] | 7 | 2019-04-11T19:39:59.000Z | 2021-02-02T10:28:27.000Z | tests/test_firestore_imports.py | Paul-Kim/firestore | 4933f1a9f307539a8cef4ea80869c8821a92bc8d | [
"MIT"
] | null | null | null | tests/test_firestore_imports.py | Paul-Kim/firestore | 4933f1a9f307539a8cef4ea80869c8821a92bc8d | [
"MIT"
] | 4 | 2019-04-22T04:34:47.000Z | 2019-11-25T07:19:23.000Z | from firestore import Array, Boolean, Byte, Collection, Reference as Ref
def test_imports_worked():
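# Importing the names above is the real assertion: a missing export
# raises ImportError before this test body ever runs.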
pass
| 18.5 | 72 | 0.765766 | 15 | 111 | 5.533333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171171 | 111 | 5 | 73 | 22.2 | 0.902174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.666667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
5428657a9fae90455a57b2cb6931ef560121040f | 20,712 | py | Python | jython/ReLUnitTests.py | nginth/Carnot-OWL | e6cae8c7039d589987f2437d463a2cdc21b37170 | [
"Apache-2.0"
] | 1 | 2016-04-28T04:00:07.000Z | 2016-04-28T04:00:07.000Z | jython/ReLUnitTests.py | nginth/Carnot-OWL | e6cae8c7039d589987f2437d463a2cdc21b37170 | [
"Apache-2.0"
] | 1 | 2016-04-28T00:35:37.000Z | 2016-04-28T00:38:18.000Z | jython/ReLUnitTests.py | nginth/Carnot-OWL | e6cae8c7039d589987f2437d463a2cdc21b37170 | [
"Apache-2.0"
] | null | null | null | import unittest
connOracleEE_native = connectTo 'jdbc:oracle:thin:@sayonara.microlab.cs.utexas.edu:1521:orcl' 'C##cs329e_UTEid' 'orcl_UTEid' 'native_mode' nodebug
SQL on connOracleEE_native "{call SEM_APIS.DROP_RDF_MODEL('A0_C##CS329E_UTEID')}"
SQL on connOracleEE_native "drop table A0_C##CS329E_UTEID_DATA;"
SQL on connOracleEE_native "DROP SEQUENCE A0_C##CS329E_UTEID_SQNC;"
SQL on connOracleEE_native "DROP SEQUENCE A0_C##CS329E_UTEID_GUID_SQNC;"
connOracleEE = connectTo 'jdbc:oracle:thin:@sayonara.microlab.cs.utexas.edu:1521:orcl' 'C##cs329e_UTEid' 'orcl_UTEid' 'rdf_mode' 'A0' nodebug
conn_native = connectTo 'jdbc:oracle:thin:@sayonara.microlab.cs.utexas.edu:1521:orcl' 'C##cs329e_UTEid' 'orcl_UTEid' 'native_mode' 'A0' nodebug
connOracleRDFNoSQL = connectTo 'OracleNoSQL' 'kvstore' 'localhost:5000' 'rdf_mode' 'A0'
connOracleRDFNoSQL = connectTo 'OracleNoSQL' 'kvstore' 'localhost:5000' 'rdf_mode' 'A0' nodebug
global_conn = connectTo 'jdbc:oracle:thin:@sayonara.microlab.cs.utexas.edu:1521:orcl' 'C##cs329e_UTEid' 'orcl_UTEid' 'rdf_mode' 'A1' nodebug
print "Connections are opened, start loading Databases"
# A heterogeneous set of inserts is used for testing RDF NoSQL
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7369, 'SMITH', 'CLERK', 7902, TO_DATE('17-DEC-1980', 'DD-MON-YYYY'), 800, NULL, 20);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('20-FEB-1981', 'DD-MON-YYYY'), 1600, 300, 30);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7521, 'WARD', 'SALESMAN', 7698, TO_DATE('22-FEB-1981', 'DD-MON-YYYY'), 1250, 500, 30);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7566, 'JONES', 'MANAGER', 7839, TO_DATE('2-APR-1981', 'DD-MON-YYYY'), 2975, NULL, 20);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7654, 'MARTIN', 'SALESMAN', 7698, TO_DATE('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE('1-MAY-1981', 'DD-MON-YYYY'), 2850, NULL, 30);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7782, 'CLARK', 'MANAGER', 7839, TO_DATE('9-JUN-1981', 'DD-MON-YYYY'), 2450, NULL, 10);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7788, 'SCOTT', 'ANALYST', 7566, TO_DATE('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7839, 'KING', 'PRESIDENT', NULL, TO_DATE('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7844, 'TURNER', 'SALESMAN', 7698, TO_DATE('8-SEP-1981', 'DD-MON-YYYY'), 1500, NULL, 30);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7876, 'ADAMS', 'CLERK', 7788, TO_DATE('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7900, 'JAMES', 'CLERK', 7698, TO_DATE('3-DEC-1981', 'DD-MON-YYYY'), 950, NULL, 30);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7902, 'FORD', 'ANALYST', 7566, TO_DATE('3-DEC-1981', 'DD-MON-YYYY'), 3000, NULL, 20);"
SQL on connOracleRDFNoSQL "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7934, 'MILLER', 'CLERK', 7782, TO_DATE('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 50);"
Neo4j on connOracleRDFNoSQL "CREATE (:dept { DEPTNO : 10, DNAME : 'ACCOUNTING', LOC : 'NEW YORK' })"
Neo4j on connOracleRDFNoSQL "CREATE (:dept { DEPTNO : 20, DNAME : 'RESEARCH', LOC : 'DALLAS' })"
Neo4j on connOracleRDFNoSQL "CREATE (:dept { DEPTNO : 30, DNAME : 'SALES', LOC : 'CHICAGO' })"
Neo4j on connOracleRDFNoSQL "CREATE (:dept { DEPTNO : 40, DNAME : 'OPERATIONS', LOC : 'BOSTON' })"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 10 AND b.deptno = 10 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 20 AND b.deptno = 20 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 30 AND b.deptno = 30 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 40 AND b.deptno = 40 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 10 AND b.deptno = 10 CREATE (a)-[:dept]->(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 20 AND b.deptno = 20 CREATE (a)-[:dept]->(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 30 AND b.deptno = 30 CREATE (a)-[:dept]->(b)"
Neo4j on connOracleRDFNoSQL "MATCH (a:emp),(b:dept) WHERE a.deptno = 40 AND b.deptno = 40 CREATE (a)-[:dept]->(b)"
print "Finished loading the NoSQL Database"
# A heterogeneous set of inserts is used for testing Oracle EE
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7369, ENAME : 'SMITH', JOB : 'CLERK', MGR : 7902, HIREDATE : '17-DEC-80', SAL : 800, COMM : 0, DEPTNO : 20})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7499, ENAME : 'ALLEN', JOB : 'SALESMAN', MGR : 7698, HIREDATE : '20-FEB-81', SAL : 1600, COMM : 300, DEPTNO : 30})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7521, ENAME : 'WARD', JOB : 'SALESMAN', MGR : 7698, HIREDATE : '22-FEB-81', SAL : 1250, COMM : 500, DEPTNO : 30})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7566, ENAME : 'JONES', JOB : 'MANAGER', MGR : 7839, HIREDATE : '02-APR-81', SAL : 2975, COMM : 0, DEPTNO : 20})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7654, ENAME : 'MARTIN', JOB : 'SALESMAN', MGR : 7698, HIREDATE : '28-SEP-81', SAL : 1250, COMM : 1400, DEPTNO : 30})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7698, ENAME : 'BLAKE', JOB : 'MANAGER', MGR : 7839, HIREDATE : '01-MAY-81', SAL : 2850, COMM : 0, DEPTNO : 30})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7782, ENAME : 'CLARK', JOB : 'MANAGER', MGR : 7839, HIREDATE : '09-JUN-81', SAL : 2450, COMM : 0, DEPTNO : 10})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7788, ENAME : 'SCOTT', JOB : 'ANALYST', MGR : 7566, HIREDATE : '09-DEC-82', SAL : 3000, COMM : 0, DEPTNO : 20})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7839, ENAME : 'KING', JOB : 'PRESIDENT', MGR : 0, HIREDATE : '17-NOV-81', SAL : 5000, COMM : 0, DEPTNO : 10})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7844, ENAME : 'TURNER', JOB : 'SALESMAN', MGR : 7698, HIREDATE : '08-SEP-81', SAL : 1500, COMM : 0, DEPTNO : 30})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7876, ENAME : 'ADAMS', JOB : 'CLERK', MGR : 7788, HIREDATE : '12-JAN-83', SAL : 1100, COMM : 0, DEPTNO : 20})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7900, ENAME : 'JAMES', JOB : 'CLERK', MGR : 7698, HIREDATE : '03-DEC-81', SAL : 950, COMM : 0, DEPTNO : 30})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7902, ENAME : 'FORD', JOB : 'ANALYST', MGR : 7566, HIREDATE : '03-DEC-81', SAL : 3000, COMM : 0, DEPTNO : 20})"
Neo4j on connOracleEE "CREATE (:emp { EMPNO : 7934, ENAME : 'MILLER', JOB : 'CLERK', MGR : 7782, HIREDATE : '23-JAN-82', SAL : 1300, COMM : 0, DEPTNO : 50})"
SQL on connOracleEE "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (10, 'ACCOUNTING', 'NEW YORK');"
SQL on connOracleEE "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (20, 'RESEARCH', 'DALLAS');"
SQL on connOracleEE "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (30, 'SALES', 'CHICAGO');"
SQL on connOracleEE "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (40, 'OPERATIONS', 'BOSTON');"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 10 AND b.deptno = 10 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 20 AND b.deptno = 20 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 30 AND b.deptno = 30 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 40 AND b.deptno = 40 CREATE (a)<-[:employees]-(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 10 AND b.deptno = 10 CREATE (a)-[:dept]->(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 20 AND b.deptno = 20 CREATE (a)-[:dept]->(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 30 AND b.deptno = 30 CREATE (a)-[:dept]->(b)"
Neo4j on connOracleEE "MATCH (a:emp),(b:dept) WHERE a.deptno = 40 AND b.deptno = 40 CREATE (a)-[:dept]->(b)"
print "Finished loading the EE Database"
SQL on conn_native "truncate table emp"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7369, 'SMITH', 'CLERK', 7902, TO_DATE('17-DEC-1980', 'DD-MON-YYYY'), 800, NULL, 20);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('20-FEB-1981', 'DD-MON-YYYY'), 1600, 300, 30);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7521, 'WARD', 'SALESMAN', 7698, TO_DATE('22-FEB-1981', 'DD-MON-YYYY'), 1250, 500, 30);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7566, 'JONES', 'MANAGER', 7839, TO_DATE('2-APR-1981', 'DD-MON-YYYY'), 2975, NULL, 20);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7654, 'MARTIN', 'SALESMAN', 7698, TO_DATE('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE('1-MAY-1981', 'DD-MON-YYYY'), 2850, NULL, 30);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7782, 'CLARK', 'MANAGER', 7839, TO_DATE('9-JUN-1981', 'DD-MON-YYYY'), 2450, NULL, 10);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7788, 'SCOTT', 'ANALYST', 7566, TO_DATE('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7839, 'KING', 'PRESIDENT', NULL, TO_DATE('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7844, 'TURNER', 'SALESMAN', 7698, TO_DATE('8-SEP-1981', 'DD-MON-YYYY'), 1500, NULL, 30);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7876, 'ADAMS', 'CLERK', 7788, TO_DATE('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7900, 'JAMES', 'CLERK', 7698, TO_DATE('3-DEC-1981', 'DD-MON-YYYY'), 950, NULL, 30);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7902, 'FORD', 'ANALYST', 7566, TO_DATE('3-DEC-1981', 'DD-MON-YYYY'), 3000, NULL, 20);"
SQL on conn_native "INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES (7934, 'MILLER', 'CLERK', 7782, TO_DATE('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 50);"
SQL on conn_native "truncate table dept"
SQL on conn_native "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (10, 'ACCOUNTING', 'NEW YORK');"
SQL on conn_native "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (20, 'RESEARCH', 'DALLAS');"
SQL on conn_native "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (30, 'SALES', 'CHICAGO');"
SQL on conn_native "INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (40, 'OPERATIONS', 'BOSTON');"
print "Finished loading the Native Database"
# Start testing of Oracle EE
class EESQLTestCase(unittest.TestCase):
def runTest(self):
results = SQL on connOracleEE "select ename from emp"
assert sorted(results) == [('ADAMS',), ('ALLEN',), ('BLAKE',), ('CLARK',), ('FORD',), ('JAMES',), ('JONES',), ('KING',), ('MARTIN',), ('MILLER',), ('SCOTT',), ('SMITH',), ('TURNER',), ('WARD',), ('ename',)], 'SQL query failed'
class EESIMTestCase_1(unittest.TestCase):
def runTest(self):
results = SIM on connOracleEE "From emp Retrieve ename"
assert sorted(results) == [('ADAMS',), ('ALLEN',), ('BLAKE',), ('CLARK',), ('ENAME',), ('FORD',), ('JAMES',), ('JONES',), ('KING',), ('MARTIN',), ('MILLER',), ('SCOTT',), ('SMITH',), ('TURNER',), ('WARD',)], 'SIM query failed'
class EESIMTestCase_2(unittest.TestCase):
def runTest(self):
results = SIM on connOracleEE "FROM EMP RETRIEVE DNAME OF dept, ENAME"
assert sorted(results) == [('ADAMS', 'RESEARCH'), ('ALLEN', 'SALES'), ('BLAKE', 'SALES'), ('CLARK', 'ACCOUNTING'), ('ENAME', 'X0_1'), ('FORD', 'RESEARCH'), ('JAMES', 'SALES'), ('JONES', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('MARTIN', 'SALES'), ('MILLER', 'null'), ('SCOTT', 'RESEARCH'), ('SMITH', 'RESEARCH'), ('TURNER', 'SALES'), ('WARD', 'SALES')], 'SIM query failed'
class EENeo4jTestCase(unittest.TestCase):
def runTest(self):
results = Neo4j on connOracleEE "MATCH(a:emp)-[:dept]->(b:dept) RETURN b.dname, a.ename"
assert sorted(results) == [('ADAMS', 'RESEARCH'), ('ALLEN', 'SALES'), ('BLAKE', 'SALES'), ('CLARK', 'ACCOUNTING'), ('ENAME', 'X0_1'), ('FORD', 'RESEARCH'), ('JAMES', 'SALES'), ('JONES', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('MARTIN', 'SALES'), ('MILLER', 'null'), ('SCOTT', 'RESEARCH'), ('SMITH', 'RESEARCH'), ('TURNER', 'SALES'), ('WARD', 'SALES')], 'Neo4j query failed'
# Start testing of native_mode
class SQLTestCase(unittest.TestCase):
def runTest(self):
results = SQL on conn_native "select ename, dname from emp e join dept d on(e.deptno = d.deptno)"
assert sorted(results) == [('ADAMS', 'RESEARCH'), ('ALLEN', 'SALES'), ('BLAKE', 'SALES'), ('CLARK', 'ACCOUNTING'), ('ENAME', 'DNAME'), ('FORD', 'RESEARCH'), ('JAMES', 'SALES'), ('JONES', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('MARTIN', 'SALES'), ('SCOTT', 'RESEARCH'), ('SMITH', 'RESEARCH'), ('TURNER', 'SALES'), ('WARD', 'SALES')], 'SQL query failed'
# Start testing of Oracle RDF NoSQL
class OracleNoSQLTestCase_Neo4J_select(unittest.TestCase):
def runTest(self):
results = Neo4j on connOracleRDFNoSQL "MATCH(a:emp) RETURN a.ename"
assert sorted(results) == [('ADAMS',), ('ALLEN',), ('BLAKE',), ('CLARK',), ('FORD',), ('JAMES',), ('JONES',), ('KING',), ('MARTIN',), ('MILLER',), ('SCOTT',), ('SMITH',), ('TURNER',), ('WARD',), ('ename',)], 'OracleNoSQLTestCase_Neo4J_select query failed'
class OracleNoSQLTestCase_Neo4J_join(unittest.TestCase):
def runTest(self):
results = Neo4j on connOracleRDFNoSQL "MATCH(a:emp)-[:dept]->(b:dept) RETURN b.dname, a.ename"
assert sorted(results) == [('ADAMS', 'RESEARCH'), ('ALLEN', 'SALES'), ('BLAKE', 'SALES'), ('CLARK', 'ACCOUNTING'), ('FORD', 'RESEARCH'), ('JAMES', 'SALES'), ('JONES', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('MARTIN', 'SALES'), ('MILLER',), ('SCOTT', 'RESEARCH'), ('SMITH', 'RESEARCH'), ('TURNER', 'SALES'), ('WARD', 'SALES'), ('ename', 'x0_1')], 'OracleNoSQLTestCase_Neo4J_join query failed'
class OracleNoSQLTestCase_Neo4J_where(unittest.TestCase):
def runTest(self):
results = Neo4j on connOracleRDFNoSQL "MATCH(a:emp) WHERE a.deptno = 20 RETURN a.ename"
assert sorted(results) == [('ADAMS',), ('FORD',), ('JONES',), ('SCOTT',), ('SMITH',), ('ename',)], 'OracleNoSQLTestCase_Neo4J_where query failed'
class OracleNoSQLTestCase_SIM_select(unittest.TestCase):
def runTest(self):
results = SIM on connOracleRDFNoSQL "FROM emp RETRIEVE *"
assert sorted(results) == [(7566, 7788, 'NULL', 'SCOTT', 20, '09-DEC-1982,', 3000, 'ANALYST'), (7566, 7902, 'NULL', 'FORD', 20, '3-DEC-1981,', 3000, 'ANALYST'), (7698, 7499, 300, 'ALLEN', 30, '20-FEB-1981,', 1600, 'SALESMAN'), (7698, 7521, 500, 'WARD', 30, '22-FEB-1981,', 1250, 'SALESMAN'), (7698, 7654, 1400, 'MARTIN', 30, '28-SEP-1981,', 1250, 'SALESMAN'), (7698, 7844, 'NULL', 'TURNER', 30, '8-SEP-1981,', 1500, 'SALESMAN'), (7698, 7900, 'NULL', 'JAMES', 30, '3-DEC-1981,', 950, 'CLERK'), (7782, 7934, 'NULL', 'MILLER', 50, '23-JAN-1982,', 1300, 'CLERK'), (7788, 7876, 'NULL', 'ADAMS', 20, '12-JAN-1983,', 1100, 'CLERK'), (7839, 7566, 'NULL', 'JONES', 20, '2-APR-1981,', 2975, 'MANAGER'), (7839, 7698, 'NULL', 'BLAKE', 30, '1-MAY-1981,', 2850, 'MANAGER'), (7839, 7782, 'NULL', 'CLARK', 10, '9-JUN-1981,', 2450, 'MANAGER'), (7902, 7369, 'NULL', 'SMITH', 20, '17-DEC-1980,', 800, 'CLERK'), ('NULL', 7839, 'NULL', 'KING', 10, '17-NOV-1981,', 5000, 'PRESIDENT'), ('mgr', 'empno', 'comm', 'ename', 'deptno', 'hiredate', 'sal', 'job')], 'OracleNoSQLTestCase_SIM_select query failed'
class OracleNoSQLTestCase_SIM_join(unittest.TestCase):
def runTest(self):
results = SIM on connOracleRDFNoSQL "FROM emp RETRIEVE dname OF dept, ename;"
assert sorted(results) == [('ADAMS', 'RESEARCH'), ('ALLEN', 'SALES'), ('BLAKE', 'SALES'), ('CLARK', 'ACCOUNTING'), ('FORD', 'RESEARCH'), ('JAMES', 'SALES'), ('JONES', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('MARTIN', 'SALES'), ('MILLER',), ('SCOTT', 'RESEARCH'), ('SMITH', 'RESEARCH'), ('TURNER', 'SALES'), ('WARD', 'SALES'), ('ename', 'x0_1')], 'OracleNoSQLTestCase_SIM_join query failed'
class OracleNoSQLTestCase_SIM_where(unittest.TestCase):
def runTest(self):
results = SIM on connOracleRDFNoSQL "FROM EMP RETRIEVE ENAME where deptno = 20"
assert sorted(results) == [('ADAMS',), ('FORD',), ('JONES',), ('SCOTT',), ('SMITH',), ('ename',)], 'OracleNoSQLTestCase_SIM_where query failed'
class OracleNoSQLTestCase_SQL_project(unittest.TestCase):
def runTest(self):
results = SQL on connOracleRDFNoSQL "select ename, sal from emp order by ename"
assert sorted(results) == [('ADAMS', 1100), ('ALLEN', 1600), ('BLAKE', 2850), ('CLARK', 2450), ('FORD', 3000), ('JAMES', 950), ('JONES', 2975), ('KING', 5000), ('MARTIN', 1250), ('MILLER', 1300), ('SCOTT', 3000), ('SMITH', 800), ('TURNER', 1500), ('WARD', 1250), ('ename', 'sal')], 'OracleNoSQLTestCase_SQL_project query failed'
class OracleNoSQLTestCase_SQL_join(unittest.TestCase):
def runTest(self):
results = SQL on connOracleRDFNoSQL "select ename, sal, dname, loc from emp e join dept d on (e.deptno=d.deptno)"
assert sorted(results) == [('ADAMS', 1100, 'RESEARCH', 'DALLAS'), ('ALLEN', 1600, 'SALES', 'CHICAGO'), ('BLAKE', 2850, 'SALES', 'CHICAGO'), ('CLARK', 2450, 'ACCOUNTING', 'NEW YORK'), ('FORD', 3000, 'RESEARCH', 'DALLAS'), ('JAMES', 950, 'SALES', 'CHICAGO'), ('JONES', 2975, 'RESEARCH', 'DALLAS'), ('KING', 5000, 'ACCOUNTING', 'NEW YORK'), ('MARTIN', 1250, 'SALES', 'CHICAGO'), ('SCOTT', 3000, 'RESEARCH', 'DALLAS'), ('SMITH', 800, 'RESEARCH', 'DALLAS'), ('TURNER', 1500, 'SALES', 'CHICAGO'), ('WARD', 1250, 'SALES', 'CHICAGO'), ('ename', 'sal', 'dname', 'loc')], 'OracleNoSQLTestCase_SQL_join query failed'
class OracleNoSQLTestCase_SPARQL_join(unittest.TestCase):
def runTest(self):
results = SPARQL on connOracleRDFNoSQL "select ?ename ?x0_1 where { GRAPH c:emp_SCHEMA { ?indiv rdf:type c:emp } GRAPH c:emp { ?indiv c:ename ?ename . } OPTIONAL { GRAPH ?0 { ?indiv c:dept ?x0_0 . } GRAPH ?1 { ?x0_0 c:dname ?x0_1 . } } }"
assert sorted(results) == [('ADAMS', 'RESEARCH'), ('ALLEN', 'SALES'), ('BLAKE', 'SALES'), ('CLARK', 'ACCOUNTING'), ('FORD', 'RESEARCH'), ('JAMES', 'SALES'), ('JONES', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('MARTIN', 'SALES'), ('MILLER',), ('SCOTT', 'RESEARCH'), ('SMITH', 'RESEARCH'), ('TURNER', 'SALES'), ('WARD', 'SALES'), ('ename', 'x0_1')], 'OracleNoSQLTestCase_SPARQL_join query failed'
class OOReLTestCase(unittest.TestCase):
def runTest(self):
persist on global_conn class TEST(object):
A = 0
B = 0
def __init__(self):
self.A = ""
self.B = 0
item = TEST()
item.A = "samplestring"
item.B = 540
relInsert on global_conn item
relCommit on global_conn
results = SQL on global_conn "select * from TEST"
# SQL on global_conn """ DELETE FROM TEST"""
# results = SQL on global_conn """ SELECT TEST.A, TEST.B FROM TEST """
assert False, 'OOReL needs work'
if __name__ == "__main__":
unittest.main()
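# Note: the expected lists in the asserts above each include a column-header
# tuple (e.g. ('ename',) or ('ename', 'sal')) because this query layer returns
# the header row alongside the data rows.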
| 101.034146 | 1,088 | 0.648127 | 2,858 | 20,712 | 4.643807 | 0.082575 | 0.01846 | 0.027426 | 0.037975 | 0.789632 | 0.739979 | 0.715793 | 0.700949 | 0.674879 | 0.658454 | 0 | 0.081207 | 0.150396 | 20,712 | 204 | 1,089 | 101.529412 | 0.673012 | 0.015691 | 0 | 0.09375 | 0 | 0.3875 | 0.636341 | 0.071691 | 0 | 0 | 0 | 0 | 0.09375 | 0 | null | null | 0 | 0.00625 | null | null | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5811c86695fcb5e5cb5ed0ccc699b7aba52ed48b | 4,431 | py | Python | tests/test_provider_gavinbunney_bitbucketserver.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | tests/test_provider_gavinbunney_bitbucketserver.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | tests/test_provider_gavinbunney_bitbucketserver.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # tests/test_provider_gavinbunney_bitbucketserver.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:13:29 UTC)
def test_provider_import():
import terrascript.provider.gavinbunney.bitbucketserver
def test_resource_import():
from terrascript.resource.gavinbunney.bitbucketserver import bitbucketserver_banner
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_default_reviewers_condition,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_global_permissions_group,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_global_permissions_user,
)
from terrascript.resource.gavinbunney.bitbucketserver import bitbucketserver_group
from terrascript.resource.gavinbunney.bitbucketserver import bitbucketserver_license
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_mail_server,
)
from terrascript.resource.gavinbunney.bitbucketserver import bitbucketserver_plugin
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_plugin_config,
)
from terrascript.resource.gavinbunney.bitbucketserver import bitbucketserver_project
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_project_hook,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_project_permissions_group,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_project_permissions_user,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_repository,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_repository_hook,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_repository_permissions_group,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_repository_permissions_user,
)
from terrascript.resource.gavinbunney.bitbucketserver import bitbucketserver_user
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_user_access_token,
)
from terrascript.resource.gavinbunney.bitbucketserver import (
bitbucketserver_user_group,
)
def test_datasource_import():
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_application_properties,
)
from terrascript.data.gavinbunney.bitbucketserver import bitbucketserver_cluster
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_global_permissions_groups,
)
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_global_permissions_users,
)
from terrascript.data.gavinbunney.bitbucketserver import bitbucketserver_group_users
from terrascript.data.gavinbunney.bitbucketserver import bitbucketserver_groups
from terrascript.data.gavinbunney.bitbucketserver import bitbucketserver_plugin
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_project_hooks,
)
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_project_permissions_groups,
)
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_project_permissions_users,
)
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_repository_hooks,
)
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_repository_permissions_groups,
)
from terrascript.data.gavinbunney.bitbucketserver import (
bitbucketserver_repository_permissions_users,
)
from terrascript.data.gavinbunney.bitbucketserver import bitbucketserver_user
# TODO: Shortcut imports without namespace for official and supported providers.
# TODO: This has to be moved into a required_providers block.
# def test_version_source():
#
# import terrascript.provider.gavinbunney.bitbucketserver
#
# t = terrascript.provider.gavinbunney.bitbucketserver.bitbucketserver()
# s = str(t)
#
# assert 'https://github.com/gavinbunney/terraform-provider-bitbucketserver' in s
# assert '1.5.0' in s
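# A hedged smoke-test sketch of the provider block itself, kept commented out;
# the constructor keyword names and values below are placeholders, not
# confirmed settings of this provider:
#
# import terrascript
# import terrascript.provider.gavinbunney.bitbucketserver as bbs
#
# ts = terrascript.Terrascript()
# ts += bbs.bitbucketserver(server="https://bitbucket.example.com",
#                           username="admin", password="secret")
# assert "bitbucketserver" in str(ts)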
| 31.877698 | 88 | 0.780862 | 400 | 4,431 | 8.4425 | 0.205 | 0.292567 | 0.322179 | 0.473201 | 0.839206 | 0.806337 | 0.806337 | 0.703287 | 0.469648 | 0 | 0 | 0.004054 | 0.164974 | 4,431 | 138 | 89 | 32.108696 | 0.908649 | 0.124351 | 0 | 0.27381 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0 | 1 | 0.035714 | true | 0 | 0.452381 | 0 | 0.488095 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
5824bf85b24bda9faa0e98287116716e8832be1c | 109 | py | Python | python/fuzzy_classification/util/dummy_logger.py | oljubuncic1/fuzzy-classification | 915a4f1a9fb25b576ac985755c379f89f54c65a3 | [
"MIT"
] | null | null | null | python/fuzzy_classification/util/dummy_logger.py | oljubuncic1/fuzzy-classification | 915a4f1a9fb25b576ac985755c379f89f54c65a3 | [
"MIT"
] | null | null | null | python/fuzzy_classification/util/dummy_logger.py | oljubuncic1/fuzzy-classification | 915a4f1a9fb25b576ac985755c379f89f54c65a3 | [
"MIT"
] | 1 | 2020-04-20T07:32:22.000Z | 2020-04-20T07:32:22.000Z | class DummyLogger:
def debug(self, *args):
return
def info(self, *args):
return
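# Minimal usage sketch: DummyLogger is a null-object logger that silently
# accepts debug()/info() calls, so callers need no "if logger is None" checks.
# The consumer function below is hypothetical, purely for illustration.
if __name__ == "__main__":
    def consume(items, logger=DummyLogger()):
        logger.info("consuming %d items", len(items))  # discarded silently
        return sum(items)

    print(consume([1, 2, 3]))  # -> 6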
| 15.571429 | 27 | 0.541284 | 12 | 109 | 4.916667 | 0.666667 | 0.271186 | 0.474576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.357798 | 109 | 6 | 28 | 18.166667 | 0.842857 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
585b6c4687fdd6c28794b61ec9d4f25d3b939bab | 50,463 | py | Python | infoblox_netmri/api/broker/v2_9_0/auth_server_broker.py | infobloxopen/infoblox_netmri | aa1c744df7e439dbe163bb9edd165e4e85a9771b | [
"Apache-2.0"
] | 12 | 2016-02-19T12:37:54.000Z | 2022-03-04T20:11:08.000Z | infoblox_netmri/api/broker/v2_9_0/auth_server_broker.py | azinfoblox/infoblox-netmri | 02372c5231e2677ab6299cb659a73c9a41b4b0f4 | [
"Apache-2.0"
] | 18 | 2015-11-12T18:37:00.000Z | 2021-05-19T07:59:55.000Z | infoblox_netmri/api/broker/v2_9_0/auth_server_broker.py | azinfoblox/infoblox-netmri | 02372c5231e2677ab6299cb659a73c9a41b4b0f4 | [
"Apache-2.0"
] | 18 | 2016-01-07T12:04:34.000Z | 2022-03-31T11:05:41.000Z | from ..broker import Broker
class AuthServerBroker(Broker):
controller = "auth_servers"
def show(self, **kwargs):
"""Shows the details for the specified auth server.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The authentication server identifier.
:type id: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_server: The auth server identified by the specified id.
:rtype auth_server: AuthServer
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
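# Usage sketch (kept as a comment so this module stays import-safe; the
# client/broker wiring below is assumed, and host/credentials are placeholders):
#
# from infoblox_netmri.client import InfobloxNetMRI
# client = InfobloxNetMRI(host="netmri.example.com",
#                         username="admin", password="secret")
# broker = client.get_broker('AuthServer')
# server = broker.show(id=1)  # -> the AuthServer record with id 1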
def index(self, **kwargs):
"""Lists the available auth servers. Any of the inputs listed may be be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_service_id: The id of the authentication service, this server is member of.
:type auth_service_id: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param id: The authentication server identifier.
:type id: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, priority, enabled_ind, auth_server, auth_port, auth_shared_secret, auth_encryption, auth_cert, created_at, updated_at, secure_version, auth_service_id, auth_protocol, source_interface_id, auth_version.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each AuthServer. Valid values are id, priority, enabled_ind, auth_server, auth_port, auth_shared_secret, auth_encryption, auth_cert, created_at, updated_at, secure_version, auth_service_id, auth_protocol, source_interface_id, auth_version. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_servers: An array of the AuthServer objects that match the specified input criteria.
:rtype auth_servers: Array of AuthServer
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
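# Paging/sorting sketch (same assumed broker wiring as in the show() example
# above; all values are placeholders):
#
# servers = broker.index(start=0, limit=100,
#                        sort=['priority'], dir=['asc'],
#                        select=['id', 'auth_server', 'auth_port', 'enabled_ind'])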
def search(self, **kwargs):
"""Lists the available auth servers matching the input criteria. This method provides a more flexible search interface than the index method, but searching using this method is more demanding on the system and will not perform to the same level as the index method. The input fields listed below will be used as in the index method, to filter the result, along with the optional query string and XML filter described below.
**Inputs**
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_cert: The SSL certificate of an Authentication Server. (Required for Active Directory method).
:type auth_cert: Array of String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_encryption: The Encryption method (none or SSL) (required for Active Directory method).
:type auth_encryption: Array of String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_port: Authentication Port (required for Active Directory method).
:type auth_port: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_protocol: The password exchange protocol to use for authentication. One of (PAP, CHAP)
:type auth_protocol: Array of String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_server: Authentication Server Name (required for Radius, Tacacs, LDAP and Active Directory methods).
:type auth_server: Array of String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_service_id: The id of the authentication service, this server is member of.
:type auth_service_id: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_shared_secret: The shared secret of an authentication server (required for Radius and Tacacs methods).
:type auth_shared_secret: Array of String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_version: The version used for the authentication (LDAP).
:type auth_version: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param created_at: The date and time the record was initially created in NetMRI.
:type created_at: Array of DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param enabled_ind: A flag indicating whether the authentication server settings is enabled or disabled.
:type enabled_ind: Array of Boolean
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param id: The authentication server identifier.
:type id: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param priority: Priority assigned to an authentication server.
:type priority: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param secure_version: Internal encrypt version used for any auth_shared_secret
:type secure_version: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param source_interface_id: The NetMRI interface to use as source of the packets sent to the authentication server.
:type source_interface_id: Array of Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param updated_at: The date and time the record was last modified in NetMRI.
:type updated_at: Array of DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, priority, enabled_ind, auth_server, auth_port, auth_shared_secret, auth_encryption, auth_cert, created_at, updated_at, secure_version, auth_service_id, auth_protocol, source_interface_id, auth_version.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each AuthServer. Valid values are id, priority, enabled_ind, auth_server, auth_port, auth_shared_secret, auth_encryption, auth_cert, created_at, updated_at, secure_version, auth_service_id, auth_protocol, source_interface_id, auth_version. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param query: This value will be matched against auth servers, looking to see if one or more of the listed attributes contain the passed value. You may also surround the value with '/' and '/' to perform a regular expression search rather than a containment operation. Any record that matches will be returned. The attributes searched are: auth_cert, auth_encryption, auth_port, auth_protocol, auth_server, auth_service_id, auth_shared_secret, auth_version, created_at, enabled_ind, id, priority, secure_version, source_interface_id, updated_at.
:type query: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_servers: An array of the AuthServer objects that match the specified input criteria.
:rtype auth_servers: Array of AuthServer
"""
return self.api_list_request(self._get_method_fullname("search"), kwargs)
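# Search sketch (same assumed broker wiring as above): exact field filters
# combine with the free-text :query input; wrapping the value in '/' characters
# switches to a regular-expression match.
#
# radius_servers = broker.search(enabled_ind=[True], query='/radius/')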
def find(self, **kwargs):
"""Lists the available auth servers matching the input specification. This provides the most flexible search specification of all the query mechanisms, enabling searching using comparison operations other than equality. However, it is more complex to use and will not perform as efficiently as the index or search methods. In the input descriptions below, 'field names' refers to the following fields: auth_cert, auth_encryption, auth_port, auth_protocol, auth_server, auth_service_id, auth_shared_secret, auth_version, created_at, enabled_ind, id, priority, secure_version, source_interface_id, updated_at.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_cert: The operator to apply to the field auth_cert. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_cert: The SSL certificate of an Authentication Server. (Required for Active Directory method). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_cert: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_cert: If op_auth_cert is specified, the field named in this input will be compared to the value in auth_cert using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_cert must be specified if op_auth_cert is specified.
:type val_f_auth_cert: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_cert: If op_auth_cert is specified, this value will be compared to the value in auth_cert using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_cert must be specified if op_auth_cert is specified.
:type val_c_auth_cert: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_encryption: The operator to apply to the field auth_encryption. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_encryption: The Encryption method (none or SSL) (required for Active Directory method). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_encryption: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_encryption: If op_auth_encryption is specified, the field named in this input will be compared to the value in auth_encryption using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_encryption must be specified if op_auth_encryption is specified.
:type val_f_auth_encryption: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_encryption: If op_auth_encryption is specified, this value will be compared to the value in auth_encryption using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_encryption must be specified if op_auth_encryption is specified.
:type val_c_auth_encryption: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_port: The operator to apply to the field auth_port. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_port: Authentication Port (required for Active Directory method). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_port: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_port: If op_auth_port is specified, the field named in this input will be compared to the value in auth_port using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_port must be specified if op_auth_port is specified.
:type val_f_auth_port: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_port: If op_auth_port is specified, this value will be compared to the value in auth_port using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_port must be specified if op_auth_port is specified.
:type val_c_auth_port: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_protocol: The operator to apply to the field auth_protocol. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_protocol: The password exchange protocol to use for authentication. One of (PAP, CHAP) For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_protocol: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_protocol: If op_auth_protocol is specified, the field named in this input will be compared to the value in auth_protocol using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_protocol must be specified if op_auth_protocol is specified.
:type val_f_auth_protocol: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_protocol: If op_auth_protocol is specified, this value will be compared to the value in auth_protocol using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_protocol must be specified if op_auth_protocol is specified.
:type val_c_auth_protocol: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_server: The operator to apply to the field auth_server. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_server: Authentication Server Name (required for Radius, Tacacs, LDAP and Active Directory methods). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_server: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_server: If op_auth_server is specified, the field named in this input will be compared to the value in auth_server using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_server must be specified if op_auth_server is specified.
:type val_f_auth_server: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_server: If op_auth_server is specified, this value will be compared to the value in auth_server using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_server must be specified if op_auth_server is specified.
:type val_c_auth_server: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_service_id: The operator to apply to the field auth_service_id. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_service_id: The id of the authentication service, this server is member of. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_service_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_service_id: If op_auth_service_id is specified, the field named in this input will be compared to the value in auth_service_id using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_service_id must be specified if op_auth_service_id is specified.
:type val_f_auth_service_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_service_id: If op_auth_service_id is specified, this value will be compared to the value in auth_service_id using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_service_id must be specified if op_auth_service_id is specified.
:type val_c_auth_service_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_shared_secret: The operator to apply to the field auth_shared_secret. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_shared_secret: The shared secret of an authentication server (required for Radius and Tacacs methods). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_shared_secret: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_shared_secret: If op_auth_shared_secret is specified, the field named in this input will be compared to the value in auth_shared_secret using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_shared_secret must be specified if op_auth_shared_secret is specified.
:type val_f_auth_shared_secret: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_shared_secret: If op_auth_shared_secret is specified, this value will be compared to the value in auth_shared_secret using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_shared_secret must be specified if op_auth_shared_secret is specified.
:type val_c_auth_shared_secret: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_auth_version: The operator to apply to the field auth_version. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. auth_version: The version used for the authentication (LDAP). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_auth_version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_auth_version: If op_auth_version is specified, the field named in this input will be compared to the value in auth_version using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_auth_version must be specified if op_auth_version is specified.
:type val_f_auth_version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_auth_version: If op_auth_version is specified, this value will be compared to the value in auth_version using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_auth_version must be specified if op_auth_version is specified.
:type val_c_auth_version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_created_at: The operator to apply to the field created_at. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. created_at: The date and time the record was initially created in NetMRI. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_created_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_created_at: If op_created_at is specified, the field named in this input will be compared to the value in created_at using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_created_at must be specified if op_created_at is specified.
:type val_f_created_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_created_at: If op_created_at is specified, this value will be compared to the value in created_at using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_created_at must be specified if op_created_at is specified.
:type val_c_created_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_enabled_ind: The operator to apply to the field enabled_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. enabled_ind: A flag indicating whether the authentication server settings is enabled or disabled. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_enabled_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_enabled_ind: If op_enabled_ind is specified, the field named in this input will be compared to the value in enabled_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_enabled_ind must be specified if op_enabled_ind is specified.
:type val_f_enabled_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_enabled_ind: If op_enabled_ind is specified, this value will be compared to the value in enabled_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_enabled_ind must be specified if op_enabled_ind is specified.
:type val_c_enabled_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_id: The operator to apply to the field id. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. id: The authentication server identifier. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_id: If op_id is specified, the field named in this input will be compared to the value in id using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_id must be specified if op_id is specified.
:type val_f_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_id: If op_id is specified, this value will be compared to the value in id using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_id must be specified if op_id is specified.
:type val_c_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_priority: The operator to apply to the field priority. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. priority: Priority assigned to an authentication server. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_priority: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_priority: If op_priority is specified, the field named in this input will be compared to the value in priority using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_priority must be specified if op_priority is specified.
:type val_f_priority: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_priority: If op_priority is specified, this value will be compared to the value in priority using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_priority must be specified if op_priority is specified.
:type val_c_priority: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_secure_version: The operator to apply to the field secure_version. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. secure_version: Internal encrypt version used for any auth_shared_secret For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_secure_version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_secure_version: If op_secure_version is specified, the field named in this input will be compared to the value in secure_version using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_secure_version must be specified if op_secure_version is specified.
:type val_f_secure_version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_secure_version: If op_secure_version is specified, this value will be compared to the value in secure_version using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_secure_version must be specified if op_secure_version is specified.
:type val_c_secure_version: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_source_interface_id: The operator to apply to the field source_interface_id. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. source_interface_id: The NetMRI interface to use as source of the packets sent to the authentication server. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_source_interface_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_source_interface_id: If op_source_interface_id is specified, the field named in this input will be compared to the value in source_interface_id using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_source_interface_id must be specified if op_source_interface_id is specified.
:type val_f_source_interface_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_source_interface_id: If op_source_interface_id is specified, this value will be compared to the value in source_interface_id using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_source_interface_id must be specified if op_source_interface_id is specified.
:type val_c_source_interface_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_updated_at: The operator to apply to the field updated_at. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. updated_at: The date and time the record was last modified in NetMRI. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_updated_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_updated_at: If op_updated_at is specified, the field named in this input will be compared to the value in updated_at using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_updated_at must be specified if op_updated_at is specified.
:type val_f_updated_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_updated_at: If op_updated_at is specified, this value will be compared to the value in updated_at using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_updated_at must be specified if op_updated_at is specified.
:type val_c_updated_at: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit parameter for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` id
:param sort: The data field(s) to use for sorting the output. Default is id. Valid values are id, priority, enabled_ind, auth_server, auth_port, auth_shared_secret, auth_encryption, auth_cert, created_at, updated_at, secure_version, auth_service_id, auth_protocol, source_interface_id, auth_version.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each AuthServer. Valid values are id, priority, enabled_ind, auth_server, auth_port, auth_shared_secret, auth_encryption, auth_cert, created_at, updated_at, secure_version, auth_service_id, auth_protocol, source_interface_id, auth_version. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used to locate a row position within the records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used to locate a row position within the records.
:type goto_value: String
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_servers: An array of the AuthServer objects that match the specified input criteria.
:rtype auth_servers: Array of AuthServer
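
**Example** (an illustrative sketch; ``broker`` is assumed to be an instance of this broker class, and the timestamp format is assumed to follow NetMRI conventions)::

    # Find auth servers last modified during 2020, newest first.
    servers = broker.find(op_updated_at='between',
                          val_c_updated_at='2020-01-01,2020-12-31',
                          sort=['updated_at'], dir=['desc'])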
"""
return self.api_list_request(self._get_method_fullname("find"), kwargs)
def update(self, **kwargs):
"""Updates an existing auth server.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The authentication server identifier.
:type id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_cert: The SSL certificate of an Authentication Server (required for Active Directory method). If omitted, this field will not be updated.
:type auth_cert: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_encryption: The encryption method, either none or SSL (required for Active Directory method). If omitted, this field will not be updated.
:type auth_encryption: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_port: Authentication Port (required for Active Directory method). If omitted, this field will not be updated.
:type auth_port: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_protocol: The password exchange protocol to use for authentication. One of (PAP, CHAP). If omitted, this field will not be updated.
:type auth_protocol: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_server: Authentication Server Name (required for Radius, Tacacs, LDAP and Active Directory methods). If omitted, this field will not be updated.
:type auth_server: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_service_id: The id of the authentication service that this server is a member of. If omitted, this field will not be updated.
:type auth_service_id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_shared_secret: The shared secret of an authentication server (required for Radius and Tacacs methods). If omitted, this field will not be updated.
:type auth_shared_secret: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param auth_version: The version used for the authentication (LDAP). If omitted, this field will not be updated.
:type auth_version: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` True
:param enabled_ind: A flag indicating whether the authentication server setting is enabled or disabled. If omitted, this field will be updated to the default value.
:type enabled_ind: Boolean
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param priority: Priority assigned to an authentication server. If omitted, this field will not be updated.
:type priority: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param source_interface_id: The NetMRI interface to use as the source of the packets sent to the authentication server. If omitted, this field will not be updated.
:type source_interface_id: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return id: The id of the updated auth server.
:rtype id: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return model: The class name of the updated auth server.
:rtype model: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return uri: A URI that may be used to retrieve the updated auth server.
:rtype uri: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return auth_server: The updated auth server.
:rtype auth_server: AuthServer
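
**Example** (an illustrative sketch; ``broker`` as above)::

    # Raise auth server 10 to top priority; omitted fields keep their
    # current values (except enabled_ind, which resets to its default).
    result = broker.update(id=10, priority=1)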
"""
return self.api_request(self._get_method_fullname("update"), kwargs)
def destroy(self, **kwargs):
"""Deletes the specified auth server from NetMRI.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param id: The authentication server identifier.
:type id: Integer
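
**Example** (an illustrative sketch; ``broker`` as above)::

    broker.destroy(id=10)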
**Outputs**
"""
return self.api_request(self._get_method_fullname("destroy"), kwargs)
# nova/tests/unit/virt/libvirt/test_config.py (bopopescu/nova-token, Apache-2.0)
# Copyright (C) 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from lxml import etree
from oslo_utils import units

from nova.compute import arch
from nova import test
from nova.tests.unit import matchers
from nova.virt.libvirt import config


class LibvirtConfigBaseTest(test.NoDBTestCase):
    def assertXmlEqual(self, expectedXmlstr, actualXmlstr):
        self.assertThat(actualXmlstr, matchers.XMLMatches(expectedXmlstr))
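

# Every test class below follows the same round-trip pattern: build one of
# the nova.virt.libvirt.config objects, render it with to_xml(), and compare
# the result against hand-written libvirt XML, or go the other direction
# with parse_str()/parse_dom() and assert on the resulting attributes.
# A minimal sketch of the forward direction (illustrative only, not part of
# the test suite):
#
#     obj = config.LibvirtConfigGuestClock()
#     obj.offset = "localtime"
#     obj.to_xml()    # -> '<clock offset="localtime"/>'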


class LibvirtConfigTest(LibvirtConfigBaseTest):

    def test_config_plain(self):
        obj = config.LibvirtConfigObject(root_name="demo")
        xml = obj.to_xml()

        self.assertXmlEqual(xml, "<demo/>")

    def test_config_ns(self):
        obj = config.LibvirtConfigObject(root_name="demo", ns_prefix="foo",
                                         ns_uri="http://example.com/foo")
        xml = obj.to_xml()

        self.assertXmlEqual(xml, """
            <foo:demo xmlns:foo="http://example.com/foo"/>""")

    def test_config_text(self):
        obj = config.LibvirtConfigObject(root_name="demo")
        root = obj.format_dom()
        root.append(obj._text_node("foo", "bar"))

        xml = etree.tostring(root)
        self.assertXmlEqual(xml, "<demo><foo>bar</foo></demo>")

    def test_config_text_unicode(self):
        obj = config.LibvirtConfigObject(root_name='demo')
        root = obj.format_dom()
        root.append(obj._text_node('foo', u'\xF0\x9F\x92\xA9'))
        self.assertXmlEqual('<demo><foo>💩</foo></demo>',
                            etree.tostring(root))

    def test_config_parse(self):
        inxml = "<demo><foo/></demo>"
        obj = config.LibvirtConfigObject(root_name="demo")
        obj.parse_str(inxml)


class LibvirtConfigCapsTest(LibvirtConfigBaseTest):

    def test_config_host(self):
        xmlin = """
        <capabilities>
          <host>
            <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
            <cpu>
              <arch>x86_64</arch>
              <model>Opteron_G3</model>
              <vendor>AMD</vendor>
              <topology sockets='1' cores='4' threads='1'/>
              <feature name='ibs'/>
              <feature name='osvw'/>
            </cpu>
            <topology>
              <cells num='2'>
                <cell id='0'>
                  <memory unit='KiB'>4048280</memory>
                  <pages unit='KiB' size='4'>1011941</pages>
                  <pages unit='KiB' size='2048'>0</pages>
                  <cpus num='4'>
                    <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
                    <cpu id='1' socket_id='0' core_id='1' siblings='1'/>
                    <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
                    <cpu id='3' socket_id='0' core_id='3' siblings='3'/>
                  </cpus>
                </cell>
                <cell id='1'>
                  <memory unit='KiB'>4127684</memory>
                  <pages unit='KiB' size='4'>1031921</pages>
                  <pages unit='KiB' size='2048'>0</pages>
                  <cpus num='4'>
                    <cpu id='4' socket_id='1' core_id='0' siblings='4'/>
                    <cpu id='5' socket_id='1' core_id='1' siblings='5'/>
                    <cpu id='6' socket_id='1' core_id='2' siblings='6'/>
                    <cpu id='7' socket_id='1' core_id='3' siblings='7'/>
                  </cpus>
                </cell>
              </cells>
            </topology>
          </host>
          <guest>
            <os_type>hvm</os_type>
            <arch name='x86_64'/>
          </guest>
          <guest>
            <os_type>hvm</os_type>
            <arch name='i686'/>
          </guest>
        </capabilities>"""

        obj = config.LibvirtConfigCaps()
        obj.parse_str(xmlin)

        self.assertIsInstance(obj.host, config.LibvirtConfigCapsHost)
        self.assertEqual(obj.host.uuid, "c7a5fdbd-edaf-9455-926a-d65c16db1809")

        xmlout = obj.to_xml()

        self.assertXmlEqual(xmlin, xmlout)

    def test_config_host_numa_cell_no_memory_caps(self):
        xmlin = """
          <cell id='0'>
            <cpus num='1'>
              <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            </cpus>
          </cell>"""
        obj = config.LibvirtConfigCapsNUMACell()
        obj.parse_str(xmlin)
        self.assertEqual(0, obj.memory)
        self.assertEqual(1, len(obj.cpus))

    def test_config_host_numa_cell_no_cpus_caps(self):
        xmlin = """
          <cell id='0'>
            <memory unit='KiB'>128</memory>
          </cell>"""
        obj = config.LibvirtConfigCapsNUMACell()
        obj.parse_str(xmlin)
        self.assertEqual(128, obj.memory)
        self.assertEqual(0, len(obj.cpus))


class LibvirtConfigGuestTimerTest(LibvirtConfigBaseTest):
    def test_config_platform(self):
        obj = config.LibvirtConfigGuestTimer()
        obj.track = "host"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <timer name="platform" track="host"/>
        """)

    def test_config_pit(self):
        obj = config.LibvirtConfigGuestTimer()
        obj.name = "pit"
        obj.tickpolicy = "discard"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <timer name="pit" tickpolicy="discard"/>
        """)

    def test_config_hpet(self):
        obj = config.LibvirtConfigGuestTimer()
        obj.name = "hpet"
        obj.present = False

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <timer name="hpet" present="no"/>
        """)


class LibvirtConfigGuestClockTest(LibvirtConfigBaseTest):
    def test_config_utc(self):
        obj = config.LibvirtConfigGuestClock()

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <clock offset="utc"/>
        """)

    def test_config_localtime(self):
        obj = config.LibvirtConfigGuestClock()
        obj.offset = "localtime"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <clock offset="localtime"/>
        """)

    def test_config_timezone(self):
        obj = config.LibvirtConfigGuestClock()
        obj.offset = "timezone"
        obj.timezone = "EDT"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <clock offset="timezone" timezone="EDT"/>
        """)

    def test_config_variable(self):
        obj = config.LibvirtConfigGuestClock()
        obj.offset = "variable"
        obj.adjustment = "123456"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <clock offset="variable" adjustment="123456"/>
        """)

    def test_config_timers(self):
        obj = config.LibvirtConfigGuestClock()

        tmpit = config.LibvirtConfigGuestTimer()
        tmpit.name = "pit"
        tmpit.tickpolicy = "discard"

        tmrtc = config.LibvirtConfigGuestTimer()
        tmrtc.name = "rtc"
        tmrtc.tickpolicy = "merge"

        obj.add_timer(tmpit)
        obj.add_timer(tmrtc)

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <clock offset="utc">
               <timer name="pit" tickpolicy="discard"/>
               <timer name="rtc" tickpolicy="merge"/>
            </clock>
        """)
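

# Note there are host-side and guest-side variants of the CPU feature
# element: LibvirtConfigCPUFeature renders a bare <feature name="..."/>,
# while LibvirtConfigGuestCPUFeature adds the per-guest "policy" attribute
# (rendered as policy="require" when features are attached to a guest CPU
# without an explicit policy, as LibvirtConfigGuestCPUTest.test_config_complex
# further down demonstrates).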


class LibvirtConfigCPUFeatureTest(LibvirtConfigBaseTest):

    def test_config_simple(self):
        obj = config.LibvirtConfigCPUFeature("mtrr")

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <feature name="mtrr"/>
        """)


class LibvirtConfigGuestCPUFeatureTest(LibvirtConfigBaseTest):

    def test_config_simple(self):
        obj = config.LibvirtConfigGuestCPUFeature("mtrr")
        obj.policy = "force"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <feature name="mtrr" policy="force"/>
        """)


class LibvirtConfigGuestCPUNUMATest(LibvirtConfigBaseTest):

    def test_parse_dom(self):
        xml = """
            <numa>
              <cell id="0" cpus="0-1" memory="1000000"/>
              <cell id="1" cpus="2-3" memory="1500000"/>
            </numa>
        """
        xmldoc = etree.fromstring(xml)
        obj = config.LibvirtConfigGuestCPUNUMA()
        obj.parse_dom(xmldoc)

        self.assertEqual(2, len(obj.cells))

    def test_config_simple(self):
        obj = config.LibvirtConfigGuestCPUNUMA()

        cell = config.LibvirtConfigGuestCPUNUMACell()
        cell.id = 0
        cell.cpus = set([0, 1])
        cell.memory = 1000000
        cell.memAccess = "shared"

        obj.cells.append(cell)

        cell = config.LibvirtConfigGuestCPUNUMACell()
        cell.id = 1
        cell.cpus = set([2, 3])
        cell.memory = 1500000
        cell.memAccess = "private"

        obj.cells.append(cell)

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <numa>
              <cell id="0" cpus="0-1" memory="1000000" memAccess="shared"/>
              <cell id="1" cpus="2-3" memory="1500000" memAccess="private"/>
            </numa>
        """)


class LibvirtConfigCPUTest(LibvirtConfigBaseTest):

    def test_config_simple(self):
        obj = config.LibvirtConfigCPU()
        obj.model = "Penryn"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu>
              <model>Penryn</model>
            </cpu>
        """)

    def test_config_complex(self):
        obj = config.LibvirtConfigCPU()
        obj.model = "Penryn"
        obj.vendor = "Intel"
        obj.arch = arch.X86_64

        obj.add_feature(config.LibvirtConfigCPUFeature("mtrr"))
        obj.add_feature(config.LibvirtConfigCPUFeature("apic"))

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu>
              <arch>x86_64</arch>
              <model>Penryn</model>
              <vendor>Intel</vendor>
              <feature name="apic"/>
              <feature name="mtrr"/>
            </cpu>
        """)

    def test_only_uniq_cpu_featues(self):
        obj = config.LibvirtConfigCPU()
        obj.model = "Penryn"
        obj.vendor = "Intel"
        obj.arch = arch.X86_64

        obj.add_feature(config.LibvirtConfigCPUFeature("mtrr"))
        obj.add_feature(config.LibvirtConfigCPUFeature("apic"))
        obj.add_feature(config.LibvirtConfigCPUFeature("apic"))
        obj.add_feature(config.LibvirtConfigCPUFeature("mtrr"))

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu>
              <arch>x86_64</arch>
              <model>Penryn</model>
              <vendor>Intel</vendor>
              <feature name="apic"/>
              <feature name="mtrr"/>
            </cpu>
        """)

    def test_config_topology(self):
        obj = config.LibvirtConfigCPU()
        obj.model = "Penryn"
        obj.sockets = 4
        obj.cores = 4
        obj.threads = 2

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu>
              <model>Penryn</model>
              <topology sockets="4" cores="4" threads="2"/>
            </cpu>
        """)


class LibvirtConfigGuestCPUTest(LibvirtConfigBaseTest):

    def test_config_simple(self):
        obj = config.LibvirtConfigGuestCPU()
        obj.model = "Penryn"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu match="exact">
              <model>Penryn</model>
            </cpu>
        """)

    def test_config_complex(self):
        obj = config.LibvirtConfigGuestCPU()
        obj.model = "Penryn"
        obj.vendor = "Intel"
        obj.arch = arch.X86_64
        obj.mode = "custom"

        obj.add_feature(config.LibvirtConfigGuestCPUFeature("mtrr"))
        obj.add_feature(config.LibvirtConfigGuestCPUFeature("apic"))

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu mode="custom" match="exact">
              <arch>x86_64</arch>
              <model>Penryn</model>
              <vendor>Intel</vendor>
              <feature name="apic" policy="require"/>
              <feature name="mtrr" policy="require"/>
            </cpu>
        """)

    def test_config_host(self):
        obj = config.LibvirtConfigGuestCPU()
        obj.mode = "host-model"
        obj.match = "exact"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu mode="host-model" match="exact"/>
        """)

    def test_config_host_with_numa(self):
        obj = config.LibvirtConfigGuestCPU()
        obj.mode = "host-model"
        obj.match = "exact"

        numa = config.LibvirtConfigGuestCPUNUMA()

        cell = config.LibvirtConfigGuestCPUNUMACell()
        cell.id = 0
        cell.cpus = set([0, 1])
        cell.memory = 1000000
        cell.memAccess = "private"

        numa.cells.append(cell)

        cell = config.LibvirtConfigGuestCPUNUMACell()
        cell.id = 1
        cell.cpus = set([2, 3])
        cell.memory = 1500000

        numa.cells.append(cell)

        obj.numa = numa

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <cpu mode="host-model" match="exact">
              <numa>
                <cell id="0" cpus="0-1" memory="1000000" memAccess="private"/>
                <cell id="1" cpus="2-3" memory="1500000"/>
              </numa>
            </cpu>
        """)
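

# The SMBIOS handling is split across two elements: LibvirtConfigGuestSMBIOS
# renders the guest <smbios mode="sysinfo"/> tag, while
# LibvirtConfigGuestSysinfo renders the <sysinfo type="smbios"> block whose
# <bios> and <system> entries carry the actual vendor/product/serial data,
# as the next two test classes show.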


class LibvirtConfigGuestSMBIOSTest(LibvirtConfigBaseTest):

    def test_config_simple(self):
        obj = config.LibvirtConfigGuestSMBIOS()

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <smbios mode="sysinfo"/>
        """)


class LibvirtConfigGuestSysinfoTest(LibvirtConfigBaseTest):

    def test_config_simple(self):
        obj = config.LibvirtConfigGuestSysinfo()

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <sysinfo type="smbios"/>
        """)

    def test_config_bios(self):
        obj = config.LibvirtConfigGuestSysinfo()
        obj.bios_vendor = "Acme"
        obj.bios_version = "6.6.6"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <sysinfo type="smbios">
              <bios>
                <entry name="vendor">Acme</entry>
                <entry name="version">6.6.6</entry>
              </bios>
            </sysinfo>
        """)

    def test_config_system(self):
        obj = config.LibvirtConfigGuestSysinfo()
        obj.system_manufacturer = "Acme"
        obj.system_product = "Wile Coyote"
        obj.system_version = "6.6.6"
        obj.system_serial = "123456"
        obj.system_uuid = "c7a5fdbd-edaf-9455-926a-d65c16db1809"
        obj.system_family = "Anvils"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <sysinfo type="smbios">
              <system>
                <entry name="manufacturer">Acme</entry>
                <entry name="product">Wile Coyote</entry>
                <entry name="version">6.6.6</entry>
                <entry name="serial">123456</entry>
                <entry name="uuid">c7a5fdbd-edaf-9455-926a-d65c16db1809</entry>
                <entry name="family">Anvils</entry>
              </system>
            </sysinfo>
        """)

    def test_config_mixed(self):
        obj = config.LibvirtConfigGuestSysinfo()
        obj.bios_vendor = "Acme"
        obj.system_manufacturer = "Acme"
        obj.system_product = "Wile Coyote"
        obj.system_uuid = "c7a5fdbd-edaf-9455-926a-d65c16db1809"
        obj.system_family = "Anvils"

        xml = obj.to_xml()
        self.assertXmlEqual(xml, """
            <sysinfo type="smbios">
              <bios>
                <entry name="vendor">Acme</entry>
              </bios>
              <system>
                <entry name="manufacturer">Acme</entry>
                <entry name="product">Wile Coyote</entry>
                <entry name="uuid">c7a5fdbd-edaf-9455-926a-d65c16db1809</entry>
                <entry name="family">Anvils</entry>
              </system>
            </sysinfo>
        """)
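

# LibvirtConfigGuestDisk maps its attributes onto the libvirt <disk> element:
# source_type becomes the disk "type" attribute, source_path/source_name feed
# the <source> element, driver_* the <driver> element, and target_dev and
# target_bus the <target> element, as the assertions below spell out. For
# instance (an illustrative sketch mirroring test_config_file):
#
#     disk = config.LibvirtConfigGuestDisk()
#     disk.source_type = "file"
#     disk.source_path = "/tmp/hello"
#     disk.target_dev = "/dev/hda"
#     disk.target_bus = "ide"
#     disk.to_xml()   # -> '<disk type="file" device="disk">...</disk>'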
DECL|class|LibvirtConfigGuestDiskTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestDiskTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_file
indent|' '
name|'def'
name|'test_config_file'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_parse
dedent|''
name|'def'
name|'test_config_file_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_path'
op|','
string|"'/tmp/hello'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_dev'
op|','
string|"'/dev/hda'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_bus'
op|','
string|"'ide'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertFalse'
op|'('
name|'obj'
op|'.'
name|'readonly'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertFalse'
op|'('
name|'obj'
op|'.'
name|'shareable'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_readonly
dedent|''
name|'def'
name|'test_config_file_readonly'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'readonly'
op|'='
name|'True'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <readonly/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_parse_readonly
dedent|''
name|'def'
name|'test_config_file_parse_readonly'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <readonly/>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_path'
op|','
string|"'/tmp/hello'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_dev'
op|','
string|"'/dev/hda'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_bus'
op|','
string|"'ide'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertTrue'
op|'('
name|'obj'
op|'.'
name|'readonly'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertFalse'
op|'('
name|'obj'
op|'.'
name|'shareable'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_shareable
dedent|''
name|'def'
name|'test_config_file_shareable'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'shareable'
op|'='
name|'True'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <shareable/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_parse_shareable
dedent|''
name|'def'
name|'test_config_file_parse_shareable'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <shareable/>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_path'
op|','
string|"'/tmp/hello'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_dev'
op|','
string|"'/dev/hda'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_bus'
op|','
string|"'ide'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertFalse'
op|'('
name|'obj'
op|'.'
name|'readonly'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertTrue'
op|'('
name|'obj'
op|'.'
name|'shareable'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_serial
dedent|''
name|'def'
name|'test_config_file_serial'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'serial'
op|'='
string|'"7a97c4a3-6f59-41d4-bf47-191d7f97f8e9"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <serial>7a97c4a3-6f59-41d4-bf47-191d7f97f8e9</serial>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_serial_parse
dedent|''
name|'def'
name|'test_config_file_serial_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <serial>7a97c4a3-6f59-41d4-bf47-191d7f97f8e9</serial>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'serial'
op|','
string|"'7a97c4a3-6f59-41d4-bf47-191d7f97f8e9'"
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_discard
dedent|''
name|'def'
name|'test_config_file_discard'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"qemu"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|'"qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_cache'
op|'='
string|'"none"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_discard'
op|'='
string|'"unmap"'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello.qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'serial'
op|'='
string|'"7a97c4a3-6f59-41d4-bf47-191d7f97f8e9"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"""\n <disk type="file" device="disk">\n <driver name="qemu" type="qcow2" cache="none" discard="unmap"/>\n <source file="/tmp/hello.qcow2"/>\n <target bus="ide" dev="/dev/hda"/>\n <serial>7a97c4a3-6f59-41d4-bf47-191d7f97f8e9</serial>\n </disk>"""'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_discard_parse
dedent|''
name|'def'
name|'test_config_file_discard_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""\n <disk type="file" device="disk">\n <driver name="qemu" type="qcow2" cache="none" discard="unmap"/>\n <source file="/tmp/hello.qcow2"/>\n <target bus="ide" dev="/dev/hda"/>\n <serial>7a97c4a3-6f59-41d4-bf47-191d7f97f8e9</serial>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
string|"'unmap'"
op|','
name|'obj'
op|'.'
name|'driver_discard'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_io
dedent|''
name|'def'
name|'test_config_file_io'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"qemu"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|'"qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_cache'
op|'='
string|'"none"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_io'
op|'='
string|'"native"'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello.qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'serial'
op|'='
string|'"7a97c4a3-6f59-41d4-bf47-191d7f97f8e9"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"""\n <disk type="file" device="disk">\n <driver name="qemu" type="qcow2" cache="none" io="native"/>\n <source file="/tmp/hello.qcow2"/>\n <target bus="ide" dev="/dev/hda"/>\n <serial>7a97c4a3-6f59-41d4-bf47-191d7f97f8e9</serial>\n </disk>"""'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_io_parse
dedent|''
name|'def'
name|'test_config_file_io_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""\n <disk type="file" device="disk">\n <driver name="qemu" type="qcow2" cache="none" io="native"/>\n <source file="/tmp/hello.qcow2"/>\n <target bus="ide" dev="/dev/hda"/>\n <serial>7a97c4a3-6f59-41d4-bf47-191d7f97f8e9</serial>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
string|"'native'"
op|','
name|'obj'
op|'.'
name|'driver_io'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_block
dedent|''
name|'def'
name|'test_config_block'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"block"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'source_device'
op|'='
string|'"cdrom"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"qemu"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hdc"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="block" device="cdrom">\n <driver name="qemu"/>\n <source dev="/tmp/hello"/>\n <target bus="ide" dev="/dev/hdc"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_block_parse
dedent|''
name|'def'
name|'test_config_block_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="block" device="cdrom">\n <driver name="qemu"/>\n <source dev="/tmp/hello"/>\n <target bus="ide" dev="/dev/hdc"/>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'block'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_path'
op|','
string|"'/tmp/hello'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_dev'
op|','
string|"'/dev/hdc'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_bus'
op|','
string|"'ide'"
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_network
dedent|''
name|'def'
name|'test_config_network'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"network"'
newline|'\n'
name|'obj'
op|'.'
name|'source_protocol'
op|'='
string|'"iscsi"'
newline|'\n'
name|'obj'
op|'.'
name|'source_name'
op|'='
string|'"foo.bar.com"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"qemu"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|'"qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="network" device="disk">\n <driver name="qemu" type="qcow2"/>\n <source name="foo.bar.com" protocol="iscsi"/>\n <target bus="ide" dev="/dev/hda"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_network_parse
dedent|''
name|'def'
name|'test_config_network_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="network" device="disk">\n <driver name="qemu" type="qcow2"/>\n <source name="foo.bar.com" protocol="iscsi"/>\n <target bus="ide" dev="/dev/hda"/>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'network'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_protocol'
op|','
string|"'iscsi'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_name'
op|','
string|"'foo.bar.com'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'driver_name'
op|','
string|"'qemu'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'driver_format'
op|','
string|"'qcow2'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_dev'
op|','
string|"'/dev/hda'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_bus'
op|','
string|"'ide'"
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_network_no_name
dedent|''
name|'def'
name|'test_config_network_no_name'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|"'network'"
newline|'\n'
name|'obj'
op|'.'
name|'source_protocol'
op|'='
string|"'nbd'"
newline|'\n'
name|'obj'
op|'.'
name|'source_hosts'
op|'='
op|'['
string|"'foo.bar.com'"
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'source_ports'
op|'='
op|'['
name|'None'
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|"'qemu'"
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|"'raw'"
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|"'/dev/vda'"
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|"'virtio'"
newline|'\n'
nl|'\n'
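comment|'# a None entry in source_ports should omit the port attribute on <host>'
nl|'\n'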
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="network" device="disk">\n <driver name="qemu" type="raw"/>\n <source protocol="nbd">\n <host name="foo.bar.com"/>\n </source>\n <target bus="virtio" dev="/dev/vda"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_network_multihost
dedent|''
name|'def'
name|'test_config_network_multihost'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|"'network'"
newline|'\n'
name|'obj'
op|'.'
name|'source_protocol'
op|'='
string|"'rbd'"
newline|'\n'
name|'obj'
op|'.'
name|'source_name'
op|'='
string|"'pool/image'"
newline|'\n'
name|'obj'
op|'.'
name|'source_hosts'
op|'='
op|'['
string|"'foo.bar.com'"
op|','
string|"'::1'"
op|','
string|"'1.2.3.4'"
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'source_ports'
op|'='
op|'['
name|'None'
op|','
string|"'123'"
op|','
string|"'456'"
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|"'qemu'"
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|"'raw'"
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|"'/dev/vda'"
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|"'virtio'"
newline|'\n'
nl|'\n'
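comment|'# hosts and ports are paired by index; the IPv6 literal passes through unbracketed'
nl|'\n'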
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="network" device="disk">\n <driver name="qemu" type="raw"/>\n <source name="pool/image" protocol="rbd">\n <host name="foo.bar.com"/>\n <host name="::1" port="123"/>\n <host name="1.2.3.4" port="456"/>\n </source>\n <target bus="virtio" dev="/dev/vda"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_network_auth
dedent|''
name|'def'
name|'test_config_network_auth'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"network"'
newline|'\n'
name|'obj'
op|'.'
name|'source_protocol'
op|'='
string|'"rbd"'
newline|'\n'
name|'obj'
op|'.'
name|'source_name'
op|'='
string|'"pool/image"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"qemu"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|'"raw"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/vda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'auth_username'
op|'='
string|'"foo"'
newline|'\n'
name|'obj'
op|'.'
name|'auth_secret_type'
op|'='
string|'"ceph"'
newline|'\n'
name|'obj'
op|'.'
name|'auth_secret_uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
nl|'\n'
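comment|'# the auth_* attributes should render a nested <auth><secret/></auth> block'
nl|'\n'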
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="network" device="disk">\n <driver name="qemu" type="raw"/>\n <source name="pool/image" protocol="rbd"/>\n <auth username="foo">\n <secret type="ceph"\n uuid="b38a3f43-4be2-4046-897f-b67c2f5e0147"/>\n </auth>\n <target bus="virtio" dev="/dev/vda"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_iotune
dedent|''
name|'def'
name|'test_config_iotune'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'disk_read_bytes_sec'
op|'='
number|'1024000'
newline|'\n'
name|'obj'
op|'.'
name|'disk_read_iops_sec'
op|'='
number|'1000'
newline|'\n'
name|'obj'
op|'.'
name|'disk_total_bytes_sec'
op|'='
number|'2048000'
newline|'\n'
name|'obj'
op|'.'
name|'disk_write_bytes_sec'
op|'='
number|'1024000'
newline|'\n'
name|'obj'
op|'.'
name|'disk_write_iops_sec'
op|'='
number|'1000'
newline|'\n'
name|'obj'
op|'.'
name|'disk_total_iops_sec'
op|'='
number|'2000'
newline|'\n'
nl|'\n'
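comment|'# each disk_*_sec attribute should map to a child element under <iotune>'
nl|'\n'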
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <iotune>\n <read_bytes_sec>1024000</read_bytes_sec>\n <read_iops_sec>1000</read_iops_sec>\n <write_bytes_sec>1024000</write_bytes_sec>\n <write_iops_sec>1000</write_iops_sec>\n <total_bytes_sec>2048000</total_bytes_sec>\n <total_iops_sec>2000</total_iops_sec>\n </iotune>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_blockio
dedent|''
name|'def'
name|'test_config_blockio'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
name|'obj'
op|'.'
name|'logical_block_size'
op|'='
string|'"4096"'
newline|'\n'
name|'obj'
op|'.'
name|'physical_block_size'
op|'='
string|'"4096"'
newline|'\n'
nl|'\n'
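comment|'# block sizes should appear as attributes of a single <blockio/> element'
nl|'\n'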
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n <blockio logical_block_size="4096" physical_block_size="4096"/>\n </disk>"""'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestSnapshotDiskTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestSnapshotDiskTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_file
indent|' '
name|'def'
name|'test_config_file'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/hda"'
newline|'\n'
name|'obj'
op|'.'
name|'target_bus'
op|'='
string|'"ide"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n </disk>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_parse
dedent|''
name|'def'
name|'test_config_file_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<disk type="file" device="disk">\n <source file="/tmp/hello"/>\n <target bus="ide" dev="/dev/hda"/>\n </disk>"""'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_path'
op|','
string|"'/tmp/hello'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_dev'
op|','
string|"'/dev/hda'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'target_bus'
op|','
string|"'ide'"
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestDiskBackingStoreTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestDiskBackingStoreTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_file_parse
indent|' '
name|'def'
name|'test_config_file_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<backingStore type=\'file\'>\n <driver name=\'qemu\' type=\'qcow2\'/>\n <source file=\'/var/lib/libvirt/images/mid.qcow2\'/>\n <backingStore type=\'file\'>\n <driver name=\'qemu\' type=\'qcow2\'/>\n <source file=\'/var/lib/libvirt/images/base.qcow2\'/>\n <backingStore/>\n </backingStore>\n </backingStore>\n """'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDiskBackingStore'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'driver_name'
op|','
string|"'qemu'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'driver_format'
op|','
string|"'qcow2'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_file'
op|','
string|"'/var/lib/libvirt/images/mid.qcow2'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'driver_name'
op|','
string|"'qemu'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'source_type'
op|','
string|"'file'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'source_file'
op|','
nl|'\n'
string|"'/var/lib/libvirt/images/base.qcow2'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsNone'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'backing_store'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_network_parse
dedent|''
name|'def'
name|'test_config_network_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xml'
op|'='
string|'"""<backingStore type=\'network\' index=\'1\'>\n <format type=\'qcow2\'/>\n <source protocol=\'gluster\' name=\'volume1/img1\'>\n <host name=\'host1\' port=\'24007\'/>\n </source>\n <backingStore type=\'network\' index=\'2\'>\n <format type=\'qcow2\'/>\n <source protocol=\'gluster\' name=\'volume1/img2\'>\n <host name=\'host1\' port=\'24007\'/>\n </source>\n <backingStore/>\n </backingStore>\n </backingStore>\n """'
newline|'\n'
name|'xmldoc'
op|'='
name|'etree'
op|'.'
name|'fromstring'
op|'('
name|'xml'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDiskBackingStore'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_dom'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_type'
op|','
string|"'network'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_protocol'
op|','
string|"'gluster'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_name'
op|','
string|"'volume1/img1'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_hosts'
op|'['
number|'0'
op|']'
op|','
string|"'host1'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'source_ports'
op|'['
number|'0'
op|']'
op|','
string|"'24007'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'index'
op|','
string|"'1'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'source_name'
op|','
string|"'volume1/img2'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'index'
op|','
string|"'2'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'source_hosts'
op|'['
number|'0'
op|']'
op|','
string|"'host1'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'source_ports'
op|'['
number|'0'
op|']'
op|','
string|"'24007'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsNone'
op|'('
name|'obj'
op|'.'
name|'backing_store'
op|'.'
name|'backing_store'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestFilesysTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestFilesysTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_mount
indent|' '
name|'def'
name|'test_config_mount'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFilesys'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"mount"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dir'
op|'='
string|'"/tmp/hello"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dir'
op|'='
string|'"/mnt"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <filesystem type="mount">\n <source dir="/tmp/hello"/>\n <target dir="/mnt"/>\n </filesystem>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_block
dedent|''
name|'def'
name|'test_config_block'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFilesys'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"block"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"/dev/sdb"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dir'
op|'='
string|'"/mnt"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <filesystem type="block">\n <source dev="/dev/sdb"/>\n <target dir="/mnt"/>\n </filesystem>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_file
dedent|''
name|'def'
name|'test_config_file'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFilesys'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_file'
op|'='
string|'"/data/myimage.qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_type'
op|'='
string|'"nbd"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_format'
op|'='
string|'"qcow2"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dir'
op|'='
string|'"/mnt"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <filesystem type="file">\n <driver format="qcow2" type="nbd"/>\n <source file="/data/myimage.qcow2"/>\n <target dir="/mnt"/>\n </filesystem>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestInputTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestInputTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_tablet
indent|' '
name|'def'
name|'test_config_tablet'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInput'
op|'('
op|')'
newline|'\n'
nl|'\n'
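comment|'# no attributes set: this exercises the tablet/usb defaults'
nl|'\n'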
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <input type="tablet" bus="usb"/>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestGraphicsTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestGraphicsTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_graphics
indent|' '
name|'def'
name|'test_config_graphics'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestGraphics'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"vnc"'
newline|'\n'
name|'obj'
op|'.'
name|'autoport'
op|'='
name|'True'
newline|'\n'
name|'obj'
op|'.'
name|'keymap'
op|'='
string|'"en_US"'
newline|'\n'
name|'obj'
op|'.'
name|'listen'
op|'='
string|'"127.0.0.1"'
newline|'\n'
nl|'\n'
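comment|'# the boolean autoport flag should render as autoport="yes"'
nl|'\n'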
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <graphics type="vnc" autoport="yes" keymap="en_US" listen="127.0.0.1"/>\n """'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestHostdev
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestHostdev'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_pci_guest_host_dev
indent|' '
name|'def'
name|'test_config_pci_guest_host_dev'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdev'
op|'('
name|'mode'
op|'='
string|"'subsystem'"
op|','
name|'type'
op|'='
string|"'pci'"
op|')'
newline|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'expected'
op|'='
string|'"""\n <hostdev mode="subsystem" type="pci" managed="yes"/>\n """'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'expected'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_parse_GuestHostdev
dedent|''
name|'def'
name|'test_parse_GuestHostdev'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmldoc'
op|'='
string|'"""<hostdev mode="subsystem" type="pci" managed="yes"/>"""'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdev'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'mode'
op|','
string|"'subsystem'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'type'
op|','
string|"'pci'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'managed'
op|','
string|"'yes'"
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_parse_GuestHostdev_non_pci
dedent|''
name|'def'
name|'test_parse_GuestHostdev_non_pci'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmldoc'
op|'='
string|'"""<hostdev mode="subsystem" type="usb" managed="no"/>"""'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdev'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'mode'
op|','
string|"'subsystem'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'type'
op|','
string|"'usb'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'managed'
op|','
string|"'no'"
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestHostdevPCI
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestHostdevPCI'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
indent|' '
name|'expected'
op|'='
string|'"""\n <hostdev mode="subsystem" type="pci" managed="yes">\n <source>\n <address bus="0x11" domain="0x1234" function="0x3"\n slot="0x22" />\n </source>\n </hostdev>\n """'
newline|'\n'
nl|'\n'
DECL|member|test_config_guest_hosdev_pci
name|'def'
name|'test_config_guest_hosdev_pci'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'hostdev'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdevPCI'
op|'('
op|')'
newline|'\n'
name|'hostdev'
op|'.'
name|'domain'
op|'='
string|'"1234"'
newline|'\n'
name|'hostdev'
op|'.'
name|'bus'
op|'='
string|'"11"'
newline|'\n'
name|'hostdev'
op|'.'
name|'slot'
op|'='
string|'"22"'
newline|'\n'
name|'hostdev'
op|'.'
name|'function'
op|'='
string|'"3"'
newline|'\n'
name|'xml'
op|'='
name|'hostdev'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'self'
op|'.'
name|'expected'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_parse_guest_hosdev_pci
dedent|''
name|'def'
name|'test_parse_guest_hosdev_pci'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmldoc'
op|'='
name|'self'
op|'.'
name|'expected'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdevPCI'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'mode'
op|','
string|"'subsystem'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'type'
op|','
string|"'pci'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'managed'
op|','
string|"'yes'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'domain'
op|','
string|"'0x1234'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'bus'
op|','
string|"'0x11'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'slot'
op|','
string|"'0x22'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'function'
op|','
string|"'0x3'"
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_parse_guest_hosdev_usb
dedent|''
name|'def'
name|'test_parse_guest_hosdev_usb'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmldoc'
op|'='
string|'"""<hostdev mode=\'subsystem\' type=\'usb\'>\n <source startupPolicy=\'optional\'>\n <vendor id=\'0x1234\'/>\n <product id=\'0xbeef\'/>\n </source>\n <boot order=\'2\'/>\n </hostdev>"""'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdevPCI'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'mode'
op|','
string|"'subsystem'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'type'
op|','
string|"'usb'"
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestSerialTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestSerialTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_file
indent|' '
name|'def'
name|'test_config_file'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSerial'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/vm.log"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <serial type="file">\n <source path="/tmp/vm.log"/>\n </serial>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_serial_port
dedent|''
name|'def'
name|'test_config_serial_port'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSerial'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"tcp"'
newline|'\n'
name|'obj'
op|'.'
name|'listen_port'
op|'='
number|'11111'
newline|'\n'
name|'obj'
op|'.'
name|'listen_host'
op|'='
string|'"0.0.0.0"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <serial type="tcp">\n <source host="0.0.0.0" service="11111" mode="bind"/>\n </serial>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestConsoleTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestConsoleTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_pty
indent|' '
name|'def'
name|'test_config_pty'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestConsole'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"pty"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <console type="pty"/>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_target_type
dedent|''
name|'def'
name|'test_config_target_type'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestConsole'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"pty"'
newline|'\n'
name|'obj'
op|'.'
name|'target_type'
op|'='
string|'"sclp"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <console type="pty">\n <target type="sclp"/>\n </console>\n """'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_type_file_with_target_type
dedent|''
name|'def'
name|'test_config_type_file_with_target_type'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestConsole'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"file"'
newline|'\n'
name|'obj'
op|'.'
name|'target_type'
op|'='
string|'"sclplm"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/var/lib/nova/instances/uuid/console.log"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <console type="file">\n <source path="/var/lib/nova/instances/uuid/console.log"/>\n <target type="sclplm"/>\n </console>\n """'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_target_port
dedent|''
name|'def'
name|'test_config_target_port'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestConsole'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'target_port'
op|'='
number|'0'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <console type="pty">\n <target port="0"/>\n </console>\n """'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestChannelTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestChannelTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_spice_minimal
indent|' '
name|'def'
name|'test_config_spice_minimal'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestChannel'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"spicevmc"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <channel type="spicevmc">\n <target type=\'virtio\'/>\n </channel>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_spice_full
dedent|''
name|'def'
name|'test_config_spice_full'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestChannel'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"spicevmc"'
newline|'\n'
name|'obj'
op|'.'
name|'target_name'
op|'='
string|'"com.redhat.spice.0"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <channel type="spicevmc">\n <target type=\'virtio\' name=\'com.redhat.spice.0\'/>\n </channel>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_qga_full
dedent|''
name|'def'
name|'test_config_qga_full'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestChannel'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|'"unix"'
newline|'\n'
name|'obj'
op|'.'
name|'target_name'
op|'='
string|'"org.qemu.guest_agent.0"'
newline|'\n'
name|'obj'
op|'.'
name|'source_path'
op|'='
string|'"/var/lib/libvirt/qemu/%s.%s.sock"'
op|'%'
op|'('
nl|'\n'
name|'obj'
op|'.'
name|'target_name'
op|','
string|'"instance-name"'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <channel type="unix">\n <source path="%s" mode="bind"/>\n <target type="virtio" name="org.qemu.guest_agent.0"/>\n </channel>"""'
op|'%'
name|'obj'
op|'.'
name|'source_path'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestInterfaceTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestInterfaceTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_ethernet
indent|' '
name|'def'
name|'test_config_ethernet'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"ethernet"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"vnet0"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"vhost"'
newline|'\n'
name|'obj'
op|'.'
name|'vif_inbound_average'
op|'='
number|'16384'
newline|'\n'
name|'obj'
op|'.'
name|'vif_inbound_peak'
op|'='
number|'32768'
newline|'\n'
name|'obj'
op|'.'
name|'vif_inbound_burst'
op|'='
number|'3276'
newline|'\n'
name|'obj'
op|'.'
name|'vif_outbound_average'
op|'='
number|'32768'
newline|'\n'
name|'obj'
op|'.'
name|'vif_outbound_peak'
op|'='
number|'65536'
newline|'\n'
name|'obj'
op|'.'
name|'vif_outbound_burst'
op|'='
number|'6553'
newline|'\n'
nl|'\n'
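comment|'# the vif_* tuning values should populate <bandwidth> inbound/outbound'
nl|'\n'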
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="ethernet">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <driver name="vhost"/>\n <target dev="vnet0"/>\n <bandwidth>\n <inbound average="16384" peak="32768" burst="3276"/>\n <outbound average="32768" peak="65536" burst="6553"/>\n </bandwidth>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_driver_options
dedent|''
name|'def'
name|'test_config_driver_options'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"ethernet"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"vnet0"'
newline|'\n'
name|'obj'
op|'.'
name|'driver_name'
op|'='
string|'"vhost"'
newline|'\n'
name|'obj'
op|'.'
name|'vhost_queues'
op|'='
number|'4'
newline|'\n'
nl|'\n'
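comment|'# vhost_queues should add a queues attribute to the <driver> element'
nl|'\n'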
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="ethernet">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <driver name="vhost" queues="4"/>\n <target dev="vnet0"/>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_bridge
dedent|''
name|'def'
name|'test_config_bridge'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"bridge"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"br0"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"tap12345678"'
newline|'\n'
name|'obj'
op|'.'
name|'filtername'
op|'='
string|'"clean-traffic"'
newline|'\n'
name|'obj'
op|'.'
name|'filterparams'
op|'.'
name|'append'
op|'('
op|'{'
string|'"key"'
op|':'
string|'"IP"'
op|','
string|'"value"'
op|':'
string|'"192.168.122.1"'
op|'}'
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'vif_inbound_average'
op|'='
number|'16384'
newline|'\n'
name|'obj'
op|'.'
name|'vif_inbound_peak'
op|'='
number|'32768'
newline|'\n'
name|'obj'
op|'.'
name|'vif_inbound_burst'
op|'='
number|'3276'
newline|'\n'
name|'obj'
op|'.'
name|'vif_outbound_average'
op|'='
number|'32768'
newline|'\n'
name|'obj'
op|'.'
name|'vif_outbound_peak'
op|'='
number|'65536'
newline|'\n'
name|'obj'
op|'.'
name|'vif_outbound_burst'
op|'='
number|'6553'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="bridge">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <source bridge="br0"/>\n <target dev="tap12345678"/>\n <filterref filter="clean-traffic">\n <parameter name="IP" value="192.168.122.1"/>\n </filterref>\n <bandwidth>\n <inbound average="16384" peak="32768" burst="3276"/>\n <outbound average="32768" peak="65536" burst="6553"/>\n </bandwidth>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_bridge_ovs
dedent|''
name|'def'
name|'test_config_bridge_ovs'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"bridge"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"br0"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"tap12345678"'
newline|'\n'
name|'obj'
op|'.'
name|'vporttype'
op|'='
string|'"openvswitch"'
newline|'\n'
name|'obj'
op|'.'
name|'vportparams'
op|'.'
name|'append'
op|'('
op|'{'
string|'"key"'
op|':'
string|'"instanceid"'
op|','
string|'"value"'
op|':'
string|'"foobar"'
op|'}'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="bridge">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <source bridge="br0"/>\n <target dev="tap12345678"/>\n <virtualport type="openvswitch">\n <parameters instanceid="foobar"/>\n </virtualport>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_bridge_xen
dedent|''
name|'def'
name|'test_config_bridge_xen'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"bridge"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"br0"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"CA:FE:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'script'
op|'='
string|'"/path/to/test-vif-openstack"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="bridge">\n <mac address="CA:FE:BE:EF:CA:FE"/>\n <source bridge="br0"/>\n <script path="/path/to/test-vif-openstack"/>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_8021Qbh
dedent|''
name|'def'
name|'test_config_8021Qbh'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"direct"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'target_dev'
op|'='
string|'"tap12345678"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"eth0"'
newline|'\n'
name|'obj'
op|'.'
name|'vporttype'
op|'='
string|'"802.1Qbh"'
newline|'\n'
nl|'\n'
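comment|'# source_mode is left unset, so the source mode should default to private'
nl|'\n'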
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="direct">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <source dev="eth0" mode="private"/>\n <target dev="tap12345678"/>\n <virtualport type="802.1Qbh"/>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_direct
dedent|''
name|'def'
name|'test_config_direct'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"direct"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"eth0"'
newline|'\n'
name|'obj'
op|'.'
name|'source_mode'
op|'='
string|'"passthrough"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="direct">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <source dev="eth0" mode="passthrough"/>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_8021Qbh_hostdev
dedent|''
name|'def'
name|'test_config_8021Qbh_hostdev'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"hostdev"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"0000:0a:00.1"'
newline|'\n'
name|'obj'
op|'.'
name|'vporttype'
op|'='
string|'"802.1Qbh"'
newline|'\n'
name|'obj'
op|'.'
name|'add_vport_param'
op|'('
string|'"profileid"'
op|','
string|'"MyPortProfile"'
op|')'
newline|'\n'
nl|'\n'
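comment|'# the PCI address string should be decomposed into address attributes'
nl|'\n'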
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="hostdev" managed="yes">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <source>\n <address type="pci" domain="0x0000"\n bus="0x0a" slot="0x00" function="0x1"/>\n </source>\n <virtualport type="802.1Qbh">\n <parameters profileid="MyPortProfile"/>\n </virtualport>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_hw_veb_hostdev
dedent|''
name|'def'
name|'test_config_hw_veb_hostdev'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"hostdev"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'source_dev'
op|'='
string|'"0000:0a:00.1"'
newline|'\n'
name|'obj'
op|'.'
name|'vlan'
op|'='
string|'"100"'
newline|'\n'
nl|'\n'
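comment|'# the vlan value should expand to <vlan><tag id=.../></vlan>'
nl|'\n'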
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="hostdev" managed="yes">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <source>\n <address type="pci" domain="0x0000"\n bus="0x0a" slot="0x00" function="0x1"/>\n </source>\n <vlan>\n <tag id="100"/>\n </vlan>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_vhostuser
dedent|''
name|'def'
name|'test_config_vhostuser'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'net_type'
op|'='
string|'"vhostuser"'
newline|'\n'
name|'obj'
op|'.'
name|'vhostuser_type'
op|'='
string|'"unix"'
newline|'\n'
name|'obj'
op|'.'
name|'vhostuser_mode'
op|'='
string|'"server"'
newline|'\n'
name|'obj'
op|'.'
name|'mac_addr'
op|'='
string|'"DE:AD:BE:EF:CA:FE"'
newline|'\n'
name|'obj'
op|'.'
name|'vhostuser_path'
op|'='
string|'"/vhost-user/test.sock"'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|'"virtio"'
newline|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <interface type="vhostuser">\n <mac address="DE:AD:BE:EF:CA:FE"/>\n <model type="virtio"/>\n <source type="unix" mode="server" path="/vhost-user/test.sock"/>\n </interface>"""'
op|')'
newline|'\n'
nl|'\n'
comment|'# parse the xml from the first object into a new object and make sure'
nl|'\n'
comment|'# they are the same'
nl|'\n'
name|'obj2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestInterface'
op|'('
op|')'
newline|'\n'
name|'obj2'
op|'.'
name|'parse_str'
op|'('
name|'xml'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
name|'obj2'
op|'.'
name|'to_xml'
op|'('
op|')'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestFeatureTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestFeatureTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_feature_hyperv_relaxed
indent|' '
name|'def'
name|'test_feature_hyperv_relaxed'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
nl|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFeatureHyperV'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'relaxed'
op|'='
name|'True'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <hyperv>\n <relaxed state="on"/>\n </hyperv>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_feature_hyperv_all
dedent|''
name|'def'
name|'test_feature_hyperv_all'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
nl|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFeatureHyperV'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'relaxed'
op|'='
name|'True'
newline|'\n'
name|'obj'
op|'.'
name|'vapic'
op|'='
name|'True'
newline|'\n'
name|'obj'
op|'.'
name|'spinlocks'
op|'='
name|'True'
newline|'\n'
nl|'\n'
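comment|'# enabling spinlocks should also emit the default retries value'
nl|'\n'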
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <hyperv>\n <relaxed state="on"/>\n <vapic state="on"/>\n <spinlocks state="on" retries="4095"/>\n </hyperv>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_lxc
indent|' '
name|'def'
name|'test_config_lxc'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"lxc"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'3'
op|','
number|'4'
op|','
number|'5'
op|']'
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"demo"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"exe"'
newline|'\n'
name|'obj'
op|'.'
name|'os_init_path'
op|'='
string|'"/sbin/init"'
newline|'\n'
nl|'\n'
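comment|'# cpuset {0, 1, 3, 4, 5} should collapse to the range string 0-1,3-5'
nl|'\n'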
name|'fs'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFilesys'
op|'('
op|')'
newline|'\n'
name|'fs'
op|'.'
name|'source_dir'
op|'='
string|'"/root/lxc"'
newline|'\n'
name|'fs'
op|'.'
name|'target_dir'
op|'='
string|'"/"'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'add_device'
op|'('
name|'fs'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="lxc">\n <uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>\n <name>demo</name>\n <memory>104857600</memory>\n <vcpu cpuset="0-1,3-5">2</vcpu>\n <os>\n <type>exe</type>\n <init>/sbin/init</init>\n </os>\n <devices>\n <filesystem type="mount">\n <source dir="/root/lxc"/>\n <target dir="/"/>\n </filesystem>\n </devices>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_lxc_with_idmap
dedent|''
name|'def'
name|'test_config_lxc_with_idmap'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"lxc"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'3'
op|','
number|'4'
op|','
number|'5'
op|']'
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"demo"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"exe"'
newline|'\n'
name|'obj'
op|'.'
name|'os_init_path'
op|'='
string|'"/sbin/init"'
newline|'\n'
nl|'\n'
name|'uidmap'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestUIDMap'
op|'('
op|')'
newline|'\n'
name|'uidmap'
op|'.'
name|'target'
op|'='
string|'"10000"'
newline|'\n'
name|'uidmap'
op|'.'
name|'count'
op|'='
string|'"1"'
newline|'\n'
name|'obj'
op|'.'
name|'idmaps'
op|'.'
name|'append'
op|'('
name|'uidmap'
op|')'
newline|'\n'
name|'gidmap'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestGIDMap'
op|'('
op|')'
newline|'\n'
name|'gidmap'
op|'.'
name|'target'
op|'='
string|'"10000"'
newline|'\n'
name|'gidmap'
op|'.'
name|'count'
op|'='
string|'"1"'
newline|'\n'
name|'obj'
op|'.'
name|'idmaps'
op|'.'
name|'append'
op|'('
name|'gidmap'
op|')'
newline|'\n'
nl|'\n'
name|'fs'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestFilesys'
op|'('
op|')'
newline|'\n'
name|'fs'
op|'.'
name|'source_dir'
op|'='
string|'"/root/lxc"'
newline|'\n'
name|'fs'
op|'.'
name|'target_dir'
op|'='
string|'"/"'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'add_device'
op|'('
name|'fs'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"""\n <domain type="lxc">\n <uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>\n <name>demo</name>\n <memory>104857600</memory>\n <vcpu cpuset="0-1,3-5">2</vcpu>\n <os>\n <type>exe</type>\n <init>/sbin/init</init>\n </os>\n <devices>\n <filesystem type="mount">\n <source dir="/root/lxc"/>\n <target dir="/"/>\n </filesystem>\n </devices>\n <idmap>\n <uid start="0" target="10000" count="1"/>\n <gid start="0" target="10000" count="1"/>\n </idmap>\n </domain>"""'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_xen_pv
dedent|''
name|'def'
name|'test_config_xen_pv'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"xen"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'3'
op|','
number|'4'
op|','
number|'5'
op|']'
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"demo"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"linux"'
newline|'\n'
name|'obj'
op|'.'
name|'os_kernel'
op|'='
string|'"/tmp/vmlinuz"'
newline|'\n'
name|'obj'
op|'.'
name|'os_initrd'
op|'='
string|'"/tmp/ramdisk"'
newline|'\n'
name|'obj'
op|'.'
name|'os_cmdline'
op|'='
string|'"console=xvc0"'
newline|'\n'
nl|'\n'
name|'disk'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'disk'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'disk'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/img"'
newline|'\n'
name|'disk'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/xvda"'
newline|'\n'
name|'disk'
op|'.'
name|'target_bus'
op|'='
string|'"xen"'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'add_device'
op|'('
name|'disk'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="xen">\n <uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>\n <name>demo</name>\n <memory>104857600</memory>\n <vcpu cpuset="0-1,3-5">2</vcpu>\n <os>\n <type>linux</type>\n <kernel>/tmp/vmlinuz</kernel>\n <initrd>/tmp/ramdisk</initrd>\n <cmdline>console=xvc0</cmdline>\n </os>\n <devices>\n <disk type="file" device="disk">\n <source file="/tmp/img"/>\n <target bus="xen" dev="/dev/xvda"/>\n </disk>\n </devices>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_xen_hvm
dedent|''
name|'def'
name|'test_config_xen_hvm'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
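comment|'# Build a Xen HVM guest config with hvmloader and ACPI/APIC/PAE features, then verify the XML'
nl|'\n'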
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"xen"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'3'
op|','
number|'4'
op|','
number|'5'
op|']'
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"demo"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"hvm"'
newline|'\n'
name|'obj'
op|'.'
name|'os_loader'
op|'='
string|"'/usr/lib/xen/boot/hvmloader'"
newline|'\n'
name|'obj'
op|'.'
name|'os_root'
op|'='
string|'"root=xvda"'
newline|'\n'
name|'obj'
op|'.'
name|'os_cmdline'
op|'='
string|'"console=xvc0"'
newline|'\n'
name|'obj'
op|'.'
name|'features'
op|'='
op|'['
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestFeatureACPI'
op|'('
op|')'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestFeatureAPIC'
op|'('
op|')'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestFeaturePAE'
op|'('
op|')'
op|','
nl|'\n'
op|']'
newline|'\n'
nl|'\n'
name|'disk'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'disk'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'disk'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/img"'
newline|'\n'
name|'disk'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/xvda"'
newline|'\n'
name|'disk'
op|'.'
name|'target_bus'
op|'='
string|'"xen"'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'add_device'
op|'('
name|'disk'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="xen">\n <uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>\n <name>demo</name>\n <memory>104857600</memory>\n <vcpu cpuset="0-1,3-5">2</vcpu>\n <os>\n <type>hvm</type>\n <loader>/usr/lib/xen/boot/hvmloader</loader>\n <cmdline>console=xvc0</cmdline>\n <root>root=xvda</root>\n </os>\n <features>\n <acpi/>\n <apic/>\n <pae/>\n </features>\n <devices>\n <disk type="file" device="disk">\n <source file="/tmp/img"/>\n <target bus="xen" dev="/dev/xvda"/>\n </disk>\n </devices>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_kvm
dedent|''
name|'def'
name|'test_config_kvm'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
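comment|'# Exercise a full KVM guest config: cputune, hugepage memory backing, memtune limits, NUMA tuning, SMBIOS sysinfo and a virtio disk'
nl|'\n'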
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"kvm"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'3'
op|','
number|'4'
op|','
number|'5'
op|']'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'cputune'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTune'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'cputune'
op|'.'
name|'shares'
op|'='
number|'100'
newline|'\n'
name|'obj'
op|'.'
name|'cputune'
op|'.'
name|'quota'
op|'='
number|'50000'
newline|'\n'
name|'obj'
op|'.'
name|'cputune'
op|'.'
name|'period'
op|'='
number|'25000'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'membacking'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryBacking'
op|'('
op|')'
newline|'\n'
name|'page1'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryBackingPage'
op|'('
op|')'
newline|'\n'
name|'page1'
op|'.'
name|'size_kb'
op|'='
number|'2048'
newline|'\n'
name|'page1'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'0'
op|','
number|'1'
op|','
number|'2'
op|','
number|'3'
op|','
number|'5'
op|']'
newline|'\n'
name|'page2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryBackingPage'
op|'('
op|')'
newline|'\n'
name|'page2'
op|'.'
name|'size_kb'
op|'='
number|'1048576'
newline|'\n'
name|'page2'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'4'
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'membacking'
op|'.'
name|'hugepages'
op|'.'
name|'append'
op|'('
name|'page1'
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'membacking'
op|'.'
name|'hugepages'
op|'.'
name|'append'
op|'('
name|'page2'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'memtune'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryTune'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'memtune'
op|'.'
name|'hard_limit'
op|'='
number|'496'
newline|'\n'
name|'obj'
op|'.'
name|'memtune'
op|'.'
name|'soft_limit'
op|'='
number|'672'
newline|'\n'
name|'obj'
op|'.'
name|'memtune'
op|'.'
name|'swap_hard_limit'
op|'='
number|'1638'
newline|'\n'
name|'obj'
op|'.'
name|'memtune'
op|'.'
name|'min_guarantee'
op|'='
number|'2970'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'numatune'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATune'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'numamemory'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemory'
op|'('
op|')'
newline|'\n'
name|'numamemory'
op|'.'
name|'mode'
op|'='
string|'"preferred"'
newline|'\n'
name|'numamemory'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'0'
op|','
number|'1'
op|','
number|'2'
op|','
number|'3'
op|','
number|'8'
op|']'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'numatune'
op|'.'
name|'memory'
op|'='
name|'numamemory'
newline|'\n'
nl|'\n'
name|'numamemnode0'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemNode'
op|'('
op|')'
newline|'\n'
name|'numamemnode0'
op|'.'
name|'cellid'
op|'='
number|'0'
newline|'\n'
name|'numamemnode0'
op|'.'
name|'mode'
op|'='
string|'"preferred"'
newline|'\n'
name|'numamemnode0'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'0'
op|','
number|'1'
op|']'
newline|'\n'
nl|'\n'
name|'numamemnode1'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemNode'
op|'('
op|')'
newline|'\n'
name|'numamemnode1'
op|'.'
name|'cellid'
op|'='
number|'1'
newline|'\n'
name|'numamemnode1'
op|'.'
name|'mode'
op|'='
string|'"preferred"'
newline|'\n'
name|'numamemnode1'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'2'
op|','
number|'3'
op|']'
newline|'\n'
nl|'\n'
name|'numamemnode2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemNode'
op|'('
op|')'
newline|'\n'
name|'numamemnode2'
op|'.'
name|'cellid'
op|'='
number|'2'
newline|'\n'
name|'numamemnode2'
op|'.'
name|'mode'
op|'='
string|'"preferred"'
newline|'\n'
name|'numamemnode2'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'8'
op|']'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'numatune'
op|'.'
name|'memnodes'
op|'.'
name|'extend'
op|'('
op|'['
name|'numamemnode0'
op|','
nl|'\n'
name|'numamemnode1'
op|','
nl|'\n'
name|'numamemnode2'
op|']'
op|')'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"demo"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"linux"'
newline|'\n'
name|'obj'
op|'.'
name|'os_boot_dev'
op|'='
op|'['
string|'"hd"'
op|','
string|'"cdrom"'
op|','
string|'"fd"'
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'os_smbios'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSMBIOS'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'features'
op|'='
op|'['
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestFeatureACPI'
op|'('
op|')'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestFeatureAPIC'
op|'('
op|')'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestFeaturePAE'
op|'('
op|')'
op|','
nl|'\n'
op|']'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'sysinfo'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSysinfo'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'sysinfo'
op|'.'
name|'bios_vendor'
op|'='
string|'"Acme"'
newline|'\n'
name|'obj'
op|'.'
name|'sysinfo'
op|'.'
name|'system_version'
op|'='
string|'"1.0.0"'
newline|'\n'
nl|'\n'
name|'disk'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestDisk'
op|'('
op|')'
newline|'\n'
name|'disk'
op|'.'
name|'source_type'
op|'='
string|'"file"'
newline|'\n'
name|'disk'
op|'.'
name|'source_path'
op|'='
string|'"/tmp/img"'
newline|'\n'
name|'disk'
op|'.'
name|'target_dev'
op|'='
string|'"/dev/vda"'
newline|'\n'
name|'disk'
op|'.'
name|'target_bus'
op|'='
string|'"virtio"'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'add_device'
op|'('
name|'disk'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="kvm">\n <uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>\n <name>demo</name>\n <memory>104857600</memory>\n <memoryBacking>\n <hugepages>\n <page size="2048" unit="KiB" nodeset="0-3,5"/>\n <page size="1048576" unit="KiB" nodeset="4"/>\n </hugepages>\n </memoryBacking>\n <memtune>\n <hard_limit units="K">496</hard_limit>\n <soft_limit units="K">672</soft_limit>\n <swap_hard_limit units="K">1638</swap_hard_limit>\n <min_guarantee units="K">2970</min_guarantee>\n </memtune>\n <numatune>\n <memory mode="preferred" nodeset="0-3,8"/>\n <memnode cellid="0" mode="preferred" nodeset="0-1"/>\n <memnode cellid="1" mode="preferred" nodeset="2-3"/>\n <memnode cellid="2" mode="preferred" nodeset="8"/>\n </numatune>\n <vcpu cpuset="0-1,3-5">2</vcpu>\n <sysinfo type=\'smbios\'>\n <bios>\n <entry name="vendor">Acme</entry>\n </bios>\n <system>\n <entry name="version">1.0.0</entry>\n </system>\n </sysinfo>\n <os>\n <type>linux</type>\n <boot dev="hd"/>\n <boot dev="cdrom"/>\n <boot dev="fd"/>\n <smbios mode="sysinfo"/>\n </os>\n <features>\n <acpi/>\n <apic/>\n <pae/>\n </features>\n <cputune>\n <shares>100</shares>\n <quota>50000</quota>\n <period>25000</period>\n </cputune>\n <devices>\n <disk type="file" device="disk">\n <source file="/tmp/img"/>\n <target bus="virtio" dev="/dev/vda"/>\n </disk>\n </devices>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_uefi
dedent|''
name|'def'
name|'test_config_uefi'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
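comment|'# A UEFI guest boots via an OVMF pflash loader; verify the loader element is emitted read-only with type pflash'
nl|'\n'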
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"kvm"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'1'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"uefi"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"f01cf68d-515c-4daf-b85f-ef1424d93bfc"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"x86_64"'
newline|'\n'
name|'obj'
op|'.'
name|'os_loader'
op|'='
string|"'/tmp/OVMF_CODE.fd'"
newline|'\n'
name|'obj'
op|'.'
name|'os_loader_type'
op|'='
string|"'pflash'"
newline|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="kvm">\n <uuid>f01cf68d-515c-4daf-b85f-ef1424d93bfc</uuid>\n <name>uefi</name>\n <memory>104857600</memory>\n <vcpu>1</vcpu>\n <os>\n <type>x86_64</type>\n <loader readonly=\'yes\' type=\'pflash\'>/tmp/OVMF_CODE.fd</loader>\n </os>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_boot_menu
dedent|''
name|'def'
name|'test_config_boot_menu'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
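comment|'# Setting os_bootmenu to True should emit <bootmenu enable="yes"/> inside the <os> section'
nl|'\n'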
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"kvm"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"bootmenu"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"f01cf68d-515c-4daf-b85f-ef1424d93bfc"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"fake"'
newline|'\n'
name|'obj'
op|'.'
name|'os_bootmenu'
op|'='
name|'True'
newline|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="kvm">\n <uuid>f01cf68d-515c-4daf-b85f-ef1424d93bfc</uuid>\n <name>bootmenu</name>\n <memory>104857600</memory>\n <vcpu>2</vcpu>\n <os>\n <type>fake</type>\n <bootmenu enable="yes"/>\n </os>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_machine_type
dedent|''
name|'def'
name|'test_config_machine_type'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
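comment|'# os_mach_type should surface as the machine attribute on the <type> element'
nl|'\n'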
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'virt_type'
op|'='
string|'"kvm"'
newline|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
number|'100'
op|'*'
name|'units'
op|'.'
name|'Mi'
newline|'\n'
name|'obj'
op|'.'
name|'vcpus'
op|'='
number|'2'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"demo"'
newline|'\n'
name|'obj'
op|'.'
name|'uuid'
op|'='
string|'"b38a3f43-4be2-4046-897f-b67c2f5e0147"'
newline|'\n'
name|'obj'
op|'.'
name|'os_type'
op|'='
string|'"hvm"'
newline|'\n'
name|'obj'
op|'.'
name|'os_mach_type'
op|'='
string|'"fake_machine_type"'
newline|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domain type="kvm">\n <uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>\n <name>demo</name>\n <memory>104857600</memory>\n <vcpu>2</vcpu>\n <os>\n <type machine="fake_machine_type">hvm</type>\n </os>\n </domain>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_ConfigGuest_parse_devices
dedent|''
name|'def'
name|'test_ConfigGuest_parse_devices'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
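comment|'# Round-trip parse: a pci hostdev element should become a LibvirtConfigGuestHostdevPCI device with its mode and managed flags preserved'
nl|'\n'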
indent|' '
name|'xmldoc'
op|'='
string|'""" <domain type="kvm">\n <devices>\n <hostdev mode="subsystem" type="pci" managed="no">\n </hostdev>\n </devices>\n </domain>\n """'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'len'
op|'('
name|'obj'
op|'.'
name|'devices'
op|')'
op|','
number|'1'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'devices'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigGuestHostdevPCI'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'devices'
op|'['
number|'0'
op|']'
op|'.'
name|'mode'
op|','
string|"'subsystem'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'devices'
op|'['
number|'0'
op|']'
op|'.'
name|'managed'
op|','
string|"'no'"
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_ConfigGuest_parse_devices_wrong_type
dedent|''
name|'def'
name|'test_ConfigGuest_parse_devices_wrong_type'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmldoc'
op|'='
string|'""" <domain type="kvm">\n <devices>\n <hostdev mode="subsystem" type="xxxx" managed="no">\n </hostdev>\n </devices>\n </domain>\n """'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'len'
op|'('
name|'obj'
op|'.'
name|'devices'
op|')'
op|','
number|'0'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_ConfigGuest_parse_cpu
dedent|''
name|'def'
name|'test_ConfigGuest_parse_cpu'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
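comment|'# Parsing a <cpu> element should populate mode, match and model on the guest config'
nl|'\n'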
indent|' '
name|'xmldoc'
op|'='
string|'""" <domain>\n <cpu mode=\'custom\' match=\'exact\'>\n <model>kvm64</model>\n </cpu>\n </domain>\n """'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuest'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmldoc'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'cpu'
op|'.'
name|'mode'
op|','
string|"'custom'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'cpu'
op|'.'
name|'match'
op|','
string|"'exact'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'cpu'
op|'.'
name|'model'
op|','
string|"'kvm64'"
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestSnapshotTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestSnapshotTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_snapshot
indent|' '
name|'def'
name|'test_config_snapshot'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
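comment|'# A bare snapshot config renders a name plus an empty <disks/> element'
nl|'\n'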
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshot'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"Demo"'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domainsnapshot>\n <name>Demo</name>\n <disks/>\n </domainsnapshot>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_snapshot_with_disks
dedent|''
name|'def'
name|'test_config_snapshot_with_disks'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshot'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"Demo"'
newline|'\n'
nl|'\n'
name|'disk'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshotDisk'
op|'('
op|')'
newline|'\n'
name|'disk'
op|'.'
name|'name'
op|'='
string|"'vda'"
newline|'\n'
name|'disk'
op|'.'
name|'source_path'
op|'='
string|"'source-path'"
newline|'\n'
name|'disk'
op|'.'
name|'source_type'
op|'='
string|"'file'"
newline|'\n'
name|'disk'
op|'.'
name|'snapshot'
op|'='
string|"'external'"
newline|'\n'
name|'disk'
op|'.'
name|'driver_name'
op|'='
string|"'qcow2'"
newline|'\n'
name|'obj'
op|'.'
name|'add_disk'
op|'('
name|'disk'
op|')'
newline|'\n'
nl|'\n'
name|'disk2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshotDisk'
op|'('
op|')'
newline|'\n'
name|'disk2'
op|'.'
name|'name'
op|'='
string|"'vdb'"
newline|'\n'
name|'disk2'
op|'.'
name|'snapshot'
op|'='
string|"'no'"
newline|'\n'
name|'obj'
op|'.'
name|'add_disk'
op|'('
name|'disk2'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domainsnapshot>\n <name>Demo</name>\n <disks>\n <disk name=\'vda\' snapshot=\'external\' type=\'file\'>\n <source file=\'source-path\'/>\n </disk>\n <disk name=\'vdb\' snapshot=\'no\'/>\n </disks>\n </domainsnapshot>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_snapshot_with_network_disks
dedent|''
name|'def'
name|'test_config_snapshot_with_network_disks'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
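comment|'# Network-backed snapshot disks carry protocol, name and host/port in their <source> element'
nl|'\n'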
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshot'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'name'
op|'='
string|'"Demo"'
newline|'\n'
nl|'\n'
name|'disk'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshotDisk'
op|'('
op|')'
newline|'\n'
name|'disk'
op|'.'
name|'name'
op|'='
string|"'vda'"
newline|'\n'
name|'disk'
op|'.'
name|'source_name'
op|'='
string|"'source-file'"
newline|'\n'
name|'disk'
op|'.'
name|'source_type'
op|'='
string|"'network'"
newline|'\n'
name|'disk'
op|'.'
name|'source_hosts'
op|'='
op|'['
string|"'host1'"
op|']'
newline|'\n'
name|'disk'
op|'.'
name|'source_ports'
op|'='
op|'['
string|"'12345'"
op|']'
newline|'\n'
name|'disk'
op|'.'
name|'source_protocol'
op|'='
string|"'glusterfs'"
newline|'\n'
name|'disk'
op|'.'
name|'snapshot'
op|'='
string|"'external'"
newline|'\n'
name|'disk'
op|'.'
name|'driver_name'
op|'='
string|"'qcow2'"
newline|'\n'
name|'obj'
op|'.'
name|'add_disk'
op|'('
name|'disk'
op|')'
newline|'\n'
nl|'\n'
name|'disk2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestSnapshotDisk'
op|'('
op|')'
newline|'\n'
name|'disk2'
op|'.'
name|'name'
op|'='
string|"'vdb'"
newline|'\n'
name|'disk2'
op|'.'
name|'snapshot'
op|'='
string|"'no'"
newline|'\n'
name|'obj'
op|'.'
name|'add_disk'
op|'('
name|'disk2'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <domainsnapshot>\n <name>Demo</name>\n <disks>\n <disk name=\'vda\' snapshot=\'external\' type=\'network\'>\n <source protocol=\'glusterfs\' name=\'source-file\'>\n <host name=\'host1\' port=\'12345\'/>\n </source>\n </disk>\n <disk name=\'vdb\' snapshot=\'no\'/>\n </disks>\n </domainsnapshot>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigNodeDeviceTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigNodeDeviceTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_virt_usb_device
indent|' '
name|'def'
name|'test_config_virt_usb_device'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"""\n <device>\n <name>usb_0000_09_00_0</name>\n <parent>pci_0000_00_1c_0</parent>\n <driver>\n <name>vxge</name>\n </driver>\n <capability type="usb">\n <domain>0</domain>\n <capability type="fake_usb">\n <address fake_usb="fake"/>\n </capability>\n </capability>\n </device>"""'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevice'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertIsNone'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_virt_device
dedent|''
name|'def'
name|'test_config_virt_device'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
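comment|'# A pci capability with virt_functions parses into a PciCap whose sub-function capability holds all three addresses'
nl|'\n'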
indent|' '
name|'xmlin'
op|'='
string|'"""\n <device>\n <name>pci_0000_09_00_0</name>\n <parent>pci_0000_00_1c_0</parent>\n <driver>\n <name>vxge</name>\n </driver>\n <capability type="pci">\n <domain>0</domain>\n <bus>9</bus>\n <slot>0</slot>\n <function>0</function>\n <product id="0x5833">X3100 Series 10 Gigabit Ethernet PCIe</product>\n <vendor id="0x17d5">Neterion Inc.</vendor>\n <capability type="virt_functions">\n <address domain="0x0000" bus="0x0a" slot="0x00" function="0x1"/>\n <address domain="0x0000" bus="0x0a" slot="0x00" function="0x2"/>\n <address domain="0x0000" bus="0x0a" slot="0x00" function="0x3"/>\n </capability>\n </capability>\n </device>"""'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevice'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
nl|'\n'
string|'"virt_functions"'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'len'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
nl|'\n'
name|'device_addrs'
op|')'
op|','
nl|'\n'
number|'3'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'bus'
op|','
number|'9'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_phy_device
dedent|''
name|'def'
name|'test_config_phy_device'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"""\n <device>\n <name>pci_0000_33_00_0</name>\n <parent>pci_0000_22_1c_0</parent>\n <driver>\n <name>vxx</name>\n </driver>\n <capability type="pci">\n <domain>0</domain>\n <bus>9</bus>\n <slot>0</slot>\n <function>0</function>\n <product id="0x5833">X3100 Series 10 Gigabit Ethernet PCIe</product>\n <vendor id="0x17d5">Neterion Inc.</vendor>\n <capability type="phys_function">\n <address domain=\'0x0000\' bus=\'0x09\' slot=\'0x00\' function=\'0x0\'/>\n </capability>\n </capability>\n </device>"""'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevice'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
nl|'\n'
string|'"phys_function"'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'len'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
nl|'\n'
name|'device_addrs'
op|')'
op|','
nl|'\n'
number|'1'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_non_device
dedent|''
name|'def'
name|'test_config_non_device'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"""\n <device>\n <name>pci_0000_33_00_0</name>\n <parent>pci_0000_22_1c_0</parent>\n <driver>\n <name>vxx</name>\n </driver>\n <capability type="pci">\n <domain>0</domain>\n <bus>9</bus>\n <slot>0</slot>\n <function>0</function>\n <product id="0x5833">X3100 Series 10 Gigabit Ethernet PCIe</product>\n <vendor id="0x17d5">Neterion Inc.</vendor>\n <capability type="virt_functions"/>\n </capability>\n </device>"""'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevice'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
nl|'\n'
string|'"virt_functions"'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_fail_device
dedent|''
name|'def'
name|'test_config_fail_device'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"""\n <device>\n <name>pci_0000_33_00_0</name>\n <parent>pci_0000_22_1c_0</parent>\n <driver>\n <name>vxx</name>\n </driver>\n <capability type="pci">\n <domain>0</domain>\n <bus>9</bus>\n <slot>0</slot>\n <function>0</function>\n <product id="0x5833">X3100 Series 10 Gigabit Ethernet PCIe</product>\n <vendor id="0x17d5">Neterion Inc.</vendor>\n <capability type="virt_functions">\n </capability>\n </capability>\n </device>"""'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevice'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
nl|'\n'
string|'"virt_functions"'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_2cap_device
dedent|''
name|'def'
name|'test_config_2cap_device'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"""\n <device>\n <name>pci_0000_04_10_7</name>\n <parent>pci_0000_00_01_1</parent>\n <driver>\n <name>igbvf</name>\n </driver>\n <capability type=\'pci\'>\n <domain>0</domain>\n <bus>4</bus>\n <slot>16</slot>\n <function>7</function>\n <product id=\'0x1520\'>I350 Ethernet Controller Virtual</product>\n <vendor id=\'0x8086\'>Intel Corporation</vendor>\n <capability type=\'phys_function\'>\n <address domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x3\'/>\n </capability>\n <capability type=\'virt_functions\'>\n <address domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x3\'/>\n </capability>\n </capability>\n </device>"""'
newline|'\n'
nl|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevice'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
nl|'\n'
string|'"phys_function"'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'pci_capability'
op|'.'
name|'fun_capability'
op|'['
number|'1'
op|']'
op|'.'
name|'type'
op|','
nl|'\n'
string|'"virt_functions"'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigNodeDevicePciCapTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigNodeDevicePciCapTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_device_pci_cap
indent|' '
name|'def'
name|'test_config_device_pci_cap'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
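comment|'# Parse a standalone pci capability: numeric fields, including hex product and vendor ids, are converted to ints'
nl|'\n'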
indent|' '
name|'xmlin'
op|'='
string|'"""\n <capability type="pci">\n <domain>0</domain>\n <bus>10</bus>\n <slot>1</slot>\n <function>5</function>\n <product id="0x10bd">Intel 10 Gigabit Ethernet</product>\n <vendor id="0x8086">Intel Inc.</vendor>\n <capability type="virt_functions">\n <address domain="0000" bus="0x0a" slot="0x1" function="0x1"/>\n <address domain="0001" bus="0x0a" slot="0x02" function="0x03"/>\n </capability>\n </capability>"""'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'domain'
op|','
number|'0'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'bus'
op|','
number|'10'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'slot'
op|','
number|'1'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'function'
op|','
number|'5'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'product'
op|','
string|'"Intel 10 Gigabit Ethernet"'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'product_id'
op|','
number|'0x10bd'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'vendor'
op|','
string|'"Intel Inc."'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'vendor_id'
op|','
number|'0x8086'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsNone'
op|'('
name|'obj'
op|'.'
name|'numa_node'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
string|"'virt_functions'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'device_addrs'
op|','
nl|'\n'
op|'['
op|'('
number|'0'
op|','
number|'10'
op|','
number|'1'
op|','
number|'1'
op|')'
op|','
nl|'\n'
op|'('
number|'1'
op|','
number|'10'
op|','
number|'2'
op|','
number|'3'
op|')'
op|','
op|']'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_device_pci_2cap
dedent|''
name|'def'
name|'test_config_device_pci_2cap'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"""\n <capability type="pci">\n <domain>0</domain>\n <bus>10</bus>\n <slot>1</slot>\n <function>5</function>\n <product id="0x10bd">Intel 10 Gigabit Ethernet</product>\n <vendor id="0x8086">Intel Inc.</vendor>\n <numa node=\'0\'/>\n <capability type="virt_functions">\n <address domain="0000" bus="0x0a" slot="0x1" function="0x1"/>\n <address domain="0001" bus="0x0a" slot="0x02" function="0x03"/>\n </capability>\n <capability type="phys_function">\n <address domain="0000" bus="0x0a" slot="0x1" function="0x1"/>\n </capability>\n </capability>"""'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciCap'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'domain'
op|','
number|'0'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'bus'
op|','
number|'10'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'slot'
op|','
number|'1'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'function'
op|','
number|'5'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'product'
op|','
string|'"Intel 10 Gigabit Ethernet"'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'product_id'
op|','
number|'0x10bd'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'vendor'
op|','
string|'"Intel Inc."'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'vendor_id'
op|','
number|'0x8086'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'0'
op|','
name|'obj'
op|'.'
name|'numa_node'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertIsInstance'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|','
nl|'\n'
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'type'
op|','
string|"'virt_functions'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'0'
op|']'
op|'.'
name|'device_addrs'
op|','
nl|'\n'
op|'['
op|'('
number|'0'
op|','
number|'10'
op|','
number|'1'
op|','
number|'1'
op|')'
op|','
nl|'\n'
op|'('
number|'1'
op|','
number|'10'
op|','
number|'2'
op|','
number|'3'
op|')'
op|','
op|']'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'1'
op|']'
op|'.'
name|'type'
op|','
string|"'phys_function'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
name|'obj'
op|'.'
name|'fun_capability'
op|'['
number|'1'
op|']'
op|'.'
name|'device_addrs'
op|','
nl|'\n'
op|'['
op|'('
number|'0'
op|','
number|'10'
op|','
number|'1'
op|','
number|'1'
op|')'
op|','
op|']'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigNodeDevicePciSubFunctionCapTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigNodeDevicePciSubFunctionCapTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_device_pci_subfunction
indent|' '
name|'def'
name|'test_config_device_pci_subfunction'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
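comment|'# Sub-function capability addresses parse into (domain, bus, slot, function) int tuples'
nl|'\n'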
indent|' '
name|'xmlin'
op|'='
string|'"""\n <capability type="virt_functions">\n <address domain="0000" bus="0x0a" slot="0x1" function="0x1"/>\n <address domain="0001" bus="0x0a" slot="0x02" function="0x03"/>\n </capability>"""'
newline|'\n'
name|'fun_capability'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigNodeDevicePciSubFunctionCap'
op|'('
op|')'
newline|'\n'
name|'fun_capability'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
string|"'virt_functions'"
op|','
name|'fun_capability'
op|'.'
name|'type'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
op|'['
op|'('
number|'0'
op|','
number|'10'
op|','
number|'1'
op|','
number|'1'
op|')'
op|','
nl|'\n'
op|'('
number|'1'
op|','
number|'10'
op|','
number|'2'
op|','
number|'3'
op|')'
op|']'
op|','
nl|'\n'
name|'fun_capability'
op|'.'
name|'device_addrs'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestVideoTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestVideoTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_video_driver
indent|' '
name|'def'
name|'test_config_video_driver'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestVideo'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|"'qxl'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <video>\n <model type=\'qxl\'/>\n </video>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_video_driver_vram_heads
dedent|''
name|'def'
name|'test_config_video_driver_vram_heads'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestVideo'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|"'qxl'"
newline|'\n'
name|'obj'
op|'.'
name|'vram'
op|'='
string|"'9216'"
newline|'\n'
name|'obj'
op|'.'
name|'heads'
op|'='
string|"'1'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <video>\n <model type=\'qxl\' vram=\'9216\' heads=\'1\'/>\n </video>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestSeclabelTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestSeclabelTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_seclabel_config
indent|' '
name|'def'
name|'test_config_seclabel_config'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigSeclabel'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <seclabel type=\'dynamic\'/>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_seclabel_baselabel
dedent|''
name|'def'
name|'test_config_seclabel_baselabel'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigSeclabel'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|"'dynamic'"
newline|'\n'
name|'obj'
op|'.'
name|'baselabel'
op|'='
string|"'system_u:system_r:my_svirt_t:s0'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <seclabel type=\'dynamic\'>\n <baselabel>system_u:system_r:my_svirt_t:s0</baselabel>\n </seclabel>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestRngTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestRngTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_rng_driver
indent|' '
name|'def'
name|'test_config_rng_driver'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestRng'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n<rng model=\'virtio\'>\n <backend model=\'random\'/>\n</rng>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_rng_driver_with_rate
dedent|''
name|'def'
name|'test_config_rng_driver_with_rate'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestRng'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'backend'
op|'='
string|"'/dev/random'"
newline|'\n'
name|'obj'
op|'.'
name|'rate_period'
op|'='
string|"'12'"
newline|'\n'
name|'obj'
op|'.'
name|'rate_bytes'
op|'='
string|"'34'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n<rng model=\'virtio\'>\n <rate period=\'12\' bytes=\'34\'/>\n <backend model=\'random\'>/dev/random</backend>\n</rng>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestControllerTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestControllerTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_guest_controller
indent|' '
name|'def'
name|'test_config_guest_controller'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestController'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'type'
op|'='
string|"'scsi'"
newline|'\n'
name|'obj'
op|'.'
name|'index'
op|'='
number|'0'
newline|'\n'
name|'obj'
op|'.'
name|'model'
op|'='
string|"'virtio-scsi'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'/>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestWatchdogTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestWatchdogTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_watchdog
indent|' '
name|'def'
name|'test_config_watchdog'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestWatchdog'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'action'
op|'='
string|"'none'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"<watchdog model=\'i6300esb\' action=\'none\'/>"'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_watchdog_default_action
dedent|''
name|'def'
name|'test_config_watchdog_default_action'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestWatchdog'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"<watchdog model=\'i6300esb\' action=\'reset\'/>"'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestCPUTuneTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestCPUTuneTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_cputune_timeslice
indent|' '
name|'def'
name|'test_config_cputune_timeslice'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'cputune'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTune'
op|'('
op|')'
newline|'\n'
name|'cputune'
op|'.'
name|'shares'
op|'='
number|'100'
newline|'\n'
name|'cputune'
op|'.'
name|'quota'
op|'='
number|'50000'
newline|'\n'
name|'cputune'
op|'.'
name|'period'
op|'='
number|'25000'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'cputune'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <cputune>\n <shares>100</shares>\n <quota>50000</quota>\n <period>25000</period>\n </cputune>"""'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_cputune_vcpus
dedent|''
name|'def'
name|'test_config_cputune_vcpus'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
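comment|'# Pin four vcpus and the emulator thread, add two fifo vcpusched policies, then check the rendered <cputune>'
nl|'\n'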
indent|' '
name|'cputune'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTune'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'vcpu0'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneVCPUPin'
op|'('
op|')'
newline|'\n'
name|'vcpu0'
op|'.'
name|'id'
op|'='
number|'0'
newline|'\n'
name|'vcpu0'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|']'
op|')'
newline|'\n'
name|'vcpu1'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneVCPUPin'
op|'('
op|')'
newline|'\n'
name|'vcpu1'
op|'.'
name|'id'
op|'='
number|'1'
newline|'\n'
name|'vcpu1'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'2'
op|','
number|'3'
op|']'
op|')'
newline|'\n'
name|'vcpu2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneVCPUPin'
op|'('
op|')'
newline|'\n'
name|'vcpu2'
op|'.'
name|'id'
op|'='
number|'2'
newline|'\n'
name|'vcpu2'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'4'
op|','
number|'5'
op|']'
op|')'
newline|'\n'
name|'vcpu3'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneVCPUPin'
op|'('
op|')'
newline|'\n'
name|'vcpu3'
op|'.'
name|'id'
op|'='
number|'3'
newline|'\n'
name|'vcpu3'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'6'
op|','
number|'7'
op|']'
op|')'
newline|'\n'
name|'cputune'
op|'.'
name|'vcpupin'
op|'.'
name|'extend'
op|'('
op|'['
name|'vcpu0'
op|','
name|'vcpu1'
op|','
name|'vcpu2'
op|','
name|'vcpu3'
op|']'
op|')'
newline|'\n'
nl|'\n'
name|'emu'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneEmulatorPin'
op|'('
op|')'
newline|'\n'
name|'emu'
op|'.'
name|'cpuset'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'2'
op|','
number|'3'
op|','
number|'4'
op|','
number|'5'
op|','
number|'6'
op|','
number|'7'
op|']'
op|')'
newline|'\n'
name|'cputune'
op|'.'
name|'emulatorpin'
op|'='
name|'emu'
newline|'\n'
nl|'\n'
name|'sch0'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneVCPUSched'
op|'('
op|')'
newline|'\n'
name|'sch0'
op|'.'
name|'vcpus'
op|'='
name|'set'
op|'('
op|'['
number|'0'
op|','
number|'1'
op|','
number|'2'
op|','
number|'3'
op|']'
op|')'
newline|'\n'
name|'sch0'
op|'.'
name|'scheduler'
op|'='
string|'"fifo"'
newline|'\n'
name|'sch0'
op|'.'
name|'priority'
op|'='
number|'1'
newline|'\n'
name|'sch1'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestCPUTuneVCPUSched'
op|'('
op|')'
newline|'\n'
name|'sch1'
op|'.'
name|'vcpus'
op|'='
name|'set'
op|'('
op|'['
number|'4'
op|','
number|'5'
op|','
number|'6'
op|','
number|'7'
op|']'
op|')'
newline|'\n'
name|'sch1'
op|'.'
name|'scheduler'
op|'='
string|'"fifo"'
newline|'\n'
name|'sch1'
op|'.'
name|'priority'
op|'='
number|'99'
newline|'\n'
name|'cputune'
op|'.'
name|'vcpusched'
op|'.'
name|'extend'
op|'('
op|'['
name|'sch0'
op|','
name|'sch1'
op|']'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'cputune'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <cputune>\n <emulatorpin cpuset="0-7"/>\n <vcpupin vcpu="0" cpuset="0-1"/>\n <vcpupin vcpu="1" cpuset="2-3"/>\n <vcpupin vcpu="2" cpuset="4-5"/>\n <vcpupin vcpu="3" cpuset="6-7"/>\n <vcpusched vcpus="0-3" scheduler="fifo" priority="1"/>\n <vcpusched vcpus="4-7" scheduler="fifo" priority="99"/>\n </cputune>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestMemoryBackingTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestMemoryBackingTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_memory_backing_none
indent|' '
name|'def'
name|'test_config_memory_backing_none'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryBacking'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"<memoryBacking/>"'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_memory_backing_all
dedent|''
name|'def'
name|'test_config_memory_backing_all'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryBacking'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'locked'
op|'='
name|'True'
newline|'\n'
name|'obj'
op|'.'
name|'sharedpages'
op|'='
name|'False'
newline|'\n'
name|'page'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryBackingPage'
op|'('
op|')'
newline|'\n'
name|'page'
op|'.'
name|'size_kb'
op|'='
number|'2048'
newline|'\n'
name|'page'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'2'
op|','
number|'3'
op|']'
newline|'\n'
name|'obj'
op|'.'
name|'hugepages'
op|'.'
name|'append'
op|'('
name|'page'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <memoryBacking>\n <hugepages>\n <page size="2048" unit="KiB" nodeset="2-3"/>\n </hugepages>\n <nosharepages/>\n <locked/>\n </memoryBacking>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestMemoryTuneTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestMemoryTuneTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_memory_tune_none
indent|' '
name|'def'
name|'test_config_memory_tune_none'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryTune'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"<memtune/>"'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_memory_tune_all
dedent|''
name|'def'
name|'test_config_memory_tune_all'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMemoryTune'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'soft_limit'
op|'='
number|'6'
newline|'\n'
name|'obj'
op|'.'
name|'hard_limit'
op|'='
number|'28'
newline|'\n'
name|'obj'
op|'.'
name|'swap_hard_limit'
op|'='
number|'140'
newline|'\n'
name|'obj'
op|'.'
name|'min_guarantee'
op|'='
number|'270'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <memtune>\n <hard_limit units="K">28</hard_limit>\n <soft_limit units="K">6</soft_limit>\n <swap_hard_limit units="K">140</swap_hard_limit>\n <min_guarantee units="K">270</min_guarantee>\n </memtune>"""'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestNUMATuneTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestNUMATuneTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_numa_tune_none
indent|' '
name|'def'
name|'test_config_numa_tune_none'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATune'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"<numatune/>"'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_numa_tune_memory
dedent|''
name|'def'
name|'test_config_numa_tune_memory'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATune'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'numamemory'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemory'
op|'('
op|')'
newline|'\n'
name|'numamemory'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'0'
op|','
number|'1'
op|','
number|'2'
op|','
number|'3'
op|','
number|'8'
op|']'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'memory'
op|'='
name|'numamemory'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"""\n <numatune>\n <memory mode="strict" nodeset="0-3,8"/>\n </numatune>"""'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_numa_tune_memnodes
dedent|''
name|'def'
name|'test_config_numa_tune_memnodes'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATune'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'numamemnode0'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemNode'
op|'('
op|')'
newline|'\n'
name|'numamemnode0'
op|'.'
name|'cellid'
op|'='
number|'0'
newline|'\n'
name|'numamemnode0'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'0'
op|','
number|'1'
op|']'
newline|'\n'
nl|'\n'
name|'numamemnode1'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemNode'
op|'('
op|')'
newline|'\n'
name|'numamemnode1'
op|'.'
name|'cellid'
op|'='
number|'1'
newline|'\n'
name|'numamemnode1'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'2'
op|','
number|'3'
op|']'
newline|'\n'
nl|'\n'
name|'numamemnode2'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestNUMATuneMemNode'
op|'('
op|')'
newline|'\n'
name|'numamemnode2'
op|'.'
name|'cellid'
op|'='
number|'2'
newline|'\n'
name|'numamemnode2'
op|'.'
name|'nodeset'
op|'='
op|'['
number|'8'
op|']'
newline|'\n'
nl|'\n'
name|'obj'
op|'.'
name|'memnodes'
op|'.'
name|'extend'
op|'('
op|'['
name|'numamemnode0'
op|','
nl|'\n'
name|'numamemnode1'
op|','
nl|'\n'
name|'numamemnode2'
op|']'
op|')'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"""\n <numatune>\n <memnode cellid="0" mode="strict" nodeset="0-1"/>\n <memnode cellid="1" mode="strict" nodeset="2-3"/>\n <memnode cellid="2" mode="strict" nodeset="8"/>\n </numatune>"""'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestMetadataNovaTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestMetadataNovaTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_metadata
indent|' '
name|'def'
name|'test_config_metadata'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
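comment|'# Nova instance metadata (package, owner, flavor, root disk) renders into the nova XML namespace'
nl|'\n'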
indent|' '
name|'meta'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMetaNovaInstance'
op|'('
op|')'
newline|'\n'
name|'meta'
op|'.'
name|'package'
op|'='
string|'"2014.2.3"'
newline|'\n'
name|'meta'
op|'.'
name|'name'
op|'='
string|'"moonbuggy"'
newline|'\n'
name|'meta'
op|'.'
name|'creationTime'
op|'='
number|'1234567890'
newline|'\n'
name|'meta'
op|'.'
name|'roottype'
op|'='
string|'"image"'
newline|'\n'
name|'meta'
op|'.'
name|'rootid'
op|'='
string|'"fe55c69a-8b2e-4bbc-811a-9ad2023a0426"'
newline|'\n'
nl|'\n'
name|'owner'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMetaNovaOwner'
op|'('
op|')'
newline|'\n'
name|'owner'
op|'.'
name|'userid'
op|'='
string|'"3472c2a6-de91-4fb5-b618-42bc781ef670"'
newline|'\n'
name|'owner'
op|'.'
name|'username'
op|'='
string|'"buzz"'
newline|'\n'
name|'owner'
op|'.'
name|'projectid'
op|'='
string|'"f241e906-010e-4917-ae81-53f4fb8aa021"'
newline|'\n'
name|'owner'
op|'.'
name|'projectname'
op|'='
string|'"moonshot"'
newline|'\n'
nl|'\n'
name|'meta'
op|'.'
name|'owner'
op|'='
name|'owner'
newline|'\n'
nl|'\n'
name|'flavor'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestMetaNovaFlavor'
op|'('
op|')'
newline|'\n'
name|'flavor'
op|'.'
name|'name'
op|'='
string|'"m1.lowgravity"'
newline|'\n'
name|'flavor'
op|'.'
name|'vcpus'
op|'='
number|'8'
newline|'\n'
name|'flavor'
op|'.'
name|'memory'
op|'='
number|'2048'
newline|'\n'
name|'flavor'
op|'.'
name|'swap'
op|'='
number|'10'
newline|'\n'
name|'flavor'
op|'.'
name|'disk'
op|'='
number|'50'
newline|'\n'
name|'flavor'
op|'.'
name|'ephemeral'
op|'='
number|'10'
newline|'\n'
nl|'\n'
name|'meta'
op|'.'
name|'flavor'
op|'='
name|'flavor'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'meta'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'xml'
op|','
string|'"""\n <nova:instance xmlns:nova=\'http://openstack.org/xmlns/libvirt/nova/1.0\'>\n <nova:package version="2014.2.3"/>\n <nova:name>moonbuggy</nova:name>\n <nova:creationTime>2009-02-13 23:31:30</nova:creationTime>\n <nova:flavor name="m1.lowgravity">\n <nova:memory>2048</nova:memory>\n <nova:disk>50</nova:disk>\n <nova:swap>10</nova:swap>\n <nova:ephemeral>10</nova:ephemeral>\n <nova:vcpus>8</nova:vcpus>\n </nova:flavor>\n <nova:owner>\n <nova:user\n uuid="3472c2a6-de91-4fb5-b618-42bc781ef670">buzz</nova:user>\n <nova:project\n uuid="f241e906-010e-4917-ae81-53f4fb8aa021">moonshot</nova:project>\n </nova:owner>\n <nova:root type="image" uuid="fe55c69a-8b2e-4bbc-811a-9ad2023a0426"/>\n </nova:instance>\n """'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigGuestIDMap
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigGuestIDMap'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
DECL|member|test_config_id_map_parse_start_not_int
indent|' '
name|'def'
name|'test_config_id_map_parse_start_not_int'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"<uid start=\'a\' target=\'20000\' count=\'5\'/>"'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestIDMap'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertRaises'
op|'('
name|'ValueError'
op|','
name|'obj'
op|'.'
name|'parse_str'
op|','
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_id_map_parse_target_not_int
dedent|''
name|'def'
name|'test_config_id_map_parse_target_not_int'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"<uid start=\'2\' target=\'a\' count=\'5\'/>"'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestIDMap'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertRaises'
op|'('
name|'ValueError'
op|','
name|'obj'
op|'.'
name|'parse_str'
op|','
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_id_map_parse_count_not_int
dedent|''
name|'def'
name|'test_config_id_map_parse_count_not_int'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"<uid start=\'2\' target=\'20000\' count=\'a\'/>"'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestIDMap'
op|'('
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertRaises'
op|'('
name|'ValueError'
op|','
name|'obj'
op|'.'
name|'parse_str'
op|','
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_uid_map
dedent|''
name|'def'
name|'test_config_uid_map'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestUIDMap'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'start'
op|'='
number|'1'
newline|'\n'
name|'obj'
op|'.'
name|'target'
op|'='
number|'10000'
newline|'\n'
name|'obj'
op|'.'
name|'count'
op|'='
number|'2'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"<uid start=\'1\' target=\'10000\' count=\'2\'/>"'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_uid_map_parse
dedent|''
name|'def'
name|'test_config_uid_map_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"<uid start=\'2\' target=\'20000\' count=\'5\'/>"'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestUIDMap'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'2'
op|','
name|'obj'
op|'.'
name|'start'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'20000'
op|','
name|'obj'
op|'.'
name|'target'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'5'
op|','
name|'obj'
op|'.'
name|'count'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_gid_map
dedent|''
name|'def'
name|'test_config_gid_map'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestGIDMap'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'start'
op|'='
number|'1'
newline|'\n'
name|'obj'
op|'.'
name|'target'
op|'='
number|'10000'
newline|'\n'
name|'obj'
op|'.'
name|'count'
op|'='
number|'2'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'obj'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
string|'"<gid start=\'1\' target=\'10000\' count=\'2\'/>"'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_gid_map_parse
dedent|''
name|'def'
name|'test_config_gid_map_parse'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'xmlin'
op|'='
string|'"<gid start=\'2\' target=\'20000\' count=\'5\'/>"'
newline|'\n'
name|'obj'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigGuestGIDMap'
op|'('
op|')'
newline|'\n'
name|'obj'
op|'.'
name|'parse_str'
op|'('
name|'xmlin'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'2'
op|','
name|'obj'
op|'.'
name|'start'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'20000'
op|','
name|'obj'
op|'.'
name|'target'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
number|'5'
op|','
name|'obj'
op|'.'
name|'count'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigMemoryBalloonTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigMemoryBalloonTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_memory_balloon_period
indent|' '
name|'def'
name|'test_config_memory_balloon_period'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'balloon'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigMemoryBalloon'
op|'('
op|')'
newline|'\n'
name|'balloon'
op|'.'
name|'model'
op|'='
string|"'fake_virtio'"
newline|'\n'
name|'balloon'
op|'.'
name|'period'
op|'='
number|'11'
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'balloon'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'expected_xml'
op|'='
string|'"""\n <memballoon model=\'fake_virtio\'>\n <stats period=\'11\'/>\n </memballoon>"""'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'expected_xml'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_memory_balloon_no_period
dedent|''
name|'def'
name|'test_config_memory_balloon_no_period'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'balloon'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigMemoryBalloon'
op|'('
op|')'
newline|'\n'
name|'balloon'
op|'.'
name|'model'
op|'='
string|"'fake_virtio'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'balloon'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'expected_xml'
op|'='
string|'"""\n <memballoon model=\'fake_virtio\' />"""'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'expected_xml'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|LibvirtConfigSecretTest
dedent|''
dedent|''
name|'class'
name|'LibvirtConfigSecretTest'
op|'('
name|'LibvirtConfigBaseTest'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|test_config_secret_volume
indent|' '
name|'def'
name|'test_config_secret_volume'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'secret'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigSecret'
op|'('
op|')'
newline|'\n'
name|'secret'
op|'.'
name|'ephemeral'
op|'='
name|'True'
newline|'\n'
name|'secret'
op|'.'
name|'private'
op|'='
name|'True'
newline|'\n'
name|'secret'
op|'.'
name|'description'
op|'='
string|"'sample desc'"
newline|'\n'
name|'secret'
op|'.'
name|'uuid'
op|'='
string|"'c7a5fdbd-edaf-9455-926a-d65c16db1809'"
newline|'\n'
name|'secret'
op|'.'
name|'usage_type'
op|'='
string|"'volume'"
newline|'\n'
name|'secret'
op|'.'
name|'usage_id'
op|'='
string|"'sample_volume'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'secret'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'expected_xml'
op|'='
string|'"""\n <secret ephemeral="yes" private="yes">\n <description>sample desc</description>\n <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>\n <usage type="volume">\n <volume>sample_volume</volume>\n </usage>\n </secret>"""'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'expected_xml'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_secret_ceph
dedent|''
name|'def'
name|'test_config_secret_ceph'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'secret'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigSecret'
op|'('
op|')'
newline|'\n'
name|'secret'
op|'.'
name|'ephemeral'
op|'='
name|'True'
newline|'\n'
name|'secret'
op|'.'
name|'private'
op|'='
name|'True'
newline|'\n'
name|'secret'
op|'.'
name|'description'
op|'='
string|"'sample desc'"
newline|'\n'
name|'secret'
op|'.'
name|'usage_type'
op|'='
string|"'ceph'"
newline|'\n'
name|'secret'
op|'.'
name|'usage_id'
op|'='
string|"'sample_name'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'secret'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'expected_xml'
op|'='
string|'"""\n <secret ephemeral="yes" private="yes">\n <description>sample desc</description>\n <usage type="ceph">\n <name>sample_name</name>\n </usage>\n </secret>"""'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'expected_xml'
op|','
name|'xml'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_config_secret_iscsi
dedent|''
name|'def'
name|'test_config_secret_iscsi'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'secret'
op|'='
name|'config'
op|'.'
name|'LibvirtConfigSecret'
op|'('
op|')'
newline|'\n'
name|'secret'
op|'.'
name|'ephemeral'
op|'='
name|'True'
newline|'\n'
name|'secret'
op|'.'
name|'private'
op|'='
name|'True'
newline|'\n'
name|'secret'
op|'.'
name|'description'
op|'='
string|"'sample desc'"
newline|'\n'
name|'secret'
op|'.'
name|'usage_type'
op|'='
string|"'iscsi'"
newline|'\n'
name|'secret'
op|'.'
name|'usage_id'
op|'='
string|"'sample_target'"
newline|'\n'
nl|'\n'
name|'xml'
op|'='
name|'secret'
op|'.'
name|'to_xml'
op|'('
op|')'
newline|'\n'
name|'expected_xml'
op|'='
string|'"""\n <secret ephemeral="yes" private="yes">\n <description>sample desc</description>\n <usage type="iscsi">\n <target>sample_target</target>\n </usage>\n </secret>"""'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertXmlEqual'
op|'('
name|'expected_xml'
op|','
name|'xml'
op|')'
newline|'\n'
dedent|''
dedent|''
endmarker|''
end_unit
| 15.158549 | 2,154 | 0.574008 | 25,607 | 185,480 | 4.092123 | 0.033585 | 0.137021 | 0.084648 | 0.099001 | 0.910962 | 0.884299 | 0.851728 | 0.824081 | 0.804165 | 0.782817 | 0 | 0.018123 | 0.144436 | 185,480 | 12,235 | 2,155 | 15.159787 | 0.642202 | 0 | 0 | 0.942624 | 0 | 0.007765 | 0.45696 | 0.091358 | 0 | 0 | 0.001973 | 0 | 0.021577 | 0 | null | null | 0.000163 | 0.00049 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5430c15eb57fb44237f9ae2b3d7a4527d912cf75 | 26,075 | py | Python | SchoolDiggerScraper/utils.py | lofoyet/SchoolDigger-Scraper | 98b329bad99534ef0f234ffb11ec1e19fdd875b2 | [
"Apache-2.0"
] | 1 | 2019-02-28T20:49:41.000Z | 2019-02-28T20:49:41.000Z | SchoolDiggerScraper/utils.py | lofoyet/SchoolDigger-Scraper | 98b329bad99534ef0f234ffb11ec1e19fdd875b2 | [
"Apache-2.0"
] | null | null | null | SchoolDiggerScraper/utils.py | lofoyet/SchoolDigger-Scraper | 98b329bad99534ef0f234ffb11ec1e19fdd875b2 | [
"Apache-2.0"
] | 1 | 2019-02-28T20:49:50.000Z | 2019-02-28T20:49:50.000Z | """All constants."""
headers_init = {
# ":authority": "www.schooldigger.com", # for http/2
"accept-encoding": "gzip, deflate, br",
"accept-language": "en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7",
"content-type": "application/x-www-form-urlencoded; charset=UTF-8",
"dnt": "1",
"origin": "https://www.schooldigger.com",
"referer": "https://www.schooldigger.com/go/{state}/schoolrank.aspx",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/71.0.3578.98 Safari/537.36",
"x-requested-with": "XMLHttpRequest",
}
headers_page = {
# ":authority": "www.schooldigger.com", # for http/2
# ":method": "POST", # for http/2
# ":path": "/aj/ajRankData1.aspx", # for http/2
# ":scheme": "https", # for http/2
"accept": "application/json, text/javascript, */*; q=0.01",
"accept-encoding": "gzip, deflate, br",
"accept-language": "en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7",
"content-length": 0,
"content-type": "application/x-www-form-urlencoded; charset=UTF-8",
"dnt": "1",
"origin": "https://www.schooldigger.com",
"referer": "https://www.schooldigger.com/go/{state}/"
"schoolrank.aspx?level={level}",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/71.0.3578.98 Safari/537.36",
"x-requested-with": "XMLHttpRequest",
}
main_url = "https://www.schooldigger.com/go/{state}/schoolrank.aspx?level={level}"
form_l1 = {
"draw": None, # critical
"columns[0][data]": "0",
"columns[0][name]": "",
"columns[0][searchable]": "false",
"columns[0][orderable]": "true",
"columns[0][search][value]": "",
"columns[0][search][regex]": "false",
"columns[1][data]": "1",
"columns[1][name]": "",
"columns[1][searchable]": "true",
"columns[1][orderable]": "true",
"columns[1][search][value]": "",
"columns[1][search][regex]": "false",
"columns[2][data]": "2",
"columns[2][name]": "",
"columns[2][searchable]": "true",
"columns[2][orderable]": "true",
"columns[2][search][value]": "",
"columns[2][search][regex]": "false",
"columns[3][data]": "3",
"columns[3][name]": "",
"columns[3][searchable]": "true",
"columns[3][orderable]": "true",
"columns[3][search][value]": "",
"columns[3][search][regex]": "false",
"columns[4][data]": "4",
"columns[4][name]": "",
"columns[4][searchable]": "true",
"columns[4][orderable]": "true",
"columns[4][search][value]": "",
"columns[4][search][regex]": "false",
"columns[5][data]": "5",
"columns[5][name]": "",
"columns[5][searchable]": "true",
"columns[5][orderable]": "true",
"columns[5][search][value]": "",
"columns[5][search][regex]": "false",
"columns[6][data]": "6",
"columns[6][name]": "",
"columns[6][searchable]": "true",
"columns[6][orderable]": "true",
"columns[6][search][value]": "",
"columns[6][search][regex]": "false",
"columns[7][data]": "7",
"columns[7][name]": "",
"columns[7][searchable]": "true",
"columns[7][orderable]": "true",
"columns[7][search][value]": "",
"columns[7][search][regex]": "false",
"columns[8][data]": "8",
"columns[8][name]": "",
"columns[8][searchable]": "true",
"columns[8][orderable]": "true",
"columns[8][search][value]": "",
"columns[8][search][regex]": "false",
"columns[9][data]": "9",
"columns[9][name]": "",
"columns[9][searchable]": "true",
"columns[9][orderable]": "true",
"columns[9][search][value]": "",
"columns[9][search][regex]": "false",
"columns[10][data]": "10",
"columns[10][name]": "",
"columns[10][searchable]": "true",
"columns[10][orderable]": "true",
"columns[10][search][value]": "",
"columns[10][search][regex]": "false",
"columns[11][data]": "11",
"columns[11][name]": "",
"columns[11][searchable]": "true",
"columns[11][orderable]": "true",
"columns[11][search][value]": "",
"columns[11][search][regex]": "false",
"columns[12][data]": "12",
"columns[12][name]": "",
"columns[12][searchable]": "true",
"columns[12][orderable]": "true",
"columns[12][search][value]": "",
"columns[12][search][regex]": "false",
"columns[13][data]": "13",
"columns[13][name]": "",
"columns[13][searchable]": "true",
"columns[13][orderable]": "true",
"columns[13][search][value]": "",
"columns[13][search][regex]": "false",
"columns[14][data]": "14",
"columns[14][name]": "",
"columns[14][searchable]": "true",
"columns[14][orderable]": "false",
"columns[14][search][value]": "",
"columns[14][search][regex]": "false",
"columns[15][data]": "15",
"columns[15][name]": "",
"columns[15][searchable]": "true",
"columns[15][orderable]": "true",
"columns[15][search][value]": "",
"columns[15][search][regex]": "false",
"columns[16][data]": "16",
"columns[16][name]": "",
"columns[16][searchable]": "true",
"columns[16][orderable]": "true",
"columns[16][search][value]": "",
"columns[16][search][regex]": "false",
"columns[17][data]": "17",
"columns[17][name]": "",
"columns[17][searchable]": "true",
"columns[17][orderable]": "true",
"columns[17][search][value]": "",
"columns[17][search][regex]": "false",
"columns[18][data]": "18",
"columns[18][name]": "",
"columns[18][searchable]": "true",
"columns[18][orderable]": "true",
"columns[18][search][value]": "",
"columns[18][search][regex]": "false",
"columns[19][data]": "19",
"columns[19][name]": "",
"columns[19][searchable]": "true",
"columns[19][orderable]": "true",
"columns[19][search][value]": "",
"columns[19][search][regex]": "false",
"columns[20][data]": "20",
"columns[20][name]": "",
"columns[20][searchable]": "true",
"columns[20][orderable]": "true",
"columns[20][search][value]": "",
"columns[20][search][regex]": "false",
"columns[21][data]": "21",
"columns[21][name]": "",
"columns[21][searchable]": "true",
"columns[21][orderable]": "true",
"columns[21][search][value]": "",
"columns[21][search][regex]": "false",
"columns[22][data]": "22",
"columns[22][name]": "",
"columns[22][searchable]": "true",
"columns[22][orderable]": "true",
"columns[22][search][value]": "",
"columns[22][search][regex]": "false",
"columns[23][data]": "23",
"columns[23][name]": "",
"columns[23][searchable]": "true",
"columns[23][orderable]": "true",
"columns[23][search][value]": "",
"columns[23][search][regex]": "false",
"columns[24][data]": "24",
"columns[24][name]": "",
"columns[24][searchable]": "true",
"columns[24][orderable]": "true",
"columns[24][search][value]": "",
"columns[24][search][regex]": "false",
"columns[25][data]": "25",
"columns[25][name]": "",
"columns[25][searchable]": "true",
"columns[25][orderable]": "true",
"columns[25][search][value]": "",
"columns[25][search][regex]": "false",
"columns[26][data]": "26",
"columns[26][name]": "",
"columns[26][searchable]": "true",
"columns[26][orderable]": "true",
"columns[26][search][value]": "",
"columns[26][search][regex]": "false",
"columns[27][data]": "27",
"columns[27][name]": "",
"columns[27][searchable]": "true",
"columns[27][orderable]": "true",
"columns[27][search][value]": "",
"columns[27][search][regex]": "false",
"columns[28][data]": "28",
"columns[28][name]": "",
"columns[28][searchable]": "true",
"columns[28][orderable]": "true",
"columns[28][search][value]": "",
"columns[28][search][regex]": "false",
"columns[29][data]": "29",
"columns[29][name]": "",
"columns[29][searchable]": "true",
"columns[29][orderable]": "true",
"columns[29][search][value]": "",
"columns[29][search][regex]": "false",
"columns[30][data]": "30",
"columns[30][name]": "",
"columns[30][searchable]": "true",
"columns[30][orderable]": "true",
"columns[30][search][value]": "",
"columns[30][search][regex]": "false",
"order[0][column]": "1",
"order[0][dir]": "asc",
"start": None, # critical
"length": "50",
"search[value]": "",
"search[regex]": "false",
"values[FIPS]": None, # critical
"values[rankType]": "0",
"values[Level]": "1",
"values[resAggregateYear]": "2017",
"values[reqAggregateYear]": "2017",
"values[resGSLInclude]": "PK,KG,01,02,03,04", # critical
"values[resGSHL]": "0",
"values[resGSHH]": "13",
"values[resGSLL]": "0",
"values[resGSLH]": "5",
"values[reqGSHL]": "",
"values[reqGSHH]": "",
"values[reqGSLL]": "",
"values[reqGSLH]": "",
"values[resYear]": "2018",
"values[resYearCompare]": "2017",
"values[reqYear]": "",
"values[reqYearCompare]": "",
"values[reqEntityID]": "",
"values[reqEntitySubID]": "",
"values[clientViewing]": "0",
"values[chartType]": "h",
}
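# Same DataTables payload shape, for level-2 (middle school) rankings.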
form_l2 = {
"draw": None,
"columns[0][data]": "0",
"columns[0][name]": "",
"columns[0][searchable]": "false",
"columns[0][orderable]": "true",
"columns[0][search][value]": "",
"columns[0][search][regex]": "false",
"columns[1][data]": "1",
"columns[1][name]": "",
"columns[1][searchable]": "true",
"columns[1][orderable]": "true",
"columns[1][search][value]": "",
"columns[1][search][regex]": "false",
"columns[2][data]": "2",
"columns[2][name]": "",
"columns[2][searchable]": "true",
"columns[2][orderable]": "true",
"columns[2][search][value]": "",
"columns[2][search][regex]": "false",
"columns[3][data]": "3",
"columns[3][name]": "",
"columns[3][searchable]": "true",
"columns[3][orderable]": "true",
"columns[3][search][value]": "",
"columns[3][search][regex]": "false",
"columns[4][data]": "4",
"columns[4][name]": "",
"columns[4][searchable]": "true",
"columns[4][orderable]": "true",
"columns[4][search][value]": "",
"columns[4][search][regex]": "false",
"columns[5][data]": "5",
"columns[5][name]": "",
"columns[5][searchable]": "true",
"columns[5][orderable]": "true",
"columns[5][search][value]": "",
"columns[5][search][regex]": "false",
"columns[6][data]": "6",
"columns[6][name]": "",
"columns[6][searchable]": "true",
"columns[6][orderable]": "true",
"columns[6][search][value]": "",
"columns[6][search][regex]": "false",
"columns[7][data]": "7",
"columns[7][name]": "",
"columns[7][searchable]": "true",
"columns[7][orderable]": "true",
"columns[7][search][value]": "",
"columns[7][search][regex]": "false",
"columns[8][data]": "8",
"columns[8][name]": "",
"columns[8][searchable]": "true",
"columns[8][orderable]": "true",
"columns[8][search][value]": "",
"columns[8][search][regex]": "false",
"columns[9][data]": "9",
"columns[9][name]": "",
"columns[9][searchable]": "true",
"columns[9][orderable]": "true",
"columns[9][search][value]": "",
"columns[9][search][regex]": "false",
"columns[10][data]": "10",
"columns[10][name]": "",
"columns[10][searchable]": "true",
"columns[10][orderable]": "true",
"columns[10][search][value]": "",
"columns[10][search][regex]": "false",
"columns[11][data]": "11",
"columns[11][name]": "",
"columns[11][searchable]": "true",
"columns[11][orderable]": "true",
"columns[11][search][value]": "",
"columns[11][search][regex]": "false",
"columns[12][data]": "12",
"columns[12][name]": "",
"columns[12][searchable]": "true",
"columns[12][orderable]": "true",
"columns[12][search][value]": "",
"columns[12][search][regex]": "false",
"columns[13][data]": "13",
"columns[13][name]": "",
"columns[13][searchable]": "true",
"columns[13][orderable]": "true",
"columns[13][search][value]": "",
"columns[13][search][regex]": "false",
"columns[14][data]": "14",
"columns[14][name]": "",
"columns[14][searchable]": "true",
"columns[14][orderable]": "false",
"columns[14][search][value]": "",
"columns[14][search][regex]": "false",
"columns[15][data]": "15",
"columns[15][name]": "",
"columns[15][searchable]": "true",
"columns[15][orderable]": "true",
"columns[15][search][value]": "",
"columns[15][search][regex]": "false",
"columns[16][data]": "16",
"columns[16][name]": "",
"columns[16][searchable]": "true",
"columns[16][orderable]": "true",
"columns[16][search][value]": "",
"columns[16][search][regex]": "false",
"columns[17][data]": "17",
"columns[17][name]": "",
"columns[17][searchable]": "true",
"columns[17][orderable]": "true",
"columns[17][search][value]": "",
"columns[17][search][regex]": "false",
"columns[18][data]": "18",
"columns[18][name]": "",
"columns[18][searchable]": "true",
"columns[18][orderable]": "true",
"columns[18][search][value]": "",
"columns[18][search][regex]": "false",
"columns[19][data]": "19",
"columns[19][name]": "",
"columns[19][searchable]": "true",
"columns[19][orderable]": "true",
"columns[19][search][value]": "",
"columns[19][search][regex]": "false",
"columns[20][data]": "20",
"columns[20][name]": "",
"columns[20][searchable]": "true",
"columns[20][orderable]": "true",
"columns[20][search][value]": "",
"columns[20][search][regex]": "false",
"columns[21][data]": "21",
"columns[21][name]": "",
"columns[21][searchable]": "true",
"columns[21][orderable]": "true",
"columns[21][search][value]": "",
"columns[21][search][regex]": "false",
"columns[22][data]": "22",
"columns[22][name]": "",
"columns[22][searchable]": "true",
"columns[22][orderable]": "true",
"columns[22][search][value]": "",
"columns[22][search][regex]": "false",
"columns[23][data]": "23",
"columns[23][name]": "",
"columns[23][searchable]": "true",
"columns[23][orderable]": "true",
"columns[23][search][value]": "",
"columns[23][search][regex]": "false",
"columns[24][data]": "24",
"columns[24][name]": "",
"columns[24][searchable]": "true",
"columns[24][orderable]": "true",
"columns[24][search][value]": "",
"columns[24][search][regex]": "false",
"columns[25][data]": "25",
"columns[25][name]": "",
"columns[25][searchable]": "true",
"columns[25][orderable]": "true",
"columns[25][search][value]": "",
"columns[25][search][regex]": "false",
"columns[26][data]": "26",
"columns[26][name]": "",
"columns[26][searchable]": "true",
"columns[26][orderable]": "true",
"columns[26][search][value]": "",
"columns[26][search][regex]": "false",
"columns[27][data]": "27",
"columns[27][name]": "",
"columns[27][searchable]": "true",
"columns[27][orderable]": "true",
"columns[27][search][value]": "",
"columns[27][search][regex]": "false",
"columns[28][data]": "28",
"columns[28][name]": "",
"columns[28][searchable]": "true",
"columns[28][orderable]": "true",
"columns[28][search][value]": "",
"columns[28][search][regex]": "false",
"columns[29][data]": "29",
"columns[29][name]": "",
"columns[29][searchable]": "true",
"columns[29][orderable]": "true",
"columns[29][search][value]": "",
"columns[29][search][regex]": "false",
"columns[30][data]": "30",
"columns[30][name]": "",
"columns[30][searchable]": "true",
"columns[30][orderable]": "true",
"columns[30][search][value]": "",
"columns[30][search][regex]": "false",
"order[0][column]": "1",
"order[0][dir]": "asc",
"start": None,
"length": "50",
"search[value]": "",
"search[regex]": "false",
"values[FIPS]": None, # critical
"values[rankType]": "0",
"values[Level]": "2",
"values[resAggregateYear]": "2017",
"values[reqAggregateYear]": "2017",
"values[GSHInclude]": "07,08,09,10,11,12",
"values[resGSLInclude]": "PK,KG,01,02,03,04,05,06,07,08",
"values[resGSHL]": "8",
"values[resGSHH]": "13",
"values[resGSLL]": "0",
"values[resGSLH]": "9",
"values[reqGSHL]": "",
"values[reqGSHH]": "",
"values[reqGSLL]": "",
"values[reqGSLH]": "",
"values[resYear]": "2018",
"values[resYearCompare]": "2017",
"values[reqYear]": "",
"values[reqYearCompare]": "",
"values[reqEntityID]": "",
"values[reqEntitySubID]": "",
"values[clientViewing]": "0",
"values[chartType]": "h",
}
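# Same DataTables payload shape, for level-3 (high school) rankings.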
form_l3 = {
"draw": None,
"columns[0][data]": "0",
"columns[0][name]": "",
"columns[0][searchable]": "false",
"columns[0][orderable]": "true",
"columns[0][search][value]": "",
"columns[0][search][regex]": "false",
"columns[1][data]": "1",
"columns[1][name]": "",
"columns[1][searchable]": "true",
"columns[1][orderable]": "true",
"columns[1][search][value]": "",
"columns[1][search][regex]": "false",
"columns[2][data]": "2",
"columns[2][name]": "",
"columns[2][searchable]": "true",
"columns[2][orderable]": "true",
"columns[2][search][value]": "",
"columns[2][search][regex]": "false",
"columns[3][data]": "3",
"columns[3][name]": "",
"columns[3][searchable]": "true",
"columns[3][orderable]": "true",
"columns[3][search][value]": "",
"columns[3][search][regex]": "false",
"columns[4][data]": "4",
"columns[4][name]": "",
"columns[4][searchable]": "true",
"columns[4][orderable]": "true",
"columns[4][search][value]": "",
"columns[4][search][regex]": "false",
"columns[5][data]": "5",
"columns[5][name]": "",
"columns[5][searchable]": "true",
"columns[5][orderable]": "true",
"columns[5][search][value]": "",
"columns[5][search][regex]": "false",
"columns[6][data]": "6",
"columns[6][name]": "",
"columns[6][searchable]": "true",
"columns[6][orderable]": "true",
"columns[6][search][value]": "",
"columns[6][search][regex]": "false",
"columns[7][data]": "7",
"columns[7][name]": "",
"columns[7][searchable]": "true",
"columns[7][orderable]": "true",
"columns[7][search][value]": "",
"columns[7][search][regex]": "false",
"columns[8][data]": "8",
"columns[8][name]": "",
"columns[8][searchable]": "true",
"columns[8][orderable]": "true",
"columns[8][search][value]": "",
"columns[8][search][regex]": "false",
"columns[9][data]": "9",
"columns[9][name]": "",
"columns[9][searchable]": "true",
"columns[9][orderable]": "true",
"columns[9][search][value]": "",
"columns[9][search][regex]": "false",
"columns[10][data]": "10",
"columns[10][name]": "",
"columns[10][searchable]": "true",
"columns[10][orderable]": "true",
"columns[10][search][value]": "",
"columns[10][search][regex]": "false",
"columns[11][data]": "11",
"columns[11][name]": "",
"columns[11][searchable]": "true",
"columns[11][orderable]": "true",
"columns[11][search][value]": "",
"columns[11][search][regex]": "false",
"columns[12][data]": "12",
"columns[12][name]": "",
"columns[12][searchable]": "true",
"columns[12][orderable]": "true",
"columns[12][search][value]": "",
"columns[12][search][regex]": "false",
"columns[13][data]": "13",
"columns[13][name]": "",
"columns[13][searchable]": "true",
"columns[13][orderable]": "true",
"columns[13][search][value]": "",
"columns[13][search][regex]": "false",
"columns[14][data]": "14",
"columns[14][name]": "",
"columns[14][searchable]": "true",
"columns[14][orderable]": "false",
"columns[14][search][value]": "",
"columns[14][search][regex]": "false",
"columns[15][data]": "15",
"columns[15][name]": "",
"columns[15][searchable]": "true",
"columns[15][orderable]": "true",
"columns[15][search][value]": "",
"columns[15][search][regex]": "false",
"columns[16][data]": "16",
"columns[16][name]": "",
"columns[16][searchable]": "true",
"columns[16][orderable]": "true",
"columns[16][search][value]": "",
"columns[16][search][regex]": "false",
"columns[17][data]": "17",
"columns[17][name]": "",
"columns[17][searchable]": "true",
"columns[17][orderable]": "true",
"columns[17][search][value]": "",
"columns[17][search][regex]": "false",
"columns[18][data]": "18",
"columns[18][name]": "",
"columns[18][searchable]": "true",
"columns[18][orderable]": "true",
"columns[18][search][value]": "",
"columns[18][search][regex]": "false",
"columns[19][data]": "19",
"columns[19][name]": "",
"columns[19][searchable]": "true",
"columns[19][orderable]": "true",
"columns[19][search][value]": "",
"columns[19][search][regex]": "false",
"columns[20][data]": "20",
"columns[20][name]": "",
"columns[20][searchable]": "true",
"columns[20][orderable]": "true",
"columns[20][search][value]": "",
"columns[20][search][regex]": "false",
"columns[21][data]": "21",
"columns[21][name]": "",
"columns[21][searchable]": "true",
"columns[21][orderable]": "true",
"columns[21][search][value]": "",
"columns[21][search][regex]": "false",
"columns[22][data]": "22",
"columns[22][name]": "",
"columns[22][searchable]": "true",
"columns[22][orderable]": "true",
"columns[22][search][value]": "",
"columns[22][search][regex]": "false",
"columns[23][data]": "23",
"columns[23][name]": "",
"columns[23][searchable]": "true",
"columns[23][orderable]": "true",
"columns[23][search][value]": "",
"columns[23][search][regex]": "false",
"columns[24][data]": "24",
"columns[24][name]": "",
"columns[24][searchable]": "true",
"columns[24][orderable]": "true",
"columns[24][search][value]": "",
"columns[24][search][regex]": "false",
"columns[25][data]": "25",
"columns[25][name]": "",
"columns[25][searchable]": "true",
"columns[25][orderable]": "true",
"columns[25][search][value]": "",
"columns[25][search][regex]": "false",
"columns[26][data]": "26",
"columns[26][name]": "",
"columns[26][searchable]": "true",
"columns[26][orderable]": "true",
"columns[26][search][value]": "",
"columns[26][search][regex]": "false",
"columns[27][data]": "27",
"columns[27][name]": "",
"columns[27][searchable]": "true",
"columns[27][orderable]": "true",
"columns[27][search][value]": "",
"columns[27][search][regex]": "false",
"columns[28][data]": "28",
"columns[28][name]": "",
"columns[28][searchable]": "true",
"columns[28][orderable]": "true",
"columns[28][search][value]": "",
"columns[28][search][regex]": "false",
"columns[29][data]": "29",
"columns[29][name]": "",
"columns[29][searchable]": "true",
"columns[29][orderable]": "true",
"columns[29][search][value]": "",
"columns[29][search][regex]": "false",
"columns[30][data]": "30",
"columns[30][name]": "",
"columns[30][searchable]": "true",
"columns[30][orderable]": "true",
"columns[30][search][value]": "",
"columns[30][search][regex]": "false",
"order[0][column]": "1",
"order[0][dir]": "asc",
"start": None,
"length": "50",
"search[value]": "",
"search[regex]": "false",
"values[FIPS]": None, # critical
"values[rankType]": "0",
"values[Level]": "3",
"values[resAggregateYear]": "2017",
"values[reqAggregateYear]": "2017",
"values[GSHInclude]": "12",
"values[resGSHL]": "13",
"values[resGSHH]": "13",
"values[resGSLL]": "0",
"values[resGSLH]": "13",
"values[reqGSHL]": "",
"values[reqGSHH]": "",
"values[reqGSLL]": "",
"values[reqGSLH]": "",
"values[resYear]": "2018",
"values[resYearCompare]": "2017",
"values[reqYear]": "",
"values[reqYearCompare]": "",
"values[reqEntityID]": "",
"values[reqEntitySubID]": "",
"values[clientViewing]": "0",
"values[chartType]": "h"
}
# Map each school ranking level (1 = elementary, 2 = middle, 3 = high) to its request form.
forms = {
1: form_l1,
2: form_l2,
3: form_l3,
}
school_level = {1: "elementary", 2: "middle", 3: "high"}
school_level_to_year = {1: "PK,KG,01,02,03,04", 2: "05,06,07,08", 3: "09,10,11,12"}
entry_point = "https://www.schooldigger.com/aj/ajRankData1.aspx"
state_codes = [
"AL",
"AK",
"AZ",
"AR",
"CA",
"CO",
"CT",
"DE",
"DC",
"FL",
"GA",
"HI",
"ID",
"IL",
"IN",
"IA",
"KS",
"KY",
"LA",
"ME",
"MD",
"MA",
"MI",
"MN",
"MS",
"MO",
"MT",
"NE",
"NV",
"NH",
"NJ",
"NM",
"NY",
"NC",
"ND",
"OH",
"OK",
"OR",
"PA",
"RI",
"SC",
"SD",
"TN",
"TX",
"UT",
"VT",
"VA",
"WA",
"WV",
"WI",
"WY",
]
state_to_fips = {
"AL": "01",
"AK": "02",
"AZ": "04",
"AR": "05",
"CA": "06",
"CO": "08",
"CT": "09",
"DE": "10",
"FL": "12",
"GA": "13",
"HI": "15",
"ID": "16",
"IL": "17",
"IN": "18",
"IA": "19",
"KS": "20",
"KY": "21",
"LA": "22",
"ME": "23",
"MD": "24",
"MA": "25",
"MI": "26",
"MN": "27",
"MS": "28",
"MO": "29",
"MT": "30",
"NE": "31",
"NV": "32",
"NH": "33",
"NJ": "34",
"NM": "35",
"NY": "36",
"NC": "37",
"ND": "38",
"OH": "39",
"OK": "40",
"OR": "41",
"PA": "42",
"RI": "44",
"SC": "45",
"SD": "46",
"TN": "47",
"TX": "48",
"UT": "49",
"VT": "50",
"VA": "51",
"WA": "53",
"WV": "54",
"WI": "55",
"WY": "56",
"DC": "11",
"AS": "60",
"GU": "66",
"MP": "69",
"PR": "72",
"VI": "78",
}
| 31.876528 | 83 | 0.526059 | 2,937 | 26,075 | 4.663262 | 0.086823 | 0.144568 | 0.11215 | 0.151139 | 0.943268 | 0.943268 | 0.942392 | 0.933484 | 0.909828 | 0.909828 | 0 | 0.068099 | 0.183969 | 26,075 | 817 | 84 | 31.915545 | 0.575571 | 0.01162 | 0 | 0.813517 | 0 | 0.002503 | 0.633359 | 0.370641 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
54471ec06c9772f0c2a847d5fbb8db067e232da6 | 12,975 | py | Python | MirrorMirror/augmenters/pooling.py | RubanSeven/MirrorMirror | 47c7a1f458f87c536d068fcf249625f426920cc3 | [
"Apache-2.0"
] | 2 | 2021-07-07T13:21:11.000Z | 2021-09-24T06:57:16.000Z | MirrorMirror/augmenters/pooling.py | RubanSeven/MirrorMirror | 47c7a1f458f87c536d068fcf249625f426920cc3 | [
"Apache-2.0"
] | null | null | null | MirrorMirror/augmenters/pooling.py | RubanSeven/MirrorMirror | 47c7a1f458f87c536d068fcf249625f426920cc3 | [
"Apache-2.0"
] | null | null | null | # -*- coding:utf-8 -*-
"""
@author: RubanSeven
@project: MirrorMirror
"""
from .param import *
from PyQt5.QtWidgets import *
from imgaug.augmenters.pooling import *
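# Each widget below exposes one imgaug pooling augmenter (Average, Max, Median,
# Min) with an identical kernel_size / keep_size parameter tree for the editor.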
class AveragePoolingWidget(QTreeWidgetItem):
def __init__(self, *__args):
super().__init__(*__args)
self.name = 'AveragePooling'
self.setText(0, self.name)
self.params = self.__init_param()
self.has_child = False
self.describe = ''
@staticmethod
def __init_param():
params = dict()
params['kernel_size'] = ChoiceParam(
name='kernel_size',
value_range={
'Int': IntParam(
value_range=(1, 1e8),
default=3
),
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
),
'Dict': DictParam(
children={
"height": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
),
"width": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
)
}
)
},
default='Int Range'
)
params['keep_size'] = EnumParam(
name='keep_size',
value_range=('True', 'False'),
default='True'
)
return params
def to_aug(self):
kernel_size = self.params['kernel_size'].get_value()
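        # The "Dict" parameter choice yields {"height": ..., "width": ...};
        # convert it to the (height, width) tuple form imgaug expects.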
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return AveragePooling(
kernel_size=kernel_size,
keep_size=self.params['keep_size'].get_value()
)
def to_code(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return 'iaa.AveragePooling(\nkernel_size={},\nkeep_size={}\n)'.format(
kernel_size,
self.params['keep_size'].get_value()
)
class MaxPoolingWidget(QTreeWidgetItem):
def __init__(self, *__args):
super().__init__(*__args)
self.name = 'MaxPooling'
self.setText(0, self.name)
self.params = self.__init_param()
self.has_child = False
self.describe = ''
@staticmethod
def __init_param():
params = dict()
params['kernel_size'] = ChoiceParam(
name='kernel_size',
value_range={
'Int': IntParam(
value_range=(1, 1e8),
default=3
),
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
),
'Dict': DictParam(
children={
"height": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
),
"width": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
)
}
)
},
default='Int Range'
)
params['keep_size'] = EnumParam(
name='keep_size',
value_range=('True', 'False'),
default='True'
)
return params
def to_aug(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return MaxPooling(
kernel_size=kernel_size,
keep_size=self.params['keep_size'].get_value()
)
def to_code(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return 'iaa.MaxPooling(\nkernel_size={},\nkeep_size={}\n)'.format(
kernel_size,
self.params['keep_size'].get_value()
)
class MedianPoolingWidget(QTreeWidgetItem):
def __init__(self, *__args):
super().__init__(*__args)
self.name = 'MedianPooling'
self.setText(0, self.name)
self.params = self.__init_param()
self.has_child = False
self.describe = ''
@staticmethod
def __init_param():
params = dict()
params['kernel_size'] = ChoiceParam(
name='kernel_size',
value_range={
'Int': IntParam(
value_range=(1, 1e8),
default=3
),
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
),
'Dict': DictParam(
children={
"height": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
),
"width": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
)
}
)
},
default='Int Range'
)
params['keep_size'] = EnumParam(
name='keep_size',
value_range=('True', 'False'),
default='True'
)
return params
def to_aug(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return MedianPooling(
kernel_size=kernel_size,
keep_size=self.params['keep_size'].get_value()
)
def to_code(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return 'iaa.MedianPooling(\nkernel_size={},\nkeep_size={}\n)'.format(
kernel_size,
self.params['keep_size'].get_value()
)
class MinPoolingWidget(QTreeWidgetItem):
def __init__(self, *__args):
super().__init__(*__args)
self.name = 'MinPooling'
self.setText(0, self.name)
self.params = self.__init_param()
self.has_child = False
self.describe = ''
@staticmethod
def __init_param():
params = dict()
params['kernel_size'] = ChoiceParam(
name='kernel_size',
value_range={
'Int': IntParam(
value_range=(1, 1e8),
default=3
),
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
),
'Dict': DictParam(
children={
"height": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
),
"width": ChoiceParam(
value_range={
'Int Range': IntRangeParam(
value_ranges=((1, 1e8), (1, 1e8)),
default=(1, 5)
),
'Int List': IntListParam(
value_range=(1, 1e8),
default='1,3,5'
)
},
default='Int List'
)
}
)
},
default='Int Range'
)
params['keep_size'] = EnumParam(
name='keep_size',
value_range=('True', 'False'),
default='True'
)
return params
def to_aug(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return MinPooling(
kernel_size=kernel_size,
keep_size=self.params['keep_size'].get_value()
)
def to_code(self):
kernel_size = self.params['kernel_size'].get_value()
        if isinstance(kernel_size, dict):
kernel_size = (kernel_size['height'], kernel_size['width'])
return 'iaa.MinPooling(\nkernel_size={},\nkeep_size={}\n)'.format(
kernel_size,
self.params['keep_size'].get_value()
)
| 36.243017 | 79 | 0.37526 | 975 | 12,975 | 4.747692 | 0.077949 | 0.1469 | 0.066537 | 0.062216 | 0.927846 | 0.927846 | 0.927846 | 0.927846 | 0.927846 | 0.927846 | 0 | 0.030571 | 0.521002 | 12,975 | 357 | 80 | 36.344538 | 0.71424 | 0.004933 | 0 | 0.795107 | 0 | 0 | 0.091344 | 0.01618 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04893 | false | 0 | 0.009174 | 0 | 0.107034 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
54a0fa8c70c3defac69fabca05c8df238c3cfa5a | 4,831 | py | Python | tests/core/transform/PythonTest.py | niklas2902/py4godot | bf50624d1fc94b55faf82a3a4d322e33fbba60ce | [
"MIT"
] | 2 | 2021-12-10T21:17:57.000Z | 2021-12-17T18:54:49.000Z | tests/core/transform/PythonTest.py | niklas2902/py4godot | bf50624d1fc94b55faf82a3a4d322e33fbba60ce | [
"MIT"
] | 9 | 2021-12-21T18:35:28.000Z | 2022-03-27T20:03:50.000Z | tests/core/transform/PythonTest.py | niklas2902/py4godot | bf50624d1fc94b55faf82a3a4d322e33fbba60ce | [
"MIT"
] | 1 | 2022-03-07T08:06:57.000Z | 2022-03-07T08:06:57.000Z | import unittest
from py4godot import Transform, Vector3, Basis, Plane, AABB
class PythonTest(unittest.TestCase):
def test_new_with_axis_origin(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(0,0,0))
self.assertEqual(transform, Transform(Basis(), Vector3(0,0,0)) )
def test_set_basis(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(0,0,0))
transform.set_basis(Basis.new_with_rows(Vector3(3,0,0), Vector3(0,3,0), Vector3(0,0,3)))
self.assertEqual(transform.get_basis(), Basis.new_with_rows(Vector3(3,0,0), Vector3(0,3,0), Vector3(0,0,3)))
def test_set_origin(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(0,0,0))
transform.set_origin(Vector3(1,2,3))
self.assertEqual(transform.get_origin(), Vector3(1,2,3))
def test_inverse(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.inverse()
self.assertEqual(transform, Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(-1,-2,-3)))
def test_affine_inverse(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.affine_inverse()
self.assertEqual(transform, Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(-1,-2,-3)))
def test_orthonormalized(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.orthonormalized()
self.assertEqual(transform,
Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3)))
def test_rotated(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
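        # 1.5707963267948966 rad is pi/2, i.e. a 90-degree rotation around the X axis.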
transform = transform.rotated(Vector3(1,0,0), 1.5707963267948966)
self.assertEqual(str(transform),"1, 0, 0, 0, -0, -1, 0, 1, -0 - 1, -3, 2" )
def test_scaled(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.scaled(Vector3(2,3,4))
self.assertEqual(transform, Transform.new_with_axis_origin(Vector3(2,0,0),Vector3(0,3,0), Vector3(0,0,4), Vector3(2,6,12)))
def test_looking_at(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.looking_at(Vector3(3,3,3), Vector3(0,0,1))
self.assertEqual(str(transform), "0.447214, 0, -0.894427, -0.894427, 0, -0.447214, 0, 1, 0 - 1, 2, 3")
def test_xform_plane(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.xform_plane(Plane(1,2,3,4))
self.assertEqual(str(transform), "0.267261, 0.534522, 0.801784, 18.708286")
def test_xform_inv_plane(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.xform_inv_plane(Plane(1,2,3,4))
self.assertEqual(str(transform), "0.267261, 0.534522, 0.801784, 11.224972")
def test_xform_aabb(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
        transform = transform.xform_aabb(AABB(Vector3(1,1,1), Vector3(1,2,3)))
self.assertEqual(str(transform), "2, 3, 4 - 1, 2, 3")
def test_xform_inv_aabb(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
        transform = transform.xform_inv_aabb(AABB(Vector3(1,1,1), Vector3(1,2,3)))
self.assertEqual(str(transform), "0, -1, -2 - 1, 2, 3")
    def test_xform_vector3(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.xform_vector3(Vector3(1,2,3))
self.assertEqual(str(transform), "0, 0, 0")
def test_xform_inv_vector3(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform = transform.xform_inv_vector3(Vector3(1,2,3))
self.assertEqual(str(transform), "0, 0, 0")
def test_mult(self):
transform = Transform.new_with_axis_origin(Vector3(1,0,0),Vector3(0,1,0), Vector3(0,0,1), Vector3(1,2,3))
transform2 = Transform.new_with_axis_origin(Vector3(2,0,0),Vector3(0,3,0), Vector3(0,0,4), Vector3(1,2,3))
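        # The * operator and the explicit mult() must compose the transforms identically.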
self.assertEqual(transform*transform2, Transform.new_with_axis_origin(Vector3(2,0,0),Vector3(0,3,0), Vector3(0,0,4), Vector3(2,4,6)))
self.assertEqual(transform.mult(transform2), Transform.new_with_axis_origin(Vector3(2,0,0),Vector3(0,3,0), Vector3(0,0,4), Vector3(2,4,6)))
| 54.280899 | 141 | 0.721797 | 876 | 4,831 | 3.842466 | 0.059361 | 0.040998 | 0.13369 | 0.074272 | 0.856803 | 0.832145 | 0.822638 | 0.798277 | 0.798277 | 0.79085 | 0 | 0.136704 | 0.086939 | 4,831 | 88 | 142 | 54.897727 | 0.626389 | 0 | 0 | 0.323529 | 0 | 0.029412 | 0.04823 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.235294 | false | 0 | 0.029412 | 0 | 0.279412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
49a7907febafcc95d317892f3b865773adfa899d | 107 | py | Python | Python/Advanced OOP/Inheritance/Zoo/02. Bear.py | teodoramilcheva/softuni-software-engineering | 98dc9faa66f42570f6538fd7ef186d2bd1d39bff | [
"MIT"
] | null | null | null | Python/Advanced OOP/Inheritance/Zoo/02. Bear.py | teodoramilcheva/softuni-software-engineering | 98dc9faa66f42570f6538fd7ef186d2bd1d39bff | [
"MIT"
] | null | null | null | Python/Advanced OOP/Inheritance/Zoo/02. Bear.py | teodoramilcheva/softuni-software-engineering | 98dc9faa66f42570f6538fd7ef186d2bd1d39bff | [
"MIT"
] | null | null | null | from project.lizard import Lizard
from project.mammal import Mammal
class Bear(Mammal):
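    # Bear adds no behaviour of its own; it simply inherits everything from Mammal.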
pass
| 13.375 | 34 | 0.719626 | 14 | 107 | 5.5 | 0.571429 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233645 | 107 | 7 | 35 | 15.285714 | 0.939024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
49aa200b97990b1a452beb7026b2cfd73be13fd6 | 491 | py | Python | Iniciante/1847 - Bem-vindos e Bem-vindas ao Inverno!.py | PedroTejon/Beecrowd-Online-Judge-URI- | 5d681ff1b4db452cb45e4013c8570f83824e34c2 | [
"MIT"
] | 2 | 2021-12-14T01:40:54.000Z | 2021-12-24T22:33:22.000Z | Iniciante/1847 - Bem-vindos e Bem-vindas ao Inverno!.py | PedroTejon/Beecrowd-Online-Judge-URI | 5d681ff1b4db452cb45e4013c8570f83824e34c2 | [
"MIT"
] | null | null | null | Iniciante/1847 - Bem-vindos e Bem-vindas ao Inverno!.py | PedroTejon/Beecrowd-Online-Judge-URI | 5d681ff1b4db452cb45e4013c8570f83824e34c2 | [
"MIT"
] | null | null | null | dia1, dia2, dia3 = map(int, input().split())
if (dia1 > dia2 <= dia3
        or (dia1 < dia2 < dia3 and dia2 - dia1 <= dia3 - dia2)
        or (dia1 > dia2 > dia3 and dia1 - dia2 > dia2 - dia3)):
    print(':)')
elif (dia1 < dia2 >= dia3
        or (dia1 < dia2 < dia3 and dia2 - dia1 > dia3 - dia2)
        or (dia1 > dia2 > dia3 and dia1 - dia2 <= dia2 - dia3)):
    print(':(')
elif dia1 == dia2 and dia2 < dia3:
print(':)')
elif dia1 == dia2 and dia2 > dia3:
print(':(')
elif dia1 == dia2 == dia3:
print(':(') | 37.769231 | 134 | 0.564155 | 72 | 491 | 3.847222 | 0.166667 | 0.34657 | 0.34657 | 0.202166 | 0.873646 | 0.873646 | 0.873646 | 0.859206 | 0.859206 | 0.859206 | 0 | 0.130081 | 0.248473 | 491 | 13 | 135 | 37.769231 | 0.620596 | 0 | 0 | 0.454545 | 0 | 0 | 0.020325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.454545 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
49e01b8fc896c864afd9de1411f42fc344b3c2c0 | 29,616 | py | Python | tests/test_spatial.py | jasonb5/xcdat | 4a35d6a6131fe3fec22593f54a9e48b640ceac4f | [
"Apache-2.0"
] | 10 | 2021-11-18T08:16:39.000Z | 2022-03-31T19:01:14.000Z | tests/test_spatial.py | jasonb5/xcdat | 4a35d6a6131fe3fec22593f54a9e48b640ceac4f | [
"Apache-2.0"
] | 96 | 2021-09-16T21:20:46.000Z | 2022-03-30T16:40:05.000Z | tests/test_spatial.py | XCDAT/xcdat | 514f3ab2402e8fc47ae6083dc24d47185c2a2389 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pytest
import xarray as xr
from tests import requires_dask
from tests.fixtures import generate_dataset
from xcdat.spatial import SpatialAccessor
class TestSpatialAccessor:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test__init__(self):
ds = self.ds.copy()
obj = SpatialAccessor(ds)
assert obj._dataset.identical(ds)
def test_decorator_call(self):
ds = self.ds.copy()
obj = ds.spatial
assert obj._dataset.identical(ds)
class TestAverage:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
# Limit to just 3 data points to simplify testing.
self.ds = self.ds.isel(time=slice(None, 3))
# Change the value of the first element so that it is easier to identify
# changes in the output.
self.ds["ts"].data[0] = np.full((4, 4), 2.25)
def test_raises_error_if_data_var_not_in_dataset(self):
with pytest.raises(KeyError):
self.ds.spatial.average(
"not_a_data_var",
axis=["Y", "incorrect_axis"],
)
def test_spatial_average_for_lat_and_lon_region_using_custom_weights(self):
ds = self.ds.copy()
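        # Non-uniform weights (each row a multiple of [1, 2, 3, 4]) so the test
        # exercises the caller-supplied-weights path instead of generated area weights.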
weights = xr.DataArray(
data=np.array([[1, 2, 3, 4], [2, 4, 6, 8], [3, 6, 9, 12], [4, 8, 12, 16]]),
coords={"lat": ds.lat, "lon": ds.lon},
dims=["lat", "lon"],
)
result = ds.spatial.average(
axis=["X", "Y"],
lat_bounds=(-5.0, 5),
lon_bounds=(-170, -120.1),
weights=weights,
data_var="ts",
)
expected = self.ds.copy()
expected["ts"] = xr.DataArray(
data=np.array([2.25, 1.0, 1.0]),
coords={"time": expected.time},
dims="time",
)
assert result.identical(expected)
def test_spatial_average_for_lat_and_lon_region(self):
ds = self.ds.copy()
result = ds.spatial.average(
"ts", axis=["X", "Y"], lat_bounds=(-5.0, 5), lon_bounds=(-170, -120.1)
)
expected = self.ds.copy()
expected["ts"] = xr.DataArray(
data=np.array([2.25, 1.0, 1.0]),
coords={"time": expected.time},
dims="time",
)
assert result.identical(expected)
def test_spatial_average_for_lat_region(self):
ds = self.ds.copy()
        # Average over the Y axis only, within the given lat/lon bounds.
result = ds.spatial.average(
"ts", axis=["Y"], lat_bounds=(-5.0, 5), lon_bounds=(-170, -120.1)
)
expected = self.ds.copy()
expected["ts"] = xr.DataArray(
data=np.array(
[[2.25, 2.25, 2.25, 2.25], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]
),
coords={"time": expected.time, "lon": expected.lon},
dims=["time", "lon"],
)
assert result.identical(expected)
@requires_dask
def test_chunked_spatial_average_for_lat_region(self):
ds = self.ds.copy().chunk(2)
        # Same Y-axis averaging, but on a Dask-chunked dataset.
result = ds.spatial.average(
"ts", axis=["Y"], lat_bounds=(-5.0, 5), lon_bounds=(-170, -120.1)
)
expected = self.ds.copy()
expected["ts"] = xr.DataArray(
data=np.array(
[[2.25, 2.25, 2.25, 2.25], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]
),
coords={"time": expected.time, "lon": expected.lon},
dims=["time", "lon"],
)
assert result.identical(expected)
class TestValidateAxisArg:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_raises_error_if_axis_list_contains_unsupported_axis(self):
with pytest.raises(ValueError):
self.ds.spatial._validate_axis_arg(axis=["Y", "incorrect_axis"])
def test_raises_error_if_lat_axis_does_not_exist(self):
ds = self.ds.copy()
ds.lat.attrs["axis"] = None
with pytest.raises(KeyError):
ds.spatial._validate_axis_arg(axis=["X", "Y"])
def test_raises_error_if_lon_axis_does_not_exist(self):
ds = self.ds.copy()
ds.lon.attrs["axis"] = None
with pytest.raises(KeyError):
ds.spatial._validate_axis_arg(axis=["X", "Y"])
class TestValidateRegionBounds:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_raises_error_if_bounds_type_is_not_a_tuple(self):
with pytest.raises(TypeError):
self.ds.spatial._validate_region_bounds("lon", [1, 1])
with pytest.raises(TypeError):
self.ds.spatial._validate_region_bounds("lon", "str")
def test_raises_error_if_there_are_0_elements_in_the_bounds(self):
with pytest.raises(ValueError):
self.ds.spatial._validate_region_bounds("lon", ())
def test_raises_error_if_there_are_more_than_two_elements_in_the_bounds(self):
with pytest.raises(ValueError):
self.ds.spatial._validate_region_bounds("lon", (1, 1, 2))
def test_does_not_raise_error_if_lower_and_upper_bounds_are_floats_or_ints(self):
self.ds.spatial._validate_region_bounds("lon", (1, 1))
self.ds.spatial._validate_region_bounds("lon", (1, 1.2))
def test_raises_error_if_lower_bound_is_not_a_float_or_int(self):
with pytest.raises(TypeError):
self.ds.spatial._validate_region_bounds("Y", ("invalid", 1))
def test_raises_error_if_upper_bound_is_not_a_float_or_int(self):
with pytest.raises(TypeError):
self.ds.spatial._validate_region_bounds("X", (1, "invalid"))
def test_raises_error_if_lower_lat_bound_is_bigger_than_upper(self):
with pytest.raises(ValueError):
self.ds.spatial._validate_region_bounds("Y", (2, 1))
def test_does_not_raise_error_if_lon_lower_bound_is_larger_than_upper(self):
self.ds.spatial._validate_region_bounds("X", (2, 1))
class TestValidateWeights:
    @pytest.fixture(autouse=True)
    def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
self.weights = xr.DataArray(
data=np.ones((4, 4)),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
def test_no_error_is_raised_when_spatial_dim_sizes_align_between_weights_and_data_var(
self,
):
weights = xr.DataArray(
data=np.ones((4, 4)),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
self.ds.spatial._validate_weights(self.ds["ts"], axis=["Y"], weights=weights)
def test_error_is_raised_when_lat_axis_is_specified_but_lat_is_not_in_weights_dims(
self,
):
weights = xr.DataArray(
data=np.ones(4), coords={"lon": self.ds.lon}, dims=["lon"]
)
with pytest.raises(KeyError):
self.ds.spatial._validate_weights(
self.ds["ts"], axis=["X", "Y"], weights=weights
)
def test_error_is_raised_when_lon_axis_is_specified_but_lon_is_not_in_weights_dims(
self,
):
weights = xr.DataArray(
data=np.ones(4), coords={"lat": self.ds.lat}, dims=["lat"]
)
with pytest.raises(KeyError):
self.ds.spatial._validate_weights(
self.ds["ts"], axis=["X", "Y"], weights=weights
)
def test_error_is_raised_when_weights_lat_and_lon_dims_dont_align_with_data_var_dims(
self,
):
# Get a slice of the dataset to reduce the size of the dimensions for
# simpler testing.
ds = self.ds.isel(lat=slice(0, 3), lon=slice(0, 3))
weights = xr.DataArray(
data=np.ones((3, 3)),
coords={"lat": ds.lat, "lon": ds.lon},
dims=["lat", "lon"],
)
with pytest.raises(ValueError):
self.ds.spatial._validate_weights(
self.ds["ts"], axis=["X", "Y"], weights=weights
)
class TestSwapLonAxis:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_raises_error_with_incorrect_orientation_to_swap_to(self):
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[-65, -5], [-5, 0], [0, 120]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
with pytest.raises(ValueError):
self.ds.spatial._swap_lon_axis(domain, to=9000)
@requires_dask
def test_swap_chunked_domain_dataarray_from_180_to_360(self):
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[-65, -5], [-5, 0], [0, 120]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
).chunk(2)
result = self.ds.spatial._swap_lon_axis(domain, to=360)
expected = xr.DataArray(
name="lon_bnds",
data=np.array([[295, 355], [355, 0], [0, 120]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
assert result.identical(expected)
@requires_dask
def test_swap_chunked_domain_dataarray_from_360_to_180(self):
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[0, 120], [120, 181], [181, 360]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
).chunk(2)
result = self.ds.spatial._swap_lon_axis(domain, to=180)
expected = xr.DataArray(
name="lon_bnds",
data=np.array([[0, 120], [120, -179], [-179, 0]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
assert result.identical(expected)
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[-0.25, 120], [120, 359.75]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
).chunk(2)
result = self.ds.spatial._swap_lon_axis(domain, to=180)
expected = xr.DataArray(
name="lon_bnds",
data=np.array([[-0.25, 120], [120, -0.25]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
assert result.identical(expected)
def test_swap_domain_dataarray_from_180_to_360(self):
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[-65, -5], [-5, 0], [0, 120]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
result = self.ds.spatial._swap_lon_axis(domain, to=360)
expected = xr.DataArray(
name="lon_bnds",
data=np.array([[295, 355], [355, 0], [0, 120]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
assert result.identical(expected)
def test_swap_domain_dataarray_from_360_to_180(self):
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[0, 120], [120, 181], [181, 360]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
result = self.ds.spatial._swap_lon_axis(domain, to=180)
expected = xr.DataArray(
name="lon_bnds",
data=np.array([[0, 120], [120, -179], [-179, 0]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
assert result.identical(expected)
domain = xr.DataArray(
name="lon_bnds",
data=np.array([[-0.25, 120], [120, 359.75]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
result = self.ds.spatial._swap_lon_axis(domain, to=180)
expected = xr.DataArray(
name="lon_bnds",
data=np.array([[-0.25, 120], [120, -0.25]]),
dims=["lon", "bnds"],
attrs={"is_generated": "True"},
)
assert result.identical(expected)
def test_swap_region_ndarray_from_180_to_360(self):
result = self.ds.spatial._swap_lon_axis(np.array([-65, 0, 120]), to=360)
expected = np.array([295, 0, 120])
assert np.array_equal(result, expected)
result = self.ds.spatial._swap_lon_axis(np.array([-180, 0, 180]), to=360)
expected = np.array([180, 0, 180])
assert np.array_equal(result, expected)
def test_swap_region_ndarray_from_360_to_180(self):
result = self.ds.spatial._swap_lon_axis(np.array([0, 120, 181, 360]), to=180)
expected = np.array([0, 120, -179, 0])
assert np.array_equal(result, expected)
result = self.ds.spatial._swap_lon_axis(np.array([-0.25, 120, 359.75]), to=180)
expected = np.array([-0.25, 120, -0.25])
assert np.array_equal(result, expected)
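# The expected values above follow the usual modular-arithmetic conversions
# (a sketch of the rule, not necessarily the exact implementation under test):
#   to (0, 360):     lon % 360                   e.g. -65 % 360 == 295
#   to (-180, 180):  (lon + 180) % 360 - 180     e.g. 181 -> -179, 359.75 -> -0.25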
class TestGetWeights:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_weights_for_region_in_lat_and_lon_domains(self):
result = self.ds.spatial._get_weights(
axis=["Y", "X"], lat_bounds=(-5, 5), lon_bounds=(-170, -120)
)
expected = xr.DataArray(
data=np.array(
[
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 4.35778714, 0.0],
[0.0, 0.0, 4.35778714, 0.0],
[0.0, 0.0, 0.0, 0.0],
]
),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
xr.testing.assert_allclose(result, expected)
def test_area_weights_for_region_in_lat_domain(self):
result = self.ds.spatial._get_weights(
axis=["Y", "X"], lat_bounds=(-5, 5), lon_bounds=None
)
expected = xr.DataArray(
data=np.array(
[
[0.0, 0.0, 0.0, 0.0],
[0.16341702, 15.52461668, 15.52461668, 0.16341702],
[0.16341702, 15.52461668, 15.52461668, 0.16341702],
[0.0, 0.0, 0.0, 0.0],
]
),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
xr.testing.assert_allclose(result, expected)
def test_weights_for_region_in_lon_domain(self):
expected = xr.DataArray(
data=np.array(
[
[0.0, 0.0, 0.00297475, 0.0],
[0.0, 0.0, 49.99702525, 0.0],
[0.0, 0.0, 49.99702525, 0.0],
[0.0, 0.0, 0.00297475, 0.0],
]
),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
result = self.ds.spatial._get_weights(
axis=["Y", "X"], lat_bounds=None, lon_bounds=(-170, -120)
)
xr.testing.assert_allclose(result, expected)
class TestGetLongitudeWeights:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_weights_for_region_in_lon_domain(self):
# Longitude axis orientation swaps from (-180, 180) to (0, 360).
result = self.ds.spatial._get_longitude_weights(
domain_bounds=self.ds.lon_bnds.copy(),
region_bounds=np.array([-170.0, -120.0]),
)
expected = xr.DataArray(
data=np.array([0.0, 0.0, 50.0, 0.0]),
coords={"lon": self.ds.lon},
dims=["lon"],
)
xr.testing.assert_allclose(result, expected)
def test_weights_for_region_in_lon_domain_with_both_spanning_p_meridian(self):
ds = self.ds.copy()
# Domain spans prime meridian.
ds.lon_bnds.data[:] = np.array([[359, 1], [1, 90], [90, 180], [180, 359]])
result = ds.spatial._get_longitude_weights(
domain_bounds=ds.lon_bnds,
# Region spans prime meridian.
region_bounds=np.array([359, 1]),
)
expected = xr.DataArray(
data=np.array([2.0, 0.0, 0.0, 0.0]),
coords={"lon": ds.lon},
dims=["lon"],
)
xr.testing.assert_allclose(result, expected)
def test_weights_for_region_in_lon_domain_with_domain_spanning_p_meridian(self):
ds = self.ds.copy()
# Domain spans prime meridian.
ds.lon_bnds.data[:] = np.array([[359, 1], [1, 90], [90, 180], [180, 359]])
# Longitude axis orientation swaps from (-180, 180) to (0, 360).
result = ds.spatial._get_longitude_weights(
domain_bounds=ds.lon_bnds,
region_bounds=np.array([-170.0, -120.0]),
)
expected = xr.DataArray(
data=np.array([0.0, 0.0, 0.0, 50.0]),
coords={"lon": ds.lon},
dims=["lon"],
)
xr.testing.assert_allclose(result, expected)
def test_weights_for_region_in_lon_domain_with_region_spanning_p_meridian(self):
ds = self.ds.copy()
result = ds.spatial._get_longitude_weights(
domain_bounds=ds.lon_bnds,
# Region spans prime meridian.
region_bounds=np.array([359, 1]),
)
expected = xr.DataArray(
data=np.array([1.875, 0.0625, 0.0, 0.0625]),
coords={"lon": ds.lon},
dims=["lon"],
)
xr.testing.assert_allclose(result, expected)
def test_weights_all_longitudes_for_equal_region_bounds(self):
expected = xr.DataArray(
data=np.array(
[1.875, 178.125, 178.125, 1.875],
),
coords={"lon": self.ds.lon},
dims=["lon"],
)
result = self.ds.spatial._get_longitude_weights(
domain_bounds=self.ds.lon_bnds.copy(),
region_bounds=np.array([0.0, 360.0]),
)
xr.testing.assert_allclose(result, expected)
def test_weights_for_equal_region_bounds_representing_entire_lon_domain(self):
expected = xr.DataArray(
data=np.array(
[1.875, 178.125, 178.125, 1.875],
),
coords={"lon": self.ds.lon},
dims=["lon"],
)
result = self.ds.spatial._get_longitude_weights(
domain_bounds=self.ds.lon_bnds.copy(), region_bounds=np.array([10.0, 10.0])
)
xr.testing.assert_allclose(result, expected)
class TestGetLatitudeWeights:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_weights_for_region_in_lat_domain(self):
expected = xr.DataArray(
data=np.array([0.0, 0.087156, 0.087156, 0.0]),
coords={"lat": self.ds.lat},
dims=["lat"],
)
result = self.ds.spatial._get_latitude_weights(
domain_bounds=self.ds.lat_bnds, region_bounds=np.array([-5.0, 5.0])
)
xr.testing.assert_allclose(result, expected)
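# The 0.087156 values are consistent with standard latitude area weighting by
# differences of sines of the (clipped) cell bounds: sin(5 deg) - sin(0 deg)
# ~= 0.087156. This is a sanity check, not necessarily the exact formula used.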
class TestValidateDomainBounds:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_raises_error_if_low_bounds_exceeds_high_bound(self):
domain_bounds = xr.DataArray(
name="lon_bnds",
data=np.array([[1, 0], [1, 2], [2, 3], [3, 4]]),
dims=["lon", "bnds"],
)
with pytest.raises(ValueError):
self.ds.spatial._validate_domain_bounds(domain_bounds)
class TestCalculateWeights:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
def test_returns_weights_as_the_absolute_difference_of_upper_and_lower_bounds(self):
lat = xr.DataArray(
name="lat",
data=np.array([-90.0, -88.75, 88.75, 90.0]),
coords={"lat": np.array([-90.0, -88.75, 88.75, 90.0])},
dims=["lat"],
)
lat_bounds = xr.DataArray(
data=np.array(
[[-90.0, -89.375], [-89.375, 0.0], [0.0, 89.375], [89.375, 90.0]]
),
coords={"lat": lat},
dims=["lat", "bnds"],
)
result = self.ds.spatial._calculate_weights(lat_bounds)
expected = xr.DataArray(
data=np.array([0.625, 89.375, 89.375, 0.625]),
coords={"lat": lat},
dims=["lat"],
)
assert result.identical(expected)
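# Sanity check of the expected weights: each one is the absolute difference of
# a cell's bounds, e.g. |-89.375 - (-90.0)| == 0.625 and |0.0 - (-89.375)| == 89.375.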
class TestScaleDimToRegion:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
@requires_dask
def test_scales_chunked_lat_bounds_when_not_wrapping_around_prime_meridian(self):
domain_bounds = xr.DataArray(
name="lat_bnds",
data=np.array(
[[-90, -89.375], [-89.375, 0.0], [0.0, 89.375], [89.375, 90]]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
).chunk(2)
result = self.ds.spatial._scale_domain_to_region(
domain_bounds=domain_bounds, region_bounds=np.array([-5, 5])
)
expected = xr.DataArray(
name="lat_bnds",
data=np.array([[-5.0, -5.0], [-5.0, 0.0], [0.0, 5.0], [5.0, 5.0]]),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
assert result.identical(expected)
@requires_dask
def test_scales_chunked_lon_bounds_when_not_wrapping_around_prime_meridian(self):
domain_bounds = xr.DataArray(
name="lon_bnds",
data=np.array(
[
[359.0625, 360.9375],
[0.9375, 179.0625],
[179.0625, 357.1875],
[357.1875, 359.0625],
]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
).chunk(2)
result = self.ds.spatial._scale_domain_to_region(
domain_bounds=domain_bounds, region_bounds=np.array([190, 240])
)
expected = xr.DataArray(
name="lon_bnds",
data=np.array(
[[240.0, 240.0], [190.0, 190.0], [190.0, 240.0], [240.0, 240.0]]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
assert result.identical(expected)
def test_scales_lat_bounds_when_not_wrapping_around_prime_meridian(self):
domain_bounds = xr.DataArray(
name="lat_bnds",
data=np.array(
[[-90, -89.375], [-89.375, 0.0], [0.0, 89.375], [89.375, 90]]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
result = self.ds.spatial._scale_domain_to_region(
domain_bounds=domain_bounds, region_bounds=np.array([-5, 5])
)
expected = xr.DataArray(
name="lat_bnds",
data=np.array([[-5.0, -5.0], [-5.0, 0.0], [0.0, 5.0], [5.0, 5.0]]),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
assert result.identical(expected)
def test_scales_lon_bounds_when_not_wrapping_around_prime_meridian(self):
domain_bounds = xr.DataArray(
name="lon_bnds",
data=np.array(
[
[359.0625, 360.9375],
[0.9375, 179.0625],
[179.0625, 357.1875],
[357.1875, 359.0625],
]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
result = self.ds.spatial._scale_domain_to_region(
domain_bounds=domain_bounds, region_bounds=np.array([190, 240])
)
expected = xr.DataArray(
name="lon_bnds",
data=np.array(
[[240.0, 240.0], [190.0, 190.0], [190.0, 240.0], [240.0, 240.0]]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
assert result.identical(expected)
def test_scales_lon_bounds_when_wrapping_around_prime_meridian(self):
domain_bounds = xr.DataArray(
name="lon_bnds",
data=np.array(
[
# Does not apply to any conditional.
[359.0625, 360.9375],
                    # Grid cells straddling upper boundary.
[0.9375, 179.0625],
# Grid cells in between boundaries.
[179.0625, 357.1875],
# Grid cell straddling lower boundary.
[357.1875, 359.0625],
]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
result = self.ds.spatial._scale_domain_to_region(
domain_bounds=domain_bounds, region_bounds=np.array([357.5, 10.0])
)
expected = xr.DataArray(
name="lon_bnds",
data=np.array(
[
# Does not apply to any conditional.
[359.0625, 360.9375],
                    # Grid cells straddling upper boundary.
[0.9375, 10.0],
# Grid cells in between boundaries.
[10.0, 10.0],
# Grid cell straddling lower boundary.
[357.5, 359.0625],
]
),
coords={"lat": self.ds.lat},
dims=["lat", "bnds"],
)
assert result.identical(expected)
class TestCombineWeights:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
self.axis_weights = {
"lat": xr.DataArray(
name="lat_wts",
data=np.array([1, 2, 3, 4]),
coords={"lat": self.ds.lat},
dims=["lat"],
),
"lon": xr.DataArray(
name="lon_wts",
data=np.array([1, 2, 3, 4]),
coords={"lon": self.ds.lon},
dims=["lon"],
),
}
def test_weights_for_single_axis_are_identical(self):
axis_weights = self.axis_weights
del axis_weights["lon"]
result = self.ds.spatial._combine_weights(axis_weights=self.axis_weights)
expected = self.axis_weights["lat"]
assert result.identical(expected)
def test_weights_for_multiple_axis_is_the_product_of_matrix_multiplication(self):
result = self.ds.spatial._combine_weights(axis_weights=self.axis_weights)
expected = xr.DataArray(
data=np.array([[1, 2, 3, 4], [2, 4, 6, 8], [3, 6, 9, 12], [4, 8, 12, 16]]),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
assert result.identical(expected)
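# Note that the expected matrix above is the outer product of the two weight
# vectors: expected[i][j] == lat_wts[i] * lon_wts[j], e.g. 2 * 3 == 6.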
class TestAverager:
@pytest.fixture(autouse=True)
def setup(self):
self.ds = generate_dataset(cf_compliant=True, has_bounds=True)
@requires_dask
def test_chunked_weighted_avg_over_lat_and_lon_axes(self):
ds = self.ds.copy().chunk(2)
weights = xr.DataArray(
data=np.array([[1, 2, 3, 4], [2, 4, 6, 8], [3, 6, 9, 12], [4, 8, 12, 16]]),
coords={"lat": ds.lat, "lon": ds.lon},
dims=["lat", "lon"],
)
result = ds.spatial._averager(ds.ts, axis=["X", "Y"], weights=weights)
expected = xr.DataArray(
name="ts", data=np.ones(15), coords={"time": ds.time}, dims=["time"]
)
assert result.identical(expected)
def test_weighted_avg_over_lat_axis(self):
weights = xr.DataArray(
name="lat_wts",
data=np.array([1, 2, 3, 4]),
coords={"lat": self.ds.lat},
dims=["lat"],
)
result = self.ds.spatial._averager(self.ds.ts, axis=["Y"], weights=weights)
expected = xr.DataArray(
name="ts",
data=np.ones((15, 4)),
coords={"time": self.ds.time, "lon": self.ds.lon},
dims=["time", "lon"],
)
assert result.identical(expected)
def test_weighted_avg_over_lon_axis(self):
weights = xr.DataArray(
name="lon_wts",
data=np.array([1, 2, 3, 4]),
coords={"lon": self.ds.lon},
dims=["lon"],
)
result = self.ds.spatial._averager(self.ds.ts, axis=["X"], weights=weights)
expected = xr.DataArray(
name="ts",
data=np.ones((15, 4)),
coords={"time": self.ds.time, "lat": self.ds.lat},
dims=["time", "lat"],
)
assert result.identical(expected)
def test_weighted_avg_over_lat_and_lon_axis(self):
weights = xr.DataArray(
data=np.array([[1, 2, 3, 4], [2, 4, 6, 8], [3, 6, 9, 12], [4, 8, 12, 16]]),
coords={"lat": self.ds.lat, "lon": self.ds.lon},
dims=["lat", "lon"],
)
result = self.ds.spatial._averager(self.ds.ts, axis=["X", "Y"], weights=weights)
expected = xr.DataArray(
name="ts", data=np.ones(15), coords={"time": self.ds.time}, dims=["time"]
)
assert result.identical(expected)
| 33.616345 | 90 | 0.54349 | 3,742 | 29,616 | 4.094067 | 0.070283 | 0.055614 | 0.015078 | 0.014883 | 0.88094 | 0.84517 | 0.813642 | 0.784465 | 0.755026 | 0.724413 | 0 | 0.069688 | 0.307131 | 29,616 | 880 | 91 | 33.654545 | 0.676901 | 0.028802 | 0 | 0.65864 | 1 | 0 | 0.038377 | 0 | 0 | 0 | 0 | 0 | 0.053824 | 1 | 0.093484 | false | 0 | 0.008499 | 0 | 0.121813 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f9935828d5331ec64b5e6444be137ef99b57cb9 | 7,740 | py | Python | src/prefect/tasks/gsheets/gsheets.py | Progressive-Insurance/prefect | 4e4c0e244877a45b2a39f24218318ba7e8952653 | [
"Apache-2.0"
] | null | null | null | src/prefect/tasks/gsheets/gsheets.py | Progressive-Insurance/prefect | 4e4c0e244877a45b2a39f24218318ba7e8952653 | [
"Apache-2.0"
] | 1 | 2022-02-28T04:29:32.000Z | 2022-02-28T04:29:32.000Z | src/prefect/tasks/gsheets/gsheets.py | Progressive-Insurance/prefect | 4e4c0e244877a45b2a39f24218318ba7e8952653 | [
"Apache-2.0"
] | null | null | null | from prefect.utilities.tasks import defaults_from_attrs
import gspread
from typing import Any, List, Union
from prefect import Task
import pathlib
class WriteGsheetRow(Task):
"""
A task for writing a row to a Google Sheet.
Note that _all_ initialization settings can be provided / overwritten at runtime.
Args:
- credentials_filename (Union[str, pathlib.Path]): Location of credentials file
- sheet_key (str): The key corresponding to the Google Sheet
- worksheet_name (str): The worksheet to target
- **kwargs (optional): additional kwargs to pass to the `Task` constructor
"""
def __init__(
self,
credentials_filename: Union[str, pathlib.Path] = None,
sheet_key: str = None,
worksheet_name: str = None,
**kwargs: Any
):
self.credentials_filename = credentials_filename
self.sheet_key = sheet_key
self.worksheet_name = worksheet_name
super().__init__(**kwargs)
@defaults_from_attrs("credentials_filename", "sheet_key", "worksheet_name")
def run(
self,
data: List[Any],
credentials_filename: Union[str, pathlib.Path] = None,
sheet_key: str = None,
worksheet_name: str = None,
) -> dict:
"""
Appends a row of data to a Google Sheets worksheet
Args:
- data (list): the data to insert. This should be formatted as a list
- credentials_filename (Union[str, pathlib.Path]): Location of credentials file
- sheet_key (str): The key corresponding to the Google Sheet
- worksheet_name (str): The worksheet to target
Returns:
- a dictionary containing information about the successful insert
"""
client = gspread.service_account(filename=credentials_filename)
google_sheet = client.open_by_key(sheet_key)
worksheet = google_sheet.worksheet(worksheet_name)
return worksheet.append_row(data)
class ReadGsheetRow(Task):
"""
A task for reading a row from a Google Sheet.
Note that _all_ initialization settings can be provided / overwritten at runtime.
Args:
- credentials_filename (Union[str, pathlib.Path]): Location of credentials file
- sheet_key (str): The key corresponding to the Google Sheet
- worksheet_name (str): The worksheet to target
- **kwargs (optional): additional kwargs to pass to the `Task` constructor
"""
def __init__(
self,
credentials_filename: Union[str, pathlib.Path] = None,
sheet_key: str = None,
worksheet_name: str = None,
**kwargs: Any
):
self.credentials_filename = credentials_filename
self.sheet_key = sheet_key
self.worksheet_name = worksheet_name
super().__init__(**kwargs)
@defaults_from_attrs("credentials_filename", "sheet_key", "worksheet_name")
def run(
self,
row: int,
credentials_filename: Union[str, pathlib.Path] = None,
sheet_key: str = None,
worksheet_name: str = None,
) -> List[Any]:
"""
        Reads a row of data from a Google Sheets worksheet
Args:
- row (int): The number of the row to read
- credentials_filename (Union[str, pathlib.Path]): Location of credentials file
- sheet_key (str): The key corresponding to the Google Sheet
- worksheet_name (str): The worksheet to target
Returns:
- a list of values from the row
"""
client = gspread.service_account(filename=credentials_filename)
google_sheet = client.open_by_key(sheet_key)
worksheet = google_sheet.worksheet(worksheet_name)
return worksheet.row_values(row)
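# A minimal usage sketch for the two tasks above (the credentials file name,
# sheet key, and worksheet name are hypothetical; assumes a Prefect 1.x flow):
#
#   from prefect import Flow
#
#   write_row = WriteGsheetRow(
#       credentials_filename="service-account.json",
#       sheet_key="my-hypothetical-sheet-key",
#       worksheet_name="Sheet1",
#   )
#   with Flow("gsheets-example") as flow:
#       write_row(data=["a", 1, 2.5])
#   flow.run()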
| 36.168224 | 92 | 0.636693 | 905 | 7,740 | 5.255249 | 0.098343 | 0.127839 | 0.08074 | 0.090833 | 0.996636 | 0.996636 | 0.996636 | 0.996636 | 0.996636 | 0.996636 | 0 | 0 | 0.2823 | 7,740 | 213 | 93 | 36.338028 | 0.856166 | 0 | 0 | 0.972477 | 0 | 0 | 0.043249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.091743 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3fc38d3970ee4869c82b0e05e7a231b300ae4c2d | 5,123 | py | Python | cci_1_8.py | MrCsabaToth/coderpad | b21d994e4ca95c51b4b72f0d819ee086b41e0822 | [
"Apache-2.0"
] | 2 | 2019-11-11T20:11:10.000Z | 2019-11-11T20:11:15.000Z | cci_1_8.py | MrCsabaToth/coderpad | b21d994e4ca95c51b4b72f0d819ee086b41e0822 | [
"Apache-2.0"
] | null | null | null | cci_1_8.py | MrCsabaToth/coderpad | b21d994e4ca95c51b4b72f0d819ee086b41e0822 | [
"Apache-2.0"
] | null | null | null | def zero1(m):
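    # O(h + w) extra space: record which rows and columns contain a zero,
    # then zero every marked cell in a second pass.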
h = len(m)
w = len(m[0])
cols = [False] * w
rows = [False] * h
for i in range(h):
for j in range(w):
if m[i][j] == 0:
rows[i] = True
cols[j] = True
for i in range(h):
for j in range(w):
if rows[i] or cols[j]:
m[i][j] = 0
def zero2(m):
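    # Same marking pass as zero1, but zeroed rows are rewritten wholesale
    # before the per-cell column pass.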
h = len(m)
w = len(m[0])
cols = [False] * w
rows = [False] * h
for i in range(h):
for j in range(w):
if m[i][j] == 0:
rows[i] = True
cols[j] = True
for i in range(h):
if rows[i]:
m[i] = [0] * w
for i in range(h):
for j in range(w):
if cols[j]:
m[i][j] = 0
def zero3(m):
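    # O(1) extra space: use the first row and first column themselves as the
    # zero markers, remembering separately whether they originally held a zero.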
h = len(m)
w = len(m[0])
zero0row = any(el == 0 for el in m[0])
zero0col = any(m[i][0] == 0 for i in range(h))
for i in range(h):
for j in range(w):
if m[i][j] == 0:
m[i][0] = 0
m[0][j] = 0
for i in range(1, h):
if m[i][0] == 0:
m[i] = [0] * w
for j in range(1, w):
if m[0][j] == 0:
for i in range(h):
m[i][j] = 0
if zero0row:
m[0] = [0] * w
if zero0col:
for i in range(h):
m[i][0] = 0
import pytest
import copy
@pytest.mark.parametrize("matrix,expected", [
([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]],
[[1, 1, 1],
[1, 1, 1],
[1, 1, 1]]),
([[1, 1, 1],
[1, 0, 1],
[1, 1, 1]],
[[1, 0, 1],
[0, 0, 0],
[1, 0, 1]]),
([[0, 1, 1],
[1, 1, 1],
[1, 1, 1]],
[[0, 0, 0],
[0, 1, 1],
[0, 1, 1]]),
([[1, 1, 0],
[1, 1, 1],
[1, 1, 1]],
[[0, 0, 0],
[1, 1, 0],
[1, 1, 0]]),
([[1, 1, 1],
[1, 1, 1],
[1, 1, 0]],
[[1, 1, 0],
[1, 1, 0],
[0, 0, 0]]),
([[1, 1, 1],
[1, 1, 1],
[0, 1, 1]],
[[0, 1, 1],
[0, 1, 1],
[0, 0, 0]]),
([[1, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[1, 0, 1, 1],
[0, 0, 0, 0],
[1, 0, 1, 1],
[1, 0, 1, 1]]),
([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 1]],
[[1, 1, 0, 1],
[1, 1, 0, 1],
[0, 0, 0, 0],
[1, 1, 0, 1]]),
([[0, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[0, 0, 0, 0],
[0, 1, 1, 1],
[0, 1, 1, 1],
[0, 1, 1, 1]]),
([[1, 1, 1, 0],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[0, 0, 0, 0],
[1, 1, 1, 0],
[1, 1, 1, 0],
[1, 1, 1, 0]]),
([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 0]],
[[1, 1, 1, 0],
[1, 1, 1, 0],
[1, 1, 1, 0],
[0, 0, 0, 0]]),
([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 1, 1, 1]],
[[0, 1, 1, 1],
[0, 1, 1, 1],
[0, 1, 1, 1],
[0, 0, 0, 0]]),
([[1, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 1, 1]],
[[1, 0, 1, 1],
[0, 0, 0, 0],
[1, 0, 1, 1]]),
([[1, 1, 1, 1],
[1, 1, 0, 1],
[1, 1, 1, 1]],
[[1, 1, 0, 1],
[0, 0, 0, 0],
[1, 1, 0, 1]]),
([[0, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[0, 0, 0, 0],
[0, 1, 1, 1],
[0, 1, 1, 1]]),
([[1, 1, 1, 0],
[1, 1, 1, 1],
[1, 1, 1, 1]],
[[0, 0, 0, 0],
[1, 1, 1, 0],
[1, 1, 1, 0]]),
([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 0]],
[[1, 1, 1, 0],
[1, 1, 1, 0],
[0, 0, 0, 0]]),
([[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 1, 1, 1]],
[[0, 1, 1, 1],
[0, 1, 1, 1],
[0, 0, 0, 0]]),
([[1, 1, 1],
[1, 0, 1],
[1, 1, 1],
[1, 1, 1]],
[[1, 0, 1],
[0, 0, 0],
[1, 0, 1],
[1, 0, 1]]),
([[1, 1, 1],
[1, 1, 1],
[1, 1, 0],
[1, 1, 1]],
[[1, 0, 1],
[1, 0, 1],
[0, 0, 0],
[1, 0, 1]]),
([[0, 1, 1],
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]],
[[0, 0, 0],
[0, 1, 1],
[0, 1, 1],
[0, 1, 1]]),
([[1, 1, 0],
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]],
[[0, 0, 0],
[1, 1, 0],
[1, 1, 0],
[1, 1, 0]]),
([[1, 1, 1],
[1, 1, 1],
[1, 1, 1],
[1, 1, 0]],
[[1, 1, 0],
[1, 1, 0],
[1, 1, 0],
[0, 0, 0]]),
([[1, 1, 1],
[1, 1, 1],
[1, 1, 1],
[0, 1, 1]],
[[0, 1, 1],
[0, 1, 1],
[0, 1, 1],
[0, 0, 0]]),
])
def test_zero(matrix, expected):
m = copy.deepcopy(matrix)
zero1(m)
for i in range(len(matrix)):
for j in range(len(matrix[0])):
            assert m[i][j] == expected[i][j]
m = copy.deepcopy(matrix)
zero2(m)
for i in range(len(matrix)):
for j in range(len(matrix[0])):
            assert m[i][j] == expected[i][j]
m = copy.deepcopy(matrix)
zero3(m)
for i in range(len(matrix)):
for j in range(len(matrix[0])):
            assert m[i][j] == expected[i][j]
pytest.main()
| 19.553435 | 50 | 0.276205 | 907 | 5,123 | 1.558986 | 0.039691 | 0.466761 | 0.526167 | 0.54314 | 0.830976 | 0.817539 | 0.809052 | 0.780057 | 0.750354 | 0.750354 | 0 | 0.224439 | 0.452079 | 5,123 | 261 | 51 | 19.628352 | 0.279302 | 0 | 0 | 0.88843 | 0 | 0 | 0.002928 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016529 | false | 0 | 0.008264 | 0 | 0.024793 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
3fc6e2d2075090c66b1c1537314b9f53f0231501 | 341 | py | Python | qualipy/__init__.py | baasman/qualipy | e246a44ea3a5dcc92291983c52a89189338f808f | [
"Apache-2.0"
] | 1 | 2019-07-15T15:16:44.000Z | 2019-07-15T15:16:44.000Z | qualipy/__init__.py | baasman/qualipy | e246a44ea3a5dcc92291983c52a89189338f808f | [
"Apache-2.0"
] | null | null | null | qualipy/__init__.py | baasman/qualipy | e246a44ea3a5dcc92291983c52a89189338f808f | [
"Apache-2.0"
] | null | null | null | from qualipy.run import Qualipy
from qualipy.project import Project, generate_config, load_project
from qualipy.reflect.function import function
from qualipy.reflect.table import pandas_table, sql_table
from qualipy.reflect.column import Column, column
from qualipy import datasets
from qualipy import cli
from qualipy import helper
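# These imports flatten the package namespace, so callers can write, e.g.,
# `from qualipy import Project, Column` instead of reaching into submodules
# such as `qualipy.project` or `qualipy.reflect.column`.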
| 37.888889 | 67 | 0.835777 | 48 | 341 | 5.854167 | 0.354167 | 0.313167 | 0.192171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 341 | 8 | 68 | 42.625 | 0.946128 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3ffb7804f80fcecf61aa7b4a88fed216dfb349b7 | 2,222 | py | Python | simple_neural_net/activation_models.py | tcquinn/simple-neural-net | ddf430ff45f7a22897d8e919cb08148434beb4bb | [
"MIT"
] | null | null | null | simple_neural_net/activation_models.py | tcquinn/simple-neural-net | ddf430ff45f7a22897d8e919cb08148434beb4bb | [
"MIT"
] | null | null | null | simple_neural_net/activation_models.py | tcquinn/simple-neural-net | ddf430ff45f7a22897d8e919cb08148434beb4bb | [
"MIT"
] | null | null | null | import numpy as np
class ActivationModel:
def activation(
self,
Z
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
raise NotImplementedError('Method must be implemented by child class')
def d_activation_d_Z(
self,
Z,
outputs
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
# d_activation_d_Z: (num_outputs, num_examples)
raise NotImplementedError('Method must be implemented by child class')
class SigmoidActivationModel(ActivationModel):
def activation(
self,
Z
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
return np.reciprocal(np.add(1.0, np.exp(-Z))) # (num_outputs, num_examples)
def d_activation_d_Z(
self,
Z,
outputs
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
# d_activation_d_Z: (num_outputs, num_examples)
return np.multiply(outputs, np.subtract(1, outputs)) # (num_outputs, num_examples)
class TanhActivationModel(ActivationModel):
def activation(
self,
Z
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
return np.tanh(Z) # (num_outputs, num_examples)
def d_activation_d_Z(
self,
Z,
outputs
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
# d_activation_d_Z: (num_outputs, num_examples)
return np.subtract(1, np.square(outputs)) # (num_outputs, num_examples)
class ReLUActivationModel(ActivationModel):
def activation(
self,
Z
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
return np.multiply(Z, Z > 0.0) # (num_outputs, num_examples)
def d_activation_d_Z(
self,
Z,
outputs
):
# Z: (num_outputs, num_examples)
# outputs: (num_outputs, num_examples)
# d_activation_d_Z: (num_outputs, num_examples)
return np.multiply(1.0, Z > 0.0) # (num_outputs, num_examples)
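# A minimal finite-difference sanity check for the smooth activations (an
# illustrative sketch; ReLU is excluded because of the kink at zero):
if __name__ == "__main__":
    eps = 1e-6
    Z = np.random.randn(3, 5)
    for model in (SigmoidActivationModel(), TanhActivationModel()):
        outputs = model.activation(Z)
        analytic = model.d_activation_d_Z(Z, outputs)
        # Central difference: (f(Z + eps) - f(Z - eps)) / (2 * eps)
        numeric = (model.activation(Z + eps) - model.activation(Z - eps)) / (2 * eps)
        assert np.allclose(analytic, numeric, atol=1e-4)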
| 28.487179 | 90 | 0.608911 | 261 | 2,222 | 4.89272 | 0.1341 | 0.281911 | 0.264683 | 0.427565 | 0.88332 | 0.88332 | 0.831637 | 0.804229 | 0.804229 | 0.804229 | 0 | 0.006353 | 0.291629 | 2,222 | 77 | 91 | 28.857143 | 0.804956 | 0.40279 | 0 | 0.693878 | 0 | 0 | 0.063077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163265 | false | 0 | 0.020408 | 0.122449 | 0.387755 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 10 |
b20649367271e23395a97b890fa97be666b49bae | 5,248 | py | Python | infrastructure/documents.py | YossyMejia/api-hacienda_python | 82c83fbf7673c37005ded12f1787b5d65247c4aa | [
"MIT"
] | null | null | null | infrastructure/documents.py | YossyMejia/api-hacienda_python | 82c83fbf7673c37005ded12f1787b5d65247c4aa | [
"MIT"
] | null | null | null | infrastructure/documents.py | YossyMejia/api-hacienda_python | 82c83fbf7673c37005ded12f1787b5d65247c4aa | [
"MIT"
] | null | null | null | import json
from extensions import mysql
def get_document(key_mh):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.callproc('sp_getDocumentByKey', (key_mh,))
row_headers = [x[0] for x in cursor.description]
data = cursor.fetchall()
        if len(data) != 0:
conn.commit()
json_data = []
for row in data:
json_data.append(dict(zip(row_headers, row)))
return json_data
else:
            return {'error': 'Error: could not get information for the document'}
except Exception as e:
return {'error': str(e)}
finally:
cursor.close()
conn.close()
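# A minimal usage sketch (assumes a Flask application context in which
# `mysql` from extensions has been initialized; the key below is hypothetical):
#
#   result = get_document('50601011600031512345')
#   if isinstance(result, dict) and 'error' in result:
#       ...  # handle the lookup failure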
def get_documents(company_id, state):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.callproc('sp_getDocumentByCompany', (company_id, state))
data = cursor.fetchall()
        if len(data) == 0:
conn.commit()
return True
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
conn.close()
def get_all_documents(state):
try:
conn = mysql.connect()
cursor = conn.cursor()
        cursor.callproc('sp_getDocuments', (state,))
data = cursor.fetchall()
        if len(data) == 0:
conn.commit()
return True
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
conn.close()
def get_documentsreport(company_id, document_type):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.callproc('sp_getDocumentsReport', (company_id, document_type,))
row_headers = [x[0] for x in cursor.description]
data = cursor.fetchall()
        if len(data) != 0:
conn.commit()
json_data = []
for row in data:
json_data.append(dict(zip(row_headers, row)))
return json_data
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
conn.close()
def save_document(company_id, key_mh, sign_xml, status, date, document_type, receiver,
total_document, total_taxed, pdf, email, email_costs):
try:
if receiver is not None:
receiver_type = receiver['tipoIdentificacion']
receiver_dni = receiver['numeroIdentificacion']
else:
receiver_type = None
receiver_dni = None
conn = mysql.connect()
cursor = conn.cursor()
        cursor.callproc('sp_saveDocument', (company_id, key_mh, sign_xml, status, date, document_type,
                                            receiver_type, receiver_dni, total_document, total_taxed,
                                            pdf, email, email_costs))
data = cursor.rowcount
if data != 0:
conn.commit()
return True
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
conn.close()
def update_document(company_id, key_mh, answer_xml, status, date):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.callproc('sp_updateDocument', (company_id, key_mh, answer_xml, status, date))
data = cursor.fetchall()
        if len(data) == 0:
conn.commit()
return True
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
conn.close()
def save_document_line_info(id_company, line_number, quantity, unity
, detail, unit_price, net_tax, total_line, key_mh):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.callproc('sp_createDocumentLineInfo', (id_company, line_number, quantity, unity
, detail, unit_price, net_tax, total_line, key_mh))
data = cursor.rowcount
if data != 0:
conn.commit()
return True
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
conn.close()
def save_document_line_taxes(id_company, line_number, rate_code, code, rate, amount, key_mh):
try:
conn = mysql.connect()
cursor = conn.cursor()
cursor.callproc('sp_createDocumentTaxInfo', (id_company, line_number, rate_code
, code, rate, amount, key_mh))
data = cursor.fetchall()
        if len(data) == 0:
conn.commit()
return True
else:
return json.dumps({'error': str(data[0])})
except Exception as e:
return json.dumps({'error': str(e)})
finally:
cursor.close()
        conn.close()
| 31.806061 | 116 | 0.555831 | 597 | 5,248 | 4.745394 | 0.157454 | 0.056477 | 0.074126 | 0.098835 | 0.834804 | 0.823509 | 0.823509 | 0.823509 | 0.771267 | 0.754324 | 0 | 0.004847 | 0.331745 | 5,248 | 165 | 117 | 31.806061 | 0.802966 | 0 | 0 | 0.798658 | 0 | 0 | 0.060773 | 0.017718 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053691 | false | 0 | 0.013423 | 0 | 0.228188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
b75cd8936921c8b057e3d49ac1068cb92a92d72c | 1,506 | py | Python | inventory/models.py | brkyavuz/pfna | 082300a673a2b884a92bda61ed001943377fc8b1 | [
"MIT"
] | null | null | null | inventory/models.py | brkyavuz/pfna | 082300a673a2b884a92bda61ed001943377fc8b1 | [
"MIT"
] | null | null | null | inventory/models.py | brkyavuz/pfna | 082300a673a2b884a92bda61ed001943377fc8b1 | [
"MIT"
] | null | null | null | from django.db import models
class Group(models.Model):
group_name = models.CharField(max_length=64,null=False, primary_key=True, unique=True)
platform = models.CharField(max_length=16, blank=True, null=True)
username = models.CharField(max_length=16, blank=True, null=True)
password = models.CharField(max_length=16, blank=True, null=True)
country = models.CharField(max_length=16, blank=True, null=True)
city = models.CharField(max_length=16, blank=True, null=True)
def __str__(self):
return self.group_name
class Data(models.Model):
name = models.CharField(max_length=64, null=False, primary_key=True)
device_type = models.CharField(max_length=16)
country = models.CharField(max_length=16, blank=True, null=True)
city = models.CharField(max_length=16, blank=True, null=True)
def __str__(self):
        return self.name
class Host(models.Model):
name = models.CharField(max_length=64, null=False, primary_key=True)
hostname = models.GenericIPAddressField(unique=True)
platform = models.CharField(max_length=16, blank=True, null=True)
username = models.CharField(max_length=16, blank=True, null=True)
password = models.CharField(max_length=16, blank=True, null=True)
port = models.PositiveBigIntegerField()
groups = models.ForeignKey(Group, on_delete=models.SET_NULL, null=True)
data = models.ForeignKey(Data, on_delete=models.SET_NULL, null=True)
def __str__(self):
        return self.name
| 41.833333 | 90 | 0.731076 | 210 | 1,506 | 5.066667 | 0.204762 | 0.197368 | 0.236842 | 0.315789 | 0.820489 | 0.796053 | 0.796053 | 0.722744 | 0.722744 | 0.722744 | 0 | 0.021926 | 0.152058 | 1,506 | 36 | 91 | 41.833333 | 0.811276 | 0 | 0 | 0.607143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0.071429 | 0.035714 | 0.107143 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 11
b784f91b4d0bd2a0ec28f59e1a16c6e4665126f6 | 132 | py | Python | openproblems/tasks/dimensionality_reduction/methods/__init__.py | rcannood/openproblems | 29bd1ad061704158482d214d3b34523870a56eaf | [
"MIT"
] | 134 | 2020-08-19T07:35:56.000Z | 2021-05-19T11:37:50.000Z | openproblems/tasks/dimensionality_reduction/methods/__init__.py | rcannood/openproblems | 29bd1ad061704158482d214d3b34523870a56eaf | [
"MIT"
] | 175 | 2020-08-17T15:26:06.000Z | 2021-05-14T11:03:46.000Z | openproblems/tasks/dimensionality_reduction/methods/__init__.py | rcannood/openproblems | 29bd1ad061704158482d214d3b34523870a56eaf | [
"MIT"
] | 46 | 2020-10-08T21:11:37.000Z | 2021-04-25T07:05:28.000Z | from .pca import pca
from .phate import phate_default
from .phate import phate_scanpy
from .tsne import tsne
from .umap import umap
| 22 | 32 | 0.810606 | 22 | 132 | 4.772727 | 0.363636 | 0.171429 | 0.285714 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 132 | 5 | 33 | 26.4 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b7ebb3a9f56363043ce77a6f605b2490d30a00d9 | 20,096 | py | Python | wifi/iw_test.py | DentonGentry/gfiber-platform | 2ba5266103aad0b7b676555eebd3c2061ddb8333 | [
"Apache-2.0"
] | 8 | 2017-09-24T03:11:46.000Z | 2021-08-24T04:29:14.000Z | wifi/iw_test.py | DentonGentry/gfiber-platform | 2ba5266103aad0b7b676555eebd3c2061ddb8333 | [
"Apache-2.0"
] | null | null | null | wifi/iw_test.py | DentonGentry/gfiber-platform | 2ba5266103aad0b7b676555eebd3c2061ddb8333 | [
"Apache-2.0"
] | 1 | 2017-10-05T23:04:10.000Z | 2017-10-05T23:04:10.000Z | #!/usr/bin/python -S
"""Tests for iw.py."""
import iw
from wvtest import wvtest
PHY_OUTPUT = """Wiphy phy1
max # scan SSIDs: 16
max scan IEs length: 199 bytes
Retry short limit: 7
Retry long limit: 4
Coverage class: 0 (up to 0m)
Device supports AP-side u-APSD.
Supported Ciphers:
* WEP40 (00-0f-ac:1)
* WEP104 (00-0f-ac:5)
* TKIP (00-0f-ac:2)
* CCMP (00-0f-ac:4)
* CMAC (00-0f-ac:6)
Available Antennas: TX 0x7 RX 0x7
Configured Antennas: TX 0x7 RX 0x7
Supported interface modes:
* managed
* AP
* AP/VLAN
* monitor
Band 2:
Capabilities: 0x19e3
RX LDPC
HT20/HT40
Static SM Power Save
RX HT20 SGI
RX HT40 SGI
TX STBC
RX STBC 1-stream
Max AMSDU length: 7935 bytes
DSSS/CCK HT40
Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
Minimum RX AMPDU time spacing: 8 usec (0x06)
HT TX/RX MCS rate indexes supported: 0-23
VHT Capabilities (0x338001b2):
Max MPDU length: 11454
Supported Channel Width: neither 160 nor 80+80
RX LDPC
short GI (80 MHz)
TX STBC
RX antenna pattern consistency
TX antenna pattern consistency
VHT RX MCS set:
1 streams: MCS 0-9
2 streams: MCS 0-9
3 streams: MCS 0-9
4 streams: not supported
5 streams: not supported
6 streams: not supported
7 streams: not supported
8 streams: not supported
VHT RX highest supported: 0 Mbps
VHT TX MCS set:
1 streams: MCS 0-9
2 streams: MCS 0-9
3 streams: MCS 0-9
4 streams: not supported
5 streams: not supported
6 streams: not supported
7 streams: not supported
8 streams: not supported
VHT TX highest supported: 0 Mbps
Bitrates (non-HT):
* 6.0 Mbps
* 9.0 Mbps
* 12.0 Mbps
* 18.0 Mbps
* 24.0 Mbps
* 36.0 Mbps
* 48.0 Mbps
* 54.0 Mbps
Frequencies:
* 5180 MHz [36] (17.0 dBm)
* 5200 MHz [40] (17.0 dBm)
* 5220 MHz [44] (17.0 dBm)
* 5240 MHz [48] (17.0 dBm)
* 5260 MHz [52] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5280 MHz [56] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5300 MHz [60] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5320 MHz [64] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5500 MHz [100] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5520 MHz [104] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5540 MHz [108] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5560 MHz [112] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5580 MHz [116] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5600 MHz [120] (disabled)
* 5620 MHz [124] (disabled)
* 5640 MHz [128] (disabled)
* 5660 MHz [132] (30.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5680 MHz [136] (30.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5700 MHz [140] (30.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5745 MHz [149] (30.0 dBm)
* 5765 MHz [153] (30.0 dBm)
* 5785 MHz [157] (30.0 dBm)
* 5805 MHz [161] (30.0 dBm)
* 5825 MHz [165] (30.0 dBm)
Supported commands:
* new_interface
* set_interface
* new_key
* start_ap
* new_station
* set_bss
* authenticate
* associate
* deauthenticate
* disassociate
* join_ibss
* remain_on_channel
* set_tx_bitrate_mask
* frame
* frame_wait_cancel
* set_wiphy_netns
* set_channel
* set_wds_peer
* probe_client
* set_noack_map
* register_beacons
* start_p2p_device
* set_mcast_rate
* channel_switch
* Unknown command (104)
* connect
* disconnect
Supported TX frame types:
* IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* P2P-device: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
Supported RX frame types:
* IBSS: 0x40 0xb0 0xc0 0xd0
* managed: 0x40 0xd0
* AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
* AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
* mesh point: 0xb0 0xc0 0xd0
* P2P-client: 0x40 0xd0
* P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
* P2P-device: 0x40 0xd0
software interface modes (can always be added):
* AP/VLAN
* monitor
valid interface combinations:
* #{ AP } <= 8,
total <= 8, #channels <= 1, STA/AP BI must match, radar detect widths: { 20 MHz (no HT), 20 MHz, 40 MHz, 80 MHz }
HT Capability overrides:
* MCS: ff ff ff ff ff ff ff ff ff ff
* maximum A-MSDU length
* supported channel width
* short GI for 40 MHz
* max A-MPDU length exponent
* min MPDU start spacing
Device supports TX status socket option.
Device supports HT-IBSS.
Device supports SAE with AUTHENTICATE command
Device supports scan flush.
Device supports per-vif TX power setting
Driver supports a userspace MPM
Driver/device bandwidth changes during BSS lifetime (AP/GO mode)
Device supports static SMPS
Wiphy phy0
max # scan SSIDs: 4
max scan IEs length: 2257 bytes
Retry short limit: 7
Retry long limit: 4
Coverage class: 0 (up to 0m)
Device supports RSN-IBSS.
Device supports AP-side u-APSD.
Device supports T-DLS.
Supported Ciphers:
* WEP40 (00-0f-ac:1)
* WEP104 (00-0f-ac:5)
* TKIP (00-0f-ac:2)
* CCMP (00-0f-ac:4)
* CMAC (00-0f-ac:6)
Available Antennas: TX 0x7 RX 0x7
Configured Antennas: TX 0x7 RX 0x7
Supported interface modes:
* IBSS
* managed
* AP
* AP/VLAN
* WDS
* monitor
* P2P-client
* P2P-GO
Band 1:
Capabilities: 0x11ef
RX LDPC
HT20/HT40
SM Power Save disabled
RX HT20 SGI
RX HT40 SGI
TX STBC
RX STBC 1-stream
Max AMSDU length: 3839 bytes
DSSS/CCK HT40
Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
Minimum RX AMPDU time spacing: 8 usec (0x06)
HT TX/RX MCS rate indexes supported: 0-23
Bitrates (non-HT):
* 1.0 Mbps
* 2.0 Mbps (short preamble supported)
* 5.5 Mbps (short preamble supported)
* 11.0 Mbps (short preamble supported)
* 6.0 Mbps
* 9.0 Mbps
* 12.0 Mbps
* 18.0 Mbps
* 24.0 Mbps
* 36.0 Mbps
* 48.0 Mbps
* 54.0 Mbps
Frequencies:
* 2412 MHz [1] (30.0 dBm)
* 2417 MHz [2] (30.0 dBm)
* 2422 MHz [3] (30.0 dBm)
* 2427 MHz [4] (30.0 dBm)
* 2432 MHz [5] (30.0 dBm)
* 2437 MHz [6] (30.0 dBm)
* 2442 MHz [7] (30.0 dBm)
* 2447 MHz [8] (30.0 dBm)
* 2452 MHz [9] (30.0 dBm)
* 2457 MHz [10] (30.0 dBm)
* 2462 MHz [11] (30.0 dBm)
* 2467 MHz [12] (disabled)
* 2472 MHz [13] (disabled)
* 2484 MHz [14] (disabled)
Band 2:
Capabilities: 0x11ef
RX LDPC
HT20/HT40
SM Power Save disabled
RX HT20 SGI
RX HT40 SGI
TX STBC
RX STBC 1-stream
Max AMSDU length: 3839 bytes
DSSS/CCK HT40
Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
Minimum RX AMPDU time spacing: 8 usec (0x06)
HT TX/RX MCS rate indexes supported: 0-23
Bitrates (non-HT):
* 6.0 Mbps
* 9.0 Mbps
* 12.0 Mbps
* 18.0 Mbps
* 24.0 Mbps
* 36.0 Mbps
* 48.0 Mbps
* 54.0 Mbps
Frequencies:
* 5180 MHz [36] (17.0 dBm)
* 5200 MHz [40] (17.0 dBm)
* 5220 MHz [44] (17.0 dBm)
* 5240 MHz [48] (17.0 dBm)
* 5260 MHz [52] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5280 MHz [56] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5300 MHz [60] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5320 MHz [64] (23.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5500 MHz [100] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5520 MHz [104] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5540 MHz [108] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5560 MHz [112] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5580 MHz [116] (20.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5600 MHz [120] (disabled)
* 5620 MHz [124] (disabled)
* 5640 MHz [128] (disabled)
* 5660 MHz [132] (30.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5680 MHz [136] (30.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5700 MHz [140] (30.0 dBm) (no IR, radar detection)
DFS state: usable (for 7965 sec)
DFS CAC time: 60000 ms
* 5745 MHz [149] (30.0 dBm)
* 5765 MHz [153] (30.0 dBm)
* 5785 MHz [157] (30.0 dBm)
* 5805 MHz [161] (30.0 dBm)
* 5825 MHz [165] (30.0 dBm)
Supported commands:
* new_interface
* set_interface
* new_key
* start_ap
* new_station
* set_bss
* authenticate
* associate
* deauthenticate
* disassociate
* join_ibss
* remain_on_channel
* set_tx_bitrate_mask
* frame
* frame_wait_cancel
* set_wiphy_netns
* set_channel
* set_wds_peer
* tdls_mgmt
* tdls_oper
* probe_client
* set_noack_map
* register_beacons
* start_p2p_device
* set_mcast_rate
* channel_switch
* Unknown command (104)
* connect
* disconnect
Supported TX frame types:
* IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
* P2P-device: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
Supported RX frame types:
* IBSS: 0x40 0xb0 0xc0 0xd0
* managed: 0x40 0xd0
* AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
* AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
* mesh point: 0xb0 0xc0 0xd0
* P2P-client: 0x40 0xd0
* P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
* P2P-device: 0x40 0xd0
software interface modes (can always be added):
* AP/VLAN
* monitor
valid interface combinations:
* #{ managed } <= 2048, #{ AP } <= 8, #{ P2P-client, P2P-GO } <= 1,
total <= 2048, #channels <= 1, STA/AP BI must match
* #{ WDS } <= 2048,
total <= 2048, #channels <= 1, STA/AP BI must match
* #{ IBSS, AP } <= 1,
total <= 1, #channels <= 1, STA/AP BI must match, radar detect widths: { 20 MHz (no HT), 20 MHz }
HT Capability overrides:
* MCS: ff ff ff ff ff ff ff ff ff ff
* maximum A-MSDU length
* supported channel width
* short GI for 40 MHz
* max A-MPDU length exponent
* min MPDU start spacing
Device supports TX status socket option.
Device supports HT-IBSS.
Device supports SAE with AUTHENTICATE command
Device supports low priority scan.
Device supports scan flush.
Device supports AP scan.
Device supports per-vif TX power setting
P2P GO supports CT window setting
Driver supports a userspace MPM
Device supports active monitor (which will ACK incoming frames)
Driver/device bandwidth changes during BSS lifetime (AP/GO mode)
"""
DEV_OUTPUT = """phy#1
Interface wlan1_portal
ifindex 11
wdev 0x100000002
addr 8a:dc:96:0c:8d:bb
type managed
Interface wlan1
ifindex 9
wdev 0x100000001
addr 88:dc:96:0c:8d:bb
type AP
phy#0
Interface wcli0
ifindex 20
wdev 0x3
addr 88:dc:96:08:60:2d
type managed
Interface wlan0_portal
ifindex 10
wdev 0x2
addr 8a:dc:96:08:60:2c
type managed
Interface wlan0
ifindex 5
wdev 0x1
addr 88:dc:96:08:60:2c
type AP
"""
INTERFACE_INFO_OUTPUT = """Interface wcli0
ifindex 20
wdev 0x3
addr 88:dc:96:08:60:2d
type managed
wiphy 0
channel 6 (2437 MHz), width: 20 MHz, center1: 2437 MHz
"""
INTERFACE_LINK_OUTPUT = """SSID: some_ssid
freq: 2437
RX: 56110 bytes (537 packets)
TX: 926 bytes (10 packets)
signal: -36 dBm
tx bitrate: 43.3 MBit/s MCS 4 short GI
bss flags: short-preamble short-slot-time
dtim period: 2
beacon int: 100
"""
# Monkey-patch the stuff that actually depends on IW being available.
iw.RUNNABLE_IW = lambda: True
# pylint: disable=unused-argument,protected-access
def fake_phy(*args, **kwargs):
return PHY_OUTPUT
iw._phy = fake_phy
# pylint: disable=unused-argument,protected-access
def fake_dev(*args, **kwargs):
return DEV_OUTPUT
iw._dev = fake_dev
# pylint: disable=unused-argument,protected-access
def fake_info(*args, **kwargs):
return INTERFACE_INFO_OUTPUT
iw._info = fake_info
# pylint: disable=unused-argument,protected-access
def fake_link(*args, **kwargs):
return INTERFACE_LINK_OUTPUT
iw._link = fake_link
@wvtest.wvtest
def find_phy_test():
wvtest.WVPASSEQ('phy0', iw.find_phy('2.4', 'auto'))
wvtest.WVPASSEQ('phy0', iw.find_phy('2.4', '11'))
wvtest.WVPASSEQ('phy1', iw.find_phy('5', 'auto'))
wvtest.WVPASSEQ('phy1', iw.find_phy('5', '165'))
wvtest.WVPASSEQ(None, iw.find_phy('asdf', '544312'))
@wvtest.wvtest
def find_interface_from_phy_test():
wvtest.WVPASSEQ('wlan0',
iw.find_interface_from_phy('phy0', iw.INTERFACE_TYPE.ap, ''))
wvtest.WVPASSEQ('wlan1',
iw.find_interface_from_phy('phy1', iw.INTERFACE_TYPE.ap, ''))
wvtest.WVPASSEQ('wcli0',
iw.find_interface_from_phy('phy0', iw.INTERFACE_TYPE.client,
''))
# The output is from a device with no client interface on phy1.
wvtest.WVPASSEQ(None,
iw.find_interface_from_phy('phy1', iw.INTERFACE_TYPE.client,
''))
@wvtest.wvtest
def find_all_interfaces_from_phy_test():
wvtest.WVPASSEQ(set(['wlan0', 'wlan0_portal', 'wcli0']),
iw.find_all_interfaces_from_phy('phy0'))
wvtest.WVPASSEQ(set(['wlan0', 'wlan0_portal']),
iw.find_all_interfaces_from_phy('phy0', iw.INTERFACE_TYPE.ap))
wvtest.WVPASSEQ(set(['wcli0']),
iw.find_all_interfaces_from_phy('phy0',
iw.INTERFACE_TYPE.client))
wvtest.WVPASSEQ(set(['wlan1', 'wlan1_portal']),
iw.find_all_interfaces_from_phy('phy1'))
@wvtest.wvtest
def find_interface_from_band_test():
wvtest.WVPASSEQ('wlan0',
iw.find_interface_from_band('2.4', iw.INTERFACE_TYPE.ap, ''))
wvtest.WVPASSEQ('wlan1',
iw.find_interface_from_band('5', iw.INTERFACE_TYPE.ap, ''))
wvtest.WVPASSEQ('wcli0',
iw.find_interface_from_band('2.4', iw.INTERFACE_TYPE.client,
''))
# The output is from a device with no client interface on phy1.
wvtest.WVPASSEQ(None,
iw.find_interface_from_band('5', iw.INTERFACE_TYPE.client,
''))
@wvtest.wvtest
def find_all_interfaces_from_band_test():
wvtest.WVPASSEQ(set(['wlan0', 'wlan0_portal', 'wcli0']),
iw.find_all_interfaces_from_band('2.4'))
wvtest.WVPASSEQ(set(['wlan0', 'wlan0_portal']),
iw.find_all_interfaces_from_band('2.4', iw.INTERFACE_TYPE.ap))
wvtest.WVPASSEQ(set(['wcli0']),
iw.find_all_interfaces_from_band('2.4',
iw.INTERFACE_TYPE.client))
wvtest.WVPASSEQ(set(['wlan1', 'wlan1_portal']),
iw.find_all_interfaces_from_band('5'))
@wvtest.wvtest
def find_interfaces_from_band_and_suffix_test():
"""Test find_interfaces_from_band_and_suffix."""
wvtest.WVPASSEQ(set(['wlan0', 'wlan0_portal', 'wcli0']),
iw.find_interfaces_from_band_and_suffix('2.4', 'ALL'))
wvtest.WVPASSEQ(set(['wlan0', 'wcli0']),
iw.find_interfaces_from_band_and_suffix('2.4', ''))
wvtest.WVPASSEQ(set(['wlan0_portal']),
iw.find_interfaces_from_band_and_suffix('2.4', '_portal'))
wvtest.WVPASSEQ(set([]),
iw.find_interfaces_from_band_and_suffix('2.4', 'fake_suffix'))
wvtest.WVPASSEQ(set(['wlan0', 'wlan0_portal']),
iw.find_interfaces_from_band_and_suffix('2.4', 'ALL',
iw.INTERFACE_TYPE.ap))
wvtest.WVPASSEQ(set(['wcli0']),
iw.find_interfaces_from_band_and_suffix(
'2.4', 'ALL', iw.INTERFACE_TYPE.client))
wvtest.WVPASSEQ(set(['wlan1', 'wlan1_portal']),
iw.find_interfaces_from_band_and_suffix('5', 'ALL'))
@wvtest.wvtest
def info_parsed_test():
wvtest.WVPASSEQ({
'wdev': '0x3',
'wiphy': '0',
'addr': '88:dc:96:08:60:2d',
'width': '20',
'ifindex': '20',
'type': 'managed',
'channel': '6'
}, iw.info_parsed('wcli0'))
@wvtest.wvtest
def find_width_and_channel_test():
wvtest.WVPASSEQ(('20', '6'), iw.find_width_and_channel('wcli0'))
# Now, test the 'iw link' fallback.
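  # Emptying INTERFACE_INFO_OUTPUT forces find_width_and_channel to parse the
  # 'iw link' output instead; reassigning the module-level global works
  # because fake_info looks the name up at call time.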
global INTERFACE_INFO_OUTPUT
hold = INTERFACE_INFO_OUTPUT
INTERFACE_INFO_OUTPUT = ''
wvtest.WVPASSEQ(('20', '6'), iw.find_width_and_channel('wcli0'))
INTERFACE_INFO_OUTPUT = hold
@wvtest.wvtest
def phy_bands_test():
  # phy0 claims to support 5 GHz as well, but phy1 supports only 5 GHz, so
  # phy1 supersedes phy0 for that band.
wvtest.WVPASSEQ(set(['2.4']), iw.phy_bands('phy0'))
wvtest.WVPASSEQ(set(['5']), iw.phy_bands('phy1'))
# Now remove phy1 from the 'iw phy' output and see that phy0 gets both bands.
global PHY_OUTPUT
hold = PHY_OUTPUT
PHY_OUTPUT = PHY_OUTPUT[PHY_OUTPUT.find('Wiphy phy0'):]
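  # Slicing from 'Wiphy phy0' keeps only phy0's block, since phy1's block
  # precedes it in the fixture.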
wvtest.WVPASSEQ(set(['2.4', '5']), iw.phy_bands('phy0'))
PHY_OUTPUT = hold
if __name__ == '__main__':
wvtest.wvtest_main()
| 32.360709 | 120 | 0.615446 | 2,974 | 20,096 | 4.06456 | 0.142905 | 0.017538 | 0.013402 | 0.015884 | 0.806171 | 0.795417 | 0.764146 | 0.750993 | 0.721542 | 0.705079 | 0 | 0.156855 | 0.284932 | 20,096 | 620 | 121 | 32.412903 | 0.684342 | 0.032892 | 0 | 0.704947 | 0 | 0.019435 | 0.763509 | 0 | 0 | 0 | 0.072735 | 0 | 0 | 1 | 0.022968 | false | 0.060071 | 0.003534 | 0.007067 | 0.033569 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
b7f2492f2e96160ed41e16abdbcf8f4e33d48c22 | 47 | py | Python | show_example/models.py | andsild/workshop_django | 81738b52c6b8cfa952fb85fd4fb6cb54678e6549 | [
"MIT"
] | null | null | null | show_example/models.py | andsild/workshop_django | 81738b52c6b8cfa952fb85fd4fb6cb54678e6549 | [
"MIT"
] | null | null | null | show_example/models.py | andsild/workshop_django | 81738b52c6b8cfa952fb85fd4fb6cb54678e6549 | [
"MIT"
] | null | null | null | from django.db import models
#TODO: Write me!
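# A minimal sketch of what could go here (hypothetical model; the TODO above
# is the workshop's open exercise):
class Example(models.Model):
    # purely illustrative fields
    name = models.CharField(max_length=100)
    created = models.DateTimeField(auto_now_add=True)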
| 11.75 | 28 | 0.744681 | 8 | 47 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 3 | 29 | 15.666667 | 0.897436 | 0.319149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b7fb0a24048cf4ecea9b885163d81b2baebecff4 | 5,610 | py | Python | cmds/poll.py | Toricane/Encourage-Bot | 65be63ee405b0e228738617c3ae12c395c3c6a08 | [
"MIT"
] | 4 | 2021-03-26T04:20:05.000Z | 2022-03-30T16:42:29.000Z | cmds/poll.py | Toricane/Perseverance-Bot | 65be63ee405b0e228738617c3ae12c395c3c6a08 | [
"MIT"
] | null | null | null | cmds/poll.py | Toricane/Perseverance-Bot | 65be63ee405b0e228738617c3ae12c395c3c6a08 | [
"MIT"
] | null | null | null | async def create_poll(ctx, question, choices, mention):
    """Build a numbered poll message and add one vote reaction per choice."""
    try:
        # Polls support at most ten choices (one per numbered reaction).
        content = choices.split("/")[:10]
        labels = [":one:", ":two:", ":three:", ":four:", ":five:",
                  ":six:", ":seven:", ":eight:", ":nine:", ":ten:"]
        header = f"{ctx.author.mention} asks: {question}"
        if mention is not None:
            header = f"{mention.mention} {header}"
        body = "\n".join(f"{label}: {choice}"
                         for label, choice in zip(labels, content))
        poll_message = await ctx.send(f"{header}\n{body}")
        # Add the matching numbered keycap reactions so users can vote.
        reactions = ["1️⃣", "2️⃣", "3️⃣", "4️⃣", "5️⃣",
                     "6️⃣", "7️⃣", "8️⃣", "9️⃣", "🔟"]
        for reaction in reactions[:len(content)]:
            await poll_message.add_reaction(reaction)
    except Exception as e:
print(str(e)) | 23.472803 | 68 | 0.525312 | 699 | 5,610 | 4.227468 | 0.077253 | 0.050761 | 0.121827 | 0.135364 | 0.812183 | 0.812183 | 0.803384 | 0.803384 | 0.785787 | 0.785787 | 0 | 0.035807 | 0.243316 | 5,610 | 239 | 69 | 23.472803 | 0.65583 | 0.001783 | 0 | 0.908257 | 0 | 0 | 0.589927 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.004587 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4d5d59c2548ade7d1fba10a3163e3f4c82eca88c | 55,702 | py | Python | run_subjects.py | daanvanes/pRF_attention_analysis | a1825eaa5290d08c8cafd0487427d62b1512d05c | [
"MIT"
] | null | null | null | run_subjects.py | daanvanes/pRF_attention_analysis | a1825eaa5290d08c8cafd0487427d62b1512d05c | [
"MIT"
] | null | null | null | run_subjects.py | daanvanes/pRF_attention_analysis | a1825eaa5290d08c8cafd0487427d62b1512d05c | [
"MIT"
] | 1 | 2018-11-19T15:41:34.000Z | 2018-11-19T15:41:34.000Z | # !/usr/bin/env python
# encoding: utf-8
# import python packages
from IPython import embed as shell
from datetime import date
import os, sys, datetime
import subprocess, logging
import scipy as sp
import numpy as np
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pylab as pl
import socket
# import toolbox functionality (available on github.com/xxx)
sys.path.append( os.environ['ANALYSIS_HOME'] )
from Tools.Subjects.Subject import *
from Tools.Projects.Project import *
from Tools.Sessions.Session import *
# import rest of analysis functionality
from PopulationReceptiveFieldMappingSession import *
# determine current host
if socket.gethostname() == 'aeneas':
this_raw_folder = '/home/raw_data/2015/visual/PRF_2/'
this_project_folder = '/home/shared/2015/visual/PRF_2/'
else: # if on cartesius:
this_raw_folder = '/projects/0/pqsh283/raw_data/'
this_project_folder = '/projects/0/pqsh283/PRF_2'
################################################
# parse command line input arguments
################################################
# 1. the first additional input refers to which subjects to fit, and can be:
# * 'all', meaning fit all subjects
# * multiple subjects, e.g. 'DE JS'
# * a single subject, e.g. DE
# 2. the second input argument is the number of jobs
# 3. the slice number to fit pRFs on
# 4. the condition, 'PRF' or 'Mapper', for which files to use for preprocessing
# (see the example invocations below)
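# Example invocations (hypothetical, assuming this script is run directly):
#   python run_subjects.py all 20          -> fit all subjects with 20 jobs
#   python run_subjects.py 'DE JS' 10      -> fit subjects DE and JS with 10 jobs
#   python run_subjects.py DE 20 5 PRF     -> fit slice 5 of subject DE's PRF data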
# when fitting all subjects, specify who they are:
if sys.argv[1] == 'all':
which_subjects = ['TK','DE','JW','JS','NA']
# if more than one subject is entered, split them
elif len(sys.argv[1]) > 2:
which_subjects = sys.argv[1].split(' ')
# otherwise, take the single subject's initials:
else:
which_subjects = [sys.argv[1]]
# n jobs
if np.size(sys.argv) > 2:
n_jobs = int(sys.argv[2])
else:
n_jobs = 20
# slice number and condition (requires five arguments in total, including the script name)
if np.size(sys.argv) >= 5:
    slice_no = int(sys.argv[3])
    this_condition = sys.argv[4]
else:
slice_no = None
    this_condition = 'PRF' # this determines which files are in the 'runarray'
################################################
# some analysis parameters:
################################################
# anatomical mask
anat_mask = 'bet_mask_dilated'
all_mask = 'bet_mask_dilated'
# this determines which data is fitted on:
# mcf = motion corrected file
# fnirted = nonlinearly registered (FNIRT) to the target EPI
# sgtf = Savitzky-Golay temporally filtered
# psc = percent signal change
postFix = ['mcf','fnirted','sgtf','psc']
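# For intuition: the 'sgtf' and 'psc' steps amount to subtracting a
# Savitzky-Golay estimate of the slow scanner drift and rescaling the result
# to percent signal change. A minimal sketch of the equivalent operation
# (illustrative only; the actual implementation lives in the Tools package,
# and the TR, cutoff and polyorder values below are assumptions):
def sg_highpass_psc_sketch(ts, tr=1.6, cutoff_s=120., polyorder=3):
    # ts: 1D numpy array holding a single voxel's time series
    from scipy.signal import savgol_filter
    window = int(cutoff_s / tr) // 2 * 2 + 1  # nearest odd window length, in TRs
    trend = savgol_filter(ts, window, polyorder)
    detrended = ts - trend + ts.mean()  # high-pass: drop the drift, keep the mean
    return 100. * (detrended - detrended.mean()) / detrended.mean()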
# which pRF model (OG or DoG)
model = 'OG'
# how to combine HRFs across voxels:
hrf_type = 'median'
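# ('median' presumably combines the voxel-wise HRF estimates from the Mapper
# GLM by taking their median; see the hrf_from_mapper call below)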
################################################
# determine which part of analysis is run:
################################################
# eye / behavior preprocessing
add_behavior_to_hdf5 = True
eye_analysis = False
# fMRI preprocessing
spatial_preprocessing = False
create_masks = False
temporal_preprocessing = False
regressor_preparation = False
# mapper
mapper_analyses = False
# pRF fitting
fit_single_pRF = False
combine_all_slice_niftis = False
fit_multiple_pRFs = False
combine_multiple_pRF_slice_niftis = False
convert_to_surf = False
# output data
add_data_to_hdf5 = True
### START SUBJECT LOOP
for which_subject in which_subjects:
def runWholeSession( rA, session ):
conditions = np.array([r['condition'] for r in rA])
run_idx = np.where(conditions==this_condition)[0][0]
for ri,r in enumerate(rA):
thisRun = Run( **r )
session.addRun(thisRun)
session.parcelateConditions()
session.parallelize = True
########################
#### ANALYZE BEHAVIOR
########################
if add_behavior_to_hdf5:
session.add_behavior_to_hdf5()
#######################################################################
##### RUN PREPROCESSING ON AENEAS:
#######################################################################
if spatial_preprocessing:
# SETUP FILES:
session.setupFiles(rawBase = presentSubject.initials, process_eyelink_file = False)
# WE'LL FIRST MOTION CORRECT THE EPIS TO THEMSELVES
session.motionCorrectFunctionals(use_ref_file=False)
session.create_moco_check_gifs()
## NOW, LET'S FLIRT ALL MEAN MOTION CORRECTED VOLUMES TO THE SESSIONS T2
# SO THAT WE CAN VISUALLY CHECK WHICH ONE HAS THE LEAST B0 DISTORTION
session.flirt_mean_moco_to_session_T2()
session.create_B0_distortion_check_gifs()
## WITH A TARGET EPI VOLUME SELECTED AND MARKED IN THE SUBJECT DEFINITION
# WE CAN NOW REGISTER TO THE FREESURFER T1
session.registerSession(input_type='target_meanvol_moco_epi')
## WITH A TARGET EPI VOLUME SELECTED AND MARKED IN THE SUBJECT DEFINITION
# WE CAN NOW FLIRT ALL MEAN MOTION CORRECTED EPIS TO THAT EPI
# AND CREATE VISUAL SANITY CHECKS
session.flirt_mean_moco_to_mean_target_EPI()
session.check_EPI_alignment(postFix=['mcf','meanvol','NB','flirted2targetEPI'])
            ## FOR THE FINAL TOUCH, WE'LL NOW FNIRT THE MEAN MOTION CORRECTED AND FLIRTED
# EPI TO THE TARGET MEAN MOTION CORRECTED EPI
session.fnirt_mean_moco_to_mean_target_EPI()
session.check_EPI_alignment(postFix=['mcf','meanvol','fnirted2targetEPI'])
# NOW COMBINE MOCO / FLIRT / FNIRT AND APPLY TO ALL DATA
session.applywarp_to_moco_data()
session.create_mean_vol(postFix=['mcf','fnirted'])
session.check_EPI_alignment(postFix=['mcf','fnirted','meanvol'])
if eye_analysis:
# EYE ANALYSIS
session.eye_analysis(conditions=['PRF'],
delete_hdf5=True,
import_raw_data=True,
import_all_data=True,
write_trial_timing_text_files=True,
add_to_group_level_hdf5 = True)
if create_masks:
# MASKS
session.dilate_and_move_func_bet_mask()
session.createMasksFromFreeSurferLabels(annot = False, annotFile = 'aparc.a2009s', labelFolders = ['retmap_PRF'], cortex = False)
session.create_dilated_cortical_mask(dilation_sd = 0.5, label = 'cortex')
session.create_WM_GM_CSF_masks()
if temporal_preprocessing:
# TEMPORAL FILTERING AND PERCENT SIGNAL CHANGE
for condition in ['Mapper','PRF']:
session.rescaleFunctionals(condition=condition,operations = ['sgtf'],filterFreqs={'highpass':120}, funcPostFix = ['mcf','fnirted'], mask_file = os.path.join(session.stageFolder('processed/mri/masks/anat'), 'bet_mask_dilated.nii.gz'))
session.rescaleFunctionals(condition=condition,operations = ['percentsignalchange'], funcPostFix = ['mcf','fnirted','sgtf'])
if regressor_preparation:
# REGRESSOR PREPARATION
session.retroicorFSL(conditions=['Mapper'], postFix=['mcf','fnirted','sgtf'], shim_slice=True, prepare=True, run=False)
session.dt_ddt_moco_pars(conditions=['Mapper'])
# MAPPER ANALYSIS
if mapper_analyses:
session.Mapper_GLM(mask = 'bet_mask_dilated',postFix = ['mcf','fnirted','sgtf','psc'])
session.hrf_from_mapper()
#######################################################################
##### PRF fitting
#######################################################################
if fit_single_pRF:
session.design_matrices_for_concatenated_data(n_pixel_elements_raw = 101,n_pixel_elements_convolved=31,task_conditions=['All'])
session.setup_fit_PRF_on_concatenated_data(
anat_mask_file_name = anat_mask,
all_mask_file_name = all_mask,
n_jobs = n_jobs,
postFix = postFix,
plotbool = True,
model = 'OG',
hrf_type = hrf_type,
fit_on_all_data = True,
slice_no = slice_no
)
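            # For reference, the 'OG' model fitted above is a single isotropic
            # 2D Gaussian pRF (Dumoulin & Wandell style): the predicted
            # response to a binary stimulus frame is the frame's overlap with
            # the Gaussian. A minimal sketch of the prediction step
            # (illustrative only; the real fit also convolves with the HRF and
            # optimizes x0, y0 and sigma):
            def og_prediction_sketch(design, x0, y0, sigma, extent=1.):
                # design: (timepoints, n_pix, n_pix) binary stimulus apertures
                n_pix = design.shape[-1]
                xs, ys = np.meshgrid(np.linspace(-extent, extent, n_pix),
                                     np.linspace(-extent, extent, n_pix))
                gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
                return design.reshape(design.shape[0], -1).dot(gauss.ravel())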
if combine_all_slice_niftis:
            session.combine_seperate_slice_niftis(anat_mask,postFix,'OG',task_conditions=['All'],hrf_type=hrf_type)
        if convert_to_surf:
            session.convert_to_surf(mask_file = anat_mask,postFix=postFix,model='OG',hrf_type=hrf_type,depth_min=-1.0,depth_max=2.0,depth_step=0.25,task_conditions=['Fix'],sms=[0])
            session.combine_surfaces(mask_file = anat_mask,postFix=postFix,model='OG',hrf_type=hrf_type,depth_min=-1.0,depth_max=2.0,depth_step=0.25,task_conditions=['Fix'],sms=[0])
if fit_multiple_pRFs:
            # fit a separate pRF per attention condition
            # change_type and run_num are only used for cross-validated fitting,
            # which is not included in this version of the analyses
            change_type, run_num = None, None
            task_conditions = ['Fix','Color','Speed']
            session.design_matrices_for_concatenated_data(n_pixel_elements_raw = 101,n_pixel_elements_convolved=31,
                change_type=change_type,run_num=run_num,task_conditions=task_conditions)
            r_squared_threshold = 0.0005 # threshold for which voxels to fit outside the 'All' condition
session.setup_fit_PRF_on_concatenated_data(
anat_mask_file_name = anat_mask,
all_mask_file_name =all_mask,
n_jobs = n_jobs,
postFix = postFix,
plotbool = True,
model = 'OG',
hrf_type = 'median',
fit_on_all_data = False,
r_squared_threshold = r_squared_threshold,
slice_no = slice_no,
change_type = change_type,
run_num = run_num,
)
if combine_multiple_pRF_slice_niftis:
session.combine_seperate_slice_niftis(anat_mask,postFix,'OG',task_conditions = ['Fix','Color','Speed'],hrf_type=hrf_type)
if add_data_to_hdf5:
# adding data to the subject specific and group level hdf5 files:
task_conditions = ['Fix','Color','Speed']
session.mask_stats_to_hdf(mask_file = anat_mask , postFix = postFix, task_conditions = task_conditions,model='OG',hrf_type=hrf_type,
add_regular_fit_results=True,add_mapper_data=True,add_hrf_params=True)
if __name__ == '__main__':
#########################################################################
# subject information
#########################################################################
if which_subject == 'TK':
initials = 'TK'
firstName = 'anonymous'
standardFSID = 'TK_290615'
birthdate = date(1950,01,01)
labelFolderOfPreference = ''
presentSubject = Subject( initials, firstName, birthdate, standardFSID, labelFolderOfPreference )
presentProject = Project( 'Population Receptive Field Mapping', subject = presentSubject, base_dir = os.path.join(this_project_folder, 'data'))
sessionDate = date(2015, 6, 1)
sessionID = 'PRF_' + presentSubject.initials
sj_init_data_code = 'TK_010615'
subject_session = PopulationReceptiveFieldMappingSession(sessionID, sessionDate, presentProject, presentSubject, this_project_folder=this_project_folder,targetEPIID=11 )
try:
os.mkdir(os.path.join(this_project_folder, 'data', initials))
os.mkdir(os.path.join(this_project_folder, 'data', initials, sj_init_data_code))
except OSError:
subject_session.logger.debug('output folders already exist')
subject_run_array = [
# SESSION 1 in chronological scan order
{'ID' : 1, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','TK_WIP_RetMap_2.5_1.6_20.32_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','tk_1_2015-06-01_13.16.47.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','tk_1_2015-06-01_13.16.47_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150601131622.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','TK_010615_3_to_12_NB.mat'),
'thisSessionT2ID':3,
},
{'ID' : 2, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','TK_WIP_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','tk_2_2015-06-01_13.42.40.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','tk_2_2015-06-01_13.42.40_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150601133716.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','TK_010615_3_to_12_NB.mat'),
'thisSessionT2ID':3,
},
{'ID' : 3, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','TK_WIP_T2W_RetMap_1.25_CLEAR_6_1.nii.gz' ),
'targetSessionT2anatID':12
},
{'ID' : 4, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','TK_WIP_Mapper_2.5_1.6_13.45_SENSE_7_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','tk_1_2015-06-01_14.11.33.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','tk_1_2015-06-01_14.11.33_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150601140915.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','TK_010615_3_to_12_NB.mat'),
'thisSessionT2ID':3,
},
# SESSION 2 in chronological order
{'ID' : 5, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','TK_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','tk_4_2015-06-02_13.07.35.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','tk_4_2015-06-02_13.07.35_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150602130732.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','TK_010615_7_to_12_NB.mat'),
'thisSessionT2ID':7,
},
{'ID' : 6, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','TK_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','tk_2_2015-06-02_13.30.07.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','tk_2_2015-06-02_13.30.07_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150602132834.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','TK_010615_7_to_12_NB.mat'),
'thisSessionT2ID':7,
},
{'ID' : 7, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','TK_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':12
},
{'ID' : 8, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','TK_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','tk_5_2015-06-02_13.51.21.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','tk_5_2015-06-02_13.51.21_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150602135006.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','TK_010615_7_to_12_NB.mat'),
'thisSessionT2ID':7,
},
# SESSION 3 in chronological order
{'ID' : 9, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','TK_3_WIP_RetMap_2.5_1.6_20.32_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','tk_6_2015-06-03_12.30.05.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','tk_6_2015-06-03_12.30.05_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603123010.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','12','TK_010615_12_to_12_NB.mat'),
'thisSessionT2ID':12,
},
{'ID' : 10, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','TK_3_WIP_Mapper_2.5_1.6_13.45_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','tk_3_2015-06-03_12.52.14.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','tk_3_2015-06-03_12.52.14_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603125106.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','12','TK_010615_12_to_12_NB.mat'),
'thisSessionT2ID':12,
},
{'ID' : 11, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','TK_3_WIP_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','tk_7_2015-06-03_13.07.58.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','tk_7_2015-06-03_13.07.58_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603130654.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','12','TK_010615_12_to_12_NB.mat'),
'thisSessionT2ID':12,
},
{'ID' : 12, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','TK_3_WIP_T2W_RetMap_1.25_CLEAR_6_1.nii.gz' ),
'targetSessionT2anatID':12
},
]
runWholeSession(subject_run_array, subject_session)
elif which_subject == 'DE':
            # subject DE
#########################################################################
# subject information
initials = 'DE'
firstName = 'anonymous'
standardFSID = 'DE_110412'
birthdate = date(1950,01,01)
labelFolderOfPreference = ''
presentSubject = Subject( initials, firstName, birthdate, standardFSID, labelFolderOfPreference )
presentProject = Project( 'Population Receptive Field Mapping', subject = presentSubject, base_dir = os.path.join(this_project_folder, 'data'))
sessionDate = date(2015, 6, 1)
sessionID = 'PRF_' + presentSubject.initials
sj_init_data_code = 'DE_010615'
subject_session = PopulationReceptiveFieldMappingSession(sessionID, sessionDate, presentProject, presentSubject, this_project_folder=this_project_folder,targetEPIID=9)
try:
os.mkdir(os.path.join(this_project_folder, 'data', initials))
os.mkdir(os.path.join(this_project_folder, 'data', initials, sj_init_data_code))
except OSError:
subject_session.logger.debug('output folders already exist')
subject_run_array = [
# SESSION 1 in chronological order
{'ID' : 1, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','DE_WIP_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','de_1_2015-06-01_15.10.36.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','de_1_2015-06-01_15.10.36_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150601150932.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','4','DE_010615_4_to_11_NB.mat'),
'thisSessionT2ID':4,
'mocoT2anatIDtarget':11,
},
{'ID' : 2, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','DE_WIP_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','de_1_2015-06-01_15.33.48.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','de_1_2015-06-01_15.33.48_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150601153103.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','4','DE_010615_4_to_11_NB.mat'),
'thisSessionT2ID':4,
'mocoT2anatIDtarget':11,
},
{'ID' : 3, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','DE_WIP_RetMap_2.5_1.6_20.32_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','de_2_2015-06-01_15.51.06.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','de_2_2015-06-01_15.51.06_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150601154841.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','4','DE_010615_4_to_11_NB.mat'),
'thisSessionT2ID':4,
'mocoT2anatIDtarget':11,
},
{'ID' : 4, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','DE_WIP_T2W_RetMap_1.25_CLEAR_5_1.nii.gz' ),
'targetSessionT2anatID':11,
},
# SESSION 2 in chronological order
{'ID' : 5, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','DE_2_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','de_3_2015-06-02_14.29.29.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','de_3_2015-06-02_14.29.29_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150602142744.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','DE_010615_7_to_11_NB.mat'),
'thisSessionT2ID':7,
'mocoT2anatIDtarget':11,
},
{'ID' : 6, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','DE_2_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','de_2_2015-06-02_14.51.32.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','de_2_2015-06-02_14.51.32_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150602145005.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','DE_010615_7_to_11_NB.mat'),
'thisSessionT2ID':7,
'mocoT2anatIDtarget':11,
},
{'ID' : 7, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','DE_2_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':11
},
{'ID' : 8, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','DE_2_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','de_4_2015-06-02_15.13.12.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','de_4_2015-06-02_15.13.12_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150602151054.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','DE_010615_7_to_11_NB.mat'),
'thisSessionT2ID':7,
'mocoT2anatIDtarget':11,
},
            # SESSION 3 in chronological order
{'ID' : 9, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','DE_3_WIP_RetMap_2.5_1.6_20.32_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','de_5_2015-06-03_14.16.01.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','de_5_2015-06-03_14.16.01_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603142105.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','DE_010615_11_to_11_NB.mat'),
'thisSessionT2ID':11,
'mocoT2anatIDtarget':11,
},
{'ID' : 10, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','DE_3_WIP_Mapper_2.5_1.6_13.45_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','de_3_2015-06-03_14.43.06.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','de_3_2015-06-03_14.43.06_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603144203.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','DE_010615_11_to_11_NB.mat'),
'mocoT2anatIDtarget':11,
'thisSessionT2ID':11,
},
{'ID' : 11, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','DE_3_WIP_T2W_RetMap_1.25_CLEAR_5_1.nii.gz' ),
'targetSessionT2anatID':11
},
{'ID' : 12, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','DE_3_WIP_RetMap_2.5_1.6_20.32_SENSE_6_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','de_6_2015-06-03_15.05.52.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','de_6_2015-06-03_15.05.52_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603150409.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','DE_010615_11_to_11_NB.mat'),
'thisSessionT2ID':11,
'mocoT2anatIDtarget':11,
},
{'ID' : 13, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','DE_3_WIP_RetMap_2.5_1.6_20.32_SENSE_7_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','de_7_2015-06-03_15.28.00.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','de_7_2015-06-03_15.28.00_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150603152604.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','DE_010615_11_to_11_NB.mat'),
'thisSessionT2ID':11,
'mocoT2anatIDtarget':11,
},
]
runWholeSession(subject_run_array, subject_session)
elif which_subject == 'JW':
            # subject JW
#########################################################################
# subject information
initials = 'JW'
firstName = 'anonymous'
standardFSID = 'JW_310312'
birthdate = date(1950,01,01)
labelFolderOfPreference = ''
presentSubject = Subject( initials, firstName, birthdate, standardFSID, labelFolderOfPreference )
presentProject = Project( 'Population Receptive Field Mapping', subject = presentSubject, base_dir = os.path.join(this_project_folder, 'data'))
sessionDate = date(2015, 6, 2)
sessionID = 'PRF_' + presentSubject.initials
sj_init_data_code = 'JW_020615'
subject_session = PopulationReceptiveFieldMappingSession(sessionID, sessionDate, presentProject, presentSubject, this_project_folder=this_project_folder,targetEPIID=4 )
try:
os.mkdir(os.path.join(this_project_folder, 'data', initials))
os.mkdir(os.path.join(this_project_folder, 'data', initials, sj_init_data_code))
except OSError:
subject_session.logger.debug('output folders already exist')
subject_run_array = [
            # SESSION 1 in chronological order
{'ID' : 1, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JW_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','jw_1_2015-06-02_15.44.19.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','jw_1_2015-06-02_15.44.19_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150602154242.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','5','JW_020615_5_to_9_NB.mat'),
'thisSessionT2ID':5,
},
            # NO BEHAVIOR FILE WRITTEN DURING THE SCAN; try to retrieve trial info from the edf instead.
{'ID' : 2, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JW_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','jw_1_2015-06-02_16.05.36.edf' ),
# 'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150602160434.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','5','JW_020615_5_to_9_NB.mat'),
'thisSessionT2ID':5,
},
{'ID' : 3, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JW_RetMap_2.5_1.6_20.32_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','jw_2_2015-06-02_16.22.27.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','jw_2_2015-06-02_16.22.27_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150602161945.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','5','JW_020615_5_to_9_NB.mat'),
'thisSessionT2ID':5,
},
            # scan ID 5 was a failed T2 acquisition, because JW moved
{'ID' : 4, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JW_RetMap_2.5_1.6_20.32_SENSE_6_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','jw_3_2015-06-02_16.49.10.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','jw_3_2015-06-02_16.49.10_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150602164849.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','5','JW_020615_5_to_9_NB.mat'),
'thisSessionT2ID':5,
},
{'ID' : 5, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JW_T2W_RetMap_1.25_CLEAR_7_1.nii.gz' ),
'targetSessionT2anatID':9
},
# SESSION 2 in chronological order
{'ID' : 6, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JW_2_WIP_RetMap_2.5_1.6_20.32_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','jw_4_2015-06-03_16.20.22.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','jw_4_2015-06-03_16.20.22_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150603161920.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','9','JW_020615_9_to_9_NB.mat'),
'thisSessionT2ID':9,
},
{'ID' : 7, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JW_2_WIP_Mapper_2.5_1.6_13.45_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','jw_2_2015-06-03_16.41.35.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','jw_2_2015-06-03_16.41.35_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150603164022.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','9','JW_020615_9_to_9_NB.mat'),
'thisSessionT2ID':9,
},
{'ID' : 8, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JW_2_WIP_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','jw_5_2015-06-03_16.57.43.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','jw_5_2015-06-03_16.57.43_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150603165538.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','9','JW_020615_9_to_9_NB.mat'),
'thisSessionT2ID':9,
},
{'ID' : 9, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JW_2_WIP_T2W_RetMap_1.25_CLEAR_6_1.nii.gz' ),
'targetSessionT2anatID':9
},
{'ID' : 10, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JW_2_WIP_RetMap_2.5_1.6_20.32_SENSE_7_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','jw_6_2015-06-03_17.19.59.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','jw_6_2015-06-03_17.19.59_outputDict.pickle' ),
            # the physlog file again starts later than the edf; the experiment was probably started during the T2 scan
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150603172438.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','9','JW_020615_9_to_9_NB.mat'),
'thisSessionT2ID':9,
},
]
runWholeSession(subject_run_array, subject_session)
elif which_subject == 'NA':
            # subject NA
#########################################################################
# subject information
initials = 'NA'
firstName = 'anonymous'
standardFSID = 'NA_220813_12'
birthdate = date(1950,01,01)
labelFolderOfPreference = ''
presentSubject = Subject( initials, firstName, birthdate, standardFSID, labelFolderOfPreference )
presentProject = Project( 'Population Receptive Field Mapping', subject = presentSubject, base_dir = os.path.join(this_project_folder, 'data'))
sessionDate = date(2015, 6, 23)
sessionID = 'PRF_' + presentSubject.initials
sj_init_data_code = 'NA_230615'
subject_session = PopulationReceptiveFieldMappingSession(sessionID, sessionDate, presentProject, presentSubject, this_project_folder=this_project_folder,targetEPIID=12 )
try:
os.mkdir(os.path.join(this_project_folder, 'data', initials))
os.mkdir(os.path.join(this_project_folder, 'data', initials, sj_init_data_code))
except OSError:
subject_session.logger.debug('output folders already exist')
subject_run_array = [
# SESSION 1 in chronological order
{'ID' : 1, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','NA_RetMap_2.5_1.6_20.32_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','na_1_2015-06-22_15.09.36.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','na_1_2015-06-22_15.09.36_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150622150912.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','NA_230615_3_to_11_NB.mat'),
'thisSessionT2ID':3,
},
{'ID' : 2, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','NA_Mapper_2.5_1.6_13.45_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','na_1_2015-06-22_15.31.24.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','na_1_2015-06-22_15.31.24_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150622153050.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','NA_230615_3_to_11_NB.mat'),
'thisSessionT2ID':3,
},
{'ID' : 3, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','NA_T2W_RetMap_1.25_CLEAR_6_1.nii.gz' ),
'targetSessionT2anatID':11
},
{'ID' : 4, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','NA_RetMap_2.5_1.6_20.32_SENSE_7_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','na_2_2015-06-22_15.52.07.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','na_2_2015-06-22_15.52.07_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150622155140.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','NA_230615_3_to_11_NB.mat'),
'thisSessionT2ID':3,
},
# SESSION 2 in chronological order
{'ID' : 5, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','NA_WIP_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','na_3_2015-06-23_14.05.04.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','na_3_2015-06-23_14.05.04_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150623140509.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','NA_230615_7_to_11_NB.mat'),
'thisSessionT2ID':7,
},
{'ID' : 6, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','NA_WIP_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','na_2_2015-06-23_14.27.52.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','na_2_2015-06-23_14.27.52_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150623142751.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','NA_230615_7_to_11_NB.mat'),
'thisSessionT2ID':7,
},
{'ID' : 7, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','NA_WIP_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':11
},
{'ID' : 8, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','NA_WIP_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','na_4_2015-06-23_14.48.42.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','na_4_2015-06-23_14.48.42_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150623144752.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','NA_230615_7_to_11_NB.mat'),
'thisSessionT2ID':7,
},
# SESSION 3 in chronological order
{'ID' : 9, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','NA_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','na_5_2015-06-25_12.13.06.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','na_5_2015-06-25_12.13.06_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150625121250.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','NA_230615_11_to_11_NB.mat'),
'thisSessionT2ID':11,
},
{'ID' : 10, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','NA_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','na_3_2015-06-25_12.35.15.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','na_3_2015-06-25_12.35.15_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150625123349.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','NA_230615_11_to_11_NB.mat'),
'thisSessionT2ID':11,
},
{'ID' : 11, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','NA_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':11
},
{'ID' : 12, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','NA_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','na_6_2015-06-25_12.55.46.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','na_6_2015-06-25_12.55.46_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150625125444.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','NA_230615_11_to_11_NB.mat'),
'thisSessionT2ID':11,
},
]
runWholeSession(subject_run_array, subject_session)
elif which_subject == 'JS':
            # subject JS
#########################################################################
# subject information
initials = 'JS'
firstName = 'anonymous'
standardFSID = 'JVS_091014'
birthdate = date(1950,01,01)
labelFolderOfPreference = ''
presentSubject = Subject( initials, firstName, birthdate, standardFSID, labelFolderOfPreference )
presentProject = Project( 'Population Receptive Field Mapping', subject = presentSubject, base_dir = os.path.join(this_project_folder, 'data'))
sessionDate = date(2015, 6, 23)
sessionID = 'PRF_' + presentSubject.initials
sj_init_data_code = 'JS_230615'
subject_session = PopulationReceptiveFieldMappingSession(sessionID, sessionDate, presentProject, presentSubject, this_project_folder=this_project_folder,targetEPIID=16 )
try:
os.mkdir(os.path.join(this_project_folder, 'data', initials))
os.mkdir(os.path.join(this_project_folder, 'data', initials, sj_init_data_code))
except OSError:
subject_session.logger.debug('output folders already exist')
subject_run_array = [
# SESSION 1 in chronological order
{'ID' : 1, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JS_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','js_1_2015-06-22_16.33.23.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','js_1_2015-06-22_16.33.23_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150622163455.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','JS_230615_3_to_15_NB.mat'),
'thisSessionT2ID':3,
},
{'ID' : 2, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JS_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','js_1_2015-06-22_16.57.02.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','js_1_2015-06-22_16.57.02_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150622165555.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','JS_230615_3_to_15_NB.mat'),
'thisSessionT2ID':3,
},
{'ID' : 3, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JS_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':15
},
{'ID' : 4, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw','mri','JS_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw','edf','js_2_2015-06-22_17.18.07.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw','behavior','js_2_2015-06-22_17.18.07_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw','hr','SCANPHYSLOG20150622171703.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','3','JS_230615_3_to_15_NB.mat'),
'thisSessionT2ID':3,
},
# SESSION 2 in chronological order
{'ID' : 5, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JS_WIP_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','js_3_2015-06-23_12.27.39.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','js_3_2015-06-23_12.27.39_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150623122720.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','JS_230615_7_to_15_NB.mat'),
'thisSessionT2ID':7,
},
{'ID' : 6, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JS_WIP_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','js_2_2015-06-23_12.51.27.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','js_2_2015-06-23_12.51.27_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150623125055.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','JS_230615_7_to_15_NB.mat'),
'thisSessionT2ID':7,
},
{'ID' : 7, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JS_WIP_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':15
},
{'ID' : 8, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_2','mri','JS_WIP_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_2','edf','js_4_2015-06-23_13.11.30.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_2','behavior','js_4_2015-06-23_13.11.30_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_2','hr','SCANPHYSLOG20150623131129.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','7','JS_230615_7_to_15_NB.mat'),
'thisSessionT2ID':7,
},
# SESSION 3 in chronological order
{'ID' : 9, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','JS_RetMap_2.5_1.6_20.32_SENSE_2_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','js_5_2015-06-25_16.07.49.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','js_5_2015-06-25_16.07.49_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150625160719.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','JS_230615_11_to_15_NB.mat'),
'thisSessionT2ID':11,
},
{'ID' : 10, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','JS_Mapper_2.5_1.6_13.45_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','js_3_2015-06-25_16.29.36.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','js_3_2015-06-25_16.29.36_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150625162835.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','JS_230615_11_to_15_NB.mat'),
'thisSessionT2ID':11,
},
{'ID' : 11, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','JS_T2W_RetMap_1.25_CLEAR_4_1.nii.gz' ),
'targetSessionT2anatID':15
},
{'ID' : 12, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_3','mri','JS_RetMap_2.5_1.6_20.32_SENSE_5_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_3','edf','js_6_2015-06-25_16.52.02.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_3','behavior','js_6_2015-06-25_16.52.02_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_3','hr','SCANPHYSLOG20150625164936.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','11','JS_230615_11_to_15_NB.mat'),
'thisSessionT2ID':11,
},
# SESSION 4 in chronological order
{'ID' : 13, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_4','mri','JS_RetMap_2.5_1.6_20.32_SENSE_3_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_4','edf','js_7_2015-07-02_18.00.51.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_4','behavior','js_7_2015-07-02_18.00.51_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_4','hr','SCANPHYSLOG20150702180023.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','15','JS_230615_15_to_15_NB.mat'),
'thisSessionT2ID':15,
},
{'ID' : 14, 'scanType': 'epi_bold', 'condition': 'Mapper',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_4','mri','JS_Mapper_2.5_1.6_13.45_SENSE_4_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_4','edf','js_4_2015-07-02_18.22.54.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_4','behavior','js_4_2015-07-02_18.22.54_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_4','hr','SCANPHYSLOG20150702182153.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','15','JS_230615_15_to_15_NB.mat'),
'thisSessionT2ID':15,
},
{'ID' : 15, 'scanType': 'inplane_anat', 'condition': 'T2_anat',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_4','mri','JS_T2W_RetMap_1.25_CLEAR_5_1.nii.gz' ),
'targetSessionT2anatID':15
},
{'ID' : 16, 'scanType': 'epi_bold', 'condition': 'PRF',
'rawDataFilePath': os.path.join(this_raw_folder, initials, 'raw_4','mri','JS_RetMap_2.5_1.6_20.32_SENSE_6_1.nii.gz' ),
'eyeLinkFilePath': os.path.join(this_raw_folder, initials, 'raw_4','edf','js_8_2015-07-02_18.44.11.edf' ),
'rawBehaviorFile': os.path.join(this_raw_folder, initials, 'raw_4','behavior','js_8_2015-07-02_18.44.11_outputDict.pickle' ),
'physiologyFile': os.path.join(this_raw_folder, initials, 'raw_4','hr','SCANPHYSLOG20150702184345.log' ),
'transformationMatrixFile': os.path.join(this_project_folder, 'data', initials, sj_init_data_code,'processed','mri','T2_anat','15','JS_230615_15_to_15_NB.mat'),
'thisSessionT2ID':15,
},
]
runWholeSession(subject_run_array, subject_session)
| 61.413451 | 237 | 0.695128 | 7,796 | 55,702 | 4.65726 | 0.069523 | 0.044784 | 0.074639 | 0.104109 | 0.827669 | 0.813733 | 0.803459 | 0.801036 | 0.765864 | 0.762174 | 0 | 0.080373 | 0.126854 | 55,702 | 906 | 238 | 61.481236 | 0.666153 | 0.058005 | 0 | 0.370529 | 0 | 0 | 0.389069 | 0.19738 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.001431 | 0.021459 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4d5f2d2c71d0d2a0b5f3dbd59c24a886e101c7e9 | 2,504 | py | Python | good_spot/places/migrations/0068_auto_20180330_1229.py | jasmine92122/NightClubBackend | 7f59129b78baaba0e0c25de2b493033b858f1b00 | [
"MIT"
] | null | null | null | good_spot/places/migrations/0068_auto_20180330_1229.py | jasmine92122/NightClubBackend | 7f59129b78baaba0e0c25de2b493033b858f1b00 | [
"MIT"
] | 5 | 2020-02-12T03:13:11.000Z | 2022-01-13T01:41:14.000Z | good_spot/places/migrations/0068_auto_20180330_1229.py | jasmine92122/NightClubBackend | 7f59129b78baaba0e0c25de2b493033b858f1b00 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-03-30 12:29
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('places', '0067_auto_20180330_1220'),
]
operations = [
migrations.AlterField(
model_name='city',
name='country_name',
field=models.CharField(blank=True, help_text='Will be autopopulated.', max_length=200),
),
migrations.AlterField(
model_name='city',
name='country_name_en',
field=models.CharField(blank=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='country_name_fr',
field=models.CharField(blank=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='country_name_ru',
field=models.CharField(blank=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='country_name_uk',
field=models.CharField(blank=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='name',
field=models.CharField(blank=True, db_index=True, help_text='Will be autopopulated.', max_length=200),
),
migrations.AlterField(
model_name='city',
name='name_en',
field=models.CharField(blank=True, db_index=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='name_fr',
field=models.CharField(blank=True, db_index=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='name_ru',
field=models.CharField(blank=True, db_index=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
migrations.AlterField(
model_name='city',
name='name_uk',
field=models.CharField(blank=True, db_index=True, help_text='Will be autopopulated.', max_length=200, null=True),
),
]
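# Note: migration files like this one are auto-generated rather than
# hand-written; assuming a standard Django project layout, the usual workflow is
#   python manage.py makemigrations places
# followed by
#   python manage.py migrate places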
| 37.939394 | 125 | 0.601837 | 281 | 2,504 | 5.163701 | 0.19573 | 0.137836 | 0.172295 | 0.199862 | 0.868367 | 0.868367 | 0.868367 | 0.854583 | 0.813921 | 0.813921 | 0 | 0.034653 | 0.273962 | 2,504 | 65 | 126 | 38.523077 | 0.763476 | 0.027157 | 0 | 0.655172 | 1 | 0 | 0.161529 | 0.009453 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.086207 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4d67e32471d8bcdfce30a95878e2f83a0397a42c | 5,796 | py | Python | tests/test_fembv_weights.py | CourtneyQuinn/FEM-BV-VAR_dynamics | be80f67f3b49fd6510d739aeeb4e6415a0a5c15a | [
"MIT"
] | 1 | 2020-09-18T12:58:30.000Z | 2020-09-18T12:58:30.000Z | tests/test_fembv_weights.py | CourtneyQuinn/FEM-BV-VAR_dynamics | be80f67f3b49fd6510d739aeeb4e6415a0a5c15a | [
"MIT"
] | null | null | null | tests/test_fembv_weights.py | CourtneyQuinn/FEM-BV-VAR_dynamics | be80f67f3b49fd6510d739aeeb4e6415a0a5c15a | [
"MIT"
] | null | null | null | """
Provides test routines for FEM-BV weights solver.
"""
# License: MIT
import numpy as np
from sklearn.utils import check_random_state
from clustering_dynamics.models.fembv_weights import FEMBVWeights
from clustering_dynamics.utils.stochastic_matrix import right_stochastic_matrix
def test_weights_equality_constraints_with_no_max_tv_norm():
"""Test equality constraints match expected form."""
random_seed = 0
random_state = check_random_state(random_seed)
n_samples = 4
n_components = 3
max_tv_norm = None
initial_weights = right_stochastic_matrix((n_samples, n_components),
random_state=random_state)
weights_solver = FEMBVWeights(initial_weights, max_tv_norm=max_tv_norm)
a_eq = np.array(weights_solver._a_eq.todense()) # pylint: disable=protected-access
b_eq = np.array(weights_solver._b_eq) # pylint: disable=protected-access
expected_a_eq = np.array(
[[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
expected_b_eq = np.array([[1.0], [1.0], [1.0], [1.0]])
assert np.allclose(a_eq, expected_a_eq)
assert np.allclose(b_eq, expected_b_eq)
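# Reading expected_a_eq above: with the weights flattened row-major into a
# length n_samples * n_components vector, row t selects exactly the
# n_components entries belonging to sample t, i.e. each row encodes the
# row-stochastic constraint sum_k gamma[t, k] = 1 with right-hand side 1.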
def test_weights_upper_bound_constraints_with_no_max_tv_norm():
"""Test inequality constraints match expected form."""
random_seed = 0
random_state = check_random_state(random_seed)
n_samples = 3
n_components = 3
max_tv_norm = None
initial_weights = right_stochastic_matrix((n_samples, n_components),
random_state=random_state)
weights_solver = FEMBVWeights(initial_weights, max_tv_norm=max_tv_norm)
a_ub = np.array(weights_solver._a_ub.todense()) # pylint: disable=protected-access
b_ub = np.array(weights_solver._b_ub) # pylint: disable=protected-access
expected_a_ub = -np.eye(n_components * n_samples)
expected_b_ub = np.zeros((n_components * n_samples,))
assert np.allclose(a_ub, expected_a_ub)
assert np.allclose(b_ub, expected_b_ub)
def test_weights_equality_constraints_with_max_tv_norm():
"""Test equality constraints match expected form."""
random_seed = 0
random_state = check_random_state(random_seed)
n_samples = 4
n_components = 3
max_tv_norm = 5
initial_weights = right_stochastic_matrix((n_samples, n_components),
random_state=random_state)
weights_solver = FEMBVWeights(initial_weights, max_tv_norm=max_tv_norm)
a_eq = np.array(weights_solver._a_eq.todense()) # pylint: disable=protected-access
b_eq = np.array(weights_solver._b_eq) # pylint: disable=protected-access
n_parameters = n_components * (2 * n_samples - 1)
expected_a_eq = np.zeros((n_samples, n_parameters), dtype='f8')
expected_a_eq[0, 0:3] = 1.0
expected_a_eq[1, 3:6] = 1.0
expected_a_eq[2, 6:9] = 1.0
expected_a_eq[3, 9:12] = 1.0
expected_b_eq = np.array([[1.0], [1.0], [1.0], [1.0]])
assert np.allclose(a_eq, expected_a_eq)
assert np.allclose(b_eq, expected_b_eq)
def test_weights_upper_bound_constraints_with_max_tv_norm():
"""Test inequality constraints match expected form."""
random_seed = 0
random_state = check_random_state(random_seed)
n_samples = 3
n_components = 2
max_tv_norm = 5
initial_weights = right_stochastic_matrix((n_samples, n_components),
random_state=random_state)
weights_solver = FEMBVWeights(initial_weights, max_tv_norm=max_tv_norm)
a_ub = np.array(weights_solver._a_ub.todense()) # pylint: disable=protected-access
b_ub = np.array(weights_solver._b_ub) # pylint: disable=protected-access
n_parameters = n_components * (2 * n_samples - 1)
n_constraints = (n_parameters + 2 * n_components * (n_samples - 1) +
n_components)
assert a_ub.shape == (n_constraints, n_parameters)
assert b_ub.shape == (n_constraints,)
expected_a_ub = np.zeros((n_constraints, n_parameters))
expected_a_ub[:n_parameters, :n_parameters] = -np.eye(n_parameters)
expected_a_ub = np.array(
[[-1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0],
[-1.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0],
[0.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0],
[0.0, 0.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0, 0.0],
[0.0, 0.0, 0.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0],
[1.0, 0.0, -1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0, -1.0, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0, -1.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0]])
expected_b_ub = np.zeros((n_constraints,), dtype='f8')
expected_b_ub[-n_components:] = max_tv_norm
assert np.allclose(a_ub, expected_a_ub)
assert np.allclose(b_ub, expected_b_ub)
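# Reading the matrix: the first n_parameters rows are plain nonnegativity
# bounds, the next 2 * n_components * (n_samples - 1) rows linearize the total
# variation |gamma[t+1, k] - gamma[t, k]| <= eta[t, k] as two one-sided
# inequalities against auxiliary variables, and the final n_components rows
# cap sum_t eta[t, k] at max_tv_norm -- the only nonzero entries in
# expected_b_ub.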
| 37.636364 | 86 | 0.602484 | 1,115 | 5,796 | 2.912108 | 0.067265 | 0.244533 | 0.330767 | 0.39421 | 0.832461 | 0.797043 | 0.761626 | 0.755775 | 0.755775 | 0.753003 | 0 | 0.123856 | 0.226881 | 5,796 | 153 | 87 | 37.882353 | 0.600759 | 0.089545 | 0 | 0.504951 | 0 | 0 | 0.000763 | 0 | 0 | 0 | 0 | 0 | 0.09901 | 1 | 0.039604 | false | 0 | 0.039604 | 0 | 0.079208 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
4d69109773df47b936b1c66296bf2b8ad9151395 | 6,269 | py | Python | loldib/getratings/models/NA/na_ryze/na_ryze_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_ryze/na_ryze_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_ryze/na_ryze_top.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Ryze_Top_Aatrox(Ratings):
pass
class NA_Ryze_Top_Ahri(Ratings):
pass
class NA_Ryze_Top_Akali(Ratings):
pass
class NA_Ryze_Top_Alistar(Ratings):
pass
class NA_Ryze_Top_Amumu(Ratings):
pass
class NA_Ryze_Top_Anivia(Ratings):
pass
class NA_Ryze_Top_Annie(Ratings):
pass
class NA_Ryze_Top_Ashe(Ratings):
pass
class NA_Ryze_Top_AurelionSol(Ratings):
pass
class NA_Ryze_Top_Azir(Ratings):
pass
class NA_Ryze_Top_Bard(Ratings):
pass
class NA_Ryze_Top_Blitzcrank(Ratings):
pass
class NA_Ryze_Top_Brand(Ratings):
pass
class NA_Ryze_Top_Braum(Ratings):
pass
class NA_Ryze_Top_Caitlyn(Ratings):
pass
class NA_Ryze_Top_Camille(Ratings):
pass
class NA_Ryze_Top_Cassiopeia(Ratings):
pass
class NA_Ryze_Top_Chogath(Ratings):
pass
class NA_Ryze_Top_Corki(Ratings):
pass
class NA_Ryze_Top_Darius(Ratings):
pass
class NA_Ryze_Top_Diana(Ratings):
pass
class NA_Ryze_Top_Draven(Ratings):
pass
class NA_Ryze_Top_DrMundo(Ratings):
pass
class NA_Ryze_Top_Ekko(Ratings):
pass
class NA_Ryze_Top_Elise(Ratings):
pass
class NA_Ryze_Top_Evelynn(Ratings):
pass
class NA_Ryze_Top_Ezreal(Ratings):
pass
class NA_Ryze_Top_Fiddlesticks(Ratings):
pass
class NA_Ryze_Top_Fiora(Ratings):
pass
class NA_Ryze_Top_Fizz(Ratings):
pass
class NA_Ryze_Top_Galio(Ratings):
pass
class NA_Ryze_Top_Gangplank(Ratings):
pass
class NA_Ryze_Top_Garen(Ratings):
pass
class NA_Ryze_Top_Gnar(Ratings):
pass
class NA_Ryze_Top_Gragas(Ratings):
pass
class NA_Ryze_Top_Graves(Ratings):
pass
class NA_Ryze_Top_Hecarim(Ratings):
pass
class NA_Ryze_Top_Heimerdinger(Ratings):
pass
class NA_Ryze_Top_Illaoi(Ratings):
pass
class NA_Ryze_Top_Irelia(Ratings):
pass
class NA_Ryze_Top_Ivern(Ratings):
pass
class NA_Ryze_Top_Janna(Ratings):
pass
class NA_Ryze_Top_JarvanIV(Ratings):
pass
class NA_Ryze_Top_Jax(Ratings):
pass
class NA_Ryze_Top_Jayce(Ratings):
pass
class NA_Ryze_Top_Jhin(Ratings):
pass
class NA_Ryze_Top_Jinx(Ratings):
pass
class NA_Ryze_Top_Kalista(Ratings):
pass
class NA_Ryze_Top_Karma(Ratings):
pass
class NA_Ryze_Top_Karthus(Ratings):
pass
class NA_Ryze_Top_Kassadin(Ratings):
pass
class NA_Ryze_Top_Katarina(Ratings):
pass
class NA_Ryze_Top_Kayle(Ratings):
pass
class NA_Ryze_Top_Kayn(Ratings):
pass
class NA_Ryze_Top_Kennen(Ratings):
pass
class NA_Ryze_Top_Khazix(Ratings):
pass
class NA_Ryze_Top_Kindred(Ratings):
pass
class NA_Ryze_Top_Kled(Ratings):
pass
class NA_Ryze_Top_KogMaw(Ratings):
pass
class NA_Ryze_Top_Leblanc(Ratings):
pass
class NA_Ryze_Top_LeeSin(Ratings):
pass
class NA_Ryze_Top_Leona(Ratings):
pass
class NA_Ryze_Top_Lissandra(Ratings):
pass
class NA_Ryze_Top_Lucian(Ratings):
pass
class NA_Ryze_Top_Lulu(Ratings):
pass
class NA_Ryze_Top_Lux(Ratings):
pass
class NA_Ryze_Top_Malphite(Ratings):
pass
class NA_Ryze_Top_Malzahar(Ratings):
pass
class NA_Ryze_Top_Maokai(Ratings):
pass
class NA_Ryze_Top_MasterYi(Ratings):
pass
class NA_Ryze_Top_MissFortune(Ratings):
pass
class NA_Ryze_Top_MonkeyKing(Ratings):
pass
class NA_Ryze_Top_Mordekaiser(Ratings):
pass
class NA_Ryze_Top_Morgana(Ratings):
pass
class NA_Ryze_Top_Nami(Ratings):
pass
class NA_Ryze_Top_Nasus(Ratings):
pass
class NA_Ryze_Top_Nautilus(Ratings):
pass
class NA_Ryze_Top_Nidalee(Ratings):
pass
class NA_Ryze_Top_Nocturne(Ratings):
pass
class NA_Ryze_Top_Nunu(Ratings):
pass
class NA_Ryze_Top_Olaf(Ratings):
pass
class NA_Ryze_Top_Orianna(Ratings):
pass
class NA_Ryze_Top_Ornn(Ratings):
pass
class NA_Ryze_Top_Pantheon(Ratings):
pass
class NA_Ryze_Top_Poppy(Ratings):
pass
class NA_Ryze_Top_Quinn(Ratings):
pass
class NA_Ryze_Top_Rakan(Ratings):
pass
class NA_Ryze_Top_Rammus(Ratings):
pass
class NA_Ryze_Top_RekSai(Ratings):
pass
class NA_Ryze_Top_Renekton(Ratings):
pass
class NA_Ryze_Top_Rengar(Ratings):
pass
class NA_Ryze_Top_Riven(Ratings):
pass
class NA_Ryze_Top_Rumble(Ratings):
pass
class NA_Ryze_Top_Ryze(Ratings):
pass
class NA_Ryze_Top_Sejuani(Ratings):
pass
class NA_Ryze_Top_Shaco(Ratings):
pass
class NA_Ryze_Top_Shen(Ratings):
pass
class NA_Ryze_Top_Shyvana(Ratings):
pass
class NA_Ryze_Top_Singed(Ratings):
pass
class NA_Ryze_Top_Sion(Ratings):
pass
class NA_Ryze_Top_Sivir(Ratings):
pass
class NA_Ryze_Top_Skarner(Ratings):
pass
class NA_Ryze_Top_Sona(Ratings):
pass
class NA_Ryze_Top_Soraka(Ratings):
pass
class NA_Ryze_Top_Swain(Ratings):
pass
class NA_Ryze_Top_Syndra(Ratings):
pass
class NA_Ryze_Top_TahmKench(Ratings):
pass
class NA_Ryze_Top_Taliyah(Ratings):
pass
class NA_Ryze_Top_Talon(Ratings):
pass
class NA_Ryze_Top_Taric(Ratings):
pass
class NA_Ryze_Top_Teemo(Ratings):
pass
class NA_Ryze_Top_Thresh(Ratings):
pass
class NA_Ryze_Top_Tristana(Ratings):
pass
class NA_Ryze_Top_Trundle(Ratings):
pass
class NA_Ryze_Top_Tryndamere(Ratings):
pass
class NA_Ryze_Top_TwistedFate(Ratings):
pass
class NA_Ryze_Top_Twitch(Ratings):
pass
class NA_Ryze_Top_Udyr(Ratings):
pass
class NA_Ryze_Top_Urgot(Ratings):
pass
class NA_Ryze_Top_Varus(Ratings):
pass
class NA_Ryze_Top_Vayne(Ratings):
pass
class NA_Ryze_Top_Veigar(Ratings):
pass
class NA_Ryze_Top_Velkoz(Ratings):
pass
class NA_Ryze_Top_Vi(Ratings):
pass
class NA_Ryze_Top_Viktor(Ratings):
pass
class NA_Ryze_Top_Vladimir(Ratings):
pass
class NA_Ryze_Top_Volibear(Ratings):
pass
class NA_Ryze_Top_Warwick(Ratings):
pass
class NA_Ryze_Top_Xayah(Ratings):
pass
class NA_Ryze_Top_Xerath(Ratings):
pass
class NA_Ryze_Top_XinZhao(Ratings):
pass
class NA_Ryze_Top_Yasuo(Ratings):
pass
class NA_Ryze_Top_Yorick(Ratings):
pass
class NA_Ryze_Top_Zac(Ratings):
pass
class NA_Ryze_Top_Zed(Ratings):
pass
class NA_Ryze_Top_Ziggs(Ratings):
pass
class NA_Ryze_Top_Zilean(Ratings):
pass
class NA_Ryze_Top_Zyra(Ratings):
pass
| 15.033573 | 46 | 0.75642 | 972 | 6,269 | 4.452675 | 0.151235 | 0.223198 | 0.350739 | 0.446396 | 0.791359 | 0.791359 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177221 | 6,269 | 416 | 47 | 15.069712 | 0.839085 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
4d6c5bd63f303af87476ce522d92572457f96aec | 10,737 | py | Python | kanga/migrations/0001_initial.py | deptofdefense/kanga | 9c8d926a4828e2fca528915ddf35759d1c328c85 | [
"MIT"
] | 1 | 2022-03-05T01:17:59.000Z | 2022-03-05T01:17:59.000Z | kanga/migrations/0001_initial.py | deptofdefense/kanga | 9c8d926a4828e2fca528915ddf35759d1c328c85 | [
"MIT"
] | null | null | null | kanga/migrations/0001_initial.py | deptofdefense/kanga | 9c8d926a4828e2fca528915ddf35759d1c328c85 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.6 on 2021-08-30 20:57
# =================================================================
#
# Work of the U.S. Department of Defense, Defense Digital Service.
# Released as open source under the MIT License. See LICENSE file.
#
# =================================================================
from django.db import migrations, models
import django.db.models.deletion
import phonenumber_field.modelfields
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Account',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('name', models.CharField(db_index=True, max_length=255)),
('sid', models.CharField(db_index=True, max_length=255)),
('auth_token', models.CharField(db_index=True, max_length=255)),
('active', models.BooleanField()),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Asset',
fields=[
('created_at', models.DateTimeField(auto_now_add=True)),
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.CharField(db_index=True, max_length=36)),
('name', models.CharField(db_index=True, max_length=255)),
('file', models.FileField(upload_to='assets/')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Execution',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('plan', models.TextField()),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Group',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('name', models.CharField(db_index=True, max_length=255)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='MessageContent',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('subject', models.TextField()),
('body', models.TextField()),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Origin',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('phone_number', phonenumber_field.modelfields.PhoneNumberField(max_length=128, region=None)),
('voice', models.BooleanField()),
('sms', models.BooleanField()),
('mms', models.BooleanField()),
('fax', models.BooleanField()),
('active', models.BooleanField()),
('account', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.account')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Plan',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('name', models.CharField(db_index=True, max_length=36)),
('account', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.account')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Receipt',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('data', models.TextField()),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Template',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('name', models.CharField(db_index=True, max_length=255)),
('body', models.TextField()),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Target',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('first_name', models.CharField(db_index=True, max_length=255)),
('last_name', models.CharField(db_index=True, max_length=255)),
('phone_number', phonenumber_field.modelfields.PhoneNumberField(max_length=128, region=None)),
('group', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.group')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='PlanTemplate',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('plan', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.plan')),
('template', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.template')),
],
),
migrations.CreateModel(
name='PlanPlatform',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('platform', models.CharField(db_index=True, max_length=24)),
('created_at', models.DateTimeField(auto_now_add=True)),
('plan', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.plan')),
],
),
migrations.CreateModel(
name='PlanOrigin',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('origin', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.origin')),
('plan', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.plan')),
],
),
migrations.CreateModel(
name='PlanGroup',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('group', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.group')),
('plan', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.plan')),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('sid', models.CharField(db_index=True, max_length=255)),
('platform', models.CharField(db_index=True, max_length=255)),
('sent_at', models.DateTimeField(auto_now_add=True)),
('account', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.account')),
('content', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.messagecontent')),
('execution', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.execution')),
('origin', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.origin')),
('target', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.target')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ExecutionResults',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('uuid', models.UUIDField(db_index=True, default=uuid.uuid4, editable=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('results', models.TextField()),
('execution', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.execution')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Attachment',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('asset', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.asset')),
('template', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kanga.template')),
],
),
]
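# Every ForeignKey above uses on_delete=django.db.models.deletion.CASCADE, so
# deleting e.g. an Account row also removes its dependent Origins, Plans and
# Messages -- worth keeping in mind before pointing these models at real data.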
| 45.303797 | 119 | 0.539443 | 991 | 10,737 | 5.709384 | 0.125126 | 0.029692 | 0.04666 | 0.073878 | 0.859844 | 0.859844 | 0.812124 | 0.805938 | 0.750088 | 0.672676 | 0 | 0.009139 | 0.306976 | 10,737 | 236 | 120 | 45.495763 | 0.751243 | 0.028686 | 0 | 0.734234 | 1 | 0 | 0.095096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018018 | 0 | 0.036036 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4d83c3fb376ab5e61a227c0ec28fd07d8ff4d9dc | 2,188 | py | Python | python/cudf/cudf/tests/test_numerical.py | hafixo/cudf | 7f91a17a967b0e0b502d56d37818aa0461d38c67 | [
"Apache-2.0"
] | null | null | null | python/cudf/cudf/tests/test_numerical.py | hafixo/cudf | 7f91a17a967b0e0b502d56d37818aa0461d38c67 | [
"Apache-2.0"
] | null | null | null | python/cudf/cudf/tests/test_numerical.py | hafixo/cudf | 7f91a17a967b0e0b502d56d37818aa0461d38c67 | [
"Apache-2.0"
] | null | null | null | import numpy as np
from cudf import Series
def test_can_cast_safely_same_kind():
data = Series([1, 2, 3], dtype="int32")._column
to_dtype = np.dtype("int64")
assert data.can_cast_safely(to_dtype)
data = Series([1, 2, 3], dtype="int64")._column
to_dtype = np.dtype("int32")
assert data.can_cast_safely(to_dtype)
data = Series([1, 2, 2 ** 31], dtype="int64")._column
assert not data.can_cast_safely(to_dtype)
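# 2**31 exceeds the int32 maximum (2**31 - 1 == np.iinfo(np.int32).max), so
# narrowing int64 -> int32 would overflow; the 2**33 case below rules out
# uint64 -> uint32 by the same logic.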
data = Series([1, 2, 3], dtype="uint32")._column
to_dtype = np.dtype("uint64")
assert data.can_cast_safely(to_dtype)
data = Series([1, 2, 3], dtype="uint64")._column
to_dtype = np.dtype("uint32")
assert data.can_cast_safely(to_dtype)
data = Series([1, 2, 2 ** 33], dtype="uint64")._column
assert not data.can_cast_safely(to_dtype)
def test_can_cast_safely_mixed_kind():
data = Series([1, 2, 3], dtype="int32")._column
to_dtype = np.dtype("float32")
assert data.can_cast_safely(to_dtype)
# too big to fit into f32 exactly
data = Series([1, 2, 2 ** 24 + 1], dtype="int32")._column
assert not data.can_cast_safely(to_dtype)
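# float32 carries a 24-bit significand, so every integer up to 2**24 is exact
# but 2**24 + 1 rounds back down (np.float32(2**24) == np.float32(2**24 + 1)),
# which is why this cast is flagged as unsafe.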
data = Series([1, 2, 3], dtype="uint32")._column
to_dtype = np.dtype("float32")
assert data.can_cast_safely(to_dtype)
# too big to fit into f32 exactly
data = Series([1, 2, 2 ** 24 + 1], dtype="uint32")._column
assert not data.can_cast_safely(to_dtype)
to_dtype = np.dtype("float64")
assert data.can_cast_safely(to_dtype)
data = Series([1.0, 2.0, 3.0], dtype="float32")._column
to_dtype = np.dtype("int32")
assert data.can_cast_safely(to_dtype)
# not integer float
data = Series([1.0, 2.0, 3.5], dtype="float32")._column
assert not data.can_cast_safely(to_dtype)
# float out of int range
data = Series([1.0, 2.0, 1.0 * (2 ** 31)], dtype="float32")._column
assert not data.can_cast_safely(to_dtype)
def test_can_cast_safely_has_nulls():
data = Series([1, 2, 3, None], dtype="float32")._column
to_dtype = np.dtype("int64")
assert data.can_cast_safely(to_dtype)
data = Series([1, 2, 3.1, None], dtype="float32")._column
assert not data.can_cast_safely(to_dtype)
| 29.173333 | 71 | 0.663163 | 355 | 2,188 | 3.842254 | 0.140845 | 0.128299 | 0.181085 | 0.199413 | 0.867302 | 0.828446 | 0.818182 | 0.786657 | 0.786657 | 0.758065 | 0 | 0.067682 | 0.189671 | 2,188 | 74 | 72 | 29.567568 | 0.701636 | 0.047532 | 0 | 0.577778 | 0 | 0 | 0.068783 | 0 | 0 | 0 | 0 | 0 | 0.355556 | 1 | 0.066667 | false | 0 | 0.044444 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4d87c410bbc81ea165977da0b3ef0310eafe9948 | 12,455 | py | Python | funfact/lang/interpreter/test_einop_compiler.py | campsd/FunFact | 477bcf06794f09608240eba992823fae6fde8dad | [
"BSD-3-Clause-LBNL"
] | 37 | 2021-09-22T17:28:35.000Z | 2022-03-07T00:11:17.000Z | funfact/lang/interpreter/test_einop_compiler.py | campsd/FunFact | 477bcf06794f09608240eba992823fae6fde8dad | [
"BSD-3-Clause-LBNL"
] | 125 | 2021-11-04T16:50:24.000Z | 2022-03-28T17:54:13.000Z | funfact/lang/interpreter/test_einop_compiler.py | campsd/FunFact | 477bcf06794f09608240eba992823fae6fde8dad | [
"BSD-3-Clause-LBNL"
] | 2 | 2021-12-13T07:28:42.000Z | 2021-12-13T07:51:41.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import pytest # noqa: F401
from unittest.mock import MagicMock as M
from funfact.util.iterable import as_namedtuple
from ._einop_compiler import EinopCompiler
_colon = slice(None)
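# slice(None) is the programmatic spelling of ':'. In the index tuples below it
# is paired with None (numpy's newaxis) to describe how each operand is
# broadcast before the elementwise op and the axis reduction. A hedged
# illustration outside the compiler itself:
#   import numpy as np
#   a = np.arange(3)
#   a[_colon, None].shape  # (3, 1), identical to a[:, None]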
@pytest.fixture
def intr():
return EinopCompiler()
@pytest.mark.parametrize('test_case', [
# ,->
as_namedtuple(
'NULL',
tsrex=dict(
lhs=M(
live_indices=[],
kron_indices=[],
),
rhs=M(
live_indices=[],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=[],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(),
index_rhs=(),
ax_contraction=(),
)
),
# i,j
as_namedtuple(
'OUTER',
tsrex=dict(
lhs=M(
live_indices=['i'],
kron_indices=[],
),
rhs=M(
live_indices=['j'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'j'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, None),
index_rhs=(None, _colon),
ax_contraction=(),
)
),
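# Worked reading of OUTER: lhs carrying index i becomes lhs[:, None] and rhs
# carrying j becomes rhs[None, :], so the pairwise op broadcasts to shape
# (I, J) and no axis is reduced (ax_contraction stays empty).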
# *i,*i
as_namedtuple(
'KRON1',
tsrex=dict(
lhs=M(
live_indices=['i'],
kron_indices=['i'],
),
rhs=M(
live_indices=['i'],
kron_indices=['i'],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, None),
index_rhs=(None, _colon),
ax_contraction=(),
)
),
# *i,*i
as_namedtuple(
'KRON2',
tsrex=dict(
lhs=M(
live_indices=['i', 'j'],
kron_indices=['i', 'j'],
),
rhs=M(
live_indices=['i', 'j'],
kron_indices=['i', 'j'],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'j'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, None, _colon, None),
index_rhs=(None, _colon, None, _colon),
ax_contraction=(),
)
),
# ij,ij
as_namedtuple(
'SUM',
tsrex=dict(
lhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
rhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=[],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, _colon),
index_rhs=(_colon, _colon),
ax_contraction=(0, 1),
)
),
# ij,j
as_namedtuple(
'MATVEC',
tsrex=dict(
lhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
rhs=M(
live_indices=['j'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, _colon),
index_rhs=(None, _colon),
ax_contraction=(1,),
)
),
# i,ij
as_namedtuple(
'VECMAT',
tsrex=dict(
lhs=M(
live_indices=['i'],
kron_indices=[],
),
rhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['j'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(1, 0),
index_lhs=(None, _colon),
index_rhs=(_colon, _colon),
ax_contraction=(1,),
)
),
# ij,jk
as_namedtuple(
'MATMAT',
tsrex=dict(
lhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
rhs=M(
live_indices=['j', 'k'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'k'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(1, 0),
index_lhs=(_colon, None, _colon),
index_rhs=(None, _colon, _colon),
ax_contraction=(2,),
)
),
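# Worked reading of MATMAT: rhs is first transposed to (k, j), the None axes
# broadcast the operands to (I, K, J), and axis 2 (the shared j) is reduced --
# matrix multiplication rebuilt from the pairwise + reduction primitives.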
# ij,jk->ijk
as_namedtuple(
'MATELEMMAT',
tsrex=dict(
lhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
rhs=M(
live_indices=['j', 'k'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'j', 'k'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, _colon, None),
index_rhs=(None, _colon, _colon),
ax_contraction=(),
)
),
# ij,jk->ki
as_namedtuple(
'MATMAT_TRANSPOSE',
tsrex=dict(
lhs=M(
live_indices=['i', 'j'],
kron_indices=[],
),
rhs=M(
live_indices=['j', 'k'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['k', 'i'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(1, 0),
index_lhs=(None, _colon, _colon),
index_rhs=(_colon, None, _colon),
ax_contraction=(2,),
)
),
# ijk,k
as_namedtuple(
'TENVEC',
tsrex=dict(
lhs=M(
live_indices=['i', 'j', 'k'],
kron_indices=[],
),
rhs=M(
live_indices=['k'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'j'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, _colon, _colon),
index_rhs=(None, None, _colon),
ax_contraction=(2,),
)
),
# ijk,jk
as_namedtuple(
'TENMAT',
tsrex=dict(
lhs=M(
live_indices=['i', 'j', 'k'],
kron_indices=[],
),
rhs=M(
live_indices=['j', 'k'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(),
index_lhs=(_colon, _colon, _colon),
index_rhs=(None, _colon, _colon),
ax_contraction=(1, 2),
)
),
# ijk,jl
as_namedtuple(
'MODEWISE',
tsrex=dict(
lhs=M(
live_indices=['i', 'j', 'k'],
kron_indices=[],
),
rhs=M(
live_indices=['j', 'l'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'l', 'k'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(0, 2, 1),
tran_rhs=(1, 0),
index_lhs=(_colon, None, _colon, _colon),
index_rhs=(None, _colon, None, _colon),
ax_contraction=(3,),
)
),
# ijk,il
as_namedtuple(
'MODEWISE',
tsrex=dict(
lhs=M(
live_indices=['i', 'j', 'k'],
kron_indices=[],
),
rhs=M(
live_indices=['i', 'l'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['l', 'j', 'k'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(1, 2, 0),
tran_rhs=(1, 0),
index_lhs=(None, _colon, _colon, _colon),
index_rhs=(_colon, None, None, _colon),
ax_contraction=(3,),
)
),
# ijk,jkl
as_namedtuple(
'TENTEN',
tsrex=dict(
lhs=M(
live_indices=['i', 'j', 'k'],
kron_indices=[],
),
rhs=M(
live_indices=['j', 'k', 'l'],
kron_indices=[],
),
precedence=None,
reduction='reduction',
pairwise='pairwise',
outidx=None,
live_indices=['i', 'l'],
kron_indices=[]
),
truth=as_namedtuple(
'einspec',
op_reduce='reduction',
op_elementwise='pairwise',
tran_lhs=(),
tran_rhs=(2, 0, 1),
index_lhs=(_colon, None, _colon, _colon),
index_rhs=(None, _colon, _colon, _colon),
ax_contraction=(2, 3),
)
),
])
def test_einop_compiler(test_case, intr):
input, truth = test_case
_, spec = intr.ein(**input)
assert spec == truth
| 26.331924 | 53 | 0.412525 | 1,028 | 12,455 | 4.714981 | 0.085603 | 0.102125 | 0.074273 | 0.050959 | 0.87353 | 0.868372 | 0.826697 | 0.801733 | 0.776563 | 0.75098 | 0 | 0.005567 | 0.451947 | 12,455 | 472 | 54 | 26.387712 | 0.704512 | 0.012124 | 0 | 0.85078 | 0 | 0 | 0.065929 | 0 | 0 | 0 | 0 | 0 | 0.002227 | 1 | 0.004454 | false | 0 | 0.008909 | 0.002227 | 0.01559 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1506f167a462f171fbdbb9ab39a26f10ae7d7895 | 11,728 | py | Python | tests/integration/user_pool_manager_test.py | stevekineeve88/doubloon | 4c7c9163e96877ad23663c3dd9a73ef6ccde3e22 | [
"MIT"
] | null | null | null | tests/integration/user_pool_manager_test.py | stevekineeve88/doubloon | 4c7c9163e96877ad23663c3dd9a73ef6ccde3e22 | [
"MIT"
] | 8 | 2021-01-29T15:49:17.000Z | 2021-10-14T01:03:27.000Z | tests/integration/user_pool_manager_test.py | stevekineeve88/doubloon | 4c7c9163e96877ad23663c3dd9a73ef6ccde3e22 | [
"MIT"
] | null | null | null | from modules.pool.managers.pool_manager import PoolManager
from modules.pool.objects.pool_ticket import PoolTicket
from modules.user.exceptions.user_pool.user_pool_add_error import UserPoolAddError
from modules.user.managers.user_pool_manager import UserPoolManager
from modules.user.objects.tickets.name_ticket import NameTicket
from modules.user.objects.tickets.password_ticket import PasswordTicket
from modules.user.objects.tickets.username_ticket import UsernameTicket
from modules.user.objects.user import User
from modules.util.managers.postgres_conn_manager import PostgresConnManager
from tests.integration.setup.integration_setup import IntegrationSetup
class UserPoolManagerTest(IntegrationSetup):
@classmethod
def setUpClass(cls) -> None:
super().setUpClass()
cls.user_pool_manager: UserPoolManager = UserPoolManager()
def test_add_adds_user_to_pool(self):
pool = self.pool_manager.create(PoolTicket("MY_POOL"))
user = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("Somepassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool.get_id(), user.get_id())
result = self.user_pool_manager.search([pool.get_id()])
self.assertEqual(1, len(result.get_data()))
def test_add_fails_on_duplicate_add(self):
pool = self.pool_manager.create(PoolTicket("MY_POOL"))
user = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("Somepassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool.get_id(), user.get_id())
with self.assertRaises(UserPoolAddError):
self.user_pool_manager.add(pool.get_id(), user.get_id())
self.fail("Did not fail on duplicate")
def test_add_fails_on_fake_user_id(self):
pool = self.pool_manager.create(PoolTicket("MY_POOL"))
with self.assertRaises(UserPoolAddError):
self.user_pool_manager.add(pool.get_id(), 123456678)
self.fail("Did not fail on fake user id")
def test_add_fails_on_fake_pool_id(self):
user = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("Somepassword#123"),
NameTicket("Stephen", "Ayre")
)
with self.assertRaises(UserPoolAddError):
self.user_pool_manager.add(12345678, user.get_id())
self.fail("Did not fail on fake pool id")
def test_delete_deletes_user_from_pool(self):
pool = self.pool_manager.create(PoolTicket("MY_POOL"))
user = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("Somepassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool.get_id(), user.get_id())
result = self.user_pool_manager.search([pool.get_id()])
self.assertEqual(1, len(result.get_data()))
self.user_pool_manager.delete(pool.get_id(), user.get_id())
result = self.user_pool_manager.search([pool.get_id()])
self.assertEqual(0, len(result.get_data()))
def test_search_searches_by_username(self):
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
user_2 = self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_2.get_id())
search = user_1.get_username()
result = self.user_pool_manager.search([pool_1.get_id()], username=search)
self.assertEqual(1, result.get_full_count())
user_found: User = result.get_data()[0]
self.assertEqual(search, user_found.get_username())
def test_search_searches_by_first_name(self):
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Bob", "Ayre")
)
user_2 = self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_2.get_id())
search = user_1.get_first_name()
result = self.user_pool_manager.search([pool_1.get_id()], first_name=search)
self.assertEqual(1, result.get_full_count())
user_found: User = result.get_data()[0]
self.assertEqual(search, user_found.get_first_name())
def test_search_searches_by_last_name(self):
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Smith")
)
user_2 = self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_2.get_id())
search = user_1.get_last_name()
result = self.user_pool_manager.search([pool_1.get_id()], last_name=search)
self.assertEqual(1, result.get_full_count())
user_found: User = result.get_data()[0]
self.assertEqual(search, user_found.get_last_name())
def test_search_searches_by_status(self):
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
user_2 = self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_2.get_id())
self.user_manager.update_status(user_1.get_id(), self.user_statuses.DELETED.get_id())
result = self.user_pool_manager.search([pool_1.get_id()], statuses=[self.user_statuses.DELETED.get_id()])
self.assertEqual(1, result.get_full_count())
user_found: User = result.get_data()[0]
self.assertEqual(user_1.get_username(), user_found.get_username())
def test_search_sorts_result(self):
username_1 = "sayre"
username_2 = "other"
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket(username_1),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
user_2 = self.user_manager.create(
UsernameTicket(username_2),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_2.get_id())
result = self.user_pool_manager.search([pool_1.get_id()], sorts={"username": 1})
sort_1 = username_2
sort_2 = username_1
data = result.get_data()
self.assertEqual(sort_1, data[0].get_username())
self.assertEqual(sort_2, data[1].get_username())
def test_search_searches_with_limit_and_offset(self):
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
user_2 = self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
user_3 = self.user_manager.create(
UsernameTicket("other1"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_2.get_id())
self.user_pool_manager.add(pool_1.get_id(), user_3.get_id())
result = self.user_pool_manager.search([pool_1.get_id()], limit=1, offset=1, sorts={"username": 1})
self.assertEqual(3, result.get_full_count())
self.assertEqual(1, len(result.get_data()))
user_found: User = result.get_data()[0]
found_username = user_3.get_username()
self.assertEqual(found_username, user_found.get_username())
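# With the ascending username sort the page order is other, other1, sayre, so
# offset=1 with limit=1 lands on user_3 ('other1') while get_full_count()
# still reports all three matches.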
def test_search_searches_by_multiple_pools(self):
pool_1 = self.pool_manager.create(PoolTicket("MY_POOL"))
pool_2 = self.pool_manager.create(PoolTicket("MY_POOL_TWO"))
user_1 = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
user_2 = self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool_1.get_id(), user_1.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_2.get_id())
self.user_pool_manager.add(pool_2.get_id(), user_1.get_id())
result = self.user_pool_manager.search([pool_1.get_id(), pool_2.get_id()])
self.assertEqual(2, result.get_full_count())
def test_search_only_searches_pool_users(self):
pool = self.pool_manager.create(PoolTicket("MY_POOL"))
user = self.user_manager.create(
UsernameTicket("sayre"),
PasswordTicket("SomePassword#123"),
NameTicket("Bob", "Ayre")
)
self.user_manager.create(
UsernameTicket("other"),
PasswordTicket("SomePassword#123"),
NameTicket("Stephen", "Ayre")
)
self.user_pool_manager.add(pool.get_id(), user.get_id())
result = self.user_pool_manager.search([pool.get_id()])
self.assertEqual(1, result.get_full_count())
def tearDown(self) -> None:
postgres_conn_manager: PostgresConnManager = PostgresConnManager()
postgres_conn_manager.query(f"""
TRUNCATE users.users CASCADE;
TRUNCATE users.users_pool CASCADE;
TRUNCATE pool.pools CASCADE;
""")
| 45.457364 | 113 | 0.652115 | 1,463 | 11,728 | 4.912509 | 0.071087 | 0.050786 | 0.089745 | 0.10839 | 0.802978 | 0.779046 | 0.747043 | 0.734382 | 0.727564 | 0.692918 | 0 | 0.021911 | 0.221692 | 11,728 | 257 | 114 | 45.634241 | 0.765447 | 0 | 0 | 0.594142 | 0 | 0 | 0.090638 | 0 | 0 | 0 | 0 | 0 | 0.087866 | 1 | 0.062762 | false | 0.09205 | 0.041841 | 0 | 0.108787 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
12749bc93b2fd6f53d54f524a54314a65ae09575 | 9,454 | py | Python | nexus/tests/unit/test_pyscf_input.py | djstaros/qmcpack | 280f67e638bae280448b47fa618f05b848c530d2 | [
"NCSA"
] | null | null | null | nexus/tests/unit/test_pyscf_input.py | djstaros/qmcpack | 280f67e638bae280448b47fa618f05b848c530d2 | [
"NCSA"
] | 11 | 2020-05-09T20:57:21.000Z | 2020-06-10T00:00:17.000Z | nexus/tests/unit/test_pyscf_input.py | djstaros/qmcpack | 280f67e638bae280448b47fa618f05b848c530d2 | [
"NCSA"
] | null | null | null |
import testing
from testing import value_eq,object_eq,text_eq
h2o_xyz = '''3
H2O
O 0.000000 0.000000 0.000000
H 0.000000 0.757160 0.586260
H 0.000000 0.757160 -0.586260
'''
scf_template = '''#! /usr/bin/env python3
from pyscf import scf
$system
mf = scf.RHF(mol)
mf.kernel()
'''
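# '$system' in the template above is the placeholder that generate_pyscf_input
# substitutes with the generated Mole/Cell construction text -- the same text
# the ref_system blocks below are compared against.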
def test_import():
from pyscf_input import PyscfInput,generate_pyscf_input
#end def test_import
def test_empty_init():
from generic import obj
from pyscf_input import PyscfInput,generate_pyscf_input
ref = obj(
addendum = None,
allow_not_set = set([]),
checkpoint = False,
keywords = set([]),
prefix = None,
save_qmc = False,
template = None,
values = obj(),
)
pi = PyscfInput()
assert(object_eq(pi.to_obj(),ref))
pi2 = generate_pyscf_input()
assert(isinstance(pi2,PyscfInput))
assert(object_eq(pi,pi2))
#end def test_empty_init
def test_generate():
import os
from generic import obj
from physical_system import generate_physical_system
from pyscf_input import generate_pyscf_input
tpath = testing.setup_unit_test_output_directory('pyscf_input','test_generate')
# water molecule
xyz_path = os.path.join(tpath,'H2O.xyz')
template_path = os.path.join(tpath,'scf_template.py')
open(xyz_path,'w').write(h2o_xyz)
open(template_path,'w').write(scf_template)
system = generate_physical_system(
structure = xyz_path,
)
pi = generate_pyscf_input(
template = template_path,
system = system,
mole = obj(
basis = 'ccpvtz',
symmetry = True,
),
)
ref_system = '''
### generated system text ###
from pyscf import gto as gto_loc
mol = gto_loc.Mole()
mol.atom = {0}
O 0.00000000 0.00000000 0.00000000
H 0.00000000 0.75716000 0.58626000
H 0.00000000 0.75716000 -0.58626000
{0}
mol.basis = 'ccpvtz'
mol.unit = 'A'
mol.charge = 0
mol.spin = 0
mol.symmetry = True
mol.build()
### end generated system text ###
'''.format("'''")
assert(pi.template is not None)
assert(len(pi.values)==1 and 'system' in pi.values)
assert(text_eq(pi.values.system,ref_system))
ref_internal = obj(
addendum = None,
allow_not_set = set([]),
checkpoint = False,
keywords = set(['system']),
prefix = None,
save_qmc = False,
)
del pi.template
del pi.values
assert(object_eq(pi.to_obj(),ref_internal))
# diamond crystal
system = generate_physical_system(
units = 'A',
axes = '''1.785 1.785 0.000
0.000 1.785 1.785
1.785 0.000 1.785''',
elem_pos = '''
C 0.0000 0.0000 0.0000
C 0.8925 0.8925 0.8925
''',
kgrid = (1,1,1),
kshift = (0,0,0),
C = 4,
)
pi = generate_pyscf_input(
template = template_path,
system = system,
cell = obj(
basis = 'bfd-vdz',
ecp = 'bfd',
drop_exponent = 0.1,
verbose = 5,
),
)
ref_system = '''
### generated system text ###
from numpy import array
from pyscf.pbc import gto as gto_loc
cell = gto_loc.Cell()
cell.a = {0}
1.78500000 1.78500000 0.00000000
0.00000000 1.78500000 1.78500000
1.78500000 0.00000000 1.78500000
{0}
cell.basis = 'bfd-vdz'
cell.dimension = 3
cell.ecp = 'bfd'
cell.unit = 'A'
cell.atom = {0}
C 0.00000000 0.00000000 0.00000000
C 0.89250000 0.89250000 0.89250000
{0}
cell.drop_exponent = 0.1
cell.verbose = 5
cell.charge = 0
cell.spin = 0
cell.build()
kpts = array([
[0.0, 0.0, 0.0]])
### end generated system text ###
'''.format("'''")
assert(pi.template is not None)
assert(len(pi.values)==1 and 'system' in pi.values)
assert(text_eq(pi.values.system,ref_system))
del pi.template
del pi.values
assert(object_eq(pi.to_obj(),ref_internal))
#end def test_generate
def test_write():
import os
from generic import obj
from physical_system import generate_physical_system
from pyscf_input import generate_pyscf_input
tpath = testing.setup_unit_test_output_directory('pyscf_input','test_write')
# water molecule
xyz_path = os.path.join(tpath,'H2O.xyz')
template_path = os.path.join(tpath,'scf_template.py')
with open(xyz_path,'w') as f:
    f.write(h2o_xyz)
with open(template_path,'w') as f:
    f.write(scf_template)
system = generate_physical_system(
structure = xyz_path,
)
pi = generate_pyscf_input(
prefix = 'scf',
template = template_path,
system = system,
mole = obj(
verbose = 5,
basis = 'ccpvtz',
symmetry = True,
),
save_qmc = True,
)
write_path = os.path.join(tpath,'h2o.py')
pi.write(write_path)
assert(os.path.exists(write_path))
with open(write_path,'r') as f:
    text = f.read()
ref_text = '''
#! /usr/bin/env python3
from pyscf import scf
### generated system text ###
from pyscf import gto as gto_loc
mol = gto_loc.Mole()
mol.verbose = 5
mol.atom = {0}
O 0.00000000 0.00000000 0.00000000
H 0.00000000 0.75716000 0.58626000
H 0.00000000 0.75716000 -0.58626000
{0}
mol.basis = 'ccpvtz'
mol.unit = 'A'
mol.charge = 0
mol.spin = 0
mol.symmetry = True
mol.build()
### end generated system text ###
mf = scf.RHF(mol)
mf.kernel()
### generated conversion text ###
from PyscfToQmcpack import savetoqmcpack
savetoqmcpack(mol,mf,'scf')
### end generated conversion text ###
'''.format("'''")
assert(text_eq(text,ref_text))
# diamond crystal
system = generate_physical_system(
units = 'A',
axes = '''1.785 1.785 0.000
0.000 1.785 1.785
1.785 0.000 1.785''',
elem_pos = '''
C 0.0000 0.0000 0.0000
C 0.8925 0.8925 0.8925
''',
tiling = (2,1,1),
kgrid = (1,1,1),
kshift = (0,0,0),
C = 4,
)
pi = generate_pyscf_input(
prefix = 'scf',
template = template_path,
system = system,
cell = obj(
basis = 'bfd-vdz',
ecp = 'bfd',
drop_exponent = 0.1,
verbose = 5,
),
save_qmc = True,
)
write_path = os.path.join(tpath,'diamond.py')
pi.write(write_path)
assert(os.path.exists(write_path))
with open(write_path,'r') as f:
    text = f.read()
ref_text = '''
#! /usr/bin/env python3
from pyscf import scf
### generated system text ###
from numpy import array
from pyscf.pbc import gto as gto_loc
cell = gto_loc.Cell()
cell.a = {0}
1.78500000 1.78500000 0.00000000
0.00000000 1.78500000 1.78500000
1.78500000 0.00000000 1.78500000
{0}
cell.basis = 'bfd-vdz'
cell.dimension = 3
cell.ecp = 'bfd'
cell.unit = 'A'
cell.atom = {0}
C 0.00000000 0.00000000 0.00000000
C 0.89250000 0.89250000 0.89250000
{0}
cell.drop_exponent = 0.1
cell.verbose = 5
cell.charge = 0
cell.spin = 0
cell.build()
kpts = array([
[0.0, 0.0, 0.0] ,
[0.4656748546088228, 0.4656748546088228, -0.4656748546088228]])
### end generated system text ###
mf = scf.RHF(mol)
mf.kernel()
### generated conversion text ###
from PyscfToQmcpack import savetoqmcpack
savetoqmcpack(cell,mf,'scf',kpts)
### end generated conversion text ###
'''.format("'''")
text = text.replace('[',' [ ').replace(']',' ] ')
ref_text = ref_text.replace('[',' [ ').replace(']',' ] ')
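# Space-pad the brackets so the whitespace-insensitive text_eq comparison
# treats them as separate tokens in the kpts array listings.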
assert(text_eq(text,ref_text))
#end def test_write
| 27.088825 | 83 | 0.481912 | 1,048 | 9,454 | 4.216603 | 0.129771 | 0.044807 | 0.031681 | 0.040733 | 0.882553 | 0.84544 | 0.825752 | 0.807196 | 0.777778 | 0.754696 | 0 | 0.136306 | 0.411783 | 9,454 | 348 | 84 | 27.166667 | 0.658335 | 0.015126 | 0 | 0.826415 | 0 | 0 | 0.51161 | 0.008815 | 0 | 0 | 0 | 0 | 0.056604 | 1 | 0.015094 | false | 0 | 0.09434 | 0 | 0.109434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
12cb4e9a414d274a40f7562cd0227218ba7e6909 | 9,103 | py | Python | tests/game/test_casualty.py | derNarr/ffai | 2f0eae852f8f292e0a253c281ba39c447cbf38ad | [
"Apache-2.0"
] | 7 | 2021-11-19T13:17:58.000Z | 2022-03-23T10:32:13.000Z | tests/game/test_casualty.py | ernestvmo/botbowl | 8b70faf615fc70eb40aa8b3519a7d2339872ea15 | [
"Apache-2.0"
] | 32 | 2021-11-19T15:06:55.000Z | 2022-03-31T16:36:46.000Z | tests/game/test_casualty.py | ernestvmo/botbowl | 8b70faf615fc70eb40aa8b3519a7d2339872ea15 | [
"Apache-2.0"
] | 9 | 2021-11-21T16:38:48.000Z | 2022-03-30T14:12:36.000Z | from tests.util import *
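# Each test below fixes the block dice plus the armour, injury, and casualty
# rolls, then checks the reported outcome and the injuries recorded.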
def test_casualty():
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# fix the casualty roll #1 (Gouged Eye / MNG)
D6.fix(4)
D8.fix(3)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert defender.state.injuries_gained[0] is CasualtyEffect.MNG
assert not game.has_report_of_type(OutcomeType.SUCCESSFUL_REGENERATION)
assert not game.has_report_of_type(OutcomeType.FAILED_REGENERATION)
def test_casualty_regeneration_success():
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
defender.extra_skills.append(Skill.REGENERATION)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# add a value for casualty effect
D6.fix(3)
# fix the regeneration roll
D6.fix(4)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert game.has_report_of_type(OutcomeType.SUCCESSFUL_REGENERATION)
assert defender in game.get_reserves(defender.team)
assert not defender.state.injuries_gained
def test_casualty_regeneration_fail():
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
defender.extra_skills.append(Skill.REGENERATION)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# add a value for casualty effect
D6.fix(4)
# fix the regeneration roll
D6.fix(3)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert game.has_report_of_type(OutcomeType.FAILED_REGENERATION)
assert defender not in game.get_reserves(defender.team)
assert defender.state.injuries_gained
def test_casualty_with_decay():
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
defender.extra_skills.append(Skill.DECAY)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# fix the casualty roll #1 (Gouged Eye / MNG)
D6.fix(4)
D8.fix(3)
# fix the casualty roll #2 (BH / none)
D6.fix(3)
D8.fix(1)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert defender.state.injuries_gained[0] is CasualtyEffect.MNG
assert game.has_report_of_type(OutcomeType.MISS_NEXT_GAME)
assert len(defender.state.injuries_gained) == 1
assert game.has_report_of_type(OutcomeType.BADLY_HURT)
def test_casualty_with_decay_mng_twice_is_just_one():
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
defender.extra_skills.append(Skill.DECAY)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# fix the casualty roll #1 (Gouged Eye / MNG)
D6.fix(4)
D8.fix(3)
# fix the casualty roll #2 (BH / none)
D6.fix(4)
D8.fix(4)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert defender.state.injuries_gained[0] is CasualtyEffect.MNG
assert game.has_report_of_type(OutcomeType.MISS_NEXT_GAME)
assert len(defender.state.injuries_gained) == 1
def test_casualty_regeneration_success_with_decay():  # renamed: a duplicate test_casualty_regeneration_success would shadow the earlier test of the same name
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
defender.extra_skills.append(Skill.REGENERATION)
defender.extra_skills.append(Skill.DECAY)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# add a value for casualty effect
D6.fix(3)
# fix the regeneration roll
D6.fix(4)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert game.has_report_of_type(OutcomeType.SUCCESSFUL_REGENERATION)
assert defender in game.get_reserves(defender.team)
assert len(defender.state.injuries_gained) == 0
def test_casualty_regeneration_failure():
game = get_game_turn()
team = game.get_agent_team(game.actor)
team.state.rerolls = 0
attacker, defender = get_block_players(game, team)
attacker.extra_st = defender.get_st() - attacker.get_st() + 1 # make this a 2 die block.
attacker.extra_skills.append(Skill.BLOCK)
defender_pos = Square(defender.position.x, defender.position.y)
defender.extra_skills.append(Skill.REGENERATION)
defender.extra_skills.append(Skill.DECAY)
# it's a 2 dice block
BBDie.clear_fixes()
BBDie.fix(BBDieResult.BOTH_DOWN)
BBDie.fix(BBDieResult.BOTH_DOWN)
D6.FixedRolls.clear()
# fix the armour roll
D6.fix(5)
D6.fix(5)
# fix the injury roll to casualty
D6.fix(5)
D6.fix(5)
# add a value for casualty effect - BH
D6.fix(3)
# fix the regeneration roll
D6.fix(2)
# add a value for casualty effect #2 - DEAD
D6.fix(6)
game.step(Action(ActionType.START_BLOCK, player=attacker))
game.step(Action(ActionType.BLOCK, position=defender.position))
game.step(Action(ActionType.SELECT_BOTH_DOWN))
assert game.has_report_of_type(OutcomeType.CASUALTY)
assert game.has_report_of_type(OutcomeType.FAILED_REGENERATION)
assert defender not in game.get_reserves(defender.team)
assert game.has_report_of_type(OutcomeType.BADLY_HURT)
assert game.has_report_of_type(OutcomeType.DEAD)
assert len(defender.state.injuries_gained) == 1
| 34.221805 | 93 | 0.718335 | 1,332 | 9,103 | 4.736486 | 0.077327 | 0.033286 | 0.026629 | 0.079886 | 0.977175 | 0.971311 | 0.958631 | 0.928198 | 0.913774 | 0.885085 | 0 | 0.018954 | 0.176975 | 9,103 | 265 | 94 | 34.350943 | 0.823145 | 0.127101 | 0 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161458 | 1 | 0.036458 | false | 0 | 0.005208 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
12e89fff7d495fd5d91865a7742475123c96fcb5 | 122 | py | Python | __init__.py | hariprasad1003/calcymath | 8e395397ce128ece6e15ca16a87495841b4f5ad0 | [
"MIT"
] | null | null | null | __init__.py | hariprasad1003/calcymath | 8e395397ce128ece6e15ca16a87495841b4f5ad0 | [
"MIT"
] | null | null | null | __init__.py | hariprasad1003/calcymath | 8e395397ce128ece6e15ca16a87495841b4f5ad0 | [
"MIT"
] | null | null | null | def add(a, b):
return a+b
def sub(a, b):
return a-b
def mul(a, b):
return a*b
def div(a, b):
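# true division: raises ZeroDivisionError when b == 0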
return a/b | 11.090909 | 14 | 0.52459 | 28 | 122 | 2.285714 | 0.285714 | 0.25 | 0.5 | 0.5625 | 0.765625 | 0.609375 | 0 | 0 | 0 | 0 | 0 | 0 | 0.311475 | 122 | 11 | 15 | 11.090909 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
42258a276e7b4005908cbcbb5103dd77819e60b8 | 232 | py | Python | logo.py | hunaenzia/Hackfile | 823c5a505198780408365186313b999bbcfa4465 | [
"Apache-2.0"
] | null | null | null | logo.py | hunaenzia/Hackfile | 823c5a505198780408365186313b999bbcfa4465 | [
"Apache-2.0"
] | null | null | null | logo.py | hunaenzia/Hackfile | 823c5a505198780408365186313b999bbcfa4465 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
import os
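# Print the ASCII-art banner, piped through lolcat for rainbow colouring.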
os.system('echo "\n _________ ____ ____ \n / ____/ | / __ \/ _/ \n / /_ / /| | / /_/ // / \n / __/ / ___ |/ _, _// / \033[1;91m║ Version: 3.0 \n/_/ /_/ |_/_/ |_/___/ " | lolcat ')
| 58 | 205 | 0.422414 | 21 | 232 | 2.761905 | 0.761905 | 0.103448 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056962 | 0.318966 | 232 | 3 | 206 | 77.333333 | 0.303797 | 0.056034 | 0 | 0 | 0 | 0.5 | 0.884793 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |