hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a1c4d6b793161dc88a0aeba519f4195516fedd99 | 8,976 | py | Python | tests/core/tests/api.py | mdornseif/django-tastypie | b898311e9ff1f6a096d3c05c9843dbae5b5fcf4a | [
"BSD-3-Clause"
] | null | null | null | tests/core/tests/api.py | mdornseif/django-tastypie | b898311e9ff1f6a096d3c05c9843dbae5b5fcf4a | [
"BSD-3-Clause"
] | null | null | null | tests/core/tests/api.py | mdornseif/django-tastypie | b898311e9ff1f6a096d3c05c9843dbae5b5fcf4a | [
"BSD-3-Clause"
] | null | null | null | from django.contrib.auth.models import User
from django.http import HttpRequest
from django.test import TestCase

import tastypie
from tastypie.api import Api
from tastypie.exceptions import NotRegistered, URLReverseError
from tastypie.resources import Resource
from tastypie.representations.models import ModelRepresentation

from core.models import Note


class NoteRepresentation(ModelRepresentation):
    class Meta:
        queryset = Note.objects.filter(is_active=True)


class UserRepresentation(ModelRepresentation):
    class Meta:
        queryset = User.objects.all()


class NoteResource(Resource):
    representation = NoteRepresentation
    resource_name = 'notes'


class UserResource(Resource):
    representation = UserRepresentation
    resource_name = 'users'


class ApiTestCase(TestCase):
    urls = 'core.tests.api_urls'

    def test_register(self):
        api = Api()
        self.assertEqual(len(api._registry), 0)

        api.register(NoteResource())
        self.assertEqual(len(api._registry), 1)
        self.assertEqual(sorted(api._registry.keys()), ['notes'])

        api.register(UserResource())
        self.assertEqual(len(api._registry), 2)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])

        api.register(UserResource())
        self.assertEqual(len(api._registry), 2)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])
        self.assertEqual(len(api._canonicals), 2)

        api.register(UserResource(), canonical=False)
        self.assertEqual(len(api._registry), 2)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])
        self.assertEqual(len(api._canonicals), 2)

    def test_global_registry(self):
        tastypie.available_apis = {}
        api = Api()
        self.assertEqual(len(api._registry), 0)
        self.assertEqual(len(tastypie.available_apis), 0)

        api.register(NoteResource())
        self.assertEqual(len(api._registry), 1)
        self.assertEqual(sorted(api._registry.keys()), ['notes'])
        self.assertEqual(len(tastypie.available_apis), 1)
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], ['notes'])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {'NoteRepresentation': 'notes'})

        api.register(UserResource())
        self.assertEqual(len(api._registry), 2)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])
        self.assertEqual(len(tastypie.available_apis), 1)
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], ['notes', 'users'])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {'UserRepresentation': 'users', 'NoteRepresentation': 'notes'})

        api.register(UserResource())
        self.assertEqual(len(api._registry), 2)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])
        self.assertEqual(len(tastypie.available_apis), 1)
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], ['notes', 'users'])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {'UserRepresentation': 'users', 'NoteRepresentation': 'notes'})
        self.assertEqual(len(api._canonicals), 2)

        api.register(UserResource(), canonical=False)
        self.assertEqual(len(api._registry), 2)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])
        self.assertEqual(len(api._canonicals), 2)
        self.assertEqual(len(tastypie.available_apis), 1)
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], ['notes', 'users'])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {'UserRepresentation': 'users', 'NoteRepresentation': 'notes'})

    def test_unregister(self):
        tastypie.available_apis = {}
        api = Api()
        api.register(NoteResource())
        api.register(UserResource(), canonical=False)
        self.assertEqual(sorted(api._registry.keys()), ['notes', 'users'])
        self.assertEqual(len(tastypie.available_apis), 1)
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], ['notes', 'users'])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {'NoteRepresentation': 'notes'})
        self.assertEqual(len(api._canonicals), 1)

        api.unregister('users')
        self.assertEqual(len(api._registry), 1)
        self.assertEqual(sorted(api._registry.keys()), ['notes'])
        self.assertEqual(len(api._canonicals), 1)
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], ['notes'])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {'NoteRepresentation': 'notes'})

        api.unregister('notes')
        self.assertEqual(len(api._registry), 0)
        self.assertEqual(sorted(api._registry.keys()), [])
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], [])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {})

        api.unregister('users')
        self.assertEqual(len(api._registry), 0)
        self.assertEqual(sorted(api._registry.keys()), [])
        self.assertEqual(tastypie.available_apis['v1']['class'], api)
        self.assertEqual(tastypie.available_apis['v1']['resources'], [])
        self.assertEqual(tastypie.available_apis['v1']['representations'], {})

    def test_canonical_resource_for(self):
        tastypie.available_apis = {}
        api = Api()
        note_resource = NoteResource()
        user_resource = UserResource()
        api.register(note_resource)
        api.register(user_resource)
        self.assertEqual(len(api._canonicals), 2)

        self.assertEqual(isinstance(api.canonical_resource_for('notes'), NoteResource), True)

        api_2 = Api()
        self.assertRaises(URLReverseError, tastypie._get_canonical_resource_name, api_2, NoteRepresentation)
        self.assertEqual(tastypie._get_canonical_resource_name(api.api_name, NoteRepresentation), 'notes')
        self.assertEqual(tastypie._get_canonical_resource_name(api.api_name, NoteRepresentation()), 'notes')
        self.assertEqual(tastypie._get_canonical_resource_name(api.api_name, note_resource.detail_representation), 'notes')
        self.assertEqual(tastypie._get_canonical_resource_name(api.api_name, UserRepresentation), 'users')
        self.assertEqual(tastypie._get_canonical_resource_name(api.api_name, UserRepresentation()), 'users')
        self.assertEqual(tastypie._get_canonical_resource_name(api.api_name, user_resource.detail_representation), 'users')

        api.unregister(user_resource.resource_name)
        self.assertRaises(NotRegistered, api.canonical_resource_for, 'users')

    def test_urls(self):
        api = Api()
        api.register(NoteResource())
        api.register(UserResource())

        patterns = api.urls
        self.assertEqual(len(patterns), 3)
        self.assertEqual(sorted([pattern.name for pattern in patterns if hasattr(pattern, 'name')]), ['api_v1_top_level'])
        self.assertEqual([[pattern.name for pattern in include.url_patterns if hasattr(pattern, 'name')] for include in patterns if hasattr(include, 'reverse_dict')], [['api_dispatch_list', 'api_get_schema', 'api_get_multiple', 'api_dispatch_detail'], ['api_dispatch_list', 'api_get_schema', 'api_get_multiple', 'api_dispatch_detail']])

        api = Api(api_name='v2')
        api.register(NoteResource())
        api.register(UserResource())

        patterns = api.urls
        self.assertEqual(len(patterns), 3)
        self.assertEqual(sorted([pattern.name for pattern in patterns if hasattr(pattern, 'name')]), ['api_v2_top_level'])
        self.assertEqual([[pattern.name for pattern in include.url_patterns if hasattr(pattern, 'name')] for include in patterns if hasattr(include, 'reverse_dict')], [['api_dispatch_list', 'api_get_schema', 'api_get_multiple', 'api_dispatch_detail'], ['api_dispatch_list', 'api_get_schema', 'api_get_multiple', 'api_dispatch_detail']])

    def test_top_level(self):
        api = Api()
        api.register(NoteResource())
        api.register(UserResource())

        request = HttpRequest()
        resp = api.top_level(request)
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(resp.content, '{"notes": "/api/v1/notes/", "users": "/api/v1/users/"}')
| 49.318681 | 336 | 0.674577 | 962 | 8,976 | 6.113306 | 0.092516 | 0.196395 | 0.117837 | 0.13059 | 0.780309 | 0.779459 | 0.755654 | 0.745621 | 0.708043 | 0.692909 | 0 | 0.008431 | 0.180704 | 8,976 | 181 | 337 | 49.59116 | 0.79127 | 0 | 0 | 0.653061 | 0 | 0 | 0.122326 | 0 | 0 | 0 | 0 | 0 | 0.537415 | 1 | 0.040816 | false | 0 | 0.061224 | 0 | 0.183673 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
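The suite above pins down the `Api` registry contract: registering under an existing resource name overwrites the entry rather than duplicating it, `canonical=False` leaves the canonical map untouched, and unregistering an absent name is a no-op. A minimal sketch of that contract — the `_registry`/`_canonicals` attribute names come from the tests themselves; the class body is illustrative, not tastypie's implementation:

```python
class NotRegistered(Exception):
    """Stand-in for tastypie's NotRegistered exception."""


class Api:
    def __init__(self, api_name='v1'):
        self.api_name = api_name
        self._registry = {}    # resource_name -> most recently registered resource
        self._canonicals = {}  # resource_name -> canonical resource

    def register(self, resource, canonical=True):
        name = resource.resource_name
        self._registry[name] = resource        # overwrite, never duplicate
        if canonical:
            self._canonicals[name] = resource  # canonical=False leaves this map alone

    def unregister(self, resource_name):
        self._registry.pop(resource_name, None)    # unregistering twice is a no-op
        self._canonicals.pop(resource_name, None)

    def canonical_resource_for(self, resource_name):
        if resource_name not in self._canonicals:
            raise NotRegistered(resource_name)
        return self._canonicals[resource_name]
```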
a1ed5905e2ab9144c6509063d86be297552c252f | 42 | py | Python | dashboard/__init__.py | lipopo/micromanager | 1e362a71c0ec72115b617f6bf0c3b69e5ae88745 | [
"MIT"
] | null | null | null | dashboard/__init__.py | lipopo/micromanager | 1e362a71c0ec72115b617f6bf0c3b69e5ae88745 | [
"MIT"
] | null | null | null | dashboard/__init__.py | lipopo/micromanager | 1e362a71c0ec72115b617f6bf0c3b69e5ae88745 | [
"MIT"
] | null | null | null | from dashboard.app import app as dash_app
| 21 | 41 | 0.833333 | 8 | 42 | 4.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 42 | 1 | 42 | 42 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b809c403559988162994782ecff64095a0920dd9 | 19,247 | py | Python | sdk/python/pulumi_aws_native/ec2/vpc.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | [
"Apache-2.0"
] | 29 | 2021-09-30T19:32:07.000Z | 2022-03-22T21:06:08.000Z | sdk/python/pulumi_aws_native/ec2/vpc.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | [
"Apache-2.0"
] | 232 | 2021-09-30T19:26:26.000Z | 2022-03-31T23:22:06.000Z | sdk/python/pulumi_aws_native/ec2/vpc.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | [
"Apache-2.0"
] | 4 | 2021-11-10T19:42:01.000Z | 2022-02-05T10:15:49.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *

__all__ = ['VPCArgs', 'VPC']


@pulumi.input_type
class VPCArgs:
    def __init__(__self__, *,
                 cidr_block: Optional[pulumi.Input[str]] = None,
                 enable_dns_hostnames: Optional[pulumi.Input[bool]] = None,
                 enable_dns_support: Optional[pulumi.Input[bool]] = None,
                 instance_tenancy: Optional[pulumi.Input[str]] = None,
                 ipv4_ipam_pool_id: Optional[pulumi.Input[str]] = None,
                 ipv4_netmask_length: Optional[pulumi.Input[int]] = None,
                 tags: Optional[pulumi.Input[Sequence[pulumi.Input['VPCTagArgs']]]] = None):
        """
        The set of arguments for constructing a VPC resource.
        :param pulumi.Input[str] cidr_block: The primary IPv4 CIDR block for the VPC.
        :param pulumi.Input[bool] enable_dns_hostnames: Indicates whether the instances launched in the VPC get DNS hostnames. If enabled, instances in the VPC get DNS hostnames; otherwise, they do not. Disabled by default for nondefault VPCs.
        :param pulumi.Input[bool] enable_dns_support: Indicates whether the DNS resolution is supported for the VPC. If enabled, queries to the Amazon provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC network range "plus two" succeed. If disabled, the Amazon provided DNS service in the VPC that resolves public DNS hostnames to IP addresses is not enabled. Enabled by default.
        :param pulumi.Input[str] instance_tenancy: The allowed tenancy of instances launched into the VPC.
               "default": An instance launched into the VPC runs on shared hardware by default, unless you explicitly specify a different tenancy during instance launch.
               "dedicated": An instance launched into the VPC is a Dedicated Instance by default, unless you explicitly specify a tenancy of host during instance launch. You cannot specify a tenancy of default during instance launch.
               Updating InstanceTenancy requires no replacement only if you are updating its value from "dedicated" to "default". Updating InstanceTenancy from "default" to "dedicated" requires replacement.
        :param pulumi.Input[str] ipv4_ipam_pool_id: The ID of an IPv4 IPAM pool you want to use for allocating this VPC's CIDR
        :param pulumi.Input[int] ipv4_netmask_length: The netmask length of the IPv4 CIDR you want to allocate to this VPC from an Amazon VPC IP Address Manager (IPAM) pool
        :param pulumi.Input[Sequence[pulumi.Input['VPCTagArgs']]] tags: The tags for the VPC.
        """
        if cidr_block is not None:
            pulumi.set(__self__, "cidr_block", cidr_block)
        if enable_dns_hostnames is not None:
            pulumi.set(__self__, "enable_dns_hostnames", enable_dns_hostnames)
        if enable_dns_support is not None:
            pulumi.set(__self__, "enable_dns_support", enable_dns_support)
        if instance_tenancy is not None:
            pulumi.set(__self__, "instance_tenancy", instance_tenancy)
        if ipv4_ipam_pool_id is not None:
            pulumi.set(__self__, "ipv4_ipam_pool_id", ipv4_ipam_pool_id)
        if ipv4_netmask_length is not None:
            pulumi.set(__self__, "ipv4_netmask_length", ipv4_netmask_length)
        if tags is not None:
            pulumi.set(__self__, "tags", tags)

    @property
    @pulumi.getter(name="cidrBlock")
    def cidr_block(self) -> Optional[pulumi.Input[str]]:
        """
        The primary IPv4 CIDR block for the VPC.
        """
        return pulumi.get(self, "cidr_block")

    @cidr_block.setter
    def cidr_block(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "cidr_block", value)

    @property
    @pulumi.getter(name="enableDnsHostnames")
    def enable_dns_hostnames(self) -> Optional[pulumi.Input[bool]]:
        """
        Indicates whether the instances launched in the VPC get DNS hostnames. If enabled, instances in the VPC get DNS hostnames; otherwise, they do not. Disabled by default for nondefault VPCs.
        """
        return pulumi.get(self, "enable_dns_hostnames")

    @enable_dns_hostnames.setter
    def enable_dns_hostnames(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "enable_dns_hostnames", value)

    @property
    @pulumi.getter(name="enableDnsSupport")
    def enable_dns_support(self) -> Optional[pulumi.Input[bool]]:
        """
        Indicates whether the DNS resolution is supported for the VPC. If enabled, queries to the Amazon provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC network range "plus two" succeed. If disabled, the Amazon provided DNS service in the VPC that resolves public DNS hostnames to IP addresses is not enabled. Enabled by default.
        """
        return pulumi.get(self, "enable_dns_support")

    @enable_dns_support.setter
    def enable_dns_support(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "enable_dns_support", value)

    @property
    @pulumi.getter(name="instanceTenancy")
    def instance_tenancy(self) -> Optional[pulumi.Input[str]]:
        """
        The allowed tenancy of instances launched into the VPC.
        "default": An instance launched into the VPC runs on shared hardware by default, unless you explicitly specify a different tenancy during instance launch.
        "dedicated": An instance launched into the VPC is a Dedicated Instance by default, unless you explicitly specify a tenancy of host during instance launch. You cannot specify a tenancy of default during instance launch.
        Updating InstanceTenancy requires no replacement only if you are updating its value from "dedicated" to "default". Updating InstanceTenancy from "default" to "dedicated" requires replacement.
        """
        return pulumi.get(self, "instance_tenancy")

    @instance_tenancy.setter
    def instance_tenancy(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "instance_tenancy", value)

    @property
    @pulumi.getter(name="ipv4IpamPoolId")
    def ipv4_ipam_pool_id(self) -> Optional[pulumi.Input[str]]:
        """
        The ID of an IPv4 IPAM pool you want to use for allocating this VPC's CIDR
        """
        return pulumi.get(self, "ipv4_ipam_pool_id")

    @ipv4_ipam_pool_id.setter
    def ipv4_ipam_pool_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "ipv4_ipam_pool_id", value)

    @property
    @pulumi.getter(name="ipv4NetmaskLength")
    def ipv4_netmask_length(self) -> Optional[pulumi.Input[int]]:
        """
        The netmask length of the IPv4 CIDR you want to allocate to this VPC from an Amazon VPC IP Address Manager (IPAM) pool
        """
        return pulumi.get(self, "ipv4_netmask_length")

    @ipv4_netmask_length.setter
    def ipv4_netmask_length(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "ipv4_netmask_length", value)

    @property
    @pulumi.getter
    def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['VPCTagArgs']]]]:
        """
        The tags for the VPC.
        """
        return pulumi.get(self, "tags")

    @tags.setter
    def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['VPCTagArgs']]]]):
        pulumi.set(self, "tags", value)


class VPC(pulumi.CustomResource):
    @overload
    def __init__(__self__,
                 resource_name: str,
                 opts: Optional[pulumi.ResourceOptions] = None,
                 cidr_block: Optional[pulumi.Input[str]] = None,
                 enable_dns_hostnames: Optional[pulumi.Input[bool]] = None,
                 enable_dns_support: Optional[pulumi.Input[bool]] = None,
                 instance_tenancy: Optional[pulumi.Input[str]] = None,
                 ipv4_ipam_pool_id: Optional[pulumi.Input[str]] = None,
                 ipv4_netmask_length: Optional[pulumi.Input[int]] = None,
                 tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VPCTagArgs']]]]] = None,
                 __props__=None):
        """
        Resource Type definition for AWS::EC2::VPC

        :param str resource_name: The name of the resource.
        :param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[str] cidr_block: The primary IPv4 CIDR block for the VPC.
        :param pulumi.Input[bool] enable_dns_hostnames: Indicates whether the instances launched in the VPC get DNS hostnames. If enabled, instances in the VPC get DNS hostnames; otherwise, they do not. Disabled by default for nondefault VPCs.
        :param pulumi.Input[bool] enable_dns_support: Indicates whether the DNS resolution is supported for the VPC. If enabled, queries to the Amazon provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC network range "plus two" succeed. If disabled, the Amazon provided DNS service in the VPC that resolves public DNS hostnames to IP addresses is not enabled. Enabled by default.
        :param pulumi.Input[str] instance_tenancy: The allowed tenancy of instances launched into the VPC.
               "default": An instance launched into the VPC runs on shared hardware by default, unless you explicitly specify a different tenancy during instance launch.
               "dedicated": An instance launched into the VPC is a Dedicated Instance by default, unless you explicitly specify a tenancy of host during instance launch. You cannot specify a tenancy of default during instance launch.
               Updating InstanceTenancy requires no replacement only if you are updating its value from "dedicated" to "default". Updating InstanceTenancy from "default" to "dedicated" requires replacement.
        :param pulumi.Input[str] ipv4_ipam_pool_id: The ID of an IPv4 IPAM pool you want to use for allocating this VPC's CIDR
        :param pulumi.Input[int] ipv4_netmask_length: The netmask length of the IPv4 CIDR you want to allocate to this VPC from an Amazon VPC IP Address Manager (IPAM) pool
        :param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VPCTagArgs']]]] tags: The tags for the VPC.
        """
        ...
    @overload
    def __init__(__self__,
                 resource_name: str,
                 args: Optional[VPCArgs] = None,
                 opts: Optional[pulumi.ResourceOptions] = None):
        """
        Resource Type definition for AWS::EC2::VPC

        :param str resource_name: The name of the resource.
        :param VPCArgs args: The arguments to use to populate this resource's properties.
        :param pulumi.ResourceOptions opts: Options for the resource.
        """
        ...
    def __init__(__self__, resource_name: str, *args, **kwargs):
        resource_args, opts = _utilities.get_resource_args_opts(VPCArgs, pulumi.ResourceOptions, *args, **kwargs)
        if resource_args is not None:
            __self__._internal_init(resource_name, opts, **resource_args.__dict__)
        else:
            __self__._internal_init(resource_name, *args, **kwargs)

    def _internal_init(__self__,
                       resource_name: str,
                       opts: Optional[pulumi.ResourceOptions] = None,
                       cidr_block: Optional[pulumi.Input[str]] = None,
                       enable_dns_hostnames: Optional[pulumi.Input[bool]] = None,
                       enable_dns_support: Optional[pulumi.Input[bool]] = None,
                       instance_tenancy: Optional[pulumi.Input[str]] = None,
                       ipv4_ipam_pool_id: Optional[pulumi.Input[str]] = None,
                       ipv4_netmask_length: Optional[pulumi.Input[int]] = None,
                       tags: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['VPCTagArgs']]]]] = None,
                       __props__=None):
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = _utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = VPCArgs.__new__(VPCArgs)

            __props__.__dict__["cidr_block"] = cidr_block
            __props__.__dict__["enable_dns_hostnames"] = enable_dns_hostnames
            __props__.__dict__["enable_dns_support"] = enable_dns_support
            __props__.__dict__["instance_tenancy"] = instance_tenancy
            __props__.__dict__["ipv4_ipam_pool_id"] = ipv4_ipam_pool_id
            __props__.__dict__["ipv4_netmask_length"] = ipv4_netmask_length
            __props__.__dict__["tags"] = tags
            __props__.__dict__["cidr_block_associations"] = None
            __props__.__dict__["default_network_acl"] = None
            __props__.__dict__["default_security_group"] = None
            __props__.__dict__["ipv6_cidr_blocks"] = None
            __props__.__dict__["vpc_id"] = None
        super(VPC, __self__).__init__(
            'aws-native:ec2:VPC',
            resource_name,
            __props__,
            opts)

    @staticmethod
    def get(resource_name: str,
            id: pulumi.Input[str],
            opts: Optional[pulumi.ResourceOptions] = None) -> 'VPC':
        """
        Get an existing VPC resource's state with the given name, id, and optional extra
        properties used to qualify the lookup.

        :param str resource_name: The unique name of the resulting resource.
        :param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
        :param pulumi.ResourceOptions opts: Options for the resource.
        """
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = VPCArgs.__new__(VPCArgs)

        __props__.__dict__["cidr_block"] = None
        __props__.__dict__["cidr_block_associations"] = None
        __props__.__dict__["default_network_acl"] = None
        __props__.__dict__["default_security_group"] = None
        __props__.__dict__["enable_dns_hostnames"] = None
        __props__.__dict__["enable_dns_support"] = None
        __props__.__dict__["instance_tenancy"] = None
        __props__.__dict__["ipv4_ipam_pool_id"] = None
        __props__.__dict__["ipv4_netmask_length"] = None
        __props__.__dict__["ipv6_cidr_blocks"] = None
        __props__.__dict__["tags"] = None
        __props__.__dict__["vpc_id"] = None
        return VPC(resource_name, opts=opts, __props__=__props__)

    @property
    @pulumi.getter(name="cidrBlock")
    def cidr_block(self) -> pulumi.Output[Optional[str]]:
        """
        The primary IPv4 CIDR block for the VPC.
        """
        return pulumi.get(self, "cidr_block")

    @property
    @pulumi.getter(name="cidrBlockAssociations")
    def cidr_block_associations(self) -> pulumi.Output[Sequence[str]]:
        """
        A list of IPv4 CIDR block association IDs for the VPC.
        """
        return pulumi.get(self, "cidr_block_associations")

    @property
    @pulumi.getter(name="defaultNetworkAcl")
    def default_network_acl(self) -> pulumi.Output[str]:
        """
        The default network ACL ID that is associated with the VPC.
        """
        return pulumi.get(self, "default_network_acl")

    @property
    @pulumi.getter(name="defaultSecurityGroup")
    def default_security_group(self) -> pulumi.Output[str]:
        """
        The default security group ID that is associated with the VPC.
        """
        return pulumi.get(self, "default_security_group")

    @property
    @pulumi.getter(name="enableDnsHostnames")
    def enable_dns_hostnames(self) -> pulumi.Output[Optional[bool]]:
        """
        Indicates whether the instances launched in the VPC get DNS hostnames. If enabled, instances in the VPC get DNS hostnames; otherwise, they do not. Disabled by default for nondefault VPCs.
        """
        return pulumi.get(self, "enable_dns_hostnames")

    @property
    @pulumi.getter(name="enableDnsSupport")
    def enable_dns_support(self) -> pulumi.Output[Optional[bool]]:
        """
        Indicates whether the DNS resolution is supported for the VPC. If enabled, queries to the Amazon provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC network range "plus two" succeed. If disabled, the Amazon provided DNS service in the VPC that resolves public DNS hostnames to IP addresses is not enabled. Enabled by default.
        """
        return pulumi.get(self, "enable_dns_support")

    @property
    @pulumi.getter(name="instanceTenancy")
    def instance_tenancy(self) -> pulumi.Output[Optional[str]]:
        """
        The allowed tenancy of instances launched into the VPC.
        "default": An instance launched into the VPC runs on shared hardware by default, unless you explicitly specify a different tenancy during instance launch.
        "dedicated": An instance launched into the VPC is a Dedicated Instance by default, unless you explicitly specify a tenancy of host during instance launch. You cannot specify a tenancy of default during instance launch.
        Updating InstanceTenancy requires no replacement only if you are updating its value from "dedicated" to "default". Updating InstanceTenancy from "default" to "dedicated" requires replacement.
        """
        return pulumi.get(self, "instance_tenancy")

    @property
    @pulumi.getter(name="ipv4IpamPoolId")
    def ipv4_ipam_pool_id(self) -> pulumi.Output[Optional[str]]:
        """
        The ID of an IPv4 IPAM pool you want to use for allocating this VPC's CIDR
        """
        return pulumi.get(self, "ipv4_ipam_pool_id")

    @property
    @pulumi.getter(name="ipv4NetmaskLength")
    def ipv4_netmask_length(self) -> pulumi.Output[Optional[int]]:
        """
        The netmask length of the IPv4 CIDR you want to allocate to this VPC from an Amazon VPC IP Address Manager (IPAM) pool
        """
        return pulumi.get(self, "ipv4_netmask_length")

    @property
    @pulumi.getter(name="ipv6CidrBlocks")
    def ipv6_cidr_blocks(self) -> pulumi.Output[Sequence[str]]:
        """
        A list of IPv6 CIDR blocks that are associated with the VPC.
        """
        return pulumi.get(self, "ipv6_cidr_blocks")

    @property
    @pulumi.getter
    def tags(self) -> pulumi.Output[Optional[Sequence['outputs.VPCTag']]]:
        """
        The tags for the VPC.
        """
        return pulumi.get(self, "tags")

    @property
    @pulumi.getter(name="vpcId")
    def vpc_id(self) -> pulumi.Output[str]:
        """
        The Id for the model.
        """
        return pulumi.get(self, "vpc_id")
| 51.739247 | 432 | 0.676469 | 2,452 | 19,247 | 5.070147 | 0.088907 | 0.052204 | 0.053491 | 0.029038 | 0.828588 | 0.787806 | 0.741876 | 0.70777 | 0.665943 | 0.625563 | 0 | 0.007556 | 0.236764 | 19,247 | 371 | 433 | 51.878706 | 0.838734 | 0.40318 | 0 | 0.433962 | 1 | 0 | 0.130717 | 0.014681 | 0 | 0 | 0 | 0 | 0 | 1 | 0.150943 | false | 0.004717 | 0.033019 | 0 | 0.287736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
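For orientation, the generated class above is consumed from an ordinary Pulumi program, with keyword arguments mapping one-to-one onto `VPCArgs`. A minimal sketch (the logical name, CIDR, and exported key are illustrative):

```python
import pulumi
import pulumi_aws_native as aws_native

# Declare a VPC; properties not passed here fall back to provider defaults.
vpc = aws_native.ec2.VPC(
    "example-vpc",             # illustrative logical resource name
    cidr_block="10.0.0.0/16",  # illustrative CIDR
    enable_dns_support=True,
    enable_dns_hostnames=True,
    instance_tenancy="default",
)

# Output-typed properties such as vpc_id resolve once the resource is created.
pulumi.export("vpcId", vpc.vpc_id)
```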
62c4f61e5cfc08612a67defa6a030b9b8b2071e5 | 21 | py | Python | deploy/__init__.py | roman-st/DeployTool | af6bda37ef84f06358c875f4d07609287432c4f3 | [
"MIT"
] | null | null | null | deploy/__init__.py | roman-st/DeployTool | af6bda37ef84f06358c875f4d07609287432c4f3 | [
"MIT"
] | null | null | null | deploy/__init__.py | roman-st/DeployTool | af6bda37ef84f06358c875f4d07609287432c4f3 | [
"MIT"
] | null | null | null | from install import * | 21 | 21 | 0.809524 | 3 | 21 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62ce7369c23f84ff1d60c81c0e79974bd57a010c | 2,160 | py | Python | epytope/Data/pssms/tepitopepan/mat/DRB1_1315_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/tepitopepan/mat/DRB1_1315_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/tepitopepan/mat/DRB1_1315_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | DRB1_1315_9 = {0: {'A': -999.0, 'E': -999.0, 'D': -999.0, 'G': -999.0, 'F': -0.98558, 'I': -0.014418, 'H': -999.0, 'K': -999.0, 'M': -0.014418, 'L': -0.014418, 'N': -999.0, 'Q': -999.0, 'P': -999.0, 'S': -999.0, 'R': -999.0, 'T': -999.0, 'W': -0.98558, 'V': -0.014418, 'Y': -0.98558}, 1: {'A': 0.0, 'E': 0.1, 'D': -1.3, 'G': 0.5, 'F': 0.8, 'I': 1.1, 'H': 0.8, 'K': 1.1, 'M': 1.1, 'L': 1.0, 'N': 0.8, 'Q': 1.2, 'P': -0.5, 'S': -0.3, 'R': 2.2, 'T': 0.0, 'W': -0.1, 'V': 2.1, 'Y': 0.9}, 2: {'A': 0.0, 'E': -1.2, 'D': -1.3, 'G': 0.2, 'F': 0.8, 'I': 1.5, 'H': 0.2, 'K': 0.0, 'M': 1.4, 'L': 1.0, 'N': 0.5, 'Q': 0.0, 'P': 0.3, 'S': 0.2, 'R': 0.7, 'T': 0.0, 'W': 0.0, 'V': 0.5, 'Y': 0.8}, 3: {'A': 0.0, 'E': -1.2332, 'D': -1.4652, 'G': -1.5198, 'F': 0.70997, 'I': -0.29282, 'H': 1.073, 'K': 0.5441, 'M': 0.83882, 'L': 0.51927, 'N': 0.070494, 'Q': 0.38032, 'P': -1.5311, 'S': -0.62628, 'R': 0.050286, 'T': -0.99207, 'W': 0.55289, 'V': -0.63743, 'Y': 0.18514}, 4: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0}, 5: {'A': 0.0, 'E': -1.4081, 'D': -2.388, 'G': -0.70593, 'F': -1.3968, 'I': 0.69291, 'H': -0.11107, 'K': 1.2682, 'M': -0.90107, 'L': 0.18915, 'N': -0.58346, 'Q': -0.3103, 'P': 0.49538, 'S': -0.090422, 'R': 0.97166, 'T': 0.80858, 'W': -1.3961, 'V': 1.1966, 'Y': -1.3998}, 6: {'A': 0.0, 'E': -1.0228, 'D': -1.6072, 'G': -1.3613, 'F': 0.4947, 'I': -0.27362, 'H': 0.21253, 'K': -0.068935, 'M': 0.20935, 'L': 0.51728, 'N': 0.033665, 'Q': -0.33127, 'P': -0.4723, 'S': -0.84305, 'R': 0.98101, 'T': -0.83729, 'W': 0.34561, 'V': -0.11038, 'Y': -0.13001}, 7: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0}, 8: {'A': 0.0, 'E': -0.54182, 'D': -0.78869, 'G': 0.1478, 'F': 0.55352, 'I': 0.43948, 'H': -0.38613, 'K': -0.2285, 'M': 0.82817, 'L': -0.20101, 'N': -0.73258, 'Q': -0.073797, 'P': -0.48481, 'S': 1.0175, 'R': 0.22077, 'T': -0.6178, 'W': -0.99494, 'V': 0.11956, 'Y': 0.066112}} | 2,160 | 2,160 | 0.397685 | 525 | 2,160 | 1.632381 | 0.201905 | 0.114352 | 0.028005 | 0.03734 | 0.224037 | 0.142357 | 0.142357 | 0.142357 | 0.133022 | 0.133022 | 0 | 0.377765 | 0.162963 | 2,160 | 1 | 2,160 | 2,160 | 0.096239 | 0 | 0 | 0 | 0 | 0 | 0.07913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1a079d8f7ac08025b1c1ce53e870d3c5b5b8f98b | 14,274 | py | Python | pybind/slxos/v16r_1_00b/mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__


class ldp_session(PybindBase):
    """
    This class was auto-generated by the PythonClass plugin for PYANG
    from YANG module brocade-mpls - based on the path /mpls-config/router/mpls/mpls-cmds-holder/ldp/ldp-holder/ldp-session. Each member element of
    the container is represented as a class variable - with a specific
    YANG type.
    """
    __slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__ldp_session_ip','__ldp_session_fec_filter_out','__ldp_session_auth_key',)

    _yang_name = 'ldp-session'
    _rest_name = 'session'

    _pybind_generated_by = 'container'

    def __init__(self, *args, **kwargs):
        path_helper_ = kwargs.pop("path_helper", None)
        if path_helper_ is False:
            self._path_helper = False
        elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
            self._path_helper = path_helper_
        elif hasattr(self, "_parent"):
            path_helper_ = getattr(self._parent, "_path_helper", False)
            self._path_helper = path_helper_
        else:
            self._path_helper = False

        extmethods = kwargs.pop("extmethods", None)
        if extmethods is False:
            self._extmethods = False
        elif extmethods is not None and isinstance(extmethods, dict):
            self._extmethods = extmethods
        elif hasattr(self, "_parent"):
            extmethods = getattr(self._parent, "_extmethods", None)
            self._extmethods = extmethods
        else:
            self._extmethods = False

        self.__ldp_session_fec_filter_out = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..64']}), is_leaf=True, yang_name="ldp-session-fec-filter-out", rest_name="filter-fec-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Apply filtering on outbound FECs', u'cli-full-no': None, u'alt-name': u'filter-fec-out'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
        self.__ldp_session_auth_key = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..80']}), is_leaf=True, yang_name="ldp-session-auth-key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Enable TCP-MD5 authentication', u'cli-full-no': None, u'alt-name': u'key'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
        self.__ldp_session_ip = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])(%[\\p{N}\\p{L}]+)?'}), is_leaf=True, yang_name="ldp-session-ip", rest_name="ldp-session-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Define LDP peer ip address'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='inet:ipv4-address', is_config=True)

        load = kwargs.pop("load", None)
        if args:
            if len(args) > 1:
                raise TypeError("cannot create a YANG container with >1 argument")
            all_attr = True
            for e in self._pyangbind_elements:
                if not hasattr(args[0], e):
                    all_attr = False
                    break
            if not all_attr:
                raise ValueError("Supplied object did not have the correct attributes")
            for e in self._pyangbind_elements:
                nobj = getattr(args[0], e)
                if nobj._changed() is False:
                    continue
                setmethod = getattr(self, "_set_%s" % e)
                if load is None:
                    setmethod(getattr(args[0], e))
                else:
                    setmethod(getattr(args[0], e), load=load)

    def _path(self):
        if hasattr(self, "_parent"):
            return self._parent._path()+[self._yang_name]
        else:
            return [u'mpls-config', u'router', u'mpls', u'mpls-cmds-holder', u'ldp', u'ldp-holder', u'ldp-session']

    def _rest_path(self):
        if hasattr(self, "_parent"):
            if self._rest_name:
                return self._parent._rest_path()+[self._rest_name]
            else:
                return self._parent._rest_path()
        else:
            return [u'router', u'mpls', u'ldp', u'session']

    def _get_ldp_session_ip(self):
        """
        Getter method for ldp_session_ip, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/ldp_session_ip (inet:ipv4-address)
        """
        return self.__ldp_session_ip

    def _set_ldp_session_ip(self, v, load=False):
        """
        Setter method for ldp_session_ip, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/ldp_session_ip (inet:ipv4-address)
        If this variable is read-only (config: false) in the
        source YANG file, then _set_ldp_session_ip is considered as a private
        method. Backends looking to populate this variable should
        do so via calling thisObj._set_ldp_session_ip() directly.
        """
        parent = getattr(self, "_parent", None)
        if parent is not None and load is False:
            raise AttributeError("Cannot set keys directly when" +
                                 " within an instantiated list")

        if hasattr(v, "_utype"):
            v = v._utype(v)
        try:
            t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])(%[\\p{N}\\p{L}]+)?'}), is_leaf=True, yang_name="ldp-session-ip", rest_name="ldp-session-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Define LDP peer ip address'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='inet:ipv4-address', is_config=True)
        except (TypeError, ValueError):
            raise ValueError({
                'error-string': """ldp_session_ip must be of a type compatible with inet:ipv4-address""",
                'defined-type': "inet:ipv4-address",
                'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])(%[\\p{N}\\p{L}]+)?'}), is_leaf=True, yang_name="ldp-session-ip", rest_name="ldp-session-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Define LDP peer ip address'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='inet:ipv4-address', is_config=True)""",
            })

        self.__ldp_session_ip = t
        if hasattr(self, '_set'):
            self._set()

    def _unset_ldp_session_ip(self):
        self.__ldp_session_ip = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])(%[\\p{N}\\p{L}]+)?'}), is_leaf=True, yang_name="ldp-session-ip", rest_name="ldp-session-ip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Define LDP peer ip address'}}, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='inet:ipv4-address', is_config=True)

    def _get_ldp_session_fec_filter_out(self):
        """
        Getter method for ldp_session_fec_filter_out, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/ldp_session_fec_filter_out (string)
        """
        return self.__ldp_session_fec_filter_out

    def _set_ldp_session_fec_filter_out(self, v, load=False):
        """
        Setter method for ldp_session_fec_filter_out, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/ldp_session_fec_filter_out (string)
        If this variable is read-only (config: false) in the
        source YANG file, then _set_ldp_session_fec_filter_out is considered as a private
        method. Backends looking to populate this variable should
        do so via calling thisObj._set_ldp_session_fec_filter_out() directly.
        """
        if hasattr(v, "_utype"):
            v = v._utype(v)
        try:
            t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..64']}), is_leaf=True, yang_name="ldp-session-fec-filter-out", rest_name="filter-fec-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Apply filtering on outbound FECs', u'cli-full-no': None, u'alt-name': u'filter-fec-out'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
        except (TypeError, ValueError):
            raise ValueError({
                'error-string': """ldp_session_fec_filter_out must be of a type compatible with string""",
                'defined-type': "string",
                'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..64']}), is_leaf=True, yang_name="ldp-session-fec-filter-out", rest_name="filter-fec-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Apply filtering on outbound FECs', u'cli-full-no': None, u'alt-name': u'filter-fec-out'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)""",
            })

        self.__ldp_session_fec_filter_out = t
        if hasattr(self, '_set'):
            self._set()

    def _unset_ldp_session_fec_filter_out(self):
        self.__ldp_session_fec_filter_out = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..64']}), is_leaf=True, yang_name="ldp-session-fec-filter-out", rest_name="filter-fec-out", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Apply filtering on outbound FECs', u'cli-full-no': None, u'alt-name': u'filter-fec-out'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)

    def _get_ldp_session_auth_key(self):
        """
        Getter method for ldp_session_auth_key, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/ldp_session_auth_key (string)
        """
        return self.__ldp_session_auth_key

    def _set_ldp_session_auth_key(self, v, load=False):
        """
        Setter method for ldp_session_auth_key, mapped from YANG variable /mpls_config/router/mpls/mpls_cmds_holder/ldp/ldp_holder/ldp_session/ldp_session_auth_key (string)
        If this variable is read-only (config: false) in the
        source YANG file, then _set_ldp_session_auth_key is considered as a private
        method. Backends looking to populate this variable should
        do so via calling thisObj._set_ldp_session_auth_key() directly.
        """
        if hasattr(v, "_utype"):
            v = v._utype(v)
        try:
            t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..80']}), is_leaf=True, yang_name="ldp-session-auth-key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Enable TCP-MD5 authentication', u'cli-full-no': None, u'alt-name': u'key'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)
        except (TypeError, ValueError):
            raise ValueError({
                'error-string': """ldp_session_auth_key must be of a type compatible with string""",
                'defined-type': "string",
                'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..80']}), is_leaf=True, yang_name="ldp-session-auth-key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Enable TCP-MD5 authentication', u'cli-full-no': None, u'alt-name': u'key'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)""",
            })

        self.__ldp_session_auth_key = t
        if hasattr(self, '_set'):
            self._set()

    def _unset_ldp_session_auth_key(self):
        self.__ldp_session_auth_key = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'length': [u'1..80']}), is_leaf=True, yang_name="ldp-session-auth-key", rest_name="key", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Enable TCP-MD5 authentication', u'cli-full-no': None, u'alt-name': u'key'}}, namespace='urn:brocade.com:mgmt:brocade-mpls', defining_module='brocade-mpls', yang_type='string', is_config=True)

    ldp_session_ip = __builtin__.property(_get_ldp_session_ip, _set_ldp_session_ip)
    ldp_session_fec_filter_out = __builtin__.property(_get_ldp_session_fec_filter_out, _set_ldp_session_fec_filter_out)
    ldp_session_auth_key = __builtin__.property(_get_ldp_session_auth_key, _set_ldp_session_auth_key)

    _pyangbind_elements = {'ldp_session_ip': ldp_session_ip, 'ldp_session_fec_filter_out': ldp_session_fec_filter_out, 'ldp_session_auth_key': ldp_session_auth_key, }
| 72.090909 | 632 | 0.722573 | 2,177 | 14,274 | 4.479559 | 0.095544 | 0.088187 | 0.034454 | 0.04676 | 0.813884 | 0.764766 | 0.735952 | 0.728261 | 0.724774 | 0.710931 | 0 | 0.016514 | 0.126103 | 14,274 | 197 | 633 | 72.456853 | 0.765272 | 0.143688 | 0 | 0.393939 | 0 | 0.045455 | 0.370524 | 0.162052 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.060606 | 0 | 0.280303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
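The generated container is driven through its properties, which dispatch to the `_set_*` methods and therefore enforce the YANG type restrictions above. A short sketch of the intended use (the addresses and key values are illustrative):

```python
# Instantiate the container and populate its leaves; each assignment is
# validated by the corresponding RestrictedClassType.
session = ldp_session()
session.ldp_session_ip = '10.1.1.2'             # must match the IPv4 pattern
session.ldp_session_auth_key = 'md5-secret'     # length restricted to 1..80
session.ldp_session_fec_filter_out = 'fec-out'  # length restricted to 1..64

# A value that violates a restriction raises the ValueError built in the setter,
# e.g. session.ldp_session_ip = 'not-an-ip'
```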
1a292474494a0a721881b6607065ecd828951e8e | 61 | py | Python | unit_test/__init__.py | Renata1995/Topic-Distance-and-Coherence | d567d5b3ef71ea5654f214aa3736add7f3ac94bc | [
"Apache-2.0"
] | 5 | 2018-08-25T07:16:31.000Z | 2020-11-12T00:36:15.000Z | unit_test/__init__.py | Renata1995/Topic-Distance-and-Coherence | d567d5b3ef71ea5654f214aa3736add7f3ac94bc | [
"Apache-2.0"
] | 1 | 2018-09-24T16:17:47.000Z | 2018-09-24T16:17:47.000Z | unit_test/__init__.py | Renata1995/Topic-Distance-and-Coherence | d567d5b3ef71ea5654f214aa3736add7f3ac94bc | [
"Apache-2.0"
] | 4 | 2018-05-07T07:52:10.000Z | 2020-11-12T00:36:18.000Z | """
Unit Tests
"""
def test_dir():
return "test_data"
| 6.777778 | 22 | 0.57377 | 8 | 61 | 4.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245902 | 61 | 8 | 23 | 7.625 | 0.717391 | 0.163934 | 0 | 0 | 0 | 0 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a7e7f4ceab2f7cd795371ce1b30ea6c23301d016 | 35 | py | Python | descriptive_dp/__init__.py | hsfzxjy/Generic_DP | 290d0fa0f5e7fe221b65dc478ebac1e58f0f2e92 | [
"MIT"
] | null | null | null | descriptive_dp/__init__.py | hsfzxjy/Generic_DP | 290d0fa0f5e7fe221b65dc478ebac1e58f0f2e92 | [
"MIT"
] | null | null | null | descriptive_dp/__init__.py | hsfzxjy/Generic_DP | 290d0fa0f5e7fe221b65dc478ebac1e58f0f2e92 | [
"MIT"
] | null | null | null | from .decorators import dp # noqa
| 17.5 | 34 | 0.742857 | 5 | 35 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 35 | 1 | 35 | 35 | 0.928571 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a7fbc2f582120fcc90a88850683313b37b7d1cac | 4,611 | py | Python | tests/integration/test_redirect_url_storage/test.py | syominsergey/ClickHouse | 1e1e3a6ad84b3c3f5da516cb255670563854e42a | [
"Apache-2.0"
] | null | null | null | tests/integration/test_redirect_url_storage/test.py | syominsergey/ClickHouse | 1e1e3a6ad84b3c3f5da516cb255670563854e42a | [
"Apache-2.0"
] | null | null | null | tests/integration/test_redirect_url_storage/test.py | syominsergey/ClickHouse | 1e1e3a6ad84b3c3f5da516cb255670563854e42a | [
"Apache-2.0"
] | null | null | null | import pytest
from helpers.cluster import ClickHouseCluster
cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance('node1', main_configs=['configs/named_collections.xml'], with_zookeeper=False, with_hdfs=True)
@pytest.fixture(scope="module")
def started_cluster():
try:
cluster.start()
yield cluster
finally:
cluster.shutdown()
def test_url_without_redirect(started_cluster):
hdfs_api = started_cluster.hdfs_api
hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
# access datanode port directly
node1.query(
"create table WebHDFSStorage (id UInt32, name String, weight Float64) ENGINE = URL('http://hdfs1:50075/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV')")
assert node1.query("select * from WebHDFSStorage") == "1\tMark\t72.53\n"
def test_url_with_globs(started_cluster):
hdfs_api = started_cluster.hdfs_api
hdfs_api.write_data("/simple_storage_1_1", "1\n")
hdfs_api.write_data("/simple_storage_1_2", "2\n")
hdfs_api.write_data("/simple_storage_1_3", "3\n")
hdfs_api.write_data("/simple_storage_2_1", "4\n")
hdfs_api.write_data("/simple_storage_2_2", "5\n")
hdfs_api.write_data("/simple_storage_2_3", "6\n")
result = node1.query(
"select * from url('http://hdfs1:50075/webhdfs/v1/simple_storage_{1..2}_{1..3}?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV', 'data String') as data order by data")
assert result == "1\n2\n3\n4\n5\n6\n"
def test_url_with_globs_and_failover(started_cluster):
hdfs_api = started_cluster.hdfs_api
hdfs_api.write_data("/simple_storage_1_1", "1\n")
hdfs_api.write_data("/simple_storage_1_2", "2\n")
hdfs_api.write_data("/simple_storage_1_3", "3\n")
hdfs_api.write_data("/simple_storage_3_1", "4\n")
hdfs_api.write_data("/simple_storage_3_2", "5\n")
hdfs_api.write_data("/simple_storage_3_3", "6\n")
result = node1.query(
"select * from url('http://hdfs1:50075/webhdfs/v1/simple_storage_{0|1|2|3}_{1..3}?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV', 'data String') as data order by data")
assert result == "1\n2\n3\n"
def test_url_with_redirect_not_allowed(started_cluster):
hdfs_api = started_cluster.hdfs_api
hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
# access proxy port without allowing redirects
node1.query(
"create table WebHDFSStorageWithoutRedirect (id UInt32, name String, weight Float64) ENGINE = URL('http://hdfs1:50070/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV')")
with pytest.raises(Exception):
assert node1.query("select * from WebHDFSStorageWithoutRedirect") == "1\tMark\t72.53\n"
def test_url_with_redirect_allowed(started_cluster):
hdfs_api = started_cluster.hdfs_api
hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
# access proxy port with allowing redirects
# http://localhost:50070/webhdfs/v1/b?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0
node1.query(
"create table WebHDFSStorageWithRedirect (id UInt32, name String, weight Float64) ENGINE = URL('http://hdfs1:50070/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV')")
assert node1.query("SET max_http_get_redirects=1; select * from WebHDFSStorageWithRedirect") == "1\tMark\t72.53\n"
node1.query("drop table WebHDFSStorageWithRedirect")
def test_predefined_connection_configuration(started_cluster):
hdfs_api = started_cluster.hdfs_api
hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
node1.query(
"create table WebHDFSStorageWithRedirect (id UInt32, name String, weight Float64) ENGINE = URL(url1, url='http://hdfs1:50070/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', format='TSV')")
assert node1.query("SET max_http_get_redirects=1; select * from WebHDFSStorageWithRedirect") == "1\tMark\t72.53\n"
result = node1.query("SET max_http_get_redirects=1; select * from url(url1, url='http://hdfs1:50070/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', format='TSV', structure='id UInt32, name String, weight Float64')")
assert(result == "1\tMark\t72.53\n")
node1.query("drop table WebHDFSStorageWithRedirect")
| 47.05102 | 246 | 0.731512 | 676 | 4,611 | 4.751479 | 0.16716 | 0.069738 | 0.105853 | 0.079701 | 0.803861 | 0.792653 | 0.765567 | 0.753113 | 0.749689 | 0.693649 | 0 | 0.06518 | 0.124919 | 4,611 | 97 | 247 | 47.536082 | 0.730855 | 0.043158 | 0 | 0.454545 | 0 | 0.106061 | 0.520309 | 0.072158 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.106061 | false | 0 | 0.030303 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c525b12b073d2179ad0519bb4357a0bac9f91dab | 1,749 | py | Python | tests/test_update.py | cbenhagen/mhl | 679942fc87b47e095fc1e2734901cb1cd405f75c | [
"MIT"
] | null | null | null | tests/test_update.py | cbenhagen/mhl | 679942fc87b47e095fc1e2734901cb1cd405f75c | [
"MIT"
] | null | null | null | tests/test_update.py | cbenhagen/mhl | 679942fc87b47e095fc1e2734901cb1cd405f75c | [
"MIT"
] | null | null | null | import time
from packaging import version
import requests
def test_updater(requests_mock, mocker):
mocker.patch("mhl.__version__.ascmhl_tool_version", "0.0.1")
requests_mock.get(
"https://api.github.com/repos/ascmitc/mhl/releases/latest",
json={"tag_name": "v0.6.5"},
)
from importlib import reload
import mhl.cli.update
reload(mhl.cli.update)
updater = mhl.cli.update.Updater()
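    # the Updater runs its version check in a background thread; poll until it reports finished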
while not updater.finished:
time.sleep(0.1)
assert updater.latest_version == version.parse("0.6.5")
assert updater.needs_update is True
updater.join(timeout=1)
def test_updater_prerelease(requests_mock, mocker):
mocker.patch("mhl.__version__.ascmhl_tool_version", "0.0.1")
requests_mock.get(
"https://api.github.com/repos/ascmitc/mhl/releases/latest",
json={"tag_name": "v0.6.5-alpha.2"},
)
from importlib import reload
import mhl.cli.update
reload(mhl.cli.update)
updater = mhl.cli.update.Updater()
while not updater.finished:
time.sleep(0.1)
assert updater.latest_version == version.parse("0.6.5a2")
assert updater.needs_update is False
updater.join(timeout=1)
def test_updater_timeout(requests_mock, mocker):
mocker.patch("mhl.__version__.ascmhl_tool_version", "0.0.1")
requests_mock.get(
"https://api.github.com/repos/ascmitc/mhl/releases/latest",
exc=requests.exceptions.ConnectTimeout,
)
from importlib import reload
import mhl.cli.update
reload(mhl.cli.update)
updater = mhl.cli.update.Updater()
while not updater.finished:
time.sleep(0.1)
assert updater.latest_version is None
assert updater.needs_update is False
updater.join(timeout=1)
| 25.347826 | 67 | 0.688965 | 241 | 1,749 | 4.846473 | 0.232365 | 0.046233 | 0.092466 | 0.097603 | 0.883562 | 0.861301 | 0.861301 | 0.821062 | 0.821062 | 0.821062 | 0 | 0.022551 | 0.188679 | 1,749 | 68 | 68 | 25.720588 | 0.800564 | 0 | 0 | 0.666667 | 0 | 0 | 0.19211 | 0.060034 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.0625 | false | 0 | 0.1875 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c540d4ca2138a6820762f18bbe93a314bf0bd12a | 194 | py | Python | mysite/dashboard2/admin.py | militiaonly/spark1707 | 3d4a3945ca2190628ea6a8593d3adadfd1a71dfb | [
"MIT"
] | null | null | null | mysite/dashboard2/admin.py | militiaonly/spark1707 | 3d4a3945ca2190628ea6a8593d3adadfd1a71dfb | [
"MIT"
] | null | null | null | mysite/dashboard2/admin.py | militiaonly/spark1707 | 3d4a3945ca2190628ea6a8593d3adadfd1a71dfb | [
"MIT"
] | null | null | null | from django.contrib import admin
# from .models import Config, Machine, Tag
# Register your models here.
# admin.site.register(Config)
# admin.site.register(Machine)
# admin.site.register(Tag)
| 24.25 | 42 | 0.768041 | 27 | 194 | 5.518519 | 0.481481 | 0.181208 | 0.342282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118557 | 194 | 7 | 43 | 27.714286 | 0.871345 | 0.768041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c54ab84cf7359825ca757d966cec8972b758d952 | 41 | py | Python | src/srbpy/stdlib/__init__.py | billhu0228/SmartRoadBridgePy | 4a5d34028a2612aef846b580733bf6f488110798 | [
"MIT"
] | 2 | 2020-08-05T10:46:45.000Z | 2020-08-11T11:05:18.000Z | src/srbpy/stdlib/__init__.py | billhu0228/SmartRoadBridgePy | 4a5d34028a2612aef846b580733bf6f488110798 | [
"MIT"
] | null | null | null | src/srbpy/stdlib/__init__.py | billhu0228/SmartRoadBridgePy | 4a5d34028a2612aef846b580733bf6f488110798 | [
"MIT"
] | 1 | 2020-08-26T07:50:22.000Z | 2020-08-26T07:50:22.000Z | from .substructures import OneColumnPier
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3dcf7fcfd9205f43359cc49b99d4e5f623af556e | 854 | py | Python | x_rebirth_station_calculator/station_data/ol__licensed_distillery.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | 1 | 2016-04-17T11:00:22.000Z | 2016-04-17T11:00:22.000Z | x_rebirth_station_calculator/station_data/ol__licensed_distillery.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | null | null | null | x_rebirth_station_calculator/station_data/ol__licensed_distillery.py | Phipsz/XRebirthStationCalculator | ac31c2f5816be34a7df2d7c4eb4bd5e01f7ff835 | [
"MIT"
] | null | null | null | from x_rebirth_station_calculator.station_data import modules
from x_rebirth_station_calculator.station_data.station_base import Station
names = {'L044': 'Licensed Distillery',
'L049': 'Lizensierte Destille'}
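# eight identical liquor still modules, each using the 'ar' production method at efficiency 179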
smodules = [modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179),
modules.LiquorStill(production_method='ar', efficiency=179)]
OL_LicensedDistillery = Station(names, smodules)
| 50.235294 | 74 | 0.73185 | 89 | 854 | 6.820225 | 0.280899 | 0.237232 | 0.369028 | 0.448105 | 0.777595 | 0.777595 | 0.777595 | 0.645799 | 0.645799 | 0.645799 | 0 | 0.041667 | 0.156909 | 854 | 16 | 75 | 53.375 | 0.801389 | 0 | 0 | 0.461538 | 0 | 0 | 0.07377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3de05409fc0e0f25c823759f6871d280c7dd2202 | 1,939 | py | Python | 2020/day05.py | mbcollins2/aoc | b94380fd5e92b4fe9f4af654e7762174c1c6ac91 | [
"MIT"
] | null | null | null | 2020/day05.py | mbcollins2/aoc | b94380fd5e92b4fe9f4af654e7762174c1c6ac91 | [
"MIT"
] | 3 | 2021-12-15T19:12:38.000Z | 2021-12-15T19:14:42.000Z | 2020/day05.py | mbcollins2/aoc | b94380fd5e92b4fe9f4af654e7762174c1c6ac91 | [
"MIT"
] | null | null | null | import numpy as np
class solve_day(object):
with open('inputs/day05.txt', 'r') as f:
data = f.readlines()
def part1(self):
seat_id = []
for d in self.data:
d = d.strip()
front = 0
back = 127
left = 0
right = 7
row = 0
seat = 0
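            # F/B halve the row range 0-127 and L/R halve the seat range 0-7 (binary space partitioning)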
for x in d[:-3]:
if x == 'F':
back = int(np.mean([front, back]))
if x == 'B':
front = int(np.mean([front, back])+1)
row = int(np.mean([front, back]))
for x in d[-3:]:
if x == 'L':
right = int(np.mean([left, right]))
if x == 'R':
left = int(np.mean([left, right])+1)
seat = int(np.mean([left, right]))
seat_id.append(row*8+seat)
return max(seat_id)
def part2(self):
seat_id = []
for d in self.data:
d = d.strip()
front = 0
back = 127
left = 0
right = 7
row = 0
seat = 0
for x in d[:-3]:
if x == 'F':
back = int(np.mean([front, back]))
if x == 'B':
front = int(np.mean([front, back])+1)
row = int(np.mean([front, back]))
for x in d[-3:]:
if x == 'L':
right = int(np.mean([left, right]))
if x == 'R':
left = int(np.mean([left, right])+1)
seat = int(np.mean([left, right]))
seat_id.append(row*8+seat)
        ids = sorted(set(seat_id))  # set iteration order is not guaranteed, so sort before pairing neighbours
        return [int(np.mean([x[1], x[0]])) for x in zip(ids, ids[1:]) if x[1] - x[0] == 2][0]
if __name__ == '__main__':
s = solve_day()
print(f'Part 1: {s.part1()}')
print(f'Part 2: {s.part2()}') | 22.546512 | 118 | 0.38164 | 251 | 1,939 | 2.880478 | 0.223108 | 0.089903 | 0.161826 | 0.116183 | 0.710927 | 0.710927 | 0.710927 | 0.710927 | 0.710927 | 0.710927 | 0 | 0.039347 | 0.46261 | 1,939 | 86 | 119 | 22.546512 | 0.654511 | 0 | 0 | 0.785714 | 0 | 0 | 0.036598 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.017857 | 0 | 0.107143 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3dfa1b85e70170ddc32e02d50c11cc157da0c6b4 | 9,194 | py | Python | privx_api/connection_manager.py | hokenssh/privx-sdk-for-python | 24627d25c0343f350c9b2396677344b771f8aec6 | [
"Apache-2.0"
] | 4 | 2020-06-15T17:14:18.000Z | 2021-12-20T12:12:56.000Z | privx_api/connection_manager.py | hokenssh/privx-sdk-for-python | 24627d25c0343f350c9b2396677344b771f8aec6 | [
"Apache-2.0"
] | 5 | 2019-11-25T07:04:07.000Z | 2021-05-19T08:09:53.000Z | privx_api/connection_manager.py | hokenssh/privx-sdk-for-python | 24627d25c0343f350c9b2396677344b771f8aec6 | [
"Apache-2.0"
] | 23 | 2019-11-22T08:17:58.000Z | 2022-02-21T15:50:36.000Z | from http import HTTPStatus
from typing import Optional
from privx_api.base import BasePrivXAPI
from privx_api.enums import UrlEnum
from privx_api.response import PrivXAPIResponse, PrivXStreamResponse
from privx_api.utils import get_value
class ConnectionManagerAPI(BasePrivXAPI):
def get_connection_manager_service_status(self):
"""
Get microservice status.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_get(UrlEnum.CONNECTION_MANAGER.STATUS)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def get_connections(
self,
offset: Optional[int] = None,
limit: Optional[int] = None,
sort_key: Optional[str] = None,
sort_dir: Optional[str] = None,
) -> PrivXAPIResponse:
"""
Get connections.
Returns:
PrivXAPIResponse
"""
search_params = self._get_search_params(
offset=offset, limit=limit, sortkey=sort_key, sortdir=sort_dir
)
response_status, data = self._http_get(
UrlEnum.CONNECTION_MANAGER.CONNECTIONS,
query_params=search_params,
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def search_connections(
self,
offset: Optional[int] = None,
limit: Optional[int] = None,
sort_key: Optional[str] = None,
sort_dir: Optional[str] = None,
connection_params: Optional[dict] = None,
) -> PrivXAPIResponse:
"""
Search for connections.
Returns:
PrivXAPIResponse
"""
search_params = self._get_search_params(
offset=offset, limit=limit, sortkey=sort_key, sortdir=sort_dir
)
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.SEARCH,
query_params=search_params,
body=get_value(connection_params, dict()),
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def get_connection(self, connection_id: str) -> PrivXAPIResponse:
"""
Get a single connection.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_get(
UrlEnum.CONNECTION_MANAGER.CONNECTION,
path_params={"connection_id": connection_id},
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def create_trail_download_handle(
self, connection_id: str, channel_id: str, file_id: str
) -> PrivXAPIResponse:
"""
Create session ID for trail stored file download.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.TRAIL_SESSION_ID,
path_params={
"connection_id": connection_id,
"channel_id": channel_id,
"file_id": file_id,
},
)
return PrivXAPIResponse(response_status, HTTPStatus.CREATED, data)
def download_trail(
self,
connection_id: str,
channel_id: str,
file_id: str,
session_id: str,
) -> PrivXStreamResponse:
"""
Download trail stored file transferred within audited connection channel.
        Use object.iter_content() for consuming the chunked response.
Returns:
StreamResponse
"""
response_obj = self._http_stream(
UrlEnum.CONNECTION_MANAGER.TRAIL,
path_params={
"connection_id": connection_id,
"channel_id": channel_id,
"file_id": file_id,
"session_id": session_id,
},
)
return PrivXStreamResponse(response_obj, HTTPStatus.OK)
def create_trail_log_download_handle(
self, connection_id: str, channel_id: str
) -> PrivXAPIResponse:
"""
        Create session ID for trail log download.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.TRAIL_LOG,
path_params={
"connection_id": connection_id,
"channel_id": channel_id,
},
)
return PrivXAPIResponse(response_status, HTTPStatus.CREATED, data)
def download_trail_log(
self,
connection_id: str,
channel_id: str,
session_id: str,
format_param: Optional[str] = None,
filter_param: Optional[str] = None,
) -> PrivXStreamResponse:
"""
Download trail log of audited connection channel.
        Use object.iter_content() for consuming the chunked response.
Returns:
StreamResponse
"""
search_params = self._get_search_params(
format=format_param, filter=filter_param
)
response_obj = self._http_stream(
UrlEnum.CONNECTION_MANAGER.TRAIL_LOG_SESSION_ID,
path_params={
"connection_id": connection_id,
"channel_id": channel_id,
"session_id": session_id,
},
query_params=search_params,
)
return PrivXStreamResponse(response_obj, HTTPStatus.OK)
def get_connection_access_roles(self, connection_id: str) -> PrivXAPIResponse:
"""
Get saved access roles for a connection.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_get(
UrlEnum.CONNECTION_MANAGER.CONNECTION_ACCESS_ROLES,
path_params={
"connection_id": connection_id,
},
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def grant_access_role_to_connection(
self,
connection_id: str,
role_id: str,
) -> PrivXAPIResponse:
"""
Grant a permission for a role for a connection.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.CONNECTION_ACCESS_ROLE,
path_params={
"connection_id": connection_id,
"role_id": role_id,
},
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def revoke_access_role_from_connection(
self,
connection_id: str,
role_id: str,
) -> PrivXAPIResponse:
"""
Revoke a permission for a role from a connection.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_delete(
UrlEnum.CONNECTION_MANAGER.CONNECTION_ACCESS_ROLE,
path_params={
"connection_id": connection_id,
"role_id": role_id,
},
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def revoke_role_permissions_from_connections(
self,
role_id: str,
) -> PrivXAPIResponse:
"""
Revoke permissions for a role from connections.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_delete(
UrlEnum.CONNECTION_MANAGER.ACCESS_ROLE,
path_params={
"role_id": role_id,
},
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def terminate_connection(
self,
connection_id: str,
termination_params: Optional[dict] = None,
) -> PrivXAPIResponse:
"""
Terminate connection by ID.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.TERMINATE_CONNECTION_ID,
path_params={
"connection_id": connection_id,
},
body=termination_params,
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def terminate_connection_by_host(
self,
host_id: str,
termination_params: Optional[dict] = None,
) -> PrivXAPIResponse:
"""
        Terminate connections by host ID.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.TERMINATE_HOST_ID,
path_params={
"host_id": host_id,
},
body=termination_params,
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
def terminate_connection_by_user(
self,
user_id: str,
termination_params: Optional[dict] = None,
) -> PrivXAPIResponse:
"""
Terminate connection(s) of a user.
Returns:
PrivXAPIResponse
"""
response_status, data = self._http_post(
UrlEnum.CONNECTION_MANAGER.TERMINATE_USER_ID,
path_params={
"user_id": user_id,
},
body=termination_params,
)
return PrivXAPIResponse(response_status, HTTPStatus.OK, data)
| 30.045752 | 82 | 0.590494 | 858 | 9,194 | 6.027972 | 0.111888 | 0.064965 | 0.139211 | 0.055298 | 0.824439 | 0.776295 | 0.755607 | 0.723318 | 0.723318 | 0.62413 | 0 | 0 | 0.332717 | 9,194 | 305 | 83 | 30.144262 | 0.843032 | 0.128018 | 0 | 0.606218 | 0 | 0 | 0.030479 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07772 | false | 0 | 0.031088 | 0 | 0.19171 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9a84b28be20d7208d59d1c2b5fac11e686811134 | 133 | py | Python | lager/__init__.py | vforgione/lager | 8957ea0a9a90c2f987d13693397e19176e7ff737 | [
"MIT"
] | null | null | null | lager/__init__.py | vforgione/lager | 8957ea0a9a90c2f987d13693397e19176e7ff737 | [
"MIT"
] | 3 | 2021-06-01T21:22:31.000Z | 2022-03-15T18:42:50.000Z | lager/__init__.py | vforgione/lager | 8957ea0a9a90c2f987d13693397e19176e7ff737 | [
"MIT"
] | null | null | null | from lager.enums import Verbosity
from lager.handlers import FileHandler, StdOutHandler, TcpHandler
from lager.loggers import Logger
| 33.25 | 65 | 0.857143 | 17 | 133 | 6.705882 | 0.647059 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 133 | 3 | 66 | 44.333333 | 0.957983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9adae519b53b7b5c9e61d17b207285410580a4bf | 200 | py | Python | remote_git_repo_analyzer/commands/__init__.py | wahlflo/RemoteGitRepoAnalyzer | a2f0f43b034d5a635b5bd2afaef93c0d3a11a34f | [
"MIT"
] | null | null | null | remote_git_repo_analyzer/commands/__init__.py | wahlflo/RemoteGitRepoAnalyzer | a2f0f43b034d5a635b5bd2afaef93c0d3a11a34f | [
"MIT"
] | null | null | null | remote_git_repo_analyzer/commands/__init__.py | wahlflo/RemoteGitRepoAnalyzer | a2f0f43b034d5a635b5bd2afaef93c0d3a11a34f | [
"MIT"
] | null | null | null | from .show_file_names import show_file_names
from .show_file_structure import show_file_structure
from .show_file_extensions import show_file_extensions
from .show_commit_logs import show_commit_logs
| 40 | 54 | 0.9 | 32 | 200 | 5.125 | 0.28125 | 0.292683 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 200 | 4 | 55 | 50 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1130b89420def9f0ab576e58d0edd9ff7212185 | 153 | py | Python | built-in/TensorFlow/Research/cv/image_classification/Cars_for_TensorFlow/automl/vega/search_space/networks/pytorch/heads/__init__.py | Huawei-Ascend/modelzoo | df51ed9c1d6dbde1deef63f2a037a369f8554406 | [
"Apache-2.0"
] | 12 | 2020-12-13T08:34:24.000Z | 2022-03-20T15:17:17.000Z | built-in/TensorFlow/Research/cv/image_classification/Cars_for_TensorFlow/automl/vega/search_space/networks/pytorch/heads/__init__.py | Huawei-Ascend/modelzoo | df51ed9c1d6dbde1deef63f2a037a369f8554406 | [
"Apache-2.0"
] | 3 | 2021-03-31T20:15:40.000Z | 2022-02-09T23:50:46.000Z | built-in/TensorFlow/Research/cv/image_classification/Darts_for_TensorFlow/automl/vega/search_space/networks/pytorch/heads/__init__.py | Huawei-Ascend/modelzoo | df51ed9c1d6dbde1deef63f2a037a369f8554406 | [
"Apache-2.0"
] | 2 | 2021-07-10T12:40:46.000Z | 2021-12-17T07:55:15.000Z | from .linear_head import LinearClassificationHead
from .rpn_head import RPNHead
from .bbox_head import BBoxHead
from .auto_lane_head import AutoLaneHead
| 30.6 | 49 | 0.869281 | 21 | 153 | 6.095238 | 0.571429 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104575 | 153 | 4 | 50 | 38.25 | 0.934307 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b184052cd71ef4865ff0e151962c390c9c2be5f0 | 89 | py | Python | attendees/users/models/__init__.py | xjlin0/-attendees30 | 48a2f2cbec11ec471d7a40d24903b48890feebf9 | [
"MIT"
] | 1 | 2020-03-26T00:42:04.000Z | 2020-03-26T00:42:04.000Z | attendees/users/models/__init__.py | xjlin0/-attendees30 | 48a2f2cbec11ec471d7a40d24903b48890feebf9 | [
"MIT"
] | null | null | null | attendees/users/models/__init__.py | xjlin0/-attendees30 | 48a2f2cbec11ec471d7a40d24903b48890feebf9 | [
"MIT"
] | null | null | null | from .user import User
from .menu import Menu
from .menu_auth_group import MenuAuthGroup
| 22.25 | 42 | 0.831461 | 14 | 89 | 5.142857 | 0.5 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134831 | 89 | 3 | 43 | 29.666667 | 0.935065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1870f20d91d0b19d57b29b0804d9d96a9dd4fce | 5,243 | py | Python | pirates/leveleditor/worldData/pvp_deathmatchIsland1.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 81 | 2018-04-08T18:14:24.000Z | 2022-01-11T07:22:15.000Z | pirates/leveleditor/worldData/pvp_deathmatchIsland1.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 4 | 2018-09-13T20:41:22.000Z | 2022-01-08T06:57:00.000Z | pirates/leveleditor/worldData/pvp_deathmatchIsland1.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 26 | 2018-05-26T12:49:27.000Z | 2021-09-11T09:11:59.000Z | from pandac.PandaModules import Point3, VBase3
objectStruct = {'Locator Links': [['1170794752.0jubutler', '1170795136.0jubutler3', 'Bi-directional'], ['1170795136.0jubutler2', '1170793344.0jubutler0', 'Bi-directional']],'Objects': {'1170792960.0jubutler0': {'Type': 'Island','Name': 'pvp_deathmatchIsland1','File': '','Objects': {'1170793088.0jubutler': {'Type': 'Island Game Area','File': 'pvp_deathmatchArea1_jungle_c','Hpr': Point3(0.0, 0.0, 0.0),'Objects': {'1170793344.0jubutler0': {'Type': 'Locator Node','Name': 'portal_interior_1','GridPos': Point3(-625.461, -34.917, 65.209),'Hpr': VBase3(0.0, 0.0, 0.0),'Pos': Point3(-648.274, -263.406, 69.975),'Scale': VBase3(1.0, 1.0, 1.0)},'1170793344.0jubutler1': {'Type': 'Locator Node','Name': 'portal_interior_2','GridPos': Point3(327.492, -179.598, 110.539),'Hpr': VBase3(107.903, 0.0, 0.0),'Pos': Point3(304.679, -408.087, 115.305),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(182.389, -1389.934, 400.225),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/jungles/jungle_c_zero'}},'1170794752.0jubutler': {'Type': 'Locator Node','Name': 'portal_exterior_1','Hpr': VBase3(-48.549, 0.0, 0.0),'Pos': Point3(-781.314, -1716.97, 17.643),'Scale': VBase3(1.0, 1.0, 1.0)},'1170794752.0jubutler0': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(-42.946, 0.654, 3.149),'Pos': Point3(12.838, -924.207, 46.228),'Scale': VBase3(1.0, 1.0, 1.0)},'1170794752.0jubutler1': {'Type': 'Locator Node','Name': 'portal_exterior_3','Hpr': VBase3(178.75, 0.0, 0.0),'Pos': Point3(758.091, -1814.26, 10.14),'Scale': VBase3(1.0, 1.0, 1.0)},'1170795136.0jubutler1': {'Type': 'Connector Tunnel','File': '','Hpr': Point3(0.0, 0.0, 0.0),'Objects': {'1170795136.0jubutler2': {'Type': 'Locator Node','Name': 'portal_connector_1','GridPos': Point3(-436.754, -1616.766, 11.417),'Hpr': VBase3(90.0, 0.0, 0.0),'Pos': Point3(95.197, 150.0, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)},'1170795136.0jubutler3': {'Type': 'Locator Node','Name': 'portal_connector_2','GridPos': Point3(-531.951, -1763.505, 11.417),'Hpr': VBase3(-90.0, 0.0, 0.0),'Pos': Point3(0.0, 3.262, 0.0),'Scale': VBase3(1.0, 1.0, 1.0)}},'Pos': Point3(-668.706, -2074.038, 363.443),'Scale': VBase3(1.0, 1.0, 1.0),'Visual': {'Model': 'models/tunnels/tunnel_cave_left'}},'1170913172.09darren': {'Type': 'Locator Node','Name': 'portal_exterior_1','Hpr': VBase3(-48.549, 0.0, 0.0),'Pos': Point3(-781.314, -1716.97, 17.643),'Scale': VBase3(1.0, 1.0, 1.0)},'1170913172.11darren': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(-42.946, 0.654, 3.149),'Pos': Point3(12.838, -924.207, 46.228),'Scale': VBase3(1.0, 1.0, 1.0)},'1170913172.11darren0': {'Type': 'Locator Node','Name': 'portal_exterior_3','Hpr': VBase3(178.75, 0.0, 0.0),'Pos': Point3(758.091, -1814.26, 10.14),'Scale': VBase3(1.0, 1.0, 1.0)},'1172637372.78HP_Administrator': {'Type': 'Locator Node','Name': 'portal_exterior_1','Hpr': VBase3(-48.549, 0.0, 0.0),'Pos': Point3(-781.314, -1716.97, 17.643),'Scale': VBase3(1.0, 1.0, 1.0)},'1172637372.78HP_Administrator0': {'Type': 'Locator Node','Name': 'portal_exterior_2','Hpr': VBase3(-42.946, 0.654, 3.149),'Pos': Point3(12.838, -924.207, 46.228),'Scale': VBase3(1.0, 1.0, 1.0)},'1172637372.8HP_Administrator': {'Type': 'Locator Node','Name': 'portal_exterior_3','Hpr': VBase3(178.75, 0.0, 0.0),'Pos': Point3(758.091, -1814.26, 10.14),'Scale': VBase3(1.0, 1.0, 1.0)}},'Visual': {'Model': 'models/islands/pir_m_are_isl_driftwood'}}},'Node Links': [],'Layers': {},'ObjectIds': {'1170792960.0jubutler0': '["Objects"]["1170792960.0jubutler0"]','1170793088.0jubutler': 
'["Objects"]["1170792960.0jubutler0"]["Objects"]["1170793088.0jubutler"]','1170793344.0jubutler0': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170793088.0jubutler"]["Objects"]["1170793344.0jubutler0"]','1170793344.0jubutler1': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170793088.0jubutler"]["Objects"]["1170793344.0jubutler1"]','1170794752.0jubutler': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170794752.0jubutler"]','1170794752.0jubutler0': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170794752.0jubutler0"]','1170794752.0jubutler1': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170794752.0jubutler1"]','1170795136.0jubutler1': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170795136.0jubutler1"]','1170795136.0jubutler2': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170795136.0jubutler1"]["Objects"]["1170795136.0jubutler2"]','1170795136.0jubutler3': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170795136.0jubutler1"]["Objects"]["1170795136.0jubutler3"]','1170913172.09darren': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170913172.09darren"]','1170913172.11darren': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170913172.11darren"]','1170913172.11darren0': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1170913172.11darren0"]','1172637372.78HP_Administrator': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1172637372.78HP_Administrator"]','1172637372.78HP_Administrator0': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1172637372.78HP_Administrator0"]','1172637372.8HP_Administrator': '["Objects"]["1170792960.0jubutler0"]["Objects"]["1172637372.8HP_Administrator"]'}} | 2,621.5 | 5,196 | 0.683197 | 732 | 5,243 | 4.829235 | 0.206284 | 0.027157 | 0.028006 | 0.033946 | 0.665629 | 0.571711 | 0.441301 | 0.429986 | 0.345403 | 0.313154 | 0 | 0.289954 | 0.048827 | 5,243 | 2 | 5,196 | 2,621.5 | 0.418889 | 0 | 0 | 0 | 0 | 0 | 0.587147 | 0.380244 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4922033be93bfc4d19627778a2c7257c5cbe170f | 3,386 | py | Python | tests/test_host_delete.py | StackStorm-Exchange/zabbix | 8a613dad10808cc5cd2f32e278e09d189b067cdf | [
"Apache-2.0"
] | 10 | 2018-03-07T06:12:13.000Z | 2022-01-23T20:44:20.000Z | tests/test_host_delete.py | StackStorm-Exchange/zabbix | 8a613dad10808cc5cd2f32e278e09d189b067cdf | [
"Apache-2.0"
] | 36 | 2017-10-28T07:23:57.000Z | 2021-08-18T14:38:47.000Z | tests/test_host_delete.py | StackStorm-Exchange/zabbix | 8a613dad10808cc5cd2f32e278e09d189b067cdf | [
"Apache-2.0"
] | 21 | 2017-10-31T01:06:42.000Z | 2022-02-08T14:59:36.000Z | import mock
from zabbix_base_action_test_case import ZabbixBaseActionTestCase
from host_delete import HostDelete
from six.moves.urllib.error import URLError
from pyzabbix.api import ZabbixAPIException
class HostDeleteTestCase(ZabbixBaseActionTestCase):
__test__ = True
action_cls = HostDelete
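    # all tests below mock out ZabbixBaseAction.connect, so no live Zabbix API is contacted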
@mock.patch('lib.actions.ZabbixBaseAction.connect')
def test_run_connection_error(self, mock_connect):
action = self.get_action_instance(self.full_config)
mock_connect.side_effect = URLError('connection error')
test_dict = {'host': "test"}
host_dict = {'name': "test", 'hostid': '1'}
        action.find_host = mock.MagicMock(return_value=host_dict['hostid'])  # stub find_host so only the connection error can surface
with self.assertRaises(URLError):
action.run(**test_dict)
@mock.patch('lib.actions.ZabbixBaseAction.connect')
def test_run_host_error(self, mock_connect):
action = self.get_action_instance(self.full_config)
        mock_connect.return_value = "connect return"
test_dict = {'host': "test"}
host_dict = {'name': "test", 'hostid': '1'}
action.find_host = mock.MagicMock(return_value=host_dict['hostid'],
side_effect=ZabbixAPIException('host error'))
action.connect = mock_connect
with self.assertRaises(ZabbixAPIException):
action.run(**test_dict)
@mock.patch('lib.actions.ZabbixAPI')
@mock.patch('lib.actions.ZabbixBaseAction.connect')
def test_run(self, mock_connect, mock_client):
action = self.get_action_instance(self.full_config)
        mock_connect.return_value = "connect return"
test_dict = {'host': "test"}
host_dict = {'name': "test", 'hostid': '1'}
action.connect = mock_connect
action.find_host = mock.MagicMock(return_value=host_dict['hostid'])
mock_client.host.delete.return_value = "delete return"
action.client = mock_client
result = action.run(**test_dict)
mock_client.host.delete.assert_called_with(host_dict['hostid'])
self.assertEqual(result, True)
@mock.patch('lib.actions.ZabbixAPI')
@mock.patch('lib.actions.ZabbixBaseAction.connect')
def test_run_id(self, mock_connect, mock_client):
action = self.get_action_instance(self.full_config)
        mock_connect.return_value = "connect return"
test_dict = {'host_id': "1"}
action.connect = mock_connect
mock_client.host.delete.return_value = "delete return"
action.client = mock_client
result = action.run(**test_dict)
mock_client.host.delete.assert_called_with(test_dict['host_id'])
self.assertEqual(result, True)
@mock.patch('lib.actions.ZabbixAPI')
@mock.patch('lib.actions.ZabbixBaseAction.connect')
def test_run_delete_error(self, mock_connect, mock_client):
action = self.get_action_instance(self.full_config)
mock_connect.return_vaue = "connect return"
test_dict = {'host': "test"}
host_dict = {'name': "test", 'hostid': '1'}
action.connect = mock_connect
action.find_host = mock.MagicMock(return_value=host_dict['hostid'])
mock_client.host.delete.side_effect = ZabbixAPIException('host error')
mock_client.host.delete.return_value = "delete return"
action.client = mock_client
with self.assertRaises(ZabbixAPIException):
action.run(**test_dict)
| 40.795181 | 78 | 0.686946 | 405 | 3,386 | 5.481481 | 0.140741 | 0.069369 | 0.043243 | 0.068468 | 0.824324 | 0.77973 | 0.77973 | 0.762613 | 0.705405 | 0.658559 | 0 | 0.001839 | 0.196988 | 3,386 | 82 | 79 | 41.292683 | 0.814638 | 0 | 0 | 0.676471 | 0 | 0 | 0.150916 | 0.071766 | 0 | 0 | 0 | 0 | 0.102941 | 1 | 0.073529 | false | 0 | 0.073529 | 0 | 0.191176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4950155979fc1853091a79a768149ffae913f2df | 21,734 | py | Python | tests/unit/test_apt.py | petevg/operator-libs-linux | 2020c04f38eb8d6b5569a18c40b416d5a20a298b | [
"Apache-2.0"
] | 2 | 2021-11-12T09:41:34.000Z | 2021-11-30T22:11:41.000Z | tests/unit/test_apt.py | petevg/operator-libs-linux | 2020c04f38eb8d6b5569a18c40b416d5a20a298b | [
"Apache-2.0"
] | 22 | 2021-11-03T13:11:06.000Z | 2022-03-03T22:36:12.000Z | tests/unit/test_apt.py | petevg/operator-libs-linux | 2020c04f38eb8d6b5569a18c40b416d5a20a298b | [
"Apache-2.0"
] | 7 | 2021-11-03T20:20:03.000Z | 2022-03-03T03:58:25.000Z | # Copyright 2021 Canonical Ltd.
# See LICENSE file for licensing details.
import subprocess
import unittest
from unittest.mock import patch
from charms.operator_libs_linux.v0 import apt
dpkg_output_zsh = """Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-====================================-=========================================================================-============-===============================================================================
ii zsh 5.8-3ubuntu1 amd64 shell with lots of features
"""
dpkg_output_vim = """Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-====================================-=========================================================================-============-===============================================================================
ii vim 2:8.1.2269-1ubuntu5 amd64 Vi IMproved - Common files
"""
dpkg_output_all_arch = """Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-====================================-=========================================================================-============-===============================================================================
ii postgresql 12+214ubuntu0.1 all object-relational SQL database (supported version)
"""
dpkg_output_multi_arch = """Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-====================================-=========================================================================-============-===============================================================================
ii vim 2:8.1.2269-1ubuntu5 amd64 Vi IMproved - Common files
ii vim 2:8.1.2269-1ubuntu5 i386 Vi IMproved - Common files
"""
dpkg_output_not_installed = """Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=================================-=====================-=====================-========================================================================
rc ubuntu-advantage-tools 27.2.2~16.04.1 amd64 management tools for Ubuntu Advantage
"""
apt_cache_mocktester = """
Package: mocktester
Architecture: amd64
Version: 1:1.2.3-4
Priority: optional
Section: test
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian GNOME Maintainers <pkg-gnome-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1234
Depends: vim-common
Recommends: zsh
Suggests: foobar
Filename: pool/main/m/mocktester/mocktester_1:1.2.3-4_amd64.deb
Size: 65536
MD5sum: a87e414ad5aede7c820ce4c4e6bc7fa9
SHA1: b21d6ce47cb471c73fb4ec07a24c6f4e56fd19fc
SHA256: 89e7d5f61a0e3d32ef9aebd4b16e61840cd97e10196dfa186b06b6cde2f900a2
Homepage: https://wiki.gnome.org/Apps/MockTester
Description: Testing Package
Task: ubuntu-desktop
Description-md5: e7f99df3aa92cf870d335784e155ec33
"""
apt_cache_mocktester_all_arch = """
Package: mocktester
Architecture: all
Version: 1:1.2.3-4
Priority: optional
Section: test
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian GNOME Maintainers <pkg-gnome-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1234
Depends: vim-common
Recommends: zsh
Suggests: foobar
Filename: pool/main/m/mocktester/mocktester_1:1.2.3-4_amd64.deb
Size: 65536
MD5sum: a87e414ad5aede7c820ce4c4e6bc7fa9
SHA1: b21d6ce47cb471c73fb4ec07a24c6f4e56fd19fc
SHA256: 89e7d5f61a0e3d32ef9aebd4b16e61840cd97e10196dfa186b06b6cde2f900a2
Homepage: https://wiki.gnome.org/Apps/MockTester
Description: Testing Package
Task: ubuntu-desktop
Description-md5: e7f99df3aa92cf870d335784e155ec33
"""
apt_cache_mocktester_multi = """
Package: mocktester
Architecture: amd64
Version: 1:1.2.3-4
Priority: optional
Section: test
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian GNOME Maintainers <pkg-gnome-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1234
Depends: vim-common
Recommends: zsh
Suggests: foobar
Filename: pool/main/m/mocktester/mocktester_1:1.2.3-4_amd64.deb
Size: 65536
MD5sum: a87e414ad5aede7c820ce4c4e6bc7fa9
SHA1: b21d6ce47cb471c73fb4ec07a24c6f4e56fd19fc
SHA256: 89e7d5f61a0e3d32ef9aebd4b16e61840cd97e10196dfa186b06b6cde2f900a2
Homepage: https://wiki.gnome.org/Apps/MockTester
Description: Testing Package
Task: ubuntu-desktop
Description-md5: e7f99df3aa92cf870d335784e155ec33
Package: mocktester
Architecture: i386
Version: 1:1.2.3-4
Priority: optional
Section: test
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian GNOME Maintainers <pkg-gnome-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 1234
Depends: vim-common
Recommends: zsh
Suggests: foobar
Filename: pool/main/m/mocktester/mocktester_1:1.2.3-4_amd64.deb
Size: 65536
MD5sum: a87e414ad5aede7c820ce4c4e6bc7fa9
SHA1: b21d6ce47cb471c73fb4ec07a24c6f4e56fd19fc
SHA256: 89e7d5f61a0e3d32ef9aebd4b16e61840cd97e10196dfa186b06b6cde2f900a2
Homepage: https://wiki.gnome.org/Apps/MockTester
Description: Testing Package
Task: ubuntu-desktop
Description-md5: e7f99df3aa92cf870d335784e155ec33
"""
apt_cache_aisleriot = """
Package: aisleriot
Architecture: amd64
Version: 1:3.22.9-1
Priority: optional
Section: games
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian GNOME Maintainers <pkg-gnome-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 8800
Depends: dconf-gsettings-backend | gsettings-backend, guile-2.2-libs, libatk1.0-0 (>= 1.12.4), libc6 (>= 2.14), libcairo2 (>= 1.10.0), libcanberra-gtk3-0 (>= 0.25), libcanberra0 (>= 0.2), libgdk-pixbuf2.0-0 (>= 2.22.0), libglib2.0-0 (>= 2
.37.3), libgtk-3-0 (>= 3.19.12), librsvg2-2 (>= 2.32.0)
Recommends: yelp
Suggests: gnome-cards-data
Filename: pool/main/a/aisleriot/aisleriot_3.22.9-1_amd64.deb
Size: 843864
MD5sum: a87e414ad5aede7c820ce4c4e6bc7fa9
SHA1: b21d6ce47cb471c73fb4ec07a24c6f4e56fd19fc
SHA256: 89e7d5f61a0e3d32ef9aebd4b16e61840cd97e10196dfa186b06b6cde2f900a2
Homepage: https://wiki.gnome.org/Apps/Aisleriot
Description: GNOME solitaire card game collection
Task: ubuntu-desktop, ubuntukylin-desktop, ubuntu-budgie-desktop
Description-md5: e7f99df3aa92cf870d335784e155ec33
"""
class TestApt(unittest.TestCase):
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_dpkg(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_vim]
vim = apt.DebianPackage.from_installed_package("vim")
self.assertEqual(vim.epoch, "2")
self.assertEqual(vim.arch, "amd64")
self.assertEqual(vim.fullversion, "2:8.1.2269-1ubuntu5.amd64")
self.assertEqual(str(vim.version), "2:8.1.2269-1ubuntu5")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_dpkg_with_version(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_zsh]
zsh = apt.DebianPackage.from_installed_package("zsh", version="5.8-3ubuntu1")
self.assertEqual(zsh.epoch, "")
self.assertEqual(zsh.arch, "amd64")
self.assertEqual(zsh.fullversion, "5.8-3ubuntu1.amd64")
self.assertEqual(str(zsh.version), "5.8-3ubuntu1")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_will_not_load_from_system_with_bad_version(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_zsh]
with self.assertRaises(apt.PackageNotFoundError):
apt.DebianPackage.from_installed_package("zsh", version="1.2-3")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_dpkg_with_arch(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_zsh]
zsh = apt.DebianPackage.from_installed_package("zsh", arch="amd64")
self.assertEqual(zsh.epoch, "")
self.assertEqual(zsh.arch, "amd64")
self.assertEqual(zsh.fullversion, "5.8-3ubuntu1.amd64")
self.assertEqual(str(zsh.version), "5.8-3ubuntu1")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_dpkg_with_all_arch(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_all_arch]
postgresql = apt.DebianPackage.from_installed_package("postgresql")
self.assertEqual(postgresql.epoch, "")
self.assertEqual(postgresql.arch, "all")
self.assertEqual(postgresql.fullversion, "12+214ubuntu0.1.all")
self.assertEqual(str(postgresql.version), "12+214ubuntu0.1")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_dpkg_multi_arch(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_multi_arch]
vim = apt.DebianPackage.from_installed_package("vim", arch="i386")
self.assertEqual(vim.epoch, "2")
self.assertEqual(vim.arch, "i386")
self.assertEqual(vim.fullversion, "2:8.1.2269-1ubuntu5.i386")
self.assertEqual(str(vim.version), "2:8.1.2269-1ubuntu5")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_dpkg_not_installed(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", dpkg_output_not_installed]
with self.assertRaises(apt.PackageNotFoundError) as ctx:
apt.DebianPackage.from_installed_package("ubuntu-advantage-tools")
self.assertEqual(
"<charms.operator_libs_linux.v0.apt.PackageNotFoundError>", ctx.exception.name
)
self.assertIn(
"Package ubuntu-advantage-tools.amd64 is not installed!", ctx.exception.message
)
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_apt_cache(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", apt_cache_mocktester]
tester = apt.DebianPackage.from_apt_cache("mocktester")
self.assertEqual(tester.epoch, "1")
self.assertEqual(tester.arch, "amd64")
self.assertEqual(tester.fullversion, "1:1.2.3-4.amd64")
self.assertEqual(str(tester.version), "1:1.2.3-4")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_apt_cache_all_arch(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", apt_cache_mocktester_all_arch]
tester = apt.DebianPackage.from_apt_cache("mocktester")
self.assertEqual(tester.epoch, "1")
self.assertEqual(tester.arch, "all")
self.assertEqual(tester.fullversion, "1:1.2.3-4.all")
self.assertEqual(str(tester.version), "1:1.2.3-4")
@patch("charms.operator_libs_linux.v0.apt.check_output")
def test_can_load_from_apt_cache_multi_arch(self, mock_subprocess):
mock_subprocess.side_effect = ["amd64", apt_cache_mocktester_multi]
tester = apt.DebianPackage.from_apt_cache("mocktester", arch="i386")
self.assertEqual(tester.epoch, "1")
self.assertEqual(tester.arch, "i386")
self.assertEqual(tester.fullversion, "1:1.2.3-4.i386")
self.assertEqual(str(tester.version), "1:1.2.3-4")
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_can_run_apt_commands(self, mock_subprocess_call, mock_subprocess_output):
mock_subprocess_call.return_value = 0
mock_subprocess_output.side_effect = [
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "mocktester"]),
"amd64",
apt_cache_mocktester,
]
pkg = apt.DebianPackage.from_system("mocktester")
self.assertEqual(pkg.present, False)
self.assertEqual(pkg.version.epoch, "1")
self.assertEqual(pkg.version.number, "1.2.3-4")
pkg.ensure(apt.PackageState.Latest)
mock_subprocess_call.assert_called_with(
[
"apt-get",
"-y",
"--option=Dpkg::Options::=--force-confold",
"install",
"mocktester=1:1.2.3-4",
],
stderr=-1,
stdout=-1,
)
self.assertEqual(pkg.state, apt.PackageState.Latest)
pkg.state = apt.PackageState.Absent
mock_subprocess_call.assert_called_with(
["apt-get", "-y", "remove", "mocktester=1:1.2.3-4"],
stdout=-1,
stderr=-1,
)
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_will_throw_apt_errors(self, mock_subprocess_call, mock_subprocess_output):
mock_subprocess_call.side_effect = subprocess.CalledProcessError(
returncode=1, cmd=["apt-get", "-y", "install"]
)
mock_subprocess_output.side_effect = [
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "mocktester"]),
"amd64",
apt_cache_mocktester,
]
pkg = apt.DebianPackage.from_system("mocktester")
self.assertEqual(pkg.present, False)
with self.assertRaises(apt.PackageError) as ctx:
pkg.ensure(apt.PackageState.Latest)
self.assertEqual("<charms.operator_libs_linux.v0.apt.PackageError>", ctx.exception.name)
self.assertIn("Could not install package", ctx.exception.message)
def test_can_compare_versions(self):
old_version = apt.Version("1.0.0", "")
old_dupe = apt.Version("1.0.0", "")
new_version = apt.Version("1.0.1", "")
new_epoch = apt.Version("1.0.1", "1")
self.assertEqual(old_version, old_dupe)
self.assertGreater(new_version, old_version)
self.assertGreater(new_epoch, new_version)
self.assertLess(old_version, new_version)
self.assertLessEqual(new_version, new_epoch)
self.assertGreaterEqual(new_version, old_version)
self.assertNotEqual(new_version, old_version)
def test_can_parse_epoch_and_version(self):
self.assertEqual((None, "1.0.0"), apt.DebianPackage._get_epoch_from_version("1.0.0"))
self.assertEqual(
("2", "9.8-7ubuntu6"), apt.DebianPackage._get_epoch_from_version("2:9.8-7ubuntu6")
)
class TestAptBareMethods(unittest.TestCase):
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_can_run_bare_changes_on_single_package(self, mock_subprocess, mock_subprocess_output):
mock_subprocess.return_value = 0
mock_subprocess_output.side_effect = [
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "aisleriot"]),
"amd64",
apt_cache_aisleriot,
]
foo = apt.add_package("aisleriot")
mock_subprocess.assert_called_with(
[
"apt-get",
"-y",
"--option=Dpkg::Options::=--force-confold",
"install",
"aisleriot=1:3.22.9-1",
],
stderr=-1,
stdout=-1,
)
self.assertEqual(foo.present, True)
mock_subprocess_output.side_effect = ["amd64", dpkg_output_zsh]
bar = apt.remove_package("zsh")
bar.ensure(apt.PackageState.Absent)
mock_subprocess.assert_called_with(
["apt-get", "-y", "remove", "zsh=5.8-3ubuntu1"], stderr=-1, stdout=-1
)
self.assertEqual(bar.present, False)
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_can_run_bare_changes_on_multiple_packages(
self, mock_subprocess, mock_subprocess_output
):
mock_subprocess.return_value = 0
mock_subprocess_output.side_effect = [
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "aisleriot"]),
"amd64",
apt_cache_aisleriot,
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "mocktester"]),
"amd64",
apt_cache_mocktester,
]
foo = apt.add_package(["aisleriot", "mocktester"])
mock_subprocess.assert_any_call(
[
"apt-get",
"-y",
"--option=Dpkg::Options::=--force-confold",
"install",
"aisleriot=1:3.22.9-1",
],
stderr=-1,
stdout=-1,
)
mock_subprocess.assert_any_call(
[
"apt-get",
"-y",
"--option=Dpkg::Options::=--force-confold",
"install",
"mocktester=1:1.2.3-4",
],
stderr=-1,
stdout=-1,
)
self.assertEqual(foo[0].present, True)
self.assertEqual(foo[1].present, True)
mock_subprocess_output.side_effect = ["amd64", dpkg_output_vim, "amd64", dpkg_output_zsh]
bar = apt.remove_package(["vim", "zsh"])
mock_subprocess.assert_any_call(
["apt-get", "-y", "remove", "vim=2:8.1.2269-1ubuntu5"], stderr=-1, stdout=-1
)
mock_subprocess.assert_any_call(
["apt-get", "-y", "remove", "zsh=5.8-3ubuntu1"], stderr=-1, stdout=-1
)
self.assertEqual(bar[0].present, False)
self.assertEqual(bar[1].present, False)
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_refreshes_apt_cache_if_not_found(self, mock_subprocess, mock_subprocess_output):
mock_subprocess.return_value = 0
mock_subprocess_output.side_effect = [
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "nothere"]),
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["apt-cache", "show", "nothere"]),
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "nothere"]),
"amd64",
apt_cache_aisleriot,
]
pkg = apt.add_package("aisleriot")
mock_subprocess.assert_any_call(["apt-get", "update"], stderr=-1, stdout=-1)
self.assertEqual(pkg.name, "aisleriot")
self.assertEqual(pkg.present, True)
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_raises_package_not_found_error(self, mock_subprocess, mock_subprocess_output):
mock_subprocess.return_value = 0
mock_subprocess_output.side_effect = [
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["dpkg", "-l", "nothere"]),
"amd64",
subprocess.CalledProcessError(returncode=100, cmd=["apt-cache", "show", "nothere"]),
] * 2 # Double up for the retry after update
with self.assertRaises(apt.PackageError) as ctx:
apt.add_package("nothere")
mock_subprocess.assert_any_call(["apt-get", "update"], stderr=-1, stdout=-1)
self.assertEqual("<charms.operator_libs_linux.v0.apt.PackageError>", ctx.exception.name)
self.assertIn("Failed to install packages: nothere", ctx.exception.message)
@patch("charms.operator_libs_linux.v0.apt.check_output")
@patch("charms.operator_libs_linux.v0.apt.check_call")
def test_remove_package_not_installed(self, mock_subprocess, mock_subprocess_output):
mock_subprocess_output.side_effect = ["amd64", dpkg_output_not_installed]
packages = apt.remove_package("ubuntu-advantage-tools")
mock_subprocess.assert_not_called()
self.assertEqual(packages, [])
| 44.445808 | 238 | 0.635732 | 2,421 | 21,734 | 5.520446 | 0.115655 | 0.062851 | 0.03771 | 0.048186 | 0.82297 | 0.785784 | 0.771642 | 0.741265 | 0.72413 | 0.691059 | 0 | 0.063511 | 0.203828 | 21,734 | 488 | 239 | 44.536885 | 0.708853 | 0.004877 | 0 | 0.634884 | 0 | 0.039535 | 0.481178 | 0.224103 | 0 | 0 | 0 | 0 | 0.176744 | 1 | 0.044186 | false | 0 | 0.009302 | 0 | 0.05814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
77204a3b23bcc067a5c74c451437e983a97f2eeb | 32,055 | py | Python | qa/L0_server_status/server_status_test.py | szalpal/server | 85bf86813bce30a6b8e9f66bde057e2145530b7e | [
"BSD-3-Clause"
] | null | null | null | qa/L0_server_status/server_status_test.py | szalpal/server | 85bf86813bce30a6b8e9f66bde057e2145530b7e | [
"BSD-3-Clause"
] | null | null | null | qa/L0_server_status/server_status_test.py | szalpal/server | 85bf86813bce30a6b8e9f66bde057e2145530b7e | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
sys.path.append("../common")
from builtins import range
from future.utils import iteritems
import numpy as np
import os
import unittest
import json
import requests
import infer_util as iu
import test_util as tu
import tritongrpcclient as grpcclient
import tritonhttpclient as httpclient
from tritonclientutils import *
class ServerMetadataTest(tu.TestResultCollector):
def test_basic(self):
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
model_name = "graphdef_int32_int8_int8"
extensions = [
'classification', 'sequence', 'model_repository',
'schedule_policy', 'model_configuration',
'system_shared_memory', 'cuda_shared_memory',
'binary_tensor_data', 'statistics'
]
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
server_metadata = triton_client.get_server_metadata()
model_metadata = triton_client.get_model_metadata(model_name)
if pair[1] == "http":
self.assertEqual(os.environ["TRITON_SERVER_VERSION"],
server_metadata['version'])
self.assertEqual("triton", server_metadata['name'])
for ext in extensions:
self.assertTrue(ext in server_metadata['extensions'])
self.assertEqual(model_name, model_metadata['name'])
else:
self.assertEqual(os.environ["TRITON_SERVER_VERSION"],
server_metadata.version)
self.assertEqual("triton", server_metadata.name)
for ext in extensions:
self.assertTrue(ext in server_metadata.extensions)
self.assertEqual(model_name, model_metadata.name)
except InferenceServerException as ex:
self.assertTrue(False, "unexpected error {}".format(ex))
def test_unknown_model(self):
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
model_name = "foo"
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
server_metadata = triton_client.get_server_metadata()
if pair[1] == "http":
self.assertEqual(os.environ["TRITON_SERVER_VERSION"],
server_metadata['version'])
self.assertEqual("triton", server_metadata['name'])
else:
self.assertEqual(os.environ["TRITON_SERVER_VERSION"],
server_metadata.version)
self.assertEqual("triton", server_metadata.name)
model_metadata = triton_client.get_model_metadata(model_name)
self.assertTrue(False, "expected unknown model failure")
except InferenceServerException as ex:
self.assertTrue(ex.message().startswith(
"Request for unknown model: 'foo' is not found"))
def test_unknown_model_version(self):
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
model_name = "graphdef_int32_int8_int8"
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
model_metadata = triton_client.get_model_metadata(
model_name, model_version="99")
self.assertTrue(False, "expected unknown model version failure")
except InferenceServerException as ex:
self.assertTrue(ex.message().startswith(
"Request for unknown model: 'graphdef_int32_int8_int8' version 99 is not found"
))
def test_model_latest_infer(self):
input_size = 16
tensor_shape = (1, input_size)
platform_name = {
'graphdef': 'tensorflow_graphdef',
'netdef': 'caffe2_netdef'
}
# There are 3 versions of *_int32_int32_int32 and all
# should be available.
for platform in ('graphdef', 'netdef'):
model_name = platform + "_int32_int32_int32"
            # Initially there should be no version stats...
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
model_metadata = triton_client.get_model_metadata(
model_name)
# verify all versions are reported when no model version is specified
if pair[1] == "http":
self.assertEqual(model_name, model_metadata['name'])
self.assertEqual(len(model_metadata['versions']), 3)
for v in (1, 2, 3):
self.assertTrue(
str(v) in model_metadata['versions'])
else:
self.assertEqual(model_name, model_metadata.name)
self.assertEqual(len(model_metadata.versions), 3)
for v in (1, 2, 3):
self.assertTrue(str(v) in model_metadata.versions)
# verify contents of model metadata
if pair[1] == "http":
model_platform = model_metadata['platform']
model_inputs = model_metadata['inputs']
model_outputs = model_metadata['outputs']
else:
model_platform = model_metadata.platform
model_inputs = model_metadata.inputs
model_outputs = model_metadata.outputs
self.assertEqual(platform_name[platform], model_platform)
self.assertEqual(len(model_inputs), 2)
self.assertEqual(len(model_outputs), 2)
for model_input in model_inputs:
if pair[1] == "http":
input_dtype = model_input['datatype']
input_shape = model_input['shape']
input_name = model_input['name']
else:
input_dtype = model_input.datatype
input_shape = model_input.shape
input_name = model_input.name
self.assertTrue(input_name in ["INPUT0", "INPUT1"])
self.assertEqual(input_dtype, "INT32")
self.assertEqual(input_shape, [-1, 16])
for model_output in model_outputs:
if pair[1] == "http":
output_dtype = model_output['datatype']
output_shape = model_output['shape']
output_name = model_output['name']
else:
output_dtype = model_output.datatype
output_shape = model_output.shape
output_name = model_output.name
self.assertTrue(output_name in ["OUTPUT0", "OUTPUT1"])
self.assertEqual(output_dtype, "INT32")
self.assertEqual(output_shape, [-1, 16])
except InferenceServerException as ex:
self.assertTrue(False, "unexpected error {}".format(ex))
# Infer using latest version (which is 3)...
iu.infer_exact(self,
platform,
tensor_shape,
1,
np.int32,
np.int32,
np.int32,
model_version=None,
swap=True)
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
for v in (1, 2, 3):
self.assertTrue(
triton_client.is_model_ready(model_name,
model_version=str(v)))
# Only version 3 should have infer stats
infer_stats = triton_client.get_inference_statistics(
model_name)
if pair[1] == "http":
stats = infer_stats['model_stats']
else:
stats = infer_stats.model_stats
self.assertEqual(
len(stats), 3,
"expected 3 infer stats for model " + model_name)
for s in stats:
if pair[1] == "http":
v = s['version']
stat = s['inference_stats']
else:
v = s.version
stat = s.inference_stats
if v == "3":
if pair[1] == "http":
                                self.assertEqual(stat['success']['count'], 3)
else:
                                self.assertEqual(stat.success.count, 3)
else:
if pair[1] == "http":
self.assertEqual(
stat['success']['count'], 0,
"unexpected infer success counts for version "
+ str(v) + " of model " + model_name)
else:
self.assertEqual(
stat.success.count, 0,
"unexpected infer success counts for version "
+ str(v) + " of model " + model_name)
except InferenceServerException as ex:
                self.fail("unexpected error {}".format(ex))
def test_model_specific_infer(self):
input_size = 16
tensor_shape = (1, input_size)
# There are 3 versions of *_float32_float32_float32 but only
# versions 1 and 3 should be available.
for platform in ('graphdef', 'netdef', 'plan'):
model_name = platform + "_float32_float32_float32"
# Initially there should be no version status...
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
self.assertTrue(
triton_client.is_model_ready(model_name,
model_version="1"))
self.assertFalse(
triton_client.is_model_ready(model_name,
model_version="2"))
self.assertTrue(
triton_client.is_model_ready(model_name,
model_version="3"))
except InferenceServerException as ex:
                self.fail("unexpected error {}".format(ex))
# Infer using version 1...
iu.infer_exact(self,
platform,
tensor_shape,
1,
np.float32,
np.float32,
np.float32,
model_version=1,
swap=False)
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
self.assertTrue(
triton_client.is_model_ready(model_name,
model_version="1"))
self.assertFalse(
triton_client.is_model_ready(model_name,
model_version="2"))
self.assertTrue(
triton_client.is_model_ready(model_name,
model_version="3"))
# Only version 1 should have infer stats
infer_stats = triton_client.get_inference_statistics(
model_name, model_version='1')
if pair[1] == "http":
self.assertEqual(
len(infer_stats['model_stats']), 1,
"expected 1 infer stats for version 1"
" of model " + model_name)
stats = infer_stats['model_stats'][0]['inference_stats']
                        self.assertEqual(stats['success']['count'], 3)
else:
self.assertEqual(
len(infer_stats.model_stats), 1,
"expected 1 infer stats for version 1"
" of model " + model_name)
stats = infer_stats.model_stats[0].inference_stats
                        self.assertEqual(stats.success.count, 3)
infer_stats = triton_client.get_inference_statistics(
model_name, model_version='3')
if pair[1] == "http":
stats = infer_stats['model_stats'][0]['inference_stats']
self.assertEqual(
stats['success']['count'], 0,
"unexpected infer stats for version 3"
" of model " + model_name)
else:
stats = infer_stats.model_stats[0].inference_stats
self.assertEqual(
stats.success.count, 0,
"unexpected infer stats for version 3"
" of model " + model_name)
except InferenceServerException as ex:
                self.fail("unexpected error {}".format(ex))
class ModelMetadataTest(tu.TestResultCollector):
'''
    These tests must be run after ServerMetadataTest. See the test.sh
    file for the correct test ordering.
'''
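    # For context, the intended flow (a sketch; the authoritative steps live
    # in test.sh) is roughly:
    #   python -m unittest server_status_test.ServerMetadataTest
    #   ... delete/add model versions in the model repository ...
    #   python -m unittest server_status_test.ModelMetadataTest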
def test_model_versions_deleted(self):
        # Originally there were 3 versions of *_int32_int32_int32 and
# version 3 was executed once. Version 2 and 3 models were
# deleted from the model repository so now only expect version 1 to
# be ready and show stats.
for platform in ('graphdef', 'netdef'):
model_name = platform + "_int32_int32_int32"
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
model_metadata = triton_client.get_model_metadata(
model_name)
if pair[1] == "http":
self.assertEqual(model_name, model_metadata['name'])
self.assertEqual(len(model_metadata['versions']), 1)
self.assertEqual("1", model_metadata['versions'][0])
else:
self.assertEqual(model_name, model_metadata.name)
self.assertEqual(len(model_metadata.versions), 1)
self.assertEqual("1", model_metadata.versions[0])
                    # Version 3 held the stats before deletion; only version 1 is ready now
for v in (1, 2, 3):
if v == 1:
self.assertTrue(
triton_client.is_model_ready(
model_name, model_version=str(v)))
infer_stats = triton_client.get_inference_statistics(
model_name, model_version=str(v))
if pair[1] == "http":
self.assertEqual(
len(infer_stats['model_stats']), 1,
"expected 1 infer stats for version " +
str(v) + " of model " + model_name)
stats = infer_stats['model_stats'][0][
'inference_stats']
self.assertEqual(stats['success']['count'], 0)
else:
self.assertEqual(
len(infer_stats.model_stats), 1,
"expected 1 infer stats for version " +
str(v) + " of model " + model_name)
stats = infer_stats.model_stats[
0].inference_stats
self.assertEqual(stats.success.count, 0)
else:
self.assertFalse(
triton_client.is_model_ready(
model_name, model_version=str(v)))
except InferenceServerException as ex:
                self.fail("unexpected error {}".format(ex))
def test_model_versions_added(self):
        # Originally there was version 1 of *_float16_float32_float32.
# Version 7 was added so now expect just version 7 to be ready
# and provide infer stats.
for platform in ('graphdef',):
model_name = platform + "_float16_float32_float32"
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
model_metadata = triton_client.get_model_metadata(
model_name)
if pair[1] == "http":
self.assertEqual(
model_name, model_metadata['name'],
"expected status for model " + model_name)
self.assertEqual(
len(model_metadata['versions']), 1,
"expected status for 1 versions for model " +
model_name)
self.assertEqual("7", model_metadata['versions'][0])
else:
self.assertEqual(
model_name, model_metadata.name,
"expected status for model " + model_name)
self.assertEqual(
len(model_metadata.versions), 1,
"expected status for 1 versions for model " +
model_name)
self.assertEqual("7", model_metadata.versions[0])
# Only version 7 should be ready and show infer stat.
for v in (1, 7):
if v == 7:
self.assertTrue(
triton_client.is_model_ready(
model_name, model_version=str(v)))
infer_stats = triton_client.get_inference_statistics(
model_name, model_version=str(v))
if pair[1] == "http":
stats = infer_stats['model_stats'][0][
'inference_stats']
self.assertEqual(
stats['success']['count'], 0,
"unexpected infer stats for version " +
str(v) + " of model " + model_name)
else:
stats = infer_stats.model_stats[
0].inference_stats
self.assertEqual(
stats.success.count, 0,
"unexpected infer stats for version " +
str(v) + " of model " + model_name)
else:
self.assertFalse(
triton_client.is_model_ready(
model_name, model_version=str(v)))
try:
infer_stats = triton_client.get_inference_statistics(
model_name, model_version=str(v))
                                self.fail(
                                    "unexpected infer stats for the model that is not ready"
                                )
except InferenceServerException as ex:
self.assertTrue(
"requested model version is not available for model"
in str(ex))
except InferenceServerException as ex:
                self.fail("unexpected error {}".format(ex))
def test_infer_stats_no_model_version(self):
        # Originally there were 3 versions of *_int32_int32_int32 and
# version 3 was executed once. Version 2 and 3 models were
# deleted from the model repository so now only expect version 1 to
# be ready and show infer stats.
for platform in ('graphdef', 'netdef'):
model_name = platform + "_int32_int32_int32"
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
model_metadata = triton_client.get_model_metadata(
model_name)
if pair[1] == "http":
self.assertEqual(model_name, model_metadata['name'])
self.assertEqual(len(model_metadata['versions']), 1)
self.assertEqual("1", model_metadata['versions'][0])
else:
self.assertEqual(model_name, model_metadata.name)
self.assertEqual(len(model_metadata.versions), 1)
self.assertEqual("1", model_metadata.versions[0])
                    # Version 3 held the stats before deletion; only version 1 is ready now
for v in (1, 2, 3):
if v == 1:
self.assertTrue(
triton_client.is_model_ready(
model_name, model_version=str(v)))
else:
self.assertFalse(
triton_client.is_model_ready(
model_name, model_version=str(v)))
infer_stats = triton_client.get_inference_statistics(
model_name)
if pair[1] == "http":
stats = infer_stats['model_stats']
else:
stats = infer_stats.model_stats
self.assertEqual(
len(stats), 1,
"expected 1 infer stats for model " + model_name)
if pair[1] == "http":
version = stats[0]['version']
stat = stats[0]['inference_stats']
else:
version = stats[0].version
stat = stats[0].inference_stats
if version != "1":
                        self.fail("expected version 1 for infer stat, got " + version)
else:
if pair[1] == "http":
self.assertEqual(
stat['success']['count'], 0,
"unexpected infer stats for version " +
str(version) + " of model " + model_name)
else:
self.assertEqual(
stat.success.count, 0,
"unexpected infer stats for version " +
str(version) + " of model " + model_name)
except InferenceServerException as ex:
                self.fail("unexpected error {}".format(ex))
def test_infer_stats_no_model(self):
# Test get_inference_statistics when no model/model_version is passed.
try:
for pair in [("localhost:8000", "http"),
("localhost:8001", "grpc")]:
if pair[1] == "http":
triton_client = httpclient.InferenceServerClient(
url=pair[0], verbose=True)
else:
triton_client = grpcclient.InferenceServerClient(
url=pair[0], verbose=True)
self.assertTrue(triton_client.is_server_live())
self.assertTrue(triton_client.is_server_ready())
# Returns infer stats for ALL models + ready versions
infer_stats = triton_client.get_inference_statistics()
if pair[1] == "http":
stats = infer_stats['model_stats']
else:
stats = infer_stats.model_stats
self.assertEqual(
len(stats), 200,
"expected 200 infer stats for all ready versions of all model"
)
except InferenceServerException as ex:
            self.fail("unexpected error {}".format(ex))
if __name__ == '__main__':
unittest.main()
| 48.789954 | 95 | 0.475838 | 2,828 | 32,055 | 5.208274 | 0.105375 | 0.060289 | 0.033268 | 0.023898 | 0.790617 | 0.777582 | 0.758096 | 0.742481 | 0.737728 | 0.723199 | 0 | 0.022101 | 0.44667 | 32,055 | 656 | 96 | 48.864329 | 0.80831 | 0.091998 | 0 | 0.759399 | 0 | 0 | 0.099607 | 0.007095 | 0 | 0 | 0 | 0 | 0.219925 | 1 | 0.016917 | false | 0 | 0.024436 | 0 | 0.045113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
774d42c0f75cab1442b8a7a623cc3e7868b01ad4 | 12,449 | py | Python | application/tests/data_prep_tests.py | matt-quantblack/loan-application-completion-model | e2757c987b4b1ccc0e8b61e22617b13f14a110a6 | [
"MIT"
] | null | null | null | application/tests/data_prep_tests.py | matt-quantblack/loan-application-completion-model | e2757c987b4b1ccc0e8b61e22617b13f14a110a6 | [
"MIT"
] | null | null | null | application/tests/data_prep_tests.py | matt-quantblack/loan-application-completion-model | e2757c987b4b1ccc0e8b61e22617b13f14a110a6 | [
"MIT"
] | null | null | null | import pandas as pd
import pytest
from pandas.testing import assert_frame_equal, assert_series_equal
from application import model_builder
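# Shared convention in the tests below (inferred from the cases themselves,
# not a documented model_builder contract): `fields` is a list of
# [column_name, field_type] pairs, where field_type is one of "Numeric",
# "Percentage", "Money", "Value Set", "Yes/No", "String", "Contact Details"
# or "Response Variable".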
def test_validate_types_numeric_success():
# Arrange
df = pd.DataFrame()
new_expect = pd.DataFrame()
new_expect["Some Feature"] = [3, 4, 5]
new_expect["Answer"] = [1, 2, 3]
df["Some Feature"] = new_expect["Some Feature"]
df["Answer"] = new_expect["Answer"]
fields = [["Some Feature", "Numeric"],
["Answer", "Response Variable"]]
# Act
x = model_builder.validate_types(df, fields)
# Assert
assert_frame_equal(x, new_expect, check_dtype=False)
def test_validate_types_numeric_string_converts_success():
# Arrange
df = pd.DataFrame()
new_expect = pd.DataFrame()
new_expect["Some Feature"] = [3, 4, 5]
new_expect["Answer"] = [1, 2, 3]
df["Some Feature"] = ["3", "4", "5"]
df["Answer"] = new_expect["Answer"]
fields = [["Some Feature", "Numeric"],
["Answer", "Response Variable"]]
# Act
x = model_builder.validate_types(df, fields)
# Assert
assert_frame_equal(x, new_expect, check_dtype=False)
def test_validate_types_numeric_string_converts_throws_error():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = ["3d", "4d", "5d"]
df["Answer"] = [1, 2, 3]
fields = [["Some Feature", "Numeric"],
["Answer", "Response Variable"]]
# Act and Assert
with pytest.raises(ValueError):
model_builder.validate_types(df, fields)
def test_validate_types_percentage_converts_throws_value_error():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = ["0.3s c", "0.4", "0.5"]
df["Answer"] = [1, 2, 3]
fields = [["Some Feature", "Percentage"],
["Answer", "Response Variable"]]
# Act and Assert
with pytest.raises(ValueError):
model_builder.validate_types(df, fields)
def test_validate_types_percentage_converts_success():
# Arrange
df = pd.DataFrame()
new_expect = pd.DataFrame()
new_expect["Some Feature"] = [30.0, 40.0, 50.0]
new_expect["Some Feature 2"] = [30.0, 40.0, 50.0]
new_expect["Some Feature 3"] = [30.0, 40.0, 50.0]
new_expect["Answer"] = [1, 2, 3]
df["Some Feature"] = [0.3, 0.4, 0.5]
df["Some Feature 2"] = ["0.3%", "0.4 %", " 0.5 %"]
df["Some Feature 3"] = ["30", "40", " 50"]
df["Answer"] = new_expect["Answer"]
fields = [["Some Feature", "Percentage"],
["Some Feature 2", "Percentage"],
["Some Feature 3", "Percentage"],
["Answer", "Response Variable"]]
# Act
x = model_builder.validate_types(df, fields)
# Assert
assert_frame_equal(x, new_expect, check_dtype=False)
def test_validate_types_money_converts_throws_value_error():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = ["0.3s$", "$0.4", "0.5"]
df["Answer"] = [1, 2, 3]
fields = [["Some Feature", "Money"],
["Answer", "Response Variable"]]
# Act and Assert
with pytest.raises(ValueError):
model_builder.validate_types(df, fields)
def test_validate_types_money_converts_success():
# Arrange
df = pd.DataFrame()
new_expect = pd.DataFrame()
new_expect["Some Feature"] = [30.0, 40.0, 50.0]
new_expect["Some Feature 2"] = [30.0, 40.0, 50.0]
new_expect["Some Feature 3"] = [50000, 40000.0, 50000]
new_expect["Answer"] = [1, 2, 3]
df["Some Feature"] = [30, 40, 50]
df["Some Feature 2"] = ["$30", "$ 40 ", " $50 "]
df["Some Feature 3"] = ["$50,000", "40000", " 50,000"]
df["Answer"] = new_expect["Answer"]
fields = [["Some Feature", "Money"],
["Some Feature 2", "Money"],
["Some Feature 3", "Money"],
["Answer", "Response Variable"]]
# Act
x = model_builder.validate_types(df, fields)
# Assert
assert_frame_equal(x, new_expect, check_dtype=False)
def test_validate_types_value_set_success():
# Arrange
df = pd.DataFrame()
new_expect = pd.DataFrame()
new_expect["Some Feature"] = ["Married", "Single", "Married"]
new_expect["Answer"] = [1, 2, 3]
df["Some Feature"] = new_expect["Some Feature"]
df["Answer"] = new_expect["Answer"]
fields = [["Some Feature", "Value Set"],
["Answer", "Response Variable"]]
# Act
x = model_builder.validate_types(df, fields)
# Assert
assert_frame_equal(x, new_expect, check_dtype=False)
def test_validate_types_value_set_throws_value_exception_too_many_values():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = range(1, 2000)
df["Answer"] = range(1, 2000)
fields = [["Some Feature", "Value Set"],
["Answer", "Response Variable"]]
# Act and Assert
with pytest.raises(ValueError):
model_builder.validate_types(df, fields)
def test_validate_types_yes_no_success():
# Arrange
df = pd.DataFrame()
new_expect = pd.DataFrame()
new_expect["Some Feature"] = ["Yes", "No", "No Data"]
new_expect["Answer"] = [1, 2, 3]
df["Some Feature"] = new_expect["Some Feature"]
df["Answer"] = new_expect["Answer"]
fields = [["Some Feature", "Yes/No"],
["Answer", "Response Variable"]]
# Act
x = model_builder.validate_types(df, fields)
# Assert
assert_frame_equal(x, new_expect, check_dtype=False)
def test_validate_types_yes_no_throws_value_exception_too_many_values():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = range(1, 5)
df["Answer"] = range(1, 5)
fields = [["Some Feature", "Yes/No"],
["Answer", "Response Variable"]]
# Act and Assert
with pytest.raises(ValueError):
model_builder.validate_types(df, fields)
def test_validate_types_invalid_field_type():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = range(1, 5)
df["Answer"] = range(1, 5)
fields = [["Some Feature", "Invalid Type"],
["Answer", "Response Variable"]]
# Act and Assert
with pytest.raises(ValueError):
model_builder.validate_types(df, fields)
def test_stripdown_splits_x_variables():
# Arrange
df = pd.DataFrame()
x_expect = pd.DataFrame()
x_expect["Some Feature"] = [3, 4, 5]
df["Some Feature"] = x_expect["Some Feature"]
df["Answer"] = [1, 2, 3]
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_frame_equal(x, x_expect)
def test_stripdown_splits_response_variable_works():
# Arrange
df = pd.DataFrame()
y_expect = pd.Series([1, 0, 0], name="Answer")
df["Some Feature"] = [3, 4, 5]
df["Answer"] = y_expect
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_series_equal(y, y_expect)
def test_stripdown_splits_response_variable_works_if_scale_of_0_to_100():
# Arrange
df = pd.DataFrame()
y_expect = pd.Series([0, 0, 1], dtype="int32")
df["Some Feature"] = [3, 4, 5]
df["Answer"] = [50, 70, 100]
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_series_equal(y, y_expect)
def test_stripdown_removes_contact_details():
# Arrange
df = pd.DataFrame()
x_expect = pd.DataFrame()
x_expect["Some Feature"] = [3, 4, 5]
df["Some Feature"] = x_expect["Some Feature"]
df["Contacts1"] = ["tom", "john", "sarah"]
df["Contacts2"] = ["tom", "john", "sarah"]
df["Answer"] = [1, 2, 3]
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"],
["Contacts1", "Contact Details"], ["Contacts2", "Contact Details"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_frame_equal(x, x_expect)
def test_stripdown_removes_string_fields():
# Arrange
df = pd.DataFrame()
x_expect = pd.DataFrame()
x_expect["Some Feature"] = [3, 4, 5]
df["Some Feature"] = x_expect["Some Feature"]
df["Postcodes"] = ["2104", "2000", "2756"]
df["Answer"] = [1, 2, 3]
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"],
["Postcodes", "String"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_frame_equal(x, x_expect)
def test_stripdown_removes_columns_with_many_nulls_fields():
# Arrange
df = pd.DataFrame()
x_expect = pd.DataFrame()
x_expect["Some Feature"] = range(1, 12)
df["Some Feature"] = x_expect["Some Feature"]
df["A lot of Nulls"] = [None, 1, 2, 3, 4, 5, 6, 7, 8, None, 9]
df["Answer"] = range(1, 12)
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"],
["A lot of Nulls", "Numeric"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_frame_equal(x, x_expect)
def test_stripdown_doesnt_remove_columns_with_some_nulls():
# Arrange
df = pd.DataFrame()
x_expect = pd.DataFrame()
x_expect["Some Feature"] = range(1, 12)
x_expect["A lot of Nulls"] = [None, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df["Some Feature"] = x_expect["Some Feature"]
df["A lot of Nulls"] = x_expect["A lot of Nulls"]
df["Answer"] = range(1, 12)
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"],
["A lot of Nulls", "Numeric"]]
# Act
x, y, fields = model_builder.stripdown_features(df, fields)
# Assert
assert_frame_equal(x, x_expect)
def test_knn_imputer_fills_nulls_on_numeric():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = range(1, 12)
df["A lot of Nulls"] = [None, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df["Answer"] = range(1, 12)
fields = [["Some Feature", "Numeric"], ["Answer", "Response Variable"],
["A lot of Nulls", "Numeric"]]
# Act
new_df = model_builder.impute_nulls(df, fields)
# Assert
assert new_df["A lot of Nulls"].isna().sum() == 0
def test_knn_imputer_does_nothing_if_not_numeric():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = range(1, 12)
df["Some Feature 2"] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df["Answer"] = range(1, 12)
fields = [["Some Feature", "Value Set"], ["Answer", "Response Variable"],
["Some Feature 2", "Value Set"]]
# Act
new_df = model_builder.impute_nulls(df, fields)
# Assert
assert_frame_equal(df, new_df)
def test_knn_imputer_with_value_set():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = ["Single", None, "", "Married", "Married", "Married", pd.NA, "Married"]
df["Numeric Feature"] = [None, 1, 2, 3, 4, 5, 6, 7]
df["Answer"] = range(1, 9)
fields = [["Some Feature", "Value Set"], ["Answer", "Response Variable"],
["Numeric Feature", "Numeric"]]
# Act
new_df = model_builder.impute_nulls(df, fields)
# Assert
assert new_df["Some Feature"].isna().sum() == 0
assert len(new_df[new_df["Some Feature"] == '']) == 0
assert new_df["Numeric Feature"].isna().sum() == 0
def test_knn_imputer_with_yes_no():
# Arrange
df = pd.DataFrame()
df["Some Feature"] = ["Yes", None, "", "No", pd.NA, "Yes"]
df["Numeric Feature"] = [None, 1, 2, 3, 4, 5]
df["Answer"] = range(1, 7)
fields = [["Some Feature", "Yes/No"], ["Answer", "Response Variable"],
["Numeric Feature", "Numeric"]]
# Act
new_df = model_builder.impute_nulls(df, fields)
# Assert
assert new_df["Some Feature"].isna().sum() == 0
assert len(new_df[new_df["Some Feature"] == '']) == 0
assert new_df["Numeric Feature"].isna().sum() == 0
def test_categorical_encoding():
# Arrange
fields = [["Cat", "Yes/No"], ["Cat2", "Value Set"],
["Some Feature 3", "Numeric"]]
df = pd.DataFrame()
df["Cat"] = ["Yes", "No", "Maybe", "Yes", "Yes", "No"]
df["Cat2"] = ["Yes2", "No2", "Maybe2", "Yes2", "Yes2", "No2"]
df["Some Feature 3"] = [100, 90, 90, 91, 90, 101]
# Act
x = model_builder.encode_categorical(df, fields)
# Assert
assert list(x.columns) == ["Some Feature 3", "Cat_Maybe", "Cat_No", "Cat_Yes", "Cat2_Maybe2", "Cat2_No2", "Cat2_Yes2"]
| 29.570071 | 122 | 0.602137 | 1,648 | 12,449 | 4.35801 | 0.086165 | 0.131718 | 0.059733 | 0.064049 | 0.853244 | 0.836953 | 0.825397 | 0.811334 | 0.756753 | 0.72988 | 0 | 0.038338 | 0.226846 | 12,449 | 420 | 123 | 29.640476 | 0.707844 | 0.038477 | 0 | 0.688 | 0 | 0 | 0.226666 | 0 | 0 | 0 | 0 | 0 | 0.092 | 1 | 0.096 | false | 0 | 0.016 | 0 | 0.112 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6222a6aa132cff7deb90069e9d06b77846f1929b | 14,984 | py | Python | colour/models/rgb/transfer_functions/tests/test_rimm_romm_rgb.py | BPearlstine/colour | 40f0281295496774d2a19eee017d50fd0c265bd8 | [
"Cube",
"BSD-3-Clause"
] | 2 | 2020-05-03T20:15:42.000Z | 2021-04-09T18:19:06.000Z | colour/models/rgb/transfer_functions/tests/test_rimm_romm_rgb.py | BPearlstine/colour | 40f0281295496774d2a19eee017d50fd0c265bd8 | [
"Cube",
"BSD-3-Clause"
] | null | null | null | colour/models/rgb/transfer_functions/tests/test_rimm_romm_rgb.py | BPearlstine/colour | 40f0281295496774d2a19eee017d50fd0c265bd8 | [
"Cube",
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Defines unit tests for
:mod:`colour.models.rgb.transfer_functions.rimm_romm_rgb` module.
"""
from __future__ import division, unicode_literals
import numpy as np
import unittest
from colour.models.rgb.transfer_functions import (
oetf_ROMMRGB, eotf_ROMMRGB, oetf_RIMMRGB, eotf_RIMMRGB,
log_encoding_ERIMMRGB, log_decoding_ERIMMRGB)
from colour.utilities import domain_range_scale, ignore_numpy_errors
__author__ = 'Colour Developers'
__copyright__ = 'Copyright (C) 2013-2019 - Colour Developers'
__license__ = 'New BSD License - https://opensource.org/licenses/BSD-3-Clause'
__maintainer__ = 'Colour Developers'
__email__ = 'colour-science@googlegroups.com'
__status__ = 'Production'
__all__ = [
'TestOetf_ROMMRGB', 'TestEotf_ROMMRGB', 'TestOetf_RIMMRGB',
'TestEotf_RIMMRGB', 'TestLog_encoding_ERIMMRGB',
'TestLog_decoding_ERIMMRGB'
]
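# Orientation for the assertions below (illustrative values taken from the
# tests themselves): each OETF/EOTF pair is expected to round-trip, e.g.
#   oetf_ROMMRGB(0.18) -> 0.385711424751138
#   eotf_ROMMRGB(0.385711424751138) -> 0.18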
class TestOetf_ROMMRGB(unittest.TestCase):
"""
Defines :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_ROMMRGB` definition unit tests methods.
"""
def test_oetf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_ROMMRGB` definition.
"""
self.assertAlmostEqual(oetf_ROMMRGB(0.0), 0.0, places=7)
self.assertAlmostEqual(oetf_ROMMRGB(0.18), 0.385711424751138, places=7)
self.assertAlmostEqual(oetf_ROMMRGB(1.0), 1.0, places=7)
self.assertEqual(oetf_ROMMRGB(0.18, out_int=True), 98)
self.assertEqual(oetf_ROMMRGB(0.18, bit_depth=12, out_int=True), 1579)
def test_n_dimensional_oetf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_ROMMRGB` definition n-dimensional arrays support.
"""
X = 0.18
X_ROMM = oetf_ROMMRGB(X)
X = np.tile(X, 6)
X_ROMM = np.tile(X_ROMM, 6)
np.testing.assert_almost_equal(oetf_ROMMRGB(X), X_ROMM, decimal=7)
X = np.reshape(X, (2, 3))
X_ROMM = np.reshape(X_ROMM, (2, 3))
np.testing.assert_almost_equal(oetf_ROMMRGB(X), X_ROMM, decimal=7)
X = np.reshape(X, (2, 3, 1))
X_ROMM = np.reshape(X_ROMM, (2, 3, 1))
np.testing.assert_almost_equal(oetf_ROMMRGB(X), X_ROMM, decimal=7)
def test_domain_range_scale_oetf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_ROMMRGB` definition domain and range scale support.
"""
X = 0.18
X_p = oetf_ROMMRGB(X)
d_r = (('reference', 1), (1, 1), (100, 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
oetf_ROMMRGB(X * factor), X_p * factor, decimal=7)
@ignore_numpy_errors
def test_nan_oetf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_ROMMRGB` definition nan support.
"""
oetf_ROMMRGB(np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
class TestEotf_ROMMRGB(unittest.TestCase):
"""
    Defines :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_ROMMRGB` definition unit tests methods.
"""
def test_eotf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_ROMMRGB` definition.
"""
self.assertAlmostEqual(eotf_ROMMRGB(0.0), 0.0, places=7)
self.assertAlmostEqual(eotf_ROMMRGB(0.385711424751138), 0.18, places=7)
self.assertAlmostEqual(eotf_ROMMRGB(1.0), 1.0, places=7)
np.testing.assert_allclose(
eotf_ROMMRGB(98, in_int=True), 0.18, atol=0.001, rtol=0.001)
np.testing.assert_allclose(
eotf_ROMMRGB(1579, bit_depth=12, in_int=True),
0.18,
atol=0.001,
rtol=0.001)
def test_n_dimensional_eotf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_ROMMRGB` definition n-dimensional arrays support.
"""
X_p = 0.385711424751138
X = eotf_ROMMRGB(X_p)
X_p = np.tile(X_p, 6)
X = np.tile(X, 6)
np.testing.assert_almost_equal(eotf_ROMMRGB(X_p), X, decimal=7)
X_p = np.reshape(X_p, (2, 3))
X = np.reshape(X, (2, 3))
np.testing.assert_almost_equal(eotf_ROMMRGB(X_p), X, decimal=7)
X_p = np.reshape(X_p, (2, 3, 1))
X = np.reshape(X, (2, 3, 1))
np.testing.assert_almost_equal(eotf_ROMMRGB(X_p), X, decimal=7)
def test_domain_range_scale_eotf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_ROMMRGB` definition domain and range scale support.
"""
X_p = 0.385711424751138
X = eotf_ROMMRGB(X_p)
d_r = (('reference', 1), (1, 1), (100, 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
eotf_ROMMRGB(X_p * factor), X * factor, decimal=7)
@ignore_numpy_errors
def test_nan_eotf_ROMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_ROMMRGB` definition nan support.
"""
eotf_ROMMRGB(np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
class TestOetf_RIMMRGB(unittest.TestCase):
"""
Defines :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_RIMMRGB` definition unit tests methods.
"""
def test_oetf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_RIMMRGB` definition.
"""
self.assertAlmostEqual(oetf_RIMMRGB(0.0), 0.0, places=7)
self.assertAlmostEqual(oetf_RIMMRGB(0.18), 0.291673732475746, places=7)
self.assertAlmostEqual(oetf_RIMMRGB(1.0), 0.713125234297525, places=7)
self.assertEqual(oetf_RIMMRGB(0.18, out_int=True), 74)
self.assertEqual(oetf_RIMMRGB(0.18, bit_depth=12, out_int=True), 1194)
def test_n_dimensional_oetf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_RIMMRGB` definition n-dimensional arrays support.
"""
X = 0.18
X_p = oetf_RIMMRGB(X)
X = np.tile(X, 6)
X_p = np.tile(X_p, 6)
np.testing.assert_almost_equal(oetf_RIMMRGB(X), X_p, decimal=7)
X = np.reshape(X, (2, 3))
X_p = np.reshape(X_p, (2, 3))
np.testing.assert_almost_equal(oetf_RIMMRGB(X), X_p, decimal=7)
X = np.reshape(X, (2, 3, 1))
X_p = np.reshape(X_p, (2, 3, 1))
np.testing.assert_almost_equal(oetf_RIMMRGB(X), X_p, decimal=7)
def test_domain_range_scale_oetf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_RIMMRGB` definition domain and range scale support.
"""
X = 0.18
X_p = oetf_RIMMRGB(X)
d_r = (('reference', 1), (1, 1), (100, 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
oetf_RIMMRGB(X * factor), X_p * factor, decimal=7)
@ignore_numpy_errors
def test_nan_oetf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
oetf_RIMMRGB` definition nan support.
"""
oetf_RIMMRGB(np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
class TestEotf_RIMMRGB(unittest.TestCase):
"""
    Defines :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_RIMMRGB` definition unit tests methods.
"""
def test_eotf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_RIMMRGB` definition.
"""
self.assertAlmostEqual(eotf_RIMMRGB(0.0), 0.0, places=7)
self.assertAlmostEqual(eotf_RIMMRGB(0.291673732475746), 0.18, places=7)
self.assertAlmostEqual(eotf_RIMMRGB(0.713125234297525), 1.0, places=7)
np.testing.assert_allclose(
eotf_RIMMRGB(74, in_int=True), 0.18, atol=0.005, rtol=0.005)
np.testing.assert_allclose(
eotf_RIMMRGB(1194, bit_depth=12, in_int=True),
0.18,
atol=0.005,
rtol=0.005)
def test_n_dimensional_eotf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_RIMMRGB` definition n-dimensional arrays support.
"""
X_p = 0.291673732475746
X = eotf_RIMMRGB(X_p)
X_p = np.tile(X_p, 6)
X = np.tile(X, 6)
np.testing.assert_almost_equal(eotf_RIMMRGB(X_p), X, decimal=7)
X_p = np.reshape(X_p, (2, 3))
X = np.reshape(X, (2, 3))
np.testing.assert_almost_equal(eotf_RIMMRGB(X_p), X, decimal=7)
X_p = np.reshape(X_p, (2, 3, 1))
X = np.reshape(X, (2, 3, 1))
np.testing.assert_almost_equal(eotf_RIMMRGB(X_p), X, decimal=7)
def test_domain_range_scale_eotf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_RIMMRGB` definition domain and range scale support.
"""
X_p = 0.291673732475746
X = eotf_RIMMRGB(X_p)
d_r = (('reference', 1), (1, 1), (100, 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
eotf_RIMMRGB(X_p * factor), X * factor, decimal=7)
@ignore_numpy_errors
def test_nan_eotf_RIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
eotf_RIMMRGB` definition nan support.
"""
eotf_RIMMRGB(np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
class TestLog_encoding_ERIMMRGB(unittest.TestCase):
"""
Defines :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_encoding_ERIMMRGB` definition unit tests methods.
"""
def test_log_encoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_encoding_ERIMMRGB` definition.
"""
self.assertAlmostEqual(log_encoding_ERIMMRGB(0.0), 0.0, places=7)
self.assertAlmostEqual(
log_encoding_ERIMMRGB(0.18), 0.410052389492129, places=7)
self.assertAlmostEqual(
log_encoding_ERIMMRGB(1.0), 0.545458327405113, places=7)
self.assertEqual(log_encoding_ERIMMRGB(0.18, out_int=True), 105)
self.assertEqual(
log_encoding_ERIMMRGB(0.18, bit_depth=12, out_int=True), 1679)
def test_n_dimensional_log_encoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_encoding_ERIMMRGB` definition n-dimensional arrays support.
"""
X = 0.18
X_p = log_encoding_ERIMMRGB(X)
X = np.tile(X, 6)
X_p = np.tile(X_p, 6)
np.testing.assert_almost_equal(
log_encoding_ERIMMRGB(X), X_p, decimal=7)
X = np.reshape(X, (2, 3))
X_p = np.reshape(X_p, (2, 3))
np.testing.assert_almost_equal(
log_encoding_ERIMMRGB(X), X_p, decimal=7)
X = np.reshape(X, (2, 3, 1))
X_p = np.reshape(X_p, (2, 3, 1))
np.testing.assert_almost_equal(
log_encoding_ERIMMRGB(X), X_p, decimal=7)
def test_domain_range_scale_log_encoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_encoding_ERIMMRGB` definition domain and range scale support.
"""
X = 0.18
X_p = log_encoding_ERIMMRGB(X)
d_r = (('reference', 1), (1, 1), (100, 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
log_encoding_ERIMMRGB(X * factor), X_p * factor, decimal=7)
@ignore_numpy_errors
def test_nan_log_encoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_encoding_ERIMMRGB` definition nan support.
"""
log_encoding_ERIMMRGB(
np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
class TestLog_decoding_ERIMMRGB(unittest.TestCase):
"""
    Defines :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_decoding_ERIMMRGB` definition unit tests methods.
"""
def test_log_decoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_decoding_ERIMMRGB` definition.
"""
self.assertAlmostEqual(log_decoding_ERIMMRGB(0.0), 0.0, places=7)
self.assertAlmostEqual(
log_decoding_ERIMMRGB(0.410052389492129), 0.18, places=7)
self.assertAlmostEqual(
log_decoding_ERIMMRGB(0.545458327405113), 1.0, places=7)
np.testing.assert_allclose(
log_decoding_ERIMMRGB(105, in_int=True),
0.18,
atol=0.005,
rtol=0.005)
np.testing.assert_allclose(
log_decoding_ERIMMRGB(1679, bit_depth=12, in_int=True),
0.18,
atol=0.005,
rtol=0.005)
def test_n_dimensional_log_decoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_decoding_ERIMMRGB` definition n-dimensional arrays support.
"""
X_p = 0.410052389492129
X = log_decoding_ERIMMRGB(X_p)
X_p = np.tile(X_p, 6)
X = np.tile(X, 6)
np.testing.assert_almost_equal(
log_decoding_ERIMMRGB(X_p), X, decimal=7)
X_p = np.reshape(X_p, (2, 3))
X = np.reshape(X, (2, 3))
np.testing.assert_almost_equal(
log_decoding_ERIMMRGB(X_p), X, decimal=7)
X_p = np.reshape(X_p, (2, 3, 1))
X = np.reshape(X, (2, 3, 1))
np.testing.assert_almost_equal(
log_decoding_ERIMMRGB(X_p), X, decimal=7)
def test_domain_range_scale_log_decoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_decoding_ERIMMRGB` definition domain and range scale support.
"""
X_p = 0.410052389492129
X = log_decoding_ERIMMRGB(X_p)
d_r = (('reference', 1), (1, 1), (100, 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
log_decoding_ERIMMRGB(X_p * factor), X * factor, decimal=7)
@ignore_numpy_errors
def test_nan_log_decoding_ERIMMRGB(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.rimm_romm_rgb.\
log_decoding_ERIMMRGB` definition nan support.
"""
log_decoding_ERIMMRGB(
np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
if __name__ == '__main__':
unittest.main()
| 31.745763 | 79 | 0.636746 | 2,063 | 14,984 | 4.345128 | 0.063984 | 0.015172 | 0.053548 | 0.082106 | 0.895471 | 0.864681 | 0.827979 | 0.784583 | 0.741745 | 0.699018 | 0 | 0.059675 | 0.239522 | 14,984 | 471 | 80 | 31.813163 | 0.726986 | 0.232581 | 0 | 0.5671 | 0 | 0 | 0.032908 | 0.007488 | 0 | 0 | 0 | 0 | 0.233766 | 1 | 0.103896 | false | 0 | 0.021645 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6229985af046ff143dc8f1181128974cdf496664 | 4,091 | py | Python | tests/test_outputter.py | nkakouros/py-doq | 542b19d9a046584029a9ce26c4b16adc8b86c034 | [
"BSD-3-Clause"
] | null | null | null | tests/test_outputter.py | nkakouros/py-doq | 542b19d9a046584029a9ce26c4b16adc8b86c034 | [
"BSD-3-Clause"
] | null | null | null | tests/test_outputter.py | nkakouros/py-doq | 542b19d9a046584029a9ce26c4b16adc8b86c034 | [
"BSD-3-Clause"
] | null | null | null | import json
from unittest import TestCase
from doq import (
JSONOutputter,
StringOutptter,
)
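# Note: `StringOutptter` is spelled as exported by doq itself. Each
# docstring entry handed to the outputters below follows this shape
# (as exercised by these tests):
#   {'docstring': str, 'start_lineno': int, 'start_col': int,
#    'end_lineno': int, 'end_col': int, 'is_doc_exists': bool}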
class StringOutptterTestCase(TestCase):
def test_same_lines(self):
lines = [
'def foo(arg1, arg2=None):',
' pass',
]
docstrings = [{
'docstring': '"""foo.\n\n:param arg1:\n:param arg2:\n"""',
'start_lineno': 1,
'start_col': 0,
'end_lineno': 2,
'end_col': 0,
'is_doc_exists': False,
}]
output = StringOutptter().format(lines=lines, docstrings=docstrings, indent=4)
expected = '\n'.join([
'def foo(arg1, arg2=None):',
' """foo.',
'',
' :param arg1:',
' :param arg2:',
' """',
' pass',
])
self.assertEqual(expected, output)
def test_multi_lines(self):
lines = [
'def foo(',
' arg1,',
' arg2=None,',
' arg3=None,',
" arg4={'foo': 'spam', 'bar': 'ham'},",
'):',
' pass',
]
docstrings = [{
'docstring': '"""foo.\n\n:param arg1:\n:param arg2:\n:param arg3:\n:param arg4:\n"""',
'start_lineno': 1,
'start_col': 0,
'end_lineno': 8,
'end_col': 0,
'is_doc_exists': False,
}]
output = StringOutptter().format(lines=lines, docstrings=docstrings, indent=4)
expected = '\n'.join([
'def foo(',
' arg1,',
' arg2=None,',
' arg3=None,',
" arg4={'foo': 'spam', 'bar': 'ham'},",
'):',
' """foo.',
'',
' :param arg1:',
' :param arg2:',
' :param arg3:',
' :param arg4:',
' """',
' pass',
])
self.assertEqual(expected, output)
def test_multi_lines_with_return_type(self):
lines = [
'def foo(',
' arg1,',
' arg2=None,',
" arg3={'foo': 'spam', 'bar': 'ham'},",
') -> List[int]:',
' pass',
]
docstrings = [{
'docstring': '"""foo.\n\n:param arg1:\n:param arg2:\n:param arg3:\n:rtype List[int]:\n"""',
'start_lineno': 1,
'start_col': 0,
'end_lineno': 7,
'end_col': 0,
'is_doc_exists': False,
}]
output = StringOutptter().format(lines=lines, docstrings=docstrings, indent=4)
expected = '\n'.join([
'def foo(',
' arg1,',
' arg2=None,',
" arg3={'foo': 'spam', 'bar': 'ham'},",
') -> List[int]:',
' """foo.',
'',
' :param arg1:',
' :param arg2:',
' :param arg3:',
' :rtype List[int]:',
' """',
' pass',
])
self.assertEqual(expected, output)
class JSONOutputterTestCase(TestCase):
def test_same_lines(self):
lines = [
'def foo(arg1, arg2=None):',
' pass',
]
docstrings = [{
'docstring': '"""foo.\n\n:param arg1:\n:param arg2:\n"""',
'start_lineno': 1,
'start_col': 0,
'end_lineno': 2,
'end_col': 0,
'is_doc_exists': False,
}]
output = JSONOutputter().format(
lines=lines,
docstrings=docstrings,
indent=4,
)
expected = [{
'docstring': '\n'.join([
' """foo.',
'',
' :param arg1:',
' :param arg2:',
' """',
]),
'start_col': 4,
'start_lineno': 1,
'end_col': 0,
'end_lineno': 2,
}]
self.assertEqual(json.dumps(expected), output)
| 28.213793 | 103 | 0.387191 | 343 | 4,091 | 4.504373 | 0.169096 | 0.042718 | 0.045307 | 0.06343 | 0.822006 | 0.764401 | 0.764401 | 0.725566 | 0.680906 | 0.576052 | 0 | 0.029037 | 0.44439 | 4,091 | 144 | 104 | 28.409722 | 0.650682 | 0 | 0 | 0.776119 | 0 | 0.014925 | 0.30946 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 1 | 0.029851 | false | 0.052239 | 0.022388 | 0 | 0.067164 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
6234c244e93be8e512ae2e485ac380beddf5a339 | 2,008 | py | Python | tests/tsunami_test.py | Ostorlab/agent_tsunami | 405ca0629a1ac42103d5f04719f3d8b87ddca406 | [
"Apache-2.0"
] | 2 | 2022-03-04T11:56:13.000Z | 2022-03-05T23:07:36.000Z | tests/tsunami_test.py | Ostorlab/agent_tsunami | 405ca0629a1ac42103d5f04719f3d8b87ddca406 | [
"Apache-2.0"
] | null | null | null | tests/tsunami_test.py | Ostorlab/agent_tsunami | 405ca0629a1ac42103d5f04719f3d8b87ddca406 | [
"Apache-2.0"
] | null | null | null | """Unittests for tsunami class."""
import json
from agent.tsunami import tsunami
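# The two module-level helpers below are monkeypatch stand-ins for
# Tsunami._start_scan (hence the explicit `self` parameter); each writes a
# minimal fake Tsunami JSON report so scan() can be exercised offline.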
def _start_scan_success(self, target, output_file):
data = {
'scanStatus': 'SUCCEEDED',
'scanFindings': []
}
with open(output_file, 'w', encoding='utf-8') as outfile:
json.dump(data, outfile)
def _start_scan_failed(self, target, output_file):
data = {
'scanStatus': 'FAILED',
'scanFindings': []
}
with open(output_file, 'w', encoding='utf-8') as outfile:
json.dump(data, outfile)
def testTsunamiClass_WhenTsunamiScanStatusIsSuccess_ShouldReturnValidDict(agent_mock, mocker):
"""Tsunami class is responsible for running a scan using Tsunami scanned CLi on a specific target.
when provided with valid Target the class method scan() should return a valid dict with all the findings from
tsunami output file.
"""
mocker.patch('agent.tsunami.tsunami.Tsunami._start_scan', _start_scan_success)
target = tsunami.Target(address='0.0.0.0', version='v6', domain=None)
with tsunami.Tsunami() as tsunami_scanner:
scan_result = tsunami_scanner.scan(target)
assert 'vulnerabilities' in scan_result
assert 'status' in scan_result
assert 'success' in scan_result['status']
def testTsunamiClass_WhenTsunamiScanFailed_ShouldReturnValidDict(agent_mock, mocker):
"""Tsunami class is responsible for running a scan using Tsunami scanned CLi on a specific target.
when provided with valid Target the class method scan() should return a valid dict with all the findings from
tsunami output file.
"""
mocker.patch('agent.tsunami.tsunami.Tsunami._start_scan', _start_scan_failed)
target = tsunami.Target(address='0.0.0.0', version='v6', domain=None)
with tsunami.Tsunami() as tsunami_scanner:
scan_result = tsunami_scanner.scan(target)
assert 'vulnerabilities' in scan_result
assert 'status' in scan_result
assert 'failed' in scan_result['status']
| 37.185185 | 113 | 0.710159 | 255 | 2,008 | 5.443137 | 0.266667 | 0.057637 | 0.051873 | 0.051873 | 0.81268 | 0.81268 | 0.763689 | 0.763689 | 0.763689 | 0.763689 | 0 | 0.007417 | 0.194223 | 2,008 | 53 | 114 | 37.886792 | 0.850433 | 0.24004 | 0 | 0.5625 | 0 | 0 | 0.160377 | 0.055256 | 0 | 0 | 0 | 0 | 0.1875 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
624bf961a1609d805e368aeb047523304e0c80d9 | 297 | py | Python | frag_pele/Helpers/conda_deploy.py | BSC-CNS-EAPM/frag_pele | beefddaab56fc46dbe1e2e73ec6b24de98afe741 | [
"MIT"
] | 26 | 2019-05-17T08:21:23.000Z | 2022-03-17T22:27:30.000Z | frag_pele/Helpers/conda_deploy.py | BSC-CNS-EAPM/frag_pele | beefddaab56fc46dbe1e2e73ec6b24de98afe741 | [
"MIT"
] | 37 | 2019-09-04T08:47:51.000Z | 2021-07-13T12:57:23.000Z | frag_pele/Helpers/conda_deploy.py | BSC-CNS-EAPM/frag_pele | beefddaab56fc46dbe1e2e73ec6b24de98afe741 | [
"MIT"
] | 9 | 2019-05-17T08:04:32.000Z | 2021-04-07T03:54:53.000Z | import os
PYTHONS = ["3.8", "3.7", "3.6"]
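# Build the conda package once per supported Python version; the recipe is
# assumed to live in ./conda_recipe. The command is echoed before running
# so the build log shows exactly what was invoked.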
for python in PYTHONS:
    command = "conda build -c conda-forge -c rdkit -c nostrumbiodiscovery conda_recipe/ --python={}".format(python)
    print(command)
    os.system(command)
| 37.125 | 116 | 0.693603 | 44 | 297 | 4.636364 | 0.454545 | 0.098039 | 0.107843 | 0.156863 | 0.745098 | 0.745098 | 0.745098 | 0.745098 | 0.745098 | 0.745098 | 0 | 0.023438 | 0.138047 | 297 | 7 | 117 | 42.428571 | 0.773438 | 0 | 0 | 0 | 0 | 0 | 0.59596 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
628c16ef34856632f65b149fbcd05a15f81a56ea | 96 | py | Python | venv/lib/python3.8/site-packages/setuptools/extension.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/setuptools/extension.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/setuptools/extension.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/34/c3/38/e978cd7557a559e99cd31f02c95280e4ab3a666df14d6480d924bac593 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
655f51f2bda5106b941bba70e6ac89c993f3ce6f | 1,975 | py | Python | migrations/versions/0106_null_noti_status.py | cds-snc/notifier-api | 90b385ec49efbaee7e607516fc7d9f08991af813 | [
"MIT"
] | 41 | 2019-11-28T16:58:41.000Z | 2022-01-28T21:11:16.000Z | migrations/versions/0106_null_noti_status.py | cds-snc/notification-api | b1c1064f291eb860b494c3fa65ac256ad70bf47c | [
"MIT"
] | 1,083 | 2019-07-08T12:57:24.000Z | 2022-03-08T18:53:40.000Z | migrations/versions/0106_null_noti_status.py | cds-snc/notifier-api | 90b385ec49efbaee7e607516fc7d9f08991af813 | [
"MIT"
] | 9 | 2020-01-24T19:56:43.000Z | 2022-01-27T21:36:53.000Z | """
Revision ID: 0106_null_noti_status
Revises: 0105_opg_letter_org
Create Date: 2017-07-10 11:18:27.267721
"""
from alembic import op
from sqlalchemy.dialects import postgresql
revision = "0106_null_noti_status"
down_revision = "0105_opg_letter_org"
def upgrade():
op.alter_column(
"notification_history",
"status",
existing_type=postgresql.ENUM(
"created",
"sending",
"delivered",
"pending",
"failed",
"technical-failure",
"temporary-failure",
"permanent-failure",
"sent",
name="notify_status_type",
),
nullable=True,
)
op.alter_column(
"notifications",
"status",
existing_type=postgresql.ENUM(
"created",
"sending",
"delivered",
"pending",
"failed",
"technical-failure",
"temporary-failure",
"permanent-failure",
"sent",
name="notify_status_type",
),
nullable=True,
)
def downgrade():
op.alter_column(
"notifications",
"status",
existing_type=postgresql.ENUM(
"created",
"sending",
"delivered",
"pending",
"failed",
"technical-failure",
"temporary-failure",
"permanent-failure",
"sent",
name="notify_status_type",
),
nullable=False,
)
op.alter_column(
"notification_history",
"status",
existing_type=postgresql.ENUM(
"created",
"sending",
"delivered",
"pending",
"failed",
"technical-failure",
"temporary-failure",
"permanent-failure",
"sent",
name="notify_status_type",
),
nullable=False,
)
| 22.443182 | 42 | 0.491646 | 154 | 1,975 | 6.103896 | 0.350649 | 0.029787 | 0.055319 | 0.119149 | 0.77234 | 0.77234 | 0.77234 | 0.77234 | 0.77234 | 0.77234 | 0 | 0.03005 | 0.393418 | 1,975 | 87 | 43 | 22.701149 | 0.754591 | 0.052152 | 0 | 0.864865 | 0 | 0 | 0.303974 | 0.011278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.027027 | 0 | 0.054054 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6578a1c0c6d480f0a65a3120b04664142533dd6d | 109 | py | Python | FUNDAMENTALS_MODULE/Basic_Syntax_Conditional_Statements_and_Loops/LAB/01_Largest_Of_Three_Numbers.py | sleepychild/ProgramingBasicsPython | d96dc4662adc1c8329b731b9c9b7fa4ecf69ec16 | [
"MIT"
] | null | null | null | FUNDAMENTALS_MODULE/Basic_Syntax_Conditional_Statements_and_Loops/LAB/01_Largest_Of_Three_Numbers.py | sleepychild/ProgramingBasicsPython | d96dc4662adc1c8329b731b9c9b7fa4ecf69ec16 | [
"MIT"
] | 1 | 2022-01-15T10:33:56.000Z | 2022-01-15T10:33:56.000Z | FUNDAMENTALS_MODULE/Basic_Syntax_Conditional_Statements_and_Loops/LAB/01_Largest_Of_Three_Numbers.py | sleepychild/ProgramingBasicsPython | d96dc4662adc1c8329b731b9c9b7fa4ecf69ec16 | [
"MIT"
] | null | null | null | nums_list: list = [int(input()), int(input()), int(input())]
nums_list.sort(reverse=True)
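# After the descending sort, index 0 holds the largest of the three inputs.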
print(nums_list[0])
| 27.25 | 59 | 0.697248 | 18 | 109 | 4.055556 | 0.5 | 0.328767 | 0.30137 | 0.438356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.055046 | 109 | 3 | 60 | 36.333333 | 0.699029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65ad8b32a3d9e2b5376e15f084bd6537f857eaba | 191 | py | Python | src/utils.py | mradbourne/jump-to-py | 42688123cfdf3e40330b5859d911d84d9a7601ca | [
"MIT"
] | null | null | null | src/utils.py | mradbourne/jump-to-py | 42688123cfdf3e40330b5859d911d84d9a7601ca | [
"MIT"
] | null | null | null | src/utils.py | mradbourne/jump-to-py | 42688123cfdf3e40330b5859d911d84d9a7601ca | [
"MIT"
] | null | null | null | import subprocess
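# Run a shell command and return its stripped stdout.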
def cmd(command):
return subprocess.check_output(command, universal_newlines=True).strip()
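# Open a file with the system default handler via the macOS `open` command,
# so this helper is macOS-specific as written.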
def open_in_default_editor(filepath):
cmd(['open', filepath])
| 21.222222 | 76 | 0.769634 | 25 | 191 | 5.68 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120419 | 191 | 8 | 77 | 23.875 | 0.845238 | 0 | 0 | 0 | 0 | 0 | 0.020942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
0298b6fc80296b5925deb55cd4b44864bc798ca9 | 118 | py | Python | splinext/pokedex/tests/test_pokemon_flavor.py | hugopeixoto/spline-pokedex | 17b8d22118c9d4b02a01c2271120c162b8dd41da | [
"MIT"
] | 7 | 2015-05-28T22:37:26.000Z | 2020-10-26T17:28:32.000Z | splinext/pokedex/tests/test_pokemon_flavor.py | hugopeixoto/spline-pokedex | 17b8d22118c9d4b02a01c2271120c162b8dd41da | [
"MIT"
] | 28 | 2015-02-28T04:58:47.000Z | 2021-03-19T03:32:43.000Z | splinext/pokedex/tests/test_pokemon_flavor.py | hugopeixoto/spline-pokedex | 17b8d22118c9d4b02a01c2271120c162b8dd41da | [
"MIT"
] | 3 | 2015-11-25T17:02:32.000Z | 2020-08-07T09:52:31.000Z | # encoding: utf8
from spline.tests import TestController
class TestPokemonFlavorController(TestController):
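    # Placeholder suite: no flavor-page assertions have been written yet.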
pass
| 19.666667 | 50 | 0.822034 | 11 | 118 | 8.818182 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.127119 | 118 | 5 | 51 | 23.6 | 0.932039 | 0.118644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
02cb428669714caed3d0a41bbd6d8f82a70899e5 | 47 | py | Python | scripts/qgis_fixes/fix_filter.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | null | null | null | scripts/qgis_fixes/fix_filter.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | null | null | null | scripts/qgis_fixes/fix_filter.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | 1 | 2021-12-25T08:40:30.000Z | 2021-12-25T08:40:30.000Z | from lib2to3.fixes.fix_filter import FixFilter
| 23.5 | 46 | 0.87234 | 7 | 47 | 5.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0.085106 | 47 | 1 | 47 | 47 | 0.883721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
02d092d4aaaab6ea92003894feb10324f42dfb6e | 28 | py | Python | natpmp/__init__.py | yimingliu/py-natpmp | 6cd61a3ee28c085453c2f40138b0d5f10525ce81 | [
"Unlicense"
] | 31 | 2015-03-12T01:51:49.000Z | 2021-11-08T11:44:28.000Z | natpmp/__init__.py | yimingliu/py-natpmp | 6cd61a3ee28c085453c2f40138b0d5f10525ce81 | [
"Unlicense"
] | 2 | 2016-01-19T23:35:22.000Z | 2017-10-06T08:04:46.000Z | natpmp/__init__.py | yimingliu/py-natpmp | 6cd61a3ee28c085453c2f40138b0d5f10525ce81 | [
"Unlicense"
] | 6 | 2015-06-05T15:47:39.000Z | 2018-11-15T09:08:11.000Z | from natpmp.NATPMP import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b85cccd32bed48c022fab840698e08e6e1313230 | 33,644 | py | Python | skrebatedev/scoring_utils.py | athril/scikit-rebate | bfc8dd6aab9ea7b5a196e318322a938d36723955 | [
"MIT"
] | null | null | null | skrebatedev/scoring_utils.py | athril/scikit-rebate | bfc8dd6aab9ea7b5a196e318322a938d36723955 | [
"MIT"
] | null | null | null | skrebatedev/scoring_utils.py | athril/scikit-rebate | bfc8dd6aab9ea7b5a196e318322a938d36723955 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
scikit-rebate was primarily developed at the University of Pennsylvania by:
- Randal S. Olson (rso@randalolson.com)
- Pete Schmitt (pschmitt@upenn.edu)
- Ryan J. Urbanowicz (ryanurb@upenn.edu)
- Weixuan Fu (weixuanf@upenn.edu)
- and many more generous open source contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import numpy as np
# (Subset of continuous-valued feature data, subset of discrete-valued feature data, max/min difference, instance index, missing-value indexes for continuous, missing-value indexes for discrete)
def get_row_missing(xc, xd, cdiffs, index, cindices, dindices):
""" Calculate distance between index instance and all other instances. """
row = np.empty(0, dtype=np.double) # initialize empty row
cinst1 = xc[index] # continuous-valued features for index instance
dinst1 = xd[index] # discrete-valued features for index instance
    # Indexes locating missing values among continuous features for the index instance
    can = cindices[index]
    # Indexes locating missing values among discrete features for the index instance
    dan = dindices[index]
tf = len(cinst1) + len(dinst1) # total number of features.
    # Progressively compare the current instance to all others, excluding comparison with the self-indexed instance (building the distance matrix triangle).
for j in range(index):
dist = 0
dinst2 = xd[j] # discrete-valued features for compared instance
cinst2 = xc[j] # continuous-valued features for compared instance
# Manage missing values in discrete features
        # Indexes locating missing values among discrete features for the compared instance
        dbn = dindices[j]
        # Indexes where at least one value is missing for the feature across the instance pair.
idx = np.unique(np.append(dan, dbn))
# Number of features excluded from distance calculation due to one or two missing values within instance pair. Used to normalize distance values for comparison.
dmc = len(idx)
d1 = np.delete(dinst1, idx) # delete unique missing features from index instance
d2 = np.delete(dinst2, idx) # delete unique missing features from compared instance
# Manage missing values in continuous features
        # Indexes locating missing values among continuous features for the compared instance
        cbn = cindices[j]
        # Indexes where at least one value is missing for the feature across the instance pair.
idx = np.unique(np.append(can, cbn))
# Number of features excluded from distance calculation due to one or two missing values within instance pair. Used to normalize distance values for comparison.
cmc = len(idx)
c1 = np.delete(cinst1, idx) # delete unique missing features from index instance
c2 = np.delete(cinst2, idx) # delete unique missing features from compared instance
# delete unique missing features from continuous value difference scores
cdf = np.delete(cdiffs, idx)
# Add discrete feature distance contributions (missing values excluded) - Hamming distance
dist += len(d1[d1 != d2])
# Add continuous feature distance contributions (missing values excluded) - Manhattan distance (Note that 0-1 continuous value normalization is included ~ subtraction of minimums cancel out)
dist += np.sum(np.absolute(np.subtract(c1, c2)) / cdf)
# Normalize distance calculation based on total number of missing values bypassed in either discrete or continuous features.
tnmc = tf - dmc - cmc # Total number of unique missing counted
# Distance normalized by number of features included in distance sum (this seeks to handle missing values neutrally in distance calculation)
dist = dist/float(tnmc)
row = np.append(row, dist)
return row
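# Hedged usage sketch (toy data, not from the package's tests); cindices and
# dindices hold, per instance, the indexes of missing feature values:
#   xc = np.array([[0.1, 0.5], [0.9, 0.2], [0.4, 0.4]])  # continuous features
#   xd = np.array([[0, 1], [1, 1], [0, 0]])              # discrete features
#   cdiffs = xc.max(axis=0) - xc.min(axis=0)             # max/min value ranges
#   no_missing = [np.empty(0, dtype=int)] * 3
#   row = get_row_missing(xc, xd, cdiffs, 2, no_missing, no_missing)
#   # row holds the normalized distances from instance 2 to instances 0 and 1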
# For iterative Relief: the same distance computation, with per-feature weights applied.
def get_row_missing_iter(xc, xd, cdiffs, index, cindices, dindices, weights):
""" Calculate distance between index instance and all other instances. """
row = np.empty(0, dtype=np.double) # initialize empty row
cinst1 = xc[index] # continuous-valued features for index instance
dinst1 = xd[index] # discrete-valued features for index instance
    # Indexes locating missing values among continuous features for the index instance
    can = cindices[index]
    # Indexes locating missing values among discrete features for the index instance
    dan = dindices[index]
tf = len(cinst1) + len(dinst1) # total number of features.
    # Progressively compare the current instance to all others, excluding comparison with the self-indexed instance (building the distance matrix triangle).
for j in range(index):
dist = 0
dinst2 = xd[j] # discrete-valued features for compared instance
cinst2 = xc[j] # continuous-valued features for compared instance
# Manage missing values in discrete features
        # Indexes locating missing values among discrete features for the compared instance
        dbn = dindices[j]
        # Indexes where at least one value is missing for the feature across the instance pair.
idx = np.unique(np.append(dan, dbn))
# Number of features excluded from distance calculation due to one or two missing values within instance pair. Used to normalize distance values for comparison.
dmc = len(idx)
d1 = np.delete(dinst1, idx) # delete unique missing features from index instance
d2 = np.delete(dinst2, idx) # delete unique missing features from compared instance
wd = np.delete(weights, idx) # delete weights corresponding to missing discrete features
# Manage missing values in continuous features
        # Indexes locating missing values among continuous features for the compared instance
        cbn = cindices[j]
        # Indexes where at least one value is missing for the feature across the instance pair.
idx = np.unique(np.append(can, cbn))
# Number of features excluded from distance calculation due to one or two missing values within instance pair. Used to normalize distance values for comparison.
cmc = len(idx)
c1 = np.delete(cinst1, idx) # delete unique missing features from index instance
c2 = np.delete(cinst2, idx) # delete unique missing features from compared instance
# delete unique missing features from continuous value difference scores
cdf = np.delete(cdiffs, idx)
wc = np.delete(weights, idx) # delete weights corresponding to missing continuous features
# Add discrete feature distance contributions (missing values excluded) - Hamming distance
        if len(d1) != 0:  # Ensure there is at least one discrete variable
hamming_dist = np.not_equal(d1, d2).astype(float)
weight_hamming_dist = np.dot(hamming_dist, wd)/np.sum(wd)
dist += weight_hamming_dist
# Add continuous feature distance contributions (missing values excluded) - Manhattan distance (Note that 0-1 continuous value normalization is included ~ subtraction of minimums cancel out)
        if len(c1) != 0:  # Ensure there is at least one continuous variable
dist += np.dot((np.absolute(np.subtract(c1, c2)) / cdf), wc)/np.sum(wc)
# Normalize distance calculation based on total number of missing values bypassed in either discrete or continuous features.
tnmc = tf - dmc - cmc # Total number of unique missing counted
# Distance normalized by number of features included in distance sum (this seeks to handle missing values neutrally in distance calculation)
dist = dist/float(tnmc)
row = np.append(row, dist)
return row
def ramp_function(data_type, attr, fname, xinstfeature, xNNifeature):
""" Our own user simplified variation of the ramp function suggested by Hong 1994, 1997. Hong's method requires the user to specifiy two thresholds
that indicate the max difference before a score of 1 is given, as well a min difference before a score of 0 is given, and any in the middle get a
score that is the normalized difference between the two continuous feature values. This was done because when discrete and continuous features were mixed,
continuous feature scores were underestimated. Towards simplicity, automation, and a dataset adaptable approach,
here we simply check whether the difference is greater than the standard deviation for the given feature; if so we assign a score of 1, otherwise we
assign the normalized feature score difference. This should help compensate for the underestimation. """
diff = 0
mmdiff = attr[fname][3] # Max/Min range of values for target feature
rawfd = abs(xinstfeature - xNNifeature) # prenormalized feature value difference
if data_type == 'mixed': # Ramp function utilized
# Check whether feature value difference is greater than the standard deviation
standDev = attr[fname][4]
        if rawfd > standDev:  # feature value difference is wider than a standard deviation
diff = 1
else:
diff = abs(xinstfeature - xNNifeature) / mmdiff
else: # Normal continuous feature scoring
diff = abs(xinstfeature - xNNifeature) / mmdiff
return diff
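# Hedged worked example of ramp_function (values are assumptions): with a
# max/min range attr[fname][3] = 10.0 and a standard deviation attr[fname][4]
# = 2.0, a raw difference of 3.0 on a 'mixed' dataset exceeds the standard
# deviation and yields diff = 1, while a raw difference of 1.5 yields the
# normalized value diff = 1.5 / 10.0 = 0.15.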
def compute_score(attr, mcmap, NN, feature, inst, nan_entries, headers, class_type, X, y, labels_std, data_type, near=True):
"""Flexible feature scoring method that can be used with any core Relief-based method. Scoring proceeds differently
based on whether endpoint is binary, multiclass, or continuous. This method is called for a single target instance
+ feature combination and runs over all items in NN. """
fname = headers[feature] # feature identifier
ftype = attr[fname][0] # feature type
ctype = class_type # class type (binary, multiclass, continuous)
diff_hit = diff_miss = 0.0 # Tracks the score contribution
# Tracks the number of hits/misses. Used in normalizing scores by 'k' in ReliefF, and by m or h in SURF, SURF*, MultiSURF*, and MultiSURF
count_hit = count_miss = 0.0
# Initialize 'diff' (The score contribution for this target instance and feature over all NN)
diff = 0
# mmdiff = attr[fname][3] # Max/Min range of values for target feature
datalen = float(len(X))
    # If the target instance's value for this feature is missing, a 'neutral' score contribution of 0 is returned immediately, since all NN comparisons would be against this missing value.
if nan_entries[inst][feature]:
return 0.
# Note missing data normalization below regarding missing NN feature values is accomplished by counting hits and misses (missing values are not counted) (happens in parallel with hit/miss imbalance normalization)
    xinstfeature = X[inst][feature]  # value of the target instance's target feature
#--------------------------------------------------------------------------
if ctype == 'binary':
for i in range(len(NN)):
if nan_entries[NN[i]][feature]: # skip any NN with a missing value for this feature.
continue
xNNifeature = X[NN[i]][feature]
if near: # SCORING FOR NEAR INSTANCES
if y[inst] == y[NN[i]]: # HIT
count_hit += 1
if ftype == 'continuous':
# diff_hit -= abs(xinstfeature - xNNifeature) / mmdiff #Normalize absolute value of feature value difference by max-min value range for feature (so score update lies between 0 and 1)
diff_hit -= ramp_function(data_type, attr, fname, xinstfeature, xNNifeature)
else: # discrete feature
if xinstfeature != xNNifeature: # A difference in feature value is observed
# Feature score is reduced when we observe feature difference between 'near' instances with the same class.
diff_hit -= 1
else: # MISS
count_miss += 1
if ftype == 'continuous':
# diff_miss += abs(xinstfeature - xNNifeature) / mmdiff
diff_miss += ramp_function(data_type, attr, fname,
xinstfeature, xNNifeature)
else: # discrete feature
if xinstfeature != xNNifeature: # A difference in feature value is observed
                            # Feature score is increased when we observe a feature difference between 'near' instances with different class values.
diff_miss += 1
else: # SCORING FOR FAR INSTANCES (ONLY USED BY MULTISURF* BASED ON HOW CODED)
if y[inst] == y[NN[i]]: # HIT
count_hit += 1
if ftype == 'continuous':
# diff_hit -= abs(xinstfeature - xNNifeature) / mmdiff #Hits differently add continuous value differences rather than subtract them
# Sameness should yield most negative score
diff_hit -= (1-ramp_function(data_type, attr,
fname, xinstfeature, xNNifeature))
else: # discrete feature
# The same feature value is observed (Used for more efficient 'far' scoring, since there should be fewer same values for 'far' instances)
if xinstfeature == xNNifeature:
# Feature score is reduced when we observe the same feature value between 'far' instances with the same class.
diff_hit -= 1
else: # MISS
count_miss += 1
if ftype == 'continuous':
                        # diff_miss += abs(xinstfeature - xNNifeature) / mmdiff  # Misses differently subtract continuous value differences rather than add them
# Sameness should yield most negative score
diff_miss += (1-ramp_function(data_type, attr,
fname, xinstfeature, xNNifeature))
else: # discrete feature
# The same feature value is observed (Used for more efficient 'far' scoring, since there should be fewer same values for 'far' instances)
if xinstfeature == xNNifeature:
# Feature score is increased when we observe the same feature value between 'far' instances with different class values.
diff_miss += 1
""" Score Normalizations:
*'n' normalization dividing by the number of training instances (this helps ensure that all final scores end up in the -1 to 1 range
*'k','h','m' normalization dividing by the respective number of hits and misses in NN (after ignoring missing values), also helps account for class imbalance within nearest neighbor radius)"""
if count_hit == 0.0 or count_miss == 0.0: # Special case, avoid division error
if count_hit == 0.0 and count_miss == 0.0:
return 0.0
elif count_hit == 0.0:
diff = (diff_miss / count_miss) / datalen
else: # count_miss == 0.0
diff = (diff_hit / count_hit) / datalen
else: # Normal diff normalization
diff = ((diff_hit / count_hit) + (diff_miss / count_miss)) / datalen
#--------------------------------------------------------------------------
elif ctype == 'multiclass':
class_store = dict() # only 'miss' classes will be stored
# missClassPSum = 0
for each in mcmap:
if(each != y[inst]): # Identify miss classes for current target instance.
class_store[each] = [0, 0]
# missClassPSum += mcmap[each]
for i in range(len(NN)):
if nan_entries[NN[i]][feature]: # skip any NN with a missing value for this feature.
continue
xNNifeature = X[NN[i]][feature]
if near: # SCORING FOR NEAR INSTANCES
if(y[inst] == y[NN[i]]): # HIT
count_hit += 1
if ftype == 'continuous':
# diff_hit -= abs(xinstfeature - xNNifeature) / mmdiff
diff_hit -= ramp_function(data_type, attr, fname, xinstfeature, xNNifeature)
else: # discrete feature
if xinstfeature != xNNifeature:
# Feature score is reduced when we observe feature difference between 'near' instances with the same class.
diff_hit -= 1
else: # MISS
for missClass in class_store:
if(y[NN[i]] == missClass): # Identify which miss class is present
class_store[missClass][0] += 1
if ftype == 'continuous':
# class_store[missClass][1] += abs(xinstfeature - xNNifeature) / mmdiff
class_store[missClass][1] += ramp_function(
data_type, attr, fname, xinstfeature, xNNifeature)
else: # discrete feature
if xinstfeature != xNNifeature:
                                    # Feature score is increased when we observe a feature difference between 'near' instances with different class values.
class_store[missClass][1] += 1
else: # SCORING FOR FAR INSTANCES (ONLY USED BY MULTISURF* BASED ON HOW CODED)
if(y[inst] == y[NN[i]]): # HIT
count_hit += 1
if ftype == 'continuous':
# diff_hit -= abs(xinstfeature - xNNifeature) / mmdiff #Hits differently add continuous value differences rather than subtract them
# Sameness should yield most negative score
diff_hit -= (1-ramp_function(data_type, attr,
fname, xinstfeature, xNNifeature))
else: # discrete features
if xinstfeature == xNNifeature:
# Feature score is reduced when we observe the same feature value between 'far' instances with the same class.
diff_hit -= 1
else: # MISS
for missClass in class_store:
if(y[NN[i]] == missClass):
class_store[missClass][0] += 1
if ftype == 'continuous':
# class_store[missClass][1] += abs(xinstfeature - xNNifeature) / mmdiff
# Sameness should yield most negative score
class_store[missClass][1] += (1-ramp_function(data_type,
attr, fname, xinstfeature, xNNifeature))
else: # discrete feature
if xinstfeature == xNNifeature:
# Feature score is increased when we observe the same feature value between 'far' instances with different class values.
class_store[missClass][1] += 1
""" Score Normalizations:
*'n' normalization dividing by the number of training instances (this helps ensure that all final scores end up in the -1 to 1 range
*'k','h','m' normalization dividing by the respective number of hits and misses in NN (after ignoring missing values), also helps account for class imbalance within nearest neighbor radius)
* multiclass normalization - accounts for scoring by multiple miss class, so miss scores don't have too much weight in contrast with hit scoring. If a given miss class isn't included in NN
then this normalization will account for that possibility. """
# Miss component
for each in class_store:
count_miss += class_store[each][0]
if count_hit == 0.0 and count_miss == 0.0:
return 0.0
else:
if count_miss == 0:
pass
else: # Normal diff normalization
for each in class_store: # multiclass normalization
                    # Contribution of a given miss class, weighted by its observed frequency within the NN set.
diff += class_store[each][1] * \
(class_store[each][0] / count_miss) * len(class_store)
diff = diff / count_miss # 'm' normalization
# Hit component: with 'h' normalization
if count_hit == 0:
pass
else:
diff += (diff_hit / count_hit)
diff = diff / datalen # 'n' normalization
#--------------------------------------------------------------------------
else: # CONTINUOUS endpoint
same_class_bound = labels_std
for i in range(len(NN)):
if nan_entries[NN[i]][feature]: # skip any NN with a missing value for this feature.
continue
xNNifeature = X[NN[i]][feature]
if near: # SCORING FOR NEAR INSTANCES
if abs(y[inst] - y[NN[i]]) < same_class_bound: # HIT approximation
count_hit += 1
if ftype == 'continuous':
# diff_hit -= abs(xinstfeature - xNNifeature) / mmdiff
diff_hit -= ramp_function(data_type, attr, fname, xinstfeature, xNNifeature)
else: # discrete feature
if xinstfeature != xNNifeature:
# Feature score is reduced when we observe feature difference between 'near' instances with the same 'class'.
diff_hit -= 1
else: # MISS approximation
count_miss += 1
if ftype == 'continuous':
# diff_miss += abs(xinstfeature - xNNifeature) / mmdiff
diff_miss += ramp_function(data_type, attr, fname,
xinstfeature, xNNifeature)
else: # discrete feature
if xinstfeature != xNNifeature:
                            # Feature score is increased when we observe a feature difference between 'near' instances with different class values.
diff_miss += 1
else: # SCORING FOR FAR INSTANCES (ONLY USED BY MULTISURF* BASED ON HOW CODED)
if abs(y[inst] - y[NN[i]]) < same_class_bound: # HIT approximation
count_hit += 1
if ftype == 'continuous':
# diff_hit += abs(xinstfeature - xNNifeature) / mmdiff
# Sameness should yield most negative score
diff_hit -= (1-ramp_function(data_type, attr,
fname, xinstfeature, xNNifeature))
else: # discrete feature
if xinstfeature == xNNifeature:
# Feature score is reduced when we observe the same feature value between 'far' instances with the same class.
diff_hit -= 1
else: # MISS approximation
count_miss += 1
if ftype == 'continuous':
# diff_miss -= abs(xinstfeature - xNNifeature) / mmdiff
# Sameness should yield most negative score
diff_miss += (1-ramp_function(data_type, attr,
fname, xinstfeature, xNNifeature))
else: # discrete feature
if xinstfeature == xNNifeature:
# Feature score is increased when we observe the same feature value between 'far' instances with different class values.
diff_miss += 1
""" Score Normalizations:
*'n' normalization dividing by the number of training instances (this helps ensure that all final scores end up in the -1 to 1 range
*'k','h','m' normalization dividing by the respective number of hits and misses in NN (after ignoring missing values), also helps account for class imbalance within nearest neighbor radius)"""
if count_hit == 0.0 or count_miss == 0.0: # Special case, avoid division error
if count_hit == 0.0 and count_miss == 0.0:
return 0.0
elif count_hit == 0.0:
diff = (diff_miss / count_miss) / datalen
else: # count_miss == 0.0
diff = (diff_hit / count_hit) / datalen
else: # Normal diff normalization
diff = ((diff_hit / count_hit) + (diff_miss / count_miss)) / datalen
return diff
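# Hedged worked example of the binary-endpoint normalization above (numbers
# are assumptions): with 2 hits contributing diff_hit = -2.0, 3 misses
# contributing diff_miss = +3.0, and 100 training instances,
# diff = ((-2.0 / 2) + (3.0 / 3)) / 100 = 0.0, i.e. the hit and miss
# evidence for this feature cancel out.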
def ReliefF_compute_scores(inst, attr, nan_entries, num_attributes, mcmap, NN, headers, class_type, X, y, labels_std, data_type, weight_flag=0, weights=None):
""" Unique scoring procedure for ReliefF algorithm. Scoring based on k nearest hits and misses of current target instance. """
scores = np.zeros(num_attributes)
if weight_flag == 2:
for feature_num in range(num_attributes):
scores[feature_num] += weights[feature_num]*compute_score(attr, mcmap, NN, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
else:
for feature_num in range(num_attributes):
scores[feature_num] += compute_score(attr, mcmap, NN, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
return scores
def SURF_compute_scores(inst, attr, nan_entries, num_attributes, mcmap, NN, headers, class_type, X, y, labels_std, data_type, weight_flag=0, weights=None):
""" Unique scoring procedure for SURF algorithm. Scoring based on nearest neighbors within defined radius of current target instance. """
scores = np.zeros(num_attributes)
if weight_flag == 2:
if len(NN) <= 0:
return scores
for feature_num in range(num_attributes):
scores[feature_num] += weights[feature_num]*compute_score(attr, mcmap, NN, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
else:
if len(NN) <= 0:
return scores
for feature_num in range(num_attributes):
scores[feature_num] += compute_score(attr, mcmap, NN, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
return scores
def SURFstar_compute_scores(inst, attr, nan_entries, num_attributes, mcmap, NN_near, NN_far, headers, class_type, X, y, labels_std, data_type, weight_flag=0, weights=None):
""" Unique scoring procedure for SURFstar algorithm. Scoring based on nearest neighbors within defined radius, as well as
    'anti-scoring' of far instances outside the radius of the current target instance."""
scores = np.zeros(num_attributes)
if weight_flag == 2:
for feature_num in range(num_attributes):
if len(NN_near) > 0:
scores[feature_num] += weights[feature_num]*compute_score(attr, mcmap, NN_near, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
# Note that we are using the near scoring loop in 'compute_score' and then just subtracting it here, in line with original SURF* paper.
if len(NN_far) > 0:
scores[feature_num] -= weights[feature_num]*compute_score(attr, mcmap, NN_far, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
else:
for feature_num in range(num_attributes):
if len(NN_near) > 0:
scores[feature_num] += compute_score(attr, mcmap, NN_near, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
# Note that we are using the near scoring loop in 'compute_score' and then just subtracting it here, in line with original SURF* paper.
if len(NN_far) > 0:
scores[feature_num] -= compute_score(attr, mcmap, NN_far, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
return scores
def MultiSURF_compute_scores(inst, attr, nan_entries, num_attributes, mcmap, NN_near, headers, class_type, X, y, labels_std, data_type, weight_flag=0, weights=None):
""" Unique scoring procedure for MultiSURF algorithm. Scoring based on 'extreme' nearest neighbors within defined radius of current target instance. """
scores = np.zeros(num_attributes)
if weight_flag == 2:
for feature_num in range(num_attributes):
if len(NN_near) > 0:
scores[feature_num] += weights[feature_num]*compute_score(attr, mcmap, NN_near, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
else:
for feature_num in range(num_attributes):
if len(NN_near) > 0:
scores[feature_num] += compute_score(attr, mcmap, NN_near, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
return scores
def MultiSURFstar_compute_scores(inst, attr, nan_entries, num_attributes, mcmap, NN_near, NN_far, headers, class_type, X, y, labels_std, data_type, weight_flag=0, weights=None):
""" Unique scoring procedure for MultiSURFstar algorithm. Scoring based on 'extreme' nearest neighbors within defined radius, as
well as 'anti-scoring' of extreme far instances defined by outer radius of current target instance. """
scores = np.zeros(num_attributes)
if weight_flag == 2:
for feature_num in range(num_attributes):
if len(NN_near) > 0:
scores[feature_num] += weights[feature_num]*compute_score(attr, mcmap, NN_near, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
# Note that we add this term because we used the far scoring above by setting 'near' to False. This is in line with original MultiSURF* paper.
if len(NN_far) > 0:
scores[feature_num] += weights[feature_num]*compute_score(attr, mcmap, NN_far, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type, near=False)
else:
for feature_num in range(num_attributes):
if len(NN_near) > 0:
scores[feature_num] += compute_score(attr, mcmap, NN_near, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type)
# Note that we add this term because we used the far scoring above by setting 'near' to False. This is in line with original MultiSURF* paper.
if len(NN_far) > 0:
scores[feature_num] += compute_score(attr, mcmap, NN_far, feature_num, inst,
nan_entries, headers, class_type, X, y, labels_std, data_type, near=False)
return scores
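# Hedged usage note (the aggregation pattern is assumed from the per-instance
# design above, not a documented API): each *_compute_scores call returns one
# target instance's feature-score contributions, so a full scoring pass sums
# over the sampled instances, e.g.
#   scores = np.sum([ReliefF_compute_scores(i, ...) for i in sample], axis=0)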
| 63.599244 | 217 | 0.600731 | 3,976 | 33,644 | 4.993461 | 0.121982 | 0.022665 | 0.016118 | 0.017125 | 0.773396 | 0.762617 | 0.756825 | 0.747154 | 0.744334 | 0.736678 | 0 | 0.008132 | 0.327428 | 33,644 | 528 | 218 | 63.719697 | 0.869277 | 0.424355 | 0 | 0.820433 | 0 | 0 | 0.008197 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027864 | false | 0.006192 | 0.003096 | 0 | 0.077399 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b88008aa9588b98275b1360a7fc50ebbcea13737 | 23,852 | py | Python | src/python/smet-collect/smetcollect/collect/compress.py | ciyer/smet-collect | 93cf94077018654eac262454408402d45ad9d668 | [
"BSD-2-Clause"
] | 1 | 2017-02-12T13:25:17.000Z | 2017-02-12T13:25:17.000Z | src/python/smet-collect/smetcollect/collect/compress.py | ciyer/smet-collect | 93cf94077018654eac262454408402d45ad9d668 | [
"BSD-2-Clause"
] | null | null | null | src/python/smet-collect/smetcollect/collect/compress.py | ciyer/smet-collect | 93cf94077018654eac262454408402d45ad9d668 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
compress.py
Find runs that have already been pruned and compress them.
Created by Chandrasekhar Ramakrishnan on 2015-11-06.
Copyright (c) 2015 Chandrasekhar Ramakrishnan. All rights reserved.
"""
import os
import shutil
import subprocess
import tarfile
import json
from collections import defaultdict
from ..bundle import slug_for_race
from ..bundle.status_db import Run
from ..process.prune import Pruner
class CompressorConfig(object):
"""Gathers configuration information for the TweetCollector"""
def __init__(self, max_depth=5):
"""
:param max_depth: The maximum number of runs per race to compress. Use None or non-positive to compress all.
"""
        self.max_depth = max_depth if max_depth is not None and max_depth > 0 else None
class Compressor(object):
"""Compress raw data"""
def __init__(self, status, config=None, race=None):
"""Constructor for the compressor collector
:param status: The CompressorConfig object that tracks status state
:param config: The configuration for the compressor
:param race: The slug for a race if should restrict to one race
"""
self.status = status
self.config = config if config else CompressorConfig()
self.race_slug = race
self.runs_to_compress = defaultdict(list)
def run(self):
"""Run pruning for matching races"""
self.collect_runs_to_compress()
self.log_intermediate_progress_update()
self.do_compress()
msg = 'Compressing finished'
self.status.progress_func({'type': 'progress', 'message': msg})
def log_intermediate_progress_update(self):
races = self.runs_to_compress.keys()
if len(races) < 1:
self.status.progress_func({'type': 'progress', 'message': "No runs to compress."})
return
for key in races:
runs = self.runs_to_compress[key]
msg = "Race {} has {} runs to compress".format(key.name.encode('utf-8'), len(runs))
self.status.progress_func({'type': 'progress', 'message': msg})
if self.config.max_depth is not None:
msg = "\tLimiting to {} runs per race".format(self.config.max_depth)
self.status.progress_func({'type': 'progress', 'message': msg})
def do_compress(self):
"""Really compress the runs"""
races = self.runs_to_compress.keys()
for race in races:
self.status.ensure_folder_exists(self.status.compressed_data_folder_path_for_race(race))
runs = self.runs_to_compress[race]
if self.config.max_depth:
runs = runs[0:self.config.max_depth]
for run in runs:
self.compress_run(race, run)
def compress_run(self, race, run):
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
if not os.path.exists(raw_data_path):
msg = "No run found at {}. Skipping...".format(self.status.path_relative_to_bundle(raw_data_path).encode('utf-8'))
self.status.progress_func({'type': 'compress', 'message': msg})
return
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
msg = "Compressing run {} to {}".format(
self.status.path_relative_to_bundle(raw_data_path).encode('utf-8'),
self.status.path_relative_to_bundle(compressed_data_path).encode('utf-8'))
self.status.progress_func({'type': 'compress', 'message': msg})
subprocess.call(['tar', '-cjf', compressed_data_path,
'-C', self.status.raw_data_folder_path_for_race(race),
run.results_folder])
if not self.verify_archive(run, raw_data_path, compressed_data_path):
self.status.progress_func({'type': 'compress', 'message': "Removing corrupt archive..."})
os.remove(compressed_data_path)
self.status.progress_func({'type': 'compress', 'message': "Done."})
def verify_archive(self, run, raw_data_path, compressed_data_path):
"""Check that the archive is ok. Return True if it is, False if there is a problem."""
archive = tarfile.open(compressed_data_path)
for path in os.listdir(raw_data_path):
try:
archive_info = archive.getmember(os.path.join(run.results_folder, path))
if archive_info.size < 0:
msg = "File {} is corrupt in archive".format(path)
self.status.progress_func({'type': 'compress', 'message': msg})
return False
except KeyError:
msg = "File {} is not in archive".format(path)
self.status.progress_func({'type': 'compress', 'message': msg})
return False
msg = "Archive verified."
self.status.progress_func({'type': 'compress', 'message': msg})
return True
def collect_runs_to_compress(self):
"""Find runs that need to be compressed"""
if self.race_slug:
matching_races = [race for race in self.status.races() if slug_for_race(race) == self.race_slug]
if len(matching_races) < 1:
msg = "Found no races matching slug {}.".format(self.race_slug)
self.status.progress_func({'type': 'error', 'message': msg})
return
if len(matching_races) > 1:
msg = "Found multiple races matching slug {}.".format(self.race_slug, matching_races)
self.status.progress_func({'type': 'error', 'message': msg})
return
self.collect_runs_from_race(matching_races[0])
else:
for race in self.status.races():
self.collect_runs_from_race(race)
def collect_runs_from_race(self, race):
"""Prune the runs in the race down to the most relevant data
"""
msg = "Collecting runs from race {}".format(race.name.encode('utf-8'))
self.status.progress_func({'type': 'progress', 'message': msg})
for run in race.runs.order_by(Run.start.desc()):
if self.status.has_pruned_data_for_run(race, run):
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
if not os.path.exists(compressed_data_path):
self.runs_to_compress[race].append(run)
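# Hedged usage sketch (assumes a bundle `status` object as used above):
#   compressor = Compressor(status, CompressorConfig(max_depth=3))
#   compressor.run()  # compresses up to 3 pruned-but-uncompressed runs per race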
class Uncompressor(object):
"""Uncompress raw data"""
def __init__(self, status, config=None, race=None):
"""Constructor for the uncompressor collector
:param status: The bundle status object
:param config: The configuration for the uncompressor
:param race: The slug for a race if should restrict to one race
"""
self.status = status
self.config = config if config else CompressorConfig()
self.race_slug = race
self.runs_to_uncompress = defaultdict(list)
def run(self):
"""Run pruning for matching races"""
self.collect_runs_to_uncompress()
self.log_intermediate_progress_update()
self.do_uncompress()
msg = 'Uncompressing finished'
self.status.progress_func({'type': 'progress', 'message': msg})
def log_intermediate_progress_update(self):
races = self.runs_to_uncompress.keys()
if len(races) < 1:
self.status.progress_func({'type': 'progress', 'message': "No runs to uncompress."})
return
for key in races:
runs = self.runs_to_uncompress[key]
msg = "Race {} has {} runs to uncompress".format(key.name, len(runs))
self.status.progress_func({'type': 'progress', 'message': msg})
if self.config.max_depth is not None:
msg = "\tLimiting to {} runs per race".format(self.config.max_depth)
self.status.progress_func({'type': 'progress', 'message': msg})
def do_uncompress(self):
"""Really uncompress the runs"""
races = self.runs_to_uncompress.keys()
for race in races:
self.status.ensure_folder_exists(self.status.raw_data_folder_path_for_race(race))
runs = self.runs_to_uncompress[race]
if self.config.max_depth:
runs = runs[0:self.config.max_depth]
for run in runs:
self.uncompress_run(race, run)
def uncompress_run(self, race, run):
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
msg = "Uncompressing run {} to {}".format(
self.status.path_relative_to_bundle(compressed_data_path),
self.status.path_relative_to_bundle(raw_data_path))
self.status.progress_func({'type': 'uncompress', 'message': msg})
subprocess.call(['tar', '-xjf', compressed_data_path,
'-C', self.status.raw_data_folder_path_for_race(race)])
def collect_runs_to_uncompress(self):
"""Find runs that need to be compressed"""
if self.race_slug:
matching_races = [race for race in self.status.races() if slug_for_race(race) == self.race_slug]
if len(matching_races) < 1:
msg = "Found no races matching slug {}.".format(self.race_slug)
self.status.progress_func({'type': 'error', 'message': msg})
return
if len(matching_races) > 1:
msg = "Found multiple races matching slug {}.".format(self.race_slug, matching_races)
self.status.progress_func({'type': 'error', 'message': msg})
return
self.collect_runs_from_race(matching_races[0])
else:
for race in self.status.races():
self.collect_runs_from_race(race)
def collect_runs_from_race(self, race):
"""Prune the runs in the race down to the most relevant data
"""
msg = "Collecting runs from race {}".format(race.name)
self.status.progress_func({'type': 'progress', 'message': msg})
for run in race.runs.order_by(Run.start.desc()):
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
if os.path.exists(compressed_data_path) and not os.path.exists(raw_data_path):
self.runs_to_uncompress[race].append(run)
class Rebuilder(object):
"""Rebuild faulty pruned data. -- This is WIP and has not yet been tested."""
def __init__(self, engine, config=None, race=None):
"""Constructor for the uncompressor collector
:param status: The bundle status object
:param config: The configuration for the rebuilder
:param race: The slug for a race if should restrict to one race
"""
self.engine = engine
self.engine_config = engine.config
self.status = engine.status
self.config = config if config else CompressorConfig()
self.race_slug = race
self.runs_to_rebuild = defaultdict(list)
def run(self):
"""Run rebuilding for matching races"""
self.collect_runs_to_rebuild()
self.log_intermediate_progress_update()
self.do_rebuild()
msg = 'Rebuilding finished'
self.status.progress_func({'type': 'progress', 'message': msg})
def log_intermediate_progress_update(self):
races = self.runs_to_rebuild.keys()
if len(races) < 1:
self.status.progress_func({'type': 'progress', 'message': "No runs to rebuild."})
return
for key in races:
runs = self.runs_to_rebuild[key]
msg = "Race {} has {} runs to rebuild".format(key.name, len(runs))
self.status.progress_func({'type': 'progress', 'message': msg})
if self.config.max_depth is not None:
msg = "\tLimiting to {} runs per race".format(self.config.max_depth)
self.status.progress_func({'type': 'progress', 'message': msg})
def do_rebuild(self):
"""Really uncompress the runs"""
races = self.runs_to_rebuild.keys()
for race in races:
self.status.ensure_folder_exists(self.status.raw_data_folder_path_for_race(race))
runs = self.runs_to_rebuild[race]
if self.config.max_depth:
runs = runs[0:self.config.max_depth]
for run in runs:
self.rebuild_run(race, run)
def rebuild_run(self, race, run):
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
# Uncompress the data to get the raw data to process
no_raw_data = not os.path.exists(raw_data_path)
if no_raw_data:
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
msg = "Uncompressing run {} to {}".format(
self.status.path_relative_to_bundle(compressed_data_path),
self.status.path_relative_to_bundle(raw_data_path))
self.status.progress_func({'type': 'uncompress', 'message': msg})
subprocess.call(['tar', '-xjf', compressed_data_path,
'-C', self.status.raw_data_folder_path_for_race(race)])
pruner = Pruner(self.status)
pruner.queue_processing(race, run)
self.engine.run_without_collect(pruner)
def collect_runs_to_rebuild(self):
"""Find runs that need to be compressed"""
if self.race_slug:
matching_races = [race for race in self.status.races() if slug_for_race(race) == self.race_slug]
if len(matching_races) < 1:
msg = "Found no races matching slug {}.".format(self.race_slug)
self.status.progress_func({'type': 'error', 'message': msg})
return
if len(matching_races) > 1:
msg = "Found multiple races matching slug {}.".format(self.race_slug, matching_races)
self.status.progress_func({'type': 'error', 'message': msg})
return
self.collect_runs_from_race(matching_races[0])
else:
for race in self.status.races():
self.collect_runs_from_race(race)
def collect_runs_from_race(self, race):
"""Prune the runs in the race down to the most relevant data
"""
msg = "Collecting runs from race {}".format(race.name)
self.status.progress_func({'type': 'progress', 'message': msg})
for run in race.runs.order_by(Run.start.desc()):
if self.should_rebuild_run(race, run):
self.runs_to_rebuild[race].append(run)
def should_rebuild_run(self, race, run):
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
has_data = os.path.exists(compressed_data_path) or os.path.exists(raw_data_path)
if not self.status.has_pruned_data_for_run(race, run):
        return has_data
pruned_data_path = self.status.robust_pruned_data_file_path_for_run(run)
with open(pruned_data_path) as f:
json_data = json.load(f)
if len(json_data) < 1 and has_data:
return True
return False
class Archiver(object):
"""Deletes raw data that has been compressed already."""
def __init__(self, status, config=None, race=None):
"""Constructor for the archiver.
        :param status: The bundle status object that tracks run state
        :param config: The configuration for the archiver (a CompressorConfig)
        :param race: The slug for a race if processing should be restricted to one race
"""
self.status = status
self.config = config if config else CompressorConfig()
self.race_slug = race
self.runs_to_archive = defaultdict(list)
def run(self):
"""Run pruning for matching races"""
self.collect_runs_to_archive()
self.log_intermediate_progress_update()
self.do_archive()
msg = 'Archiving finished'
self.status.progress_func({'type': 'progress', 'message': msg})
def log_intermediate_progress_update(self):
races = self.runs_to_archive.keys()
if len(races) < 1:
self.status.progress_func({'type': 'progress', 'message': "No runs to archive."})
return
for key in races:
runs = self.runs_to_archive[key]
msg = "Race {} has {} runs to archive".format(key.name.encode('utf-8'), len(runs))
self.status.progress_func({'type': 'progress', 'message': msg})
if self.config.max_depth is not None:
msg = "\tLimiting to {} runs per race".format(self.config.max_depth)
self.status.progress_func({'type': 'progress', 'message': msg})
def do_archive(self):
"""Really archive the runs"""
races = self.runs_to_archive.keys()
for race in races:
runs = self.runs_to_archive[race]
if self.config.max_depth:
runs = runs[0:self.config.max_depth]
for run in runs:
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
if os.path.exists(compressed_data_path):
self.delete_raw_run(race, run)
def delete_raw_run(self, race, run):
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
msg = "Deleting raw data for run {}".format(
self.status.path_relative_to_bundle(raw_data_path).encode('utf-8'))
self.status.progress_func({'type': 'archive', 'message': msg})
shutil.rmtree(raw_data_path)
def collect_runs_to_archive(self):
"""Find runs that need to be compressed"""
if self.race_slug:
matching_races = [race for race in self.status.races() if slug_for_race(race) == self.race_slug]
if len(matching_races) < 1:
msg = "Found no races matching slug {}.".format(self.race_slug)
self.status.progress_func({'type': 'error', 'message': msg})
return
if len(matching_races) > 1:
msg = "Found multiple races matching slug {}.".format(self.race_slug, matching_races)
self.status.progress_func({'type': 'error', 'message': msg})
return
self.collect_runs_from_race(matching_races[0])
else:
for race in self.status.races():
self.collect_runs_from_race(race)
def collect_runs_from_race(self, race):
"""Prune the runs in the race down to the most relevant data
"""
msg = "Collecting runs from race {}".format(race.name.encode('utf-8'))
self.status.progress_func({'type': 'progress', 'message': msg})
for run in race.runs.order_by(Run.start):
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
if self.status.has_pruned_data_for_run(race, run) and os.path.exists(compressed_data_path) \
and os.path.exists(raw_data_path):
self.runs_to_archive[race].append(run)
class PurgerConfig(object):
"""Gathers configuration information for the TweetCollector"""
def __init__(self, execute=False):
"""
:param execute: Should the runs actually be deleted?
"""
self.execute = execute
class Purger(object):
"""Purge defective runs."""
def __init__(self, status, config=None, race=None):
"""Constructor for the archiver.
:param status: The CompressorConfig object that tracks status state
:param config: The configuration for the archiver (a CompressorConfig)
:param race: The slug for a race if should restrict to one race
"""
self.status = status
self.config = config if config else PurgerConfig()
self.race_slug = race
self.runs_to_purge = defaultdict(list)
def run(self):
"""Run purging for matching races/runs"""
self.collect_runs_to_purge()
self.log_intermediate_progress_update()
        self.do_purge()
msg = 'Purging finished'
self.status.progress_func({'type': 'progress', 'message': msg})
def log_intermediate_progress_update(self):
races = self.runs_to_purge.keys()
if len(races) < 1:
self.status.progress_func({'type': 'progress', 'message': "No runs to purge."})
return
for key in races:
runs = self.runs_to_purge[key]
msg = "Race {} has {} runs to purge".format(key.name, len(runs))
self.status.progress_func({'type': 'progress', 'message': msg})
    def do_purge(self):
        """Really purge the runs"""
races = self.runs_to_purge.keys()
for race in races:
runs = self.runs_to_purge[race]
for run in runs:
self.delete_run(race, run)
def delete_run(self, race, run):
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
self.delete_folder_or_file("raw data", raw_data_path)
pruned_data_path = self.status.pruned_data_file_path_for_run(race, run)
self.delete_folder_or_file("pruned data", pruned_data_path)
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
self.delete_folder_or_file("compressed data", compressed_data_path)
msg = "Removing run\n\t{} : {}\n\tfrom db".format(race.slug, run.start)
self.status.progress_func({'type': 'progress', 'message': msg})
if self.config.execute:
self.status.session.delete(run)
self.status.session.commit()
def delete_folder_or_file(self, folder_desc, folder_or_file):
if not os.path.exists(folder_or_file):
return
msg = "Deleting {} for run {}".format(folder_desc, folder_or_file)
self.status.progress_func({'type': 'progress', 'message': msg})
        if self.config.execute:
            # shutil.rmtree only removes directories; pruned and compressed data are files.
            if os.path.isdir(folder_or_file):
                shutil.rmtree(folder_or_file)
            else:
                os.remove(folder_or_file)
def collect_runs_to_purge(self):
"""Find runs that need to be compressed"""
if self.race_slug:
matching_races = [race for race in self.status.races() if slug_for_race(race) == self.race_slug]
if len(matching_races) < 1:
msg = "Found no races matching slug {}.".format(self.race_slug)
self.status.progress_func({'type': 'error', 'message': msg})
return
if len(matching_races) > 1:
msg = "Found multiple races matching slug {}.".format(self.race_slug, matching_races)
self.status.progress_func({'type': 'error', 'message': msg})
return
self.collect_runs_from_race(matching_races[0])
else:
for race in self.status.races():
self.collect_runs_from_race(race)
def collect_runs_from_race(self, race):
"""Purge the runs that have no data.
"""
msg = "Collecting runs from race {}".format(race.name)
self.status.progress_func({'type': 'progress', 'message': msg})
for run in race.runs.order_by(Run.start.desc()):
compressed_data_path = self.status.compressed_data_file_path_for_run(race, run)
raw_data_path = self.status.raw_data_folder_path_for_run(race, run)
if not os.path.exists(compressed_data_path) \
and not os.path.exists(raw_data_path):
self.runs_to_purge[race].append(run)
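# Hedged usage sketch: PurgerConfig(execute=False) acts as a dry run, in which
# candidate deletions are logged via progress_func but nothing is removed:
#   Purger(status, PurgerConfig(execute=False), race='some-race-slug').run()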
| 45.693487 | 126 | 0.628123 | 3,075 | 23,852 | 4.649431 | 0.069594 | 0.07624 | 0.057914 | 0.070784 | 0.825348 | 0.791565 | 0.769882 | 0.744772 | 0.744772 | 0.704833 | 0 | 0.002782 | 0.261655 | 23,852 | 521 | 127 | 45.78119 | 0.80904 | 0.113324 | 0 | 0.617571 | 0 | 0 | 0.105765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103359 | false | 0 | 0.02584 | 0 | 0.206718 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2433834d9b72a7476d46c81090816c3b99b66a8 | 9,106 | py | Python | disba/_sensitivity.py | SeisPider/disba | 5e96c4a64dc473901c20e32c12cd4a5851e9ac79 | [
"BSD-3-Clause"
] | null | null | null | disba/_sensitivity.py | SeisPider/disba | 5e96c4a64dc473901c20e32c12cd4a5851e9ac79 | [
"BSD-3-Clause"
] | null | null | null | disba/_sensitivity.py | SeisPider/disba | 5e96c4a64dc473901c20e32c12cd4a5851e9ac79 | [
"BSD-3-Clause"
] | 1 | 2021-06-10T04:20:40.000Z | 2021-06-10T04:20:40.000Z | from collections import namedtuple
import numpy
from ._base import BaseSensitivity
from ._common import ifunc, ipar
from ._cps import srfker96, swegn96
__all__ = [
"SensitivityKernel",
"PhaseSensitivity",
"GroupSensitivity",
"EllipticitySensitivity",
]
SensitivityKernel = namedtuple(
"SensitivityKernel",
("depth", "kernel", "period", "velocity", "mode", "wave", "type", "parameter"),
)
class PhaseSensitivity(BaseSensitivity):
def __init__(
self,
thickness,
velocity_p,
velocity_s,
density,
algorithm="dunkin",
dc=0.005,
dp=0.025,
):
"""
Phase velocity sensitivity kernel class.
Parameters
----------
thickness : array_like
Layer thickness (in km).
velocity_p : array_like
Layer P-wave velocity (in km/s).
velocity_s : array_like
Layer S-wave velocity (in km/s).
density : array_like
Layer density (in g/cm3).
algorithm : str {'dunkin', 'fast-delta'}, optional, default 'dunkin'
Algorithm to use for computation of Rayleigh-wave dispersion:
- 'dunkin': Dunkin's matrix (adapted from surf96),
- 'fast-delta': fast delta matrix (after Buchen and Ben-Hador, 1996).
dc : scalar, optional, default 0.005
Phase velocity increment for root finding.
dp : scalar, optional, default 0.025
Parameter increment (%) for numerical partial derivatives.
"""
super().__init__(thickness, velocity_p, velocity_s, density, algorithm, dc, dp)
def __call__(self, t, mode=0, wave="rayleigh", parameter="velocity_s"):
"""
Calculate phase velocity sensitivity kernel for a given period and parameter.
Parameters
----------
t : scalar
Period (in s).
mode : int, optional, default 0
Mode number (0 if fundamental).
wave : str {'love', 'rayleigh'}, optional, default 'rayleigh'
Wave type.
parameter : str {'thickness', 'velocity_p', 'velocity_s', 'density'}, optional, default 'velocity_s'
Parameter with respect to which sensitivity kernel is calculated.
Returns
-------
namedtuple
Sensitivity kernel as a namedtuple (depth, kernel, period, velocity, mode, wave, type, parameter).
"""
c1, kernel = srfker96(
t,
self._thickness,
self._velocity_p,
self._velocity_s,
self._density,
mode=mode,
itype=0,
ifunc=ifunc[self._algorithm][wave],
ipar=ipar[parameter],
dc=self._dc,
dp=self._dp,
)
return SensitivityKernel(
self._thickness.cumsum() - self._thickness[0],
kernel,
t,
c1,
mode,
wave,
"phase",
parameter,
)
class GroupSensitivity(BaseSensitivity):
def __init__(
self,
thickness,
velocity_p,
velocity_s,
density,
algorithm="dunkin",
dc=0.005,
dt=0.025,
dp=0.025,
):
"""
        Group velocity sensitivity kernel class.
Parameters
----------
thickness : array_like
Layer thickness (in km).
velocity_p : array_like
Layer P-wave velocity (in km/s).
velocity_s : array_like
Layer S-wave velocity (in km/s).
density : array_like
Layer density (in g/cm3).
algorithm : str {'dunkin', 'fast-delta'}, optional, default 'dunkin'
Algorithm to use for computation of Rayleigh-wave dispersion:
- 'dunkin': Dunkin's matrix (adapted from surf96),
- 'fast-delta': fast delta matrix (after Buchen and Ben-Hador, 1996).
dc : scalar, optional, default 0.005
Phase velocity increment for root finding.
dt : scalar, optional, default 0.025
Frequency increment (%) for calculating group velocity.
dp : scalar, optional, default 0.025
Parameter increment (%) for numerical partial derivatives.
"""
if not isinstance(dt, float):
            raise TypeError("dt must be a float")
super().__init__(thickness, velocity_p, velocity_s, density, algorithm, dc, dp)
self._dt = dt
def __call__(self, t, mode=0, wave="rayleigh", parameter="velocity_s"):
"""
Calculate group velocity sensitivity kernel for a given period and parameter.
Parameters
----------
t : scalar
Period (in s).
mode : int, optional, default 0
Mode number (0 if fundamental).
wave : str {'love', 'rayleigh'}, optional, default 'rayleigh'
Wave type.
parameter : str {'thickness', 'velocity_p', 'velocity_s', 'density'}, optional, default 'velocity_s'
Parameter with respect to which sensitivity kernel is calculated.
Returns
-------
namedtuple
Sensitivity kernel as a namedtuple (depth, kernel, period, velocity, mode, wave, type, parameter).
"""
c1, kernel = srfker96(
t,
self._thickness,
self._velocity_p,
self._velocity_s,
self._density,
mode=mode,
itype=1,
ifunc=ifunc[self._algorithm][wave],
ipar=ipar[parameter],
dc=self._dc,
dt=self._dt,
dp=self._dp,
)
return SensitivityKernel(
self._thickness.cumsum() - self._thickness[0],
kernel,
t,
c1,
mode,
wave,
"group",
parameter,
)
@property
def dt(self):
"""Return frequency increment (%) for calculating group velocity."""
return self._dt
class EllipticitySensitivity(BaseSensitivity):
def __init__(
self,
thickness,
velocity_p,
velocity_s,
density,
algorithm="dunkin",
dc=0.005,
dp=0.025,
):
"""
Rayleigh-wave ellipticity sensitivity kernel class.
Parameters
----------
thickness : array_like
Layer thickness (in km).
velocity_p : array_like
Layer P-wave velocity (in km/s).
velocity_s : array_like
Layer S-wave velocity (in km/s).
density : array_like
Layer density (in g/cm3).
algorithm : str {'dunkin', 'fast-delta'}, optional, default 'dunkin'
Algorithm to use for computation of Rayleigh-wave dispersion:
- 'dunkin': Dunkin's matrix (adapted from surf96),
- 'fast-delta': fast delta matrix (after Buchen and Ben-Hador, 1996).
dc : scalar, optional, default 0.005
Phase velocity increment for root finding.
dp : scalar, optional, default 0.025
Parameter increment (%) for numerical partial derivatives.
"""
super().__init__(thickness, velocity_p, velocity_s, density, algorithm, dc, dp)
def __call__(self, t, mode=0, parameter="velocity_s"):
"""
Calculate Rayleigh-wave ellipticity sensitivity kernel for a given period and
parameter.
Parameters
----------
t : scalar
Period (in s).
mode : int, optional, default 0
Mode number (0 if fundamental).
parameter : str {'thickness', 'velocity_p', 'velocity_s', 'density'}, optional, default 'velocity_s'
Parameter with respect to which sensitivity kernel is calculated.
Returns
-------
namedtuple
Sensitivity kernel as a namedtuple (depth, kernel, period, velocity, mode, wave, type, parameter).
"""
# Reference ellipticity
ell1 = self._ellipticity(t, mode)
# Initialize kernel
mmax = len(self._thickness)
kernel = numpy.empty(mmax)
# Loop over layers
fac = 1.0 + self._dp
par = getattr(self, parameter)
        for i in range(mmax):
            # Perturb the i-th layer parameter and recompute the ellipticity
            tmp = par[i]
            par[i] /= fac
            ell2 = self._ellipticity(t, mode)
            # Finite-difference derivative with respect to the parameter
            kernel[i] = (ell2 - ell1) / (par[i] - tmp)
            # Restore the original value exactly (multiplying back by fac can
            # leave round-off drift)
            par[i] = tmp
return SensitivityKernel(
self._thickness.cumsum() - self._thickness[0],
kernel,
t,
None,
mode,
"rayleigh",
"ellipticity",
parameter,
)
def _ellipticity(self, t, mode):
"""Compute Rayleigh-wave ellipticity for input period and mode."""
eig = swegn96(
t,
self._thickness,
self._velocity_p,
self._velocity_s,
self._density,
mode,
ifunc[self._algorithm]["rayleigh"],
self._dc,
)[:, :2]
return eig[0, 0] / eig[0, 1]
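# Usage sketch (hypothetical two-layer model, values illustrative only):
#
#   es = EllipticitySensitivity([10.0, 10.0], [6.0, 8.1], [3.5, 4.7], [2.7, 3.3])
#   sk = es(5.0, mode=0, parameter="velocity_s")
#   sk.kernel  # finite-difference d(ellipticity)/d(velocity_s) per layer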
| 30.252492 | 110 | 0.546892 | 923 | 9,106 | 5.260022 | 0.145179 | 0.038929 | 0.034604 | 0.048198 | 0.820391 | 0.805767 | 0.78723 | 0.78723 | 0.777755 | 0.777755 | 0 | 0.019644 | 0.351526 | 9,106 | 300 | 111 | 30.353333 | 0.80254 | 0.458379 | 0 | 0.617021 | 0 | 0 | 0.058765 | 0.005501 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056738 | false | 0 | 0.035461 | 0 | 0.148936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2970ca19e91216c89857287e7d2372236822518 | 186 | py | Python | src/gedml/core/losses/classifier_based_loss/__init__.py | wangck20/GeDML | 1f76ac2094d7b88be7fd4eb6145e5586e547b9ca | [
"MIT"
] | 25 | 2021-09-06T13:26:02.000Z | 2022-01-06T13:25:24.000Z | src/gedml/core/losses/classifier_based_loss/__init__.py | wangck20/GeDML | 1f76ac2094d7b88be7fd4eb6145e5586e547b9ca | [
"MIT"
] | 1 | 2021-09-09T08:29:29.000Z | 2021-09-13T15:05:59.000Z | src/gedml/core/losses/classifier_based_loss/__init__.py | wangck20/GeDML | 1f76ac2094d7b88be7fd4eb6145e5586e547b9ca | [
"MIT"
] | 2 | 2021-09-07T08:44:41.000Z | 2021-09-09T08:31:55.000Z | from .cross_entropy_loss import CrossEntropyLoss
from .large_margin_softmax_loss import LargeMarginSoftmaxLoss
from .arcface_loss import ArcFaceLoss
from .cosface_loss import CosFaceLoss | 46.5 | 61 | 0.897849 | 23 | 186 | 6.956522 | 0.608696 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080645 | 186 | 4 | 62 | 46.5 | 0.935673 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b29f3d7498f710377e0176d472071764efa9aaaa | 640 | py | Python | examples/test-ms/ofdpa/test/policies/participant_1.py | sdn-ixp/sdx-parallel | aa7f3d01ac22c56b5882de50884b0473c8bb6ba2 | [
"Apache-2.0"
] | 49 | 2015-11-15T00:02:35.000Z | 2021-02-12T22:03:57.000Z | examples/test-ms/ofdpa/test/policies/participant_1.py | sdn-ixp/sdx-parallel | aa7f3d01ac22c56b5882de50884b0473c8bb6ba2 | [
"Apache-2.0"
] | 6 | 2016-06-20T06:01:36.000Z | 2019-10-22T19:34:27.000Z | examples/test-ms/ofdpa/test/policies/participant_1.py | sdn-ixp/sdx-parallel | aa7f3d01ac22c56b5882de50884b0473c8bb6ba2 | [
"Apache-2.0"
] | 21 | 2015-11-22T13:02:07.000Z | 2019-06-06T18:15:11.000Z | {
"outbound": [
{
"cookie": 1,
"match":
{
"tcp_dst": 80
},
"action":
{
"fwd": 2
}
},
{
"cookie": 2,
"match":
{
"tcp_dst": 4321
},
"action":
{
"fwd": 3
}
},
{
"cookie": 3,
"match":
{
"tcp_dst": 4322
},
"action":
{
"fwd": 3
}
}
]
}
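# The dict above appears to follow the iSDX participant-policy format: each
# outbound entry matches TCP traffic by destination port ("tcp_dst") and
# forwards it to the participant given by "fwd"; "cookie" is a unique rule id.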
| 16.842105 | 31 | 0.164063 | 28 | 640 | 3.642857 | 0.464286 | 0.235294 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087432 | 0.714063 | 640 | 37 | 32 | 17.297297 | 0.469945 | 0 | 0 | 0.216216 | 0 | 0 | 0.139063 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2aad8416529a851e6ce5948374f57fed74fb6de | 204 | py | Python | test/test_flickr_stats.py | JulienLeonard/socialstats | 944e3e4ceba2d977537934299e0c91abd5375d53 | [
"MIT"
] | null | null | null | test/test_flickr_stats.py | JulienLeonard/socialstats | 944e3e4ceba2d977537934299e0c91abd5375d53 | [
"MIT"
] | null | null | null | test/test_flickr_stats.py | JulienLeonard/socialstats | 944e3e4ceba2d977537934299e0c91abd5375d53 | [
"MIT"
] | null | null | null | import sys
sys.path.insert(0, './../lib')
import flickr_stats
# Import only the credential helpers actually used below
from mysocialids import flickr_api_secret, flickr_api_key, flickr_user_id
flickr_stats.flickr_dump(flickr_api_secret(),flickr_api_key(),flickr_user_id(),"flickr_stats.xml")
| 17 | 99 | 0.72549 | 29 | 204 | 4.758621 | 0.586207 | 0.23913 | 0.246377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005682 | 0.137255 | 204 | 11 | 100 | 18.545455 | 0.778409 | 0 | 0 | 0 | 0 | 0 | 0.126984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a26764ec7a035c77a6b9c5e6acfb8fc5bf674332 | 74 | py | Python | source/evaluation/classification/__init__.py | vered1986/NC_embeddings | 8dec4e2f7918ab7606abf61b9d90e4f2786a9652 | [
"Apache-2.0"
] | 9 | 2019-06-11T02:55:07.000Z | 2019-09-04T23:51:36.000Z | source/evaluation/classification/__init__.py | vered1986/NC_embeddings | 8dec4e2f7918ab7606abf61b9d90e4f2786a9652 | [
"Apache-2.0"
] | null | null | null | source/evaluation/classification/__init__.py | vered1986/NC_embeddings | 8dec4e2f7918ab7606abf61b9d90e4f2786a9652 | [
"Apache-2.0"
] | 2 | 2020-08-26T10:20:07.000Z | 2021-02-24T07:00:33.000Z | import random
import numpy as np
random.seed(a=133)
np.random.seed(133)
| 10.571429 | 19 | 0.756757 | 14 | 74 | 4 | 0.571429 | 0.285714 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 0.135135 | 74 | 6 | 20 | 12.333333 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a277cbeb1c59cb5f928308b42c287f7359d65cd0 | 1,982 | py | Python | EPW/get_wave.py | Suth-ICQMS/EPW-descriptor | a599a8c5666d4604b7db04c5f61dc031c65a5933 | [
"MIT"
] | 2 | 2022-01-05T12:52:46.000Z | 2022-02-28T07:40:30.000Z | EPW_example/get_wave.py | Suth-ICQMS/EPW-descriptor | a599a8c5666d4604b7db04c5f61dc031c65a5933 | [
"MIT"
] | null | null | null | EPW_example/get_wave.py | Suth-ICQMS/EPW-descriptor | a599a8c5666d4604b7db04c5f61dc031c65a5933 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from scipy.special import sph_harm
from scipy.special import assoc_laguerre
def get_wave_r1(n, l, m, r, Zm):
    # Hydrogen-like |psi|^2 sampled along the x-axis: (x, y, z) = (r, 0, 0).
    # Argument order (n, l, m, r, Zm) matches get_wave_r2/get_wave_r3 and the
    # calls in get_wave_mean.
    x = r
    y = 0
    z = 0
    X, Z = np.meshgrid(x, z)
    rho = np.sqrt(X ** 2 + y ** 2 + Z ** 2) * Zm / n
    Lag = assoc_laguerre(2 * rho, n - l - 1, 2 * l + 1)
    Ylm = sph_harm(m, l, np.arctan2(y, X), np.arctan2(np.sqrt(X ** 2 + y ** 2), Z))
    Psi = np.exp(-rho) * np.power((2 * rho), l) * Lag * Ylm
    density = np.conjugate(Psi) * Psi
    density = density.real
    return density[0]
def get_wave_r2(n, l, m, r, Zm):
    # Same density, sampled along the z-axis: (x, y, z) = (0, 0, r).
    x = 0
    y = 0
    z = r
    X, Z = np.meshgrid(x, z)
    rho = np.sqrt(X ** 2 + y ** 2 + Z ** 2) * Zm / n
    Lag = assoc_laguerre(2 * rho, n - l - 1, 2 * l + 1)
    Ylm = sph_harm(m, l, np.arctan2(y, X), np.arctan2(np.sqrt(X ** 2 + y ** 2), Z))
    Psi = np.exp(-rho) * np.power((2 * rho), l) * Lag * Ylm
    density = np.conjugate(Psi) * Psi
    density = density.real
    return density[0]
def get_wave_r3(n, l, m, r, Zm):
    # Same density, sampled along the y-axis: (x, y, z) = (0, r, 0).
    x = 0
    y = r
    z = 0
    X, Z = np.meshgrid(x, z)
    rho = np.sqrt(X ** 2 + y ** 2 + Z ** 2) * Zm / n
    Lag = assoc_laguerre(2 * rho, n - l - 1, 2 * l + 1)
    Ylm = sph_harm(m, l, np.arctan2(y, X), np.arctan2(np.sqrt(X ** 2 + y ** 2), Z))
    Psi = np.exp(-rho) * np.power((2 * rho), l) * Lag * Ylm
    density = np.conjugate(Psi) * Psi
    density = density.real
    return density[0]
def get_wave_mean(n, l, m, r, Zm):
    # Average the probability density sampled along the three Cartesian axes.
    wave1 = get_wave_r1(n, l, m, r, Zm)[0]
    wave2 = get_wave_r2(n, l, m, r, Zm)[0]
    wave3 = get_wave_r3(n, l, m, r, Zm)[0]
    mean_wave = (wave1 + wave2 + wave3) / 3
    # return mean_wave+wave1+wave2+wave3
    return np.array([mean_wave, wave1, wave2, wave3])
if __name__ == '__main__':
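    # Minimal usage sketch (illustrative values, not part of the original
    # script): axis-averaged probability density of a hydrogen-like 2p state
    # (n=2, l=1, m=0, Zm=1) sampled at r = 1.5 Bohr radii.
    print(get_wave_mean(2, 1, 0, 1.5, 1))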
| 30.492308 | 93 | 0.510595 | 343 | 1,982 | 2.854227 | 0.16035 | 0.022472 | 0.02145 | 0.079673 | 0.81716 | 0.740552 | 0.709908 | 0.709908 | 0.656793 | 0.656793 | 0 | 0.043478 | 0.326942 | 1,982 | 64 | 94 | 30.96875 | 0.690405 | 0.017154 | 0 | 0.588235 | 0 | 0 | 0.004249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.078431 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a282e75aa878ecf7cb6a1c80726702aeb19645fe | 67 | py | Python | bikeys.py | CharlieAlphaFox/BinanceMarginTrader | 8d799be764450b5eaedec73d4874fe419671c24b | [
"MIT"
] | 8 | 2020-08-31T17:21:08.000Z | 2022-02-17T12:39:27.000Z | bikeys.py | CharlieAlphaFox/BinanceMarginTrader | 8d799be764450b5eaedec73d4874fe419671c24b | [
"MIT"
] | null | null | null | bikeys.py | CharlieAlphaFox/BinanceMarginTrader | 8d799be764450b5eaedec73d4874fe419671c24b | [
"MIT"
] | null | null | null | Pass = 'YourBinanceAPIKeyGoesHere'
Sec = 'YourAPIsecretGoesHere'
| 22.333333 | 35 | 0.791045 | 4 | 67 | 13.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119403 | 67 | 2 | 36 | 33.5 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0.707692 | 0.707692 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.5 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a2a5342965d45e1d1035ddba2a18180220f03008 | 27 | py | Python | chrome_game_player/building.py | Weak-Chicken/win_gui_helper | c581633b20d0bd1627615736dd2b155b6decf1e7 | [
"MIT"
] | null | null | null | chrome_game_player/building.py | Weak-Chicken/win_gui_helper | c581633b20d0bd1627615736dd2b155b6decf1e7 | [
"MIT"
] | null | null | null | chrome_game_player/building.py | Weak-Chicken/win_gui_helper | c581633b20d0bd1627615736dd2b155b6decf1e7 | [
"MIT"
] | null | null | null | import base_methods as gh
| 9 | 25 | 0.814815 | 5 | 27 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 2 | 26 | 13.5 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a2a972026b618593191f15f0754d529ed6c7869c | 4,704 | py | Python | wdt/basket/tests/test_add_basket_item.py | We-Do-Takeaway/wdt_server | 37218d56823e0f265b551b64a9e045bc4455e685 | [
"MIT"
] | 1 | 2021-02-07T19:21:19.000Z | 2021-02-07T19:21:19.000Z | wdt/basket/tests/test_add_basket_item.py | We-Do-Takeaway/wdt_server | 37218d56823e0f265b551b64a9e045bc4455e685 | [
"MIT"
] | 13 | 2021-02-06T19:11:38.000Z | 2021-02-28T12:17:15.000Z | wdt/basket/tests/test_add_basket_item.py | We-Do-Takeaway/wdt_server | 37218d56823e0f265b551b64a9e045bc4455e685 | [
"MIT"
] | null | null | null | import uuid
from http import HTTPStatus
import pytest
MUTATION_QUERY = """
mutation AddBasketItem(
$basketId: ID!,
$basketItem: BasketItemInput!
) {
addBasketItem(
basketId: $basketId,
basketItem: $basketItem
) {
id
items {
id
name
quantity
}
}
}
"""
SAUSAGES_NAME = "Plate of sausages"
CHERRIES_NAME = "Bowl of cherries"
@pytest.mark.django_db
@pytest.mark.usefixtures("example_data")
class TestAddBasketItem:
def test_add_valid_item_to_existing_basket(self, graphql_request, test_values):
variables = {
"basketId": test_values.BASKET_ID,
"basketItem": {
"itemId": test_values.SAUSAGES_ID,
"quantity": 1,
},
}
response = graphql_request(MUTATION_QUERY, variables=variables)
assert response.status_code == HTTPStatus.OK
response_data = response.json()["data"]
assert response_data == {
"addBasketItem": {
"id": test_values.BASKET_ID,
"items": [
{
"id": test_values.CHERRIES_ID,
"name": CHERRIES_NAME,
"quantity": 1,
},
{
"id": test_values.SAUSAGES_ID,
"name": SAUSAGES_NAME,
"quantity": 1,
},
],
},
}
def test_add_valid_item_to_unknown_basket(self, graphql_request, test_values):
new_basket_id = str(uuid.uuid4())
variables = {
"basketId": new_basket_id,
"basketItem": {
"itemId": test_values.SAUSAGES_ID,
"quantity": 1,
},
}
response = graphql_request(MUTATION_QUERY, variables=variables)
assert response.status_code == HTTPStatus.OK
response_data = response.json()["data"]
assert response_data == {
"addBasketItem": {
"id": str(new_basket_id),
"items": [
{
"id": test_values.SAUSAGES_ID,
"name": SAUSAGES_NAME,
"quantity": 1,
},
],
},
}
def test_add_invalid_item_to_existing_basket(self, graphql_request, test_values):
variables = {
"basketId": test_values.BASKET_ID,
"basketItem": {
"itemId": str(uuid.uuid4()),
"quantity": 1,
},
}
response = graphql_request(MUTATION_QUERY, variables=variables)
errors = response.json()["errors"]
assert errors[0]["message"] == "Invalid item id"
def test_add_duplicate_item_to_basket(self, graphql_request, test_values):
variables = {
"basketId": test_values.BASKET_ID,
"basketItem": {
"itemId": test_values.CHERRIES_ID,
"quantity": 1,
},
}
response = graphql_request(MUTATION_QUERY, variables=variables)
assert response.status_code == HTTPStatus.OK
response_data = response.json()["data"]
assert response_data == {
"addBasketItem": {
"id": test_values.BASKET_ID,
"items": [
{
"id": test_values.CHERRIES_ID,
"name": CHERRIES_NAME,
"quantity": 2,
},
],
},
}
def test_too_many_to_basket(self, graphql_request, test_values):
new_basket_id = str(uuid.uuid4())
variables = {
"basketId": new_basket_id,
"basketItem": {
"itemId": test_values.SAUSAGES_ID,
"quantity": 99,
},
}
response = graphql_request(MUTATION_QUERY, variables=variables)
assert response.status_code == HTTPStatus.OK
errors = response.json()["errors"]
assert errors[0]["message"] == "Invalid quantity"
def test_too_few_to_basket(self, graphql_request, test_values):
new_basket_id = str(uuid.uuid4())
variables = {
"basketId": new_basket_id,
"basketItem": {
"itemId": test_values.SAUSAGES_ID,
"quantity": -1,
},
}
response = graphql_request(MUTATION_QUERY, variables=variables)
assert response.status_code == HTTPStatus.OK
errors = response.json()["errors"]
assert errors[0]["message"] == "Invalid quantity"
| 29.037037 | 85 | 0.511267 | 406 | 4,704 | 5.635468 | 0.157635 | 0.087413 | 0.033654 | 0.062937 | 0.820804 | 0.820804 | 0.803759 | 0.802885 | 0.802885 | 0.753497 | 0 | 0.006218 | 0.384566 | 4,704 | 161 | 86 | 29.217391 | 0.784111 | 0 | 0 | 0.557971 | 0 | 0 | 0.143707 | 0 | 0 | 0 | 0 | 0 | 0.07971 | 1 | 0.043478 | false | 0 | 0.021739 | 0 | 0.072464 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a2d4028ed48028b383591fe2577407ec6975186e | 9,567 | py | Python | test-framework/test-suites/integration/tests/add/test_add_storage_controller.py | knutsonchris/stacki | 33087dd5fa311984a66ccecfeee6f9c2c25f665d | [
"BSD-3-Clause"
] | 123 | 2015-05-12T23:36:45.000Z | 2017-07-05T23:26:57.000Z | test-framework/test-suites/integration/tests/add/test_add_storage_controller.py | knutsonchris/stacki | 33087dd5fa311984a66ccecfeee6f9c2c25f665d | [
"BSD-3-Clause"
] | 177 | 2015-06-05T19:17:47.000Z | 2017-07-07T17:57:24.000Z | test-framework/test-suites/integration/tests/add/test_add_storage_controller.py | knutsonchris/stacki | 33087dd5fa311984a66ccecfeee6f9c2c25f665d | [
"BSD-3-Clause"
] | 32 | 2015-06-07T02:25:03.000Z | 2017-06-23T07:35:35.000Z | import json
from textwrap import dedent
class TestAddStorageController:
def test_no_arrayid(self, host):
result = host.run('stack add storage controller raidlevel=0 slot=2')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "arrayid" parameter is required
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_no_raidlevel(self, host):
result = host.run('stack add storage controller arrayid=1 slot=2')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "raidlevel" parameter is required
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_no_slot_or_hotspare(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "slot" or "hotspare" parameter is required
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_invalid_adapter(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 adapter=test')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "adapter" parameter must be an integer
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_negative_adapter(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 adapter=-1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "adapter" parameter must be >= 0
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_invalid_enclosure(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 enclosure=test')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "enclosure" parameter must be an integer
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_negative_enclosure(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 enclosure=-1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "enclosure" parameter must be >= 0
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_invalid_slot(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=test')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "slot" parameter must be an integer
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_negative_slot(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=-1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "slot" parameter must be >= 0
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_duplicate_slot(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=1,1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "slot" parameter "1" is listed twice
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_existing_slot(self, host):
# Add it once
result = host.run('stack add storage controller adapter=1 enclosure=2 slot=3 arrayid=4 raidlevel=0')
assert result.rc == 0
# Add it again
result = host.run('stack add storage controller adapter=1 enclosure=2 slot=3 arrayid=4 raidlevel=0')
assert result.rc == 255
assert result.stderr == 'error - disk specification for "1/2/3" already exists\n'
def test_invalid_hotspare(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 hotspare=test')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "hotspare" parameter must be an integer
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_negative_hotspare(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 hotspare=-1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "hotspare" parameter must be >= 0
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_duplicate_hotspare(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2 hotspare=1,1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "hotspare" parameter "1" is listed twice
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_existing_hotspare(self, host):
# Add it once
result = host.run('stack add storage controller adapter=1 enclosure=2 hotspare=3 arrayid=4 raidlevel=0')
assert result.rc == 0
# Add it again
result = host.run('stack add storage controller adapter=1 enclosure=2 hotspare=3 arrayid=4 raidlevel=0')
assert result.rc == 255
assert result.stderr == 'error - disk specification for "1/2/3" already exists\n'
def test_hotspare_overlap_slots(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=1 hotspare=1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "hotspare" parameter "1" is listed in slots
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_invalid_arrayid(self, host):
result = host.run('stack add storage controller arrayid=test raidlevel=0 slot=2 enclosure=1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "arrayid" parameter must be an integer
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_negative_arrayid(self, host):
result = host.run('stack add storage controller arrayid=-1 raidlevel=0 slot=2 enclosure=1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "arrayid" parameter must be >= 1
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_arrayid_global_no_hotspares(self, host):
result = host.run('stack add storage controller arrayid=global raidlevel=0 slot=2 enclosure=1')
assert result.rc == 255
assert result.stderr == dedent('''\
error - "arrayid" parameter is "global" with no hotspares
{arrayid=string} [adapter=integer] [enclosure=integer] [hotspare=integer] [raidlevel=integer] [slot=integer]
''')
def test_minimal(self, host):
result = host.run('stack add storage controller arrayid=1 raidlevel=0 slot=2')
assert result.rc == 0
result = host.run('stack list storage controller output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'adapter': None,
'arrayid': '*',
'enclosure': None,
'options': '',
'raidlevel': '0',
'slot': '*'
},
{
'adapter': None,
'arrayid': 1,
'enclosure': None,
'options': '',
'raidlevel': '0',
'slot': 2
}
]
def test_all_params(self, host):
result = host.run(
'stack add storage controller raidlevel=0 enclosure=1 '
'adapter=2 arrayid=3 slot=4 hotspare=5 options=test'
)
assert result.rc == 0
result = host.run('stack list storage controller output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'adapter': None,
'arrayid': '*',
'enclosure': None,
'options': '',
'raidlevel': '0',
'slot': '*'
},
{
'adapter': 2,
'arrayid': 3,
'enclosure': 1,
'options': 'test',
'raidlevel': '0',
'slot': 4
},
{
'adapter': 2,
'arrayid': 3,
'enclosure': 1,
'options': 'test',
'raidlevel': 'hotspare',
'slot': 5
}
]
def test_global_hotspares(self, host):
result = host.run('stack add storage controller arrayid=global hotspare=4,5')
assert result.rc == 0
result = host.run('stack list storage controller output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'adapter': None,
'arrayid': '*',
'enclosure': None,
'options': '',
'raidlevel': '0',
'slot': '*'
},
{
'adapter': None,
'arrayid': 'global',
'enclosure': None,
'options': '',
'raidlevel': 'hotspare',
'slot': 4
},
{
'adapter': None,
'arrayid': 'global',
'enclosure': None,
'options': '',
'raidlevel': 'hotspare',
'slot': 5
}
]
def test_stars(self, host):
result = host.run('stack add storage controller arrayid=* enclosure=1 slot=* raidlevel=5')
assert result.rc == 0
result = host.run('stack list storage controller output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [
{
'adapter': None,
'arrayid': '*',
'enclosure': None,
'options': '',
'raidlevel': '0',
'slot': '*'
},
{
'adapter': None,
'arrayid': '*',
'enclosure': 1,
'options': '',
'raidlevel': '5',
'slot': '*'
}
]
| 34.167857 | 111 | 0.675865 | 1,215 | 9,567 | 5.281481 | 0.062551 | 0.089762 | 0.05875 | 0.081346 | 0.945457 | 0.945457 | 0.935796 | 0.935796 | 0.935796 | 0.922549 | 0 | 0.023912 | 0.173827 | 9,567 | 279 | 112 | 34.290323 | 0.787955 | 0.005122 | 0 | 0.616327 | 0 | 0.069388 | 0.556607 | 0 | 0 | 0 | 0 | 0 | 0.212245 | 1 | 0.093878 | false | 0 | 0.008163 | 0 | 0.106122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a2e4168ce03f9cde3b35b6c6a66f5dd8f886ee3c | 75 | py | Python | src/compas/hpc/algorithms/__init__.py | gonzalocasas/compas | 2fabc7e5c966a02d823fa453564151e1a1e7e3c6 | [
"MIT"
] | null | null | null | src/compas/hpc/algorithms/__init__.py | gonzalocasas/compas | 2fabc7e5c966a02d823fa453564151e1a1e7e3c6 | [
"MIT"
] | null | null | null | src/compas/hpc/algorithms/__init__.py | gonzalocasas/compas | 2fabc7e5c966a02d823fa453564151e1a1e7e3c6 | [
"MIT"
] | null | null | null | from .drx_numba import *
from .drx_numba import __all__ as a
__all__ = a
| 12.5 | 35 | 0.746667 | 13 | 75 | 3.538462 | 0.538462 | 0.304348 | 0.521739 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 75 | 5 | 36 | 15 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a2f537b48e32d665c57bfbbce1ab4e5074ca38dd | 89,288 | py | Python | deep_dss/helpers.py | adiraju21s/deep_dss | 360a08f5da38fdb7af9a8534702cc711b66a4343 | [
"MIT"
] | null | null | null | deep_dss/helpers.py | adiraju21s/deep_dss | 360a08f5da38fdb7af9a8534702cc711b66a4343 | [
"MIT"
] | 1 | 2020-07-12T00:58:25.000Z | 2020-07-12T03:16:34.000Z | deep_dss/helpers.py | adiraju21s/deep_dss | 360a08f5da38fdb7af9a8534702cc711b66a4343 | [
"MIT"
] | null | null | null | import numpy as np
import healpy as hp
import pandas as pd
from numba import jit
from sklearn.utils import shuffle
from deepsphere import experiment_helper
from deepsphere.data import LabeledDataset
# Constants
def set_constants():
"""
Sets constants for future functions (especially accelerated ones)
:return: NSIDE, NPIX, PIXEL_AREA (in arcmin^2), ORDER, BIAS, DENSITY_M, DENSITY_KG and ELLIP_SIGMA
"""
nside = 1024
npix = hp.nside2npix(nside)
pixel_area = hp.nside2pixarea(nside, degrees=True) * 3600
order = 2
bias = 1.54
density_m = 0.04377
density_kg = 10
ellip_sigma = 0.25
return nside, npix, pixel_area, order, bias, density_m, density_kg, ellip_sigma
(NSIDE, NPIX, PIXEL_AREA, ORDER, BIAS, DENSITY_M, DENSITY_KG, ELLIP_SIGMA) = set_constants()
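# For reference, the defaults above imply (for NSIDE = 1024):
#   NPIX = 12 * 1024 ** 2 = 12,582,912 pixels
#   PIXEL_AREA = (4 * pi / NPIX) sr ~ 11.8 arcmin^2 per pixel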
# MACHINE = "LOCAL"
# MACHINE = "BRIDGES"
def set_paths():
"""
Sets directory paths based on the machine being used
    :return: PATH_TO_INPUT, PATH_TO_OUTPUT, PATH_TO_COV_OUTPUT, PATH_TO_CHECKPOINTS and PATH_TO_VAL
"""
path_to_input = "../data/flaskv3/input/"
path_to_output = "../data/flaskv3/output/"
path_to_cov_output = "../data/flaskv4/output/"
path_to_checkpoints = ""
path_to_val = "../validation_101.npz"
return path_to_input, path_to_output, path_to_cov_output, path_to_checkpoints, path_to_val
(PATH_TO_INPUT, PATH_TO_OUTPUT, PATH_TO_COV_OUTPUT, PATH_TO_CHECKPOINTS, PATH_TO_VAL) = set_paths()
# C(l) helper functions
def path_to_cl(sigma8, name="f1z1f1z1", path_to_input=PATH_TO_INPUT):
"""
Returns relative path to FLASK input C(l) generated by trough_lenser
:param sigma8:Value of $\\sigma_8$ used to generate the C(l)s
:param name: Name of the C(l)
:param path_to_input: Path to flask101 input directory, ending in / (default assumes data folder in repo)
:return: relative path string to the appropriate C(l) file
"""
return path_to_input + "dss-{0}/dss-{0}-Cl-{1}.dat".format(round(sigma8, 5), name)
def load_cl_from_path(path, lmax=10000):
"""
Generate pandas dataframe for a given input C(l) file
:param path: path to C(l) file
:param lmax: maximum l value in C(l) file
:return: data frame containing vector of ls and corresponding C(l) values
"""
data = pd.read_csv(path, sep=' ', header=None)
data.columns = ['L', 'CL']
data.index = np.arange(lmax + 1)
return data
def load_cl_from_val(sigma8, lmax=9999, name="f1z1f1z1", path_to_input=PATH_TO_INPUT):
"""
Wrapper function to return pandas data frame for a specified C(l)
:param sigma8: Value of $\\sigma_8$ used to generate the C(l)s
:param lmax: maximum l value in C(l) file
:param name: Name of the C(l)
:param path_to_input: Path to flask101 input directory, ending in / (default assumes data folder in repo)
:return: data frame containing vector of ls and corresponding C(l) values
"""
return load_cl_from_path(path_to_cl(sigma8, name=name, path_to_input=path_to_input), lmax=lmax)
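# Usage sketch (assumes the corresponding FLASK input file exists on disk):
#   cl = load_cl_from_val(0.78)
#   ell, c_ell = cl["L"].values, cl["CL"].values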
# Descriptions of different data sets
def full_cosmologies_list():
"""
Return the full list of $\\sigma_8$ values in the simulated data
    :return: A numpy array covering all 201 $\\sigma_8$ values in the flat prior
"""
return np.linspace(0.5, 1.2, num=201)
def lite_train_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in training lite
:return: A numpy array of 16 $\\sigma_8$ values
"""
# return np.array([0.5, 0.57, 0.71, 0.78, 0.92, 0.99, 1.13, 1.2])
return np.array([0.535, 0.605, 0.64, 0.675, 0.71, 0.78,
0.815, 0.85, 0.885, 0.955, 0.99, 1.025, 1.06,
1.13, 1.165, 1.2])
def lite_test_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in testing lite
:return: A numpy array of 4 $\\sigma_8$ values
"""
# return np.array([0.64, 0.85, 1.06])
return np.array([0.57, 0.745, 0.92, 1.095])
def q1_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in training Q1
    :return: A numpy array of 45 $\\sigma_8$ values
"""
return np.array([0.7345, 0.969, 0.6435, 0.654, 1.1895, 1.06, 1.109, 0.703,
1.032, 1.1615, 1.0705, 0.759, 0.5175, 0.885, 0.9515, 0.6295,
0.7415, 0.605, 0.5875, 0.7205, 0.7065, 1.1685, 0.773, 1.179,
0.577, 0.857, 1.1195, 1.123, 1.1965, 0.843, 0.7975, 0.5245,
1.172, 0.99, 0.892, 0.7485, 0.955, 0.528, 1.0845, 1.1265,
1.0005, 0.8535, 0.5, 0.9165, 0.5105])
def o1_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O1 (a subset of Q1)
    :return: A numpy array of 22 $\\sigma_8$ values
"""
return np.array([0.969, 0.654, 1.06, 0.703,
1.1615, 0.759, 0.885, 0.6295,
0.605, 0.7205, 1.1685, 1.179,
0.857, 1.123, 0.843, 0.5245,
0.99, 0.7485, 0.528, 1.1265,
0.8535, 0.9165])
def o2_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O2 (the complementary subset of Q1)
    :return: A numpy array of 23 $\\sigma_8$ values
"""
return np.array([0.7345, 0.6435, 1.1895, 1.109,
1.032, 1.0705, 0.5175, 0.9515,
0.7415, 0.5875, 0.7065, 0.773,
0.577, 1.1195, 1.1965, 0.7975,
1.172, 0.892, 0.955, 1.0845,
1.0005, 0.5, 0.5105])
def q2_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in training Q2
    :return: A numpy array of 45 $\\sigma_8$ values
"""
return np.array([0.5035, 1.025, 0.6995, 0.5385, 0.6015, 1.018, 0.829, 0.927,
0.7905, 1.095, 0.9655, 0.5595, 1.165, 0.6645, 0.724, 0.6155,
1.039, 0.78, 0.941, 0.633, 1.186, 1.193, 1.0635, 0.8675,
1.151, 0.8815, 1.0775, 0.563, 0.6925, 0.7555, 1.1755, 1.1335,
0.696, 0.801, 0.864, 0.598, 1.158, 0.8955, 0.5315, 1.0355,
0.899, 1.046, 0.542, 1.0215, 1.1825])
def o3_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O3 (a subset of Q2)
    :return: A numpy array of 22 $\\sigma_8$ values
"""
return np.array([1.025, 0.5385, 1.018, 0.927,
1.095, 0.5595, 0.6645, 0.6155,
0.78, 0.633, 1.193, 0.8675,
0.8815, 0.563, 0.7555, 1.1335,
0.801, 0.598, 0.8955, 1.0355,
1.046, 1.0215])
def o4_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O4 (the complementary subset of Q2)
    :return: A numpy array of 23 $\\sigma_8$ values
"""
return np.array([0.5035, 0.6995, 0.6015, 0.829,
0.7905, 0.9655, 1.165, 0.724,
1.039, 0.941, 1.186, 1.0635,
1.151, 1.0775, 0.6925, 1.1755,
0.696, 0.864, 1.158, 0.5315,
0.899, 0.542, 1.1825])
def q3_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in training Q3
    :return: A numpy array of 45 $\\sigma_8$ values
"""
return np.array([0.8395, 0.689, 0.8465, 0.6365, 1.0145, 0.6785, 1.004, 1.1055,
1.053, 0.57, 1.0495, 0.8255, 0.668, 1.137, 0.5805, 0.9235,
0.619, 0.661, 0.7695, 0.7765, 0.9375, 0.836, 0.591, 0.906,
1.1545, 0.8745, 0.766, 0.675, 0.8885, 0.9095, 1.102, 0.92,
0.556, 1.011, 1.0075, 0.6225, 0.5455, 0.6715, 0.626, 0.983,
0.738, 0.71, 0.8325, 0.962, 0.822])
def o5_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O5 (a subset of Q3)
    :return: A numpy array of 22 $\\sigma_8$ values
"""
return np.array([0.689, 0.6365, 0.6785, 1.1055,
0.57, 0.8255, 1.137, 0.9235,
0.661, 0.7765, 0.836, 0.906,
0.8745, 0.675, 0.9095, 0.92,
1.011, 0.6225, 0.6715, 0.983,
0.71, 0.962])
def o6_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O6 (the complementary subset of Q3)
    :return: A numpy array of 23 $\\sigma_8$ values
"""
return np.array([0.8395, 0.8465, 1.0145, 1.004,
1.053, 1.0495, 0.668, 0.5805,
0.619, 0.7695, 0.9375, 0.591,
1.1545, 0.766, 0.8885, 1.102,
0.556, 1.0075, 0.5455, 0.626,
0.738, 0.8325, 0.822])
def q4_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in training Q4
    :return: A numpy array of 45 $\\sigma_8$ values
"""
return np.array([1.1475, 0.5525, 0.85, 0.9445, 0.7835, 0.549, 0.794, 0.815,
0.9585, 0.6575, 0.9935, 1.067, 0.8605, 0.514, 1.0985, 1.0285,
0.612, 1.1405, 1.081, 0.948, 0.6085, 0.934, 0.731, 0.7275,
0.5945, 0.913, 0.787, 1.0425, 0.7135, 0.808, 1.074, 0.8185,
0.6505, 0.9725, 0.976, 0.9025, 0.8045, 0.584, 0.535, 0.717,
0.64, 1.1125, 0.745, 0.7625, 0.521])
def o7_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O7 (a subset of Q4)
    :return: A numpy array of 22 $\\sigma_8$ values
"""
return np.array([0.5525, 0.9445, 0.549, 0.815,
0.6575, 1.067, 0.514, 1.0285,
1.1405, 0.948, 0.934, 0.7275,
0.913, 1.0425, 0.808, 0.8185,
0.9725, 0.9025, 0.584, 0.717,
1.1125, 0.7625])
def o8_cosmologies_list():
"""
    Return the list of $\\sigma_8$ values used in training O8 (the complementary subset of Q4)
    :return: A numpy array of 23 $\\sigma_8$ values
"""
return np.array([1.1475, 0.85, 0.7835, 0.794,
0.9585, 0.9935, 0.8605, 1.0985,
0.612, 1.081, 0.6085, 0.731,
0.5945, 0.787, 0.7135, 1.074,
0.6505, 0.976, 0.8045, 0.535,
0.64, 0.745, 0.521])
def test_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in testing
:return: A numpy array of 21 $\\sigma_8$ values
"""
return np.array([1.144, 0.9865, 0.6855, 1.088, 1.116, 0.682, 1.0565, 0.752,
0.9305, 1.0915, 0.5665, 0.647, 0.871, 0.9795, 1.2, 1.13,
0.5735, 0.997, 0.878, 0.507, 0.8115])
def val_cosmologies_list():
"""
Return the list of $\\sigma_8$ values used in validation
:return: A numpy array of 11 $\\sigma_8$ values
"""
return np.array([1.144, 0.6855, 1.116, 1.0565,
0.9305, 0.5665, 0.871, 1.2,
0.5735, 0.878, 0.8115])
def cov_full_cosmologies_list():
"""
Return the list of all 200 IDs used for covariance.
:return: A numpy array of 200 ID values
"""
return np.linspace(201, 400, num=200).astype('int')
def cov_q1_cosmologies_list():
"""
Return the list of the first 40 IDs used for covariance.
:return: A numpy array of 40 ID values
"""
return np.linspace(201, 240, num=40).astype('int')
def cov_q2_cosmologies_list():
"""
Return the list of the second 40 IDs used for covariance.
:return: A numpy array of 40 ID values
"""
return np.linspace(241, 280, num=40).astype('int')
def cov_q3_cosmologies_list():
"""
Return the list of the third 40 IDs used for covariance.
:return: A numpy array of 40 ID values
"""
return np.linspace(281, 320, num=40).astype('int')
def cov_q4_cosmologies_list():
"""
Return the list of the fourth 40 IDs used for covariance.
:return: A numpy array of 40 ID values
"""
return np.linspace(321, 360, num=40).astype('int')
def cov_q5_cosmologies_list():
"""
Return the list of the fifth 40 IDs used for covariance.
:return: A numpy array of 40 ID values
"""
return np.linspace(361, 400, num=40).astype('int')
def cosmologies_list(dataset):
"""
Returns list of $\\sigma_8$ values for an input data set
:param dataset: Name of data set
    :return: Numpy array of $\\sigma_8$ values (or covariance realization IDs); the length depends on the chosen set
"""
if dataset == "Q1":
return q1_cosmologies_list()
if dataset == "Q2":
return q2_cosmologies_list()
if dataset == "Q3":
return q3_cosmologies_list()
if dataset == "Q4":
return q4_cosmologies_list()
if dataset == "O1":
return o1_cosmologies_list()
if dataset == "O2":
return o2_cosmologies_list()
if dataset == "O3":
return o3_cosmologies_list()
if dataset == "O4":
return o4_cosmologies_list()
if dataset == "O5":
return o5_cosmologies_list()
if dataset == "O6":
return o6_cosmologies_list()
if dataset == "O7":
return o7_cosmologies_list()
if dataset == "O8":
return o8_cosmologies_list()
if dataset == "TEST":
return test_cosmologies_list()
if dataset == "VAL":
return val_cosmologies_list()
if dataset == "FULL":
return full_cosmologies_list()
if dataset == "TRAINLITE":
return lite_train_cosmologies_list()
if dataset == "TESTLITE":
return lite_test_cosmologies_list()
if dataset == "COVFULL":
return cov_full_cosmologies_list()
if dataset == "COVQ1":
return cov_q1_cosmologies_list()
if dataset == "COVQ2":
return cov_q2_cosmologies_list()
if dataset == "COVQ3":
return cov_q3_cosmologies_list()
if dataset == "COVQ4":
return cov_q4_cosmologies_list()
if dataset == "COVQ5":
return cov_q5_cosmologies_list()
    raise ValueError("Invalid data set specification: {0}".format(dataset))
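# Usage sketch:
#   cosmologies_list("Q1")     # 45 training sigma_8 values
#   cosmologies_list("COVQ1")  # covariance realization IDs 201-240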
def dataset_names(val=False):
"""
    Returns the names of the primary data sets (Q1-Q4 and TEST, plus VAL if requested)
:param val: Whether or not to include the validation set
:return: List of strings
"""
if val:
return ["Q1", "Q2", "Q3", "Q4", "TEST", "VAL"]
return ["Q1", "Q2", "Q3", "Q4", "TEST"]
# Map loading functions
def path_to_map(sigma8, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT, gaussian=False, covariance=False):
"""
Return relative path to Healpix map given $\\sigma_8$
:param covariance: If True, returns maps from covariance set of maps.
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
    :param sigma8: Value of $\\sigma_8$ from which the map was generated. Refers to ID if covariance is True.
:param name: Name of the map file
:param path_to_output: Relative path to the FLASK output directory
:return: String with path
"""
if gaussian is True:
return path_to_output + "dss-gauss-{0}/dss-gauss-{0}-{1}".format(round(sigma8, 5), name)
if covariance is True:
return PATH_TO_COV_OUTPUT + "dss-{0}/dss-{0}-{1}".format(sigma8, name)
return path_to_output + "dss-{0}/dss-{0}-{1}".format(round(sigma8, 5), name)
def load_map_by_path(path, field=0, nest=True):
"""
Returns HEALPIX map located at a given path
:param path: relative path to the map
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixellization, False for RING
:return: Numpy array with map
"""
return hp.read_map(path, field=field, nest=nest)
def load_map_by_val(sigma8, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT, field=0, nest=True, gaussian=False,
covariance=False):
"""
Returns HEALPIX map for FLASK realization of a given $\\sigma_8$ value
:param covariance: If True, returns maps from covariance set of maps.
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param sigma8: Value of $\\sigma_8$
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixelization, False for RING
:return: Numpy array with map
"""
return load_map_by_path(
path_to_map(sigma8, name=name, path_to_output=path_to_output, gaussian=gaussian, covariance=covariance),
field=field, nest=nest)
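# Usage sketch (assumes the FLASK outputs for this sigma_8 exist on disk):
#   delta_g = load_map_by_val(0.78)  # NEST-ordered HEALPix map of delta_g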
def load_shear_maps_by_val(sigma8, coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True, gaussian=False,
covariance=False):
"""
Returns list of two HEALPIX maps (for $\\gamma_1$ and $\\gamma_2$) for FLASK realization of a
given $\\sigma_8$ value
:param covariance: If True, returns maps from covariance set of maps.
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param sigma8: Value of $\\sigma_8$
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:return: List of two Numpy arrays
"""
if coadd:
return [load_map_by_val(sigma8, name="kappa-gamma-f2z1.fits.gz", path_to_output=path_to_output, field=i + 1,
nest=nest, gaussian=gaussian, covariance=covariance)
+ load_map_by_val(sigma8, name="kappa-gamma-f2z2.fits.gz", path_to_output=path_to_output, field=i + 1,
nest=nest, gaussian=gaussian, covariance=covariance)
for i in range(2)]
if corr:
return [load_map_by_val(sigma8, name="kappa-gamma-f2z1.fits.gz", path_to_output=path_to_output, field=i + 1,
nest=nest, gaussian=gaussian, covariance=covariance) for i in range(2)]
return [load_map_by_val(sigma8, name="kappa-gamma-f2z2.fits.gz", path_to_output=path_to_output, field=i + 1,
nest=nest, gaussian=gaussian, covariance=covariance) for i in range(2)]
def load_convergence_map_by_val(sigma8, coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
gaussian=False, covariance=False):
"""
Returns HEALPIX map (for $\\kappa$) for FLASK realization of a given $\\sigma_8$ value
:param covariance: If True, returns maps from covariance set of maps.
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param sigma8: Value of $\\sigma_8$
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:return: Numpy array storing HEALPIX map
"""
if coadd:
return load_map_by_val(sigma8, name="kappa-gamma-f2z1.fits.gz", path_to_output=path_to_output, field=0,
nest=nest, gaussian=gaussian, covariance=covariance) \
+ load_map_by_val(sigma8, name="kappa-gamma-f2z2.fits.gz", path_to_output=path_to_output, field=0,
nest=nest, gaussian=gaussian, covariance=covariance)
if corr:
return load_map_by_val(sigma8, name="kappa-gamma-f2z1.fits.gz", path_to_output=path_to_output, field=0,
nest=nest, gaussian=gaussian, covariance=covariance)
return load_map_by_val(sigma8, name="kappa-gamma-f2z2.fits.gz", path_to_output=path_to_output, field=0,
nest=nest, gaussian=gaussian, covariance=covariance)
@jit(nopython=True)
def accelerated_noiseless_counts(m, pixarea=PIXEL_AREA,
density=DENSITY_M, density_0=DENSITY_M, multiplier=1.0, bias=BIAS, free_bias=False,
prior_low=0.94, prior_high=2.86, nside=NSIDE, order=ORDER,
normalize=False):
"""
Returns new version of input map without any Poissonian shot noise applied.
:param order: Splitting order of HEALPIX maps.
:param nside: NSIDE parameter of HEALPIX maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
    :param free_bias: If True, applies evenly spaced bias values from prior_low to prior_high, one to each patch
:param m: FLASK output map of galaxy density contrast, $\\delta_g$
:param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:return: A noiseless output map
"""
x = np.zeros(m.shape)
if free_bias and not normalize:
num_partials = 12 * order ** 2
partial_size = (nside // order) ** 2
for p in range(num_partials):
b = prior_low + p * (prior_high - prior_low) / (num_partials - 1)
for i in range(partial_size):
x[p * partial_size + i] = multiplier * (density_0 / density) * (
density * pixarea * (1 + max(b * m[p * partial_size + i], -1)))
return x
for i in range(x.size):
if normalize:
x[i] = multiplier * (density_0 / density) * (density * pixarea * (1 + max(m[i], -1)))
else:
x[i] = multiplier * (density_0 / density) * (density * pixarea * (1 + max(bias * m[i], -1)))
return x
@jit(nopython=True)
def accelerated_noiseless_shear(g, multiplier=1.0, zscale=False, npix=NPIX):
"""
Returns new version of input maps without any Gaussian shape noise applied.
:param zscale: If True, rescales to r.m.s units.
:param npix: Number of pixels in map
:param g: FLASK output maps of lensing shear, $\\gamma_1$ and $\\gamma_2$
:param multiplier: Scale factor used to amplify noise distribution
:return: A noiseless list of output maps
"""
x = [np.zeros(g[i].shape) for i in range(2)]
for c in range(2):
for i in range(npix):
x[c][i] = multiplier * g[c][i]
if zscale:
for c in range(2):
x[c] = (x[c] - np.mean(x[c])) / np.std(x[c])
return x
@jit(nopython=True)
def accelerated_noiseless_convergence(k, multiplier=1.0, zscale=False, npix=NPIX):
"""
Returns new version of input map without any Gaussian shape noise applied.
:param zscale: If True, rescales to r.m.s units.
:param npix: Number of pixels in map
:param k: FLASK output maps of lensing convergence, $\\kappa$
:param multiplier: Scale factor used to amplify noise distribution
:return: A noiseless output map
"""
x = np.zeros(k.shape)
for i in range(npix):
x[i] = multiplier * k[i]
if zscale:
x = (x - np.mean(x)) / np.std(x)
return x
@jit(nopython=True)
def accelerated_poissonian_shot_noise(m, pixarea=PIXEL_AREA,
density=DENSITY_M, density_0=DENSITY_M, multiplier=1.0, bias=BIAS,
free_bias=False, nside=NSIDE, order=ORDER,
prior_low=0.94, prior_high=2.86,
normalize=False):
"""
Returns new version of input map with a specified level of Poissonian shot noise applied.
:param order: Splitting order of HEALPIX maps.
:param nside: NSIDE parameter of HEALPIX maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies 48 evenly spaced values from prior_low to prior_high, one to each patch
:param m: FLASK output map of galaxy density contrast, $\\delta_g$
:param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:return: A noisy Poisson-sampled output map
"""
x = np.zeros(m.shape)
if free_bias and not normalize:
num_partials = 12 * order ** 2
partial_size = (nside // order) ** 2
for p in range(num_partials):
b = prior_low + p * (prior_high - prior_low) / (num_partials - 1)
for i in range(partial_size):
x[p * partial_size + i] = multiplier * (density_0 / density) * np.random.poisson(
density * pixarea * (1 + max(b * m[p * partial_size + i], -1)))
return x
for i in range(x.size):
if normalize:
x[i] = multiplier * (density_0 / density) * np.random.poisson(density * pixarea * (1 + max(m[i], -1)))
else:
x[i] = multiplier * (density_0 / density) * np.random.poisson(
density * pixarea * (1 + max(bias * m[i], -1)))
return x
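# Sanity check on the defaults: with density = DENSITY_M = 0.04377 arcmin^-2
# and pixarea = PIXEL_AREA ~ 11.8 arcmin^2, the Poisson mean at delta_g = 0 is
# density * pixarea ~ 0.52 counts per pixel.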
@jit(nopython=True)
def accelerated_gaussian_shear_noise(g, npix=NPIX, pixarea=PIXEL_AREA,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, zscale=False):
"""
Returns new version of input map with a specified level of Gaussian shape noise applied.
:param zscale: If True, rescales to r.m.s units.
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param g: FLASK output maps of lensing shear ($\\gamma_1$ and $\\gamma_2$)
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:return: A list of two noisy Gaussian-sampled output maps
"""
x = [np.zeros(g[i].shape) for i in range(2)]
for c in range(2):
for i in range(npix):
x[c][i] = multiplier * (density_0 / density) * np.random.normal(loc=g[c][i], scale=ellip_sigma / np.sqrt(
pixarea * density))
if zscale:
for c in range(2):
x[c] = (x[c] - np.mean(x[c])) / np.std(x[c])
return x
@jit(nopython=True)
def accelerated_gaussian_convergence_noise(k, npix=NPIX, pixarea=PIXEL_AREA,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, zscale=False):
"""
Returns new version of input map with a specified level of Gaussian shape noise applied.
(Modeled after https://arxiv.org/pdf/2007.06529.pdf)
:param zscale: If True, rescales to r.m.s units.
:param k: FLASK output maps of lensing convergence ($\\kappa$)
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:return: A noisy Gaussian-sampled output map
"""
x = np.zeros(k.shape)
for i in range(npix):
x[i] = multiplier * (density_0 / density) * np.random.normal(loc=k[i], scale=ellip_sigma / np.sqrt(
2 * pixarea * density))
if zscale:
x = (x - np.mean(x)) / np.std(x)
return x
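# Sanity check on the defaults: the per-pixel noise r.m.s. is
# ellip_sigma / sqrt(2 * pixarea * density) = 0.25 / sqrt(2 * 11.8 * 10) ~ 0.016
# in convergence units.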
def count_map_by_val(sigma8, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT, field=0, nest=True,
pixarea=PIXEL_AREA, density=DENSITY_M, density_0=DENSITY_M, multiplier=1.0,
bias=BIAS, free_bias=False, prior_low=0.94, prior_high=2.86, normalize=False, noiseless=False,
gaussian=False, covariance=False, nside=NSIDE, order=ORDER):
"""
Loads galaxy density contrast map for a given $\\sigma_8$ and applies Poissonian shot noise
:param order: Splitting order of HEALPIX maps.
:param nside: NSIDE parameter of HEALPIX maps.
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
    :param free_bias: If True, applies evenly spaced bias values from prior_low to prior_high, one to each patch
:param sigma8: Value of $\\sigma_8$
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
    :param nest: True for NEST pixelization, False for RING
    :param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless: Does not take Poisson draw if True
:return: A noisy Poisson-sampled output map
"""
if noiseless:
return accelerated_noiseless_counts(
load_map_by_val(sigma8, name=name, path_to_output=path_to_output, field=field, nest=nest,
gaussian=gaussian, covariance=covariance),
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier, bias=bias, nside=nside,
normalize=normalize, free_bias=free_bias, prior_low=prior_low, prior_high=prior_high, order=order)
return accelerated_poissonian_shot_noise(
load_map_by_val(sigma8, name=name, path_to_output=path_to_output, field=field, nest=nest, gaussian=gaussian,
covariance=covariance),
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier, bias=bias, nside=nside,
normalize=normalize, free_bias=free_bias, prior_low=prior_low, prior_high=prior_high, order=order)
def shear_maps_by_val(sigma8, coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
pixarea=PIXEL_AREA, zscale=False, npix=NPIX,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless=False, gaussian=False, covariance=False):
"""
Loads lensing shear maps for a given $\\sigma_8$ and applies Gaussian shape noise
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param npix: Number of pixels in map
:param noiseless: Does not take Gaussian draw if True
:param sigma8: Value of $\\sigma_8$
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:return: A list of two noisy Gaussian-sampled output maps
"""
if noiseless:
return accelerated_noiseless_shear(
load_shear_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
gaussian=gaussian, covariance=covariance),
multiplier=multiplier, zscale=zscale, npix=npix)
return accelerated_gaussian_shear_noise(
load_shear_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
gaussian=gaussian, covariance=covariance), zscale=zscale, npix=npix,
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier, ellip_sigma=ellip_sigma)
def convergence_map_by_val(sigma8, coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
pixarea=PIXEL_AREA, zscale=False, npix=NPIX,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless=False, gaussian=False, covariance=False):
"""
Loads lensing convergence map for a given $\\sigma_8$ and applies Gaussian shape noise
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param npix: Number of pixels in map
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param noiseless: Does not take Gaussian draw if True
:param sigma8: Value of $\\sigma_8$
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:return: A noisy Gaussian-sampled output map
"""
if noiseless:
return accelerated_noiseless_convergence(
load_convergence_map_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
gaussian=gaussian, covariance=covariance),
multiplier=multiplier, zscale=zscale, npix=npix)
return accelerated_gaussian_convergence_noise(
load_convergence_map_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
gaussian=gaussian, covariance=covariance),
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier, ellip_sigma=ellip_sigma,
zscale=zscale, npix=npix)
def split_map(m, order=ORDER, nest=True):
"""
Returns Numpy array of partial-sky Healpix realizations split from an input full-sky map
:param m: Full-sky Healpix map
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param nest: True if "NEST" pixelization, False if "RING"
:return: Numpy array of split maps
"""
return experiment_helper.hp_split(m, order, nest=nest)
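# With the defaults (NSIDE = 1024, ORDER = 2) this yields 12 * ORDER ** 2 = 48
# partial-sky patches of (NSIDE // ORDER) ** 2 = 262,144 pixels each.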
def split_count_maps_by_val(sigma8, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT, field=0,
nest=True, pixarea=PIXEL_AREA, density=DENSITY_M, density_0=DENSITY_M,
multiplier=1.0, gaussian=False, free_bias=False, prior_low=0.94,
prior_high=2.86, covariance=False, nside=NSIDE,
bias=BIAS, normalize=False, order=ORDER, noiseless=False):
"""
Generates partial-sky maps with applied Poissonian shot noise for a given $\\sigma_8$
:param nside: NSIDE parameter of HEALPIX maps.
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map
:param sigma8: Value of $\\sigma_8$
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
    :param nest: True for NEST pixelization, False for RING
    :param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param noiseless: Does not take Poisson draw if True
:return: Numpy array of split, (rescaled) Poisson-sampled maps
"""
return split_map(count_map_by_val(sigma8, name=name, path_to_output=path_to_output, field=field, nest=nest,
pixarea=pixarea, density=density, density_0=density_0, nside=nside, order=order,
multiplier=multiplier, gaussian=gaussian, free_bias=free_bias,
prior_low=prior_low, prior_high=prior_high, covariance=covariance,
bias=bias, normalize=normalize, noiseless=noiseless), order=order, nest=nest)
def split_shear_maps_by_val(sigma8, coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
pixarea=PIXEL_AREA, gaussian=False, covariance=False, zscale=False,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0, npix=NPIX,
ellip_sigma=ELLIP_SIGMA, noiseless=False, order=ORDER):
"""
Generates partial-sky shear maps with applied Gaussian shape noise for a given $\\sigma_8$
:param sigma8: Value of $\\sigma_8$
:param npix: Number of pixels in map
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:return: Numpy array of split, (rescaled) Gaussian-sampled maps
"""
g = shear_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier,
ellip_sigma=ellip_sigma, noiseless=noiseless, gaussian=gaussian, covariance=covariance,
zscale=zscale, npix=npix)
return [split_map(g[i], order=order, nest=nest) for i in range(2)]
def split_convergence_maps_by_val(sigma8, coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
pixarea=PIXEL_AREA, gaussian=False, covariance=False, zscale=False,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0, npix=NPIX,
ellip_sigma=ELLIP_SIGMA, noiseless=False, order=ORDER):
"""
Generates partial-sky convergence maps with applied Gaussian shape noise for a given $\\sigma_8$
:param sigma8: Value of $\\sigma_8$
:param npix: Number of pixels in map
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:return: Numpy array of split, (rescaled) Gaussian-sampled maps
"""
return split_map(
convergence_map_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
covariance=covariance, zscale=zscale, npix=npix,
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier,
ellip_sigma=ellip_sigma, noiseless=noiseless, gaussian=gaussian), order=order, nest=nest)
def split_lensing_maps_by_val(sigma8, config="g", coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
gaussian=False, covariance=False, zscale=False,
pixarea=PIXEL_AREA, npix=NPIX,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless=False, order=ORDER):
"""
Generates a set of partial-sky lensing maps with applied Gaussian shape noise for a given $\\sigma_8$
:param sigma8: Value of $\\sigma_8$
:param npix: Number of pixels in map
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear
:return: Numpy array of split, (rescaled) Gaussian-sampled maps
"""
if config == "g":
g = split_shear_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
covariance=covariance, zscale=zscale, npix=npix,
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier,
ellip_sigma=ellip_sigma, noiseless=noiseless, order=order, gaussian=gaussian)
return np.stack((g[0], g[1]), axis=2)
if config == "k":
return split_convergence_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output,
nest=nest, covariance=covariance,
gaussian=gaussian, zscale=zscale,
pixarea=pixarea, density=density, density_0=density_0,
multiplier=multiplier, npix=npix,
ellip_sigma=ellip_sigma, noiseless=noiseless, order=order)
if config == "kg":
g = split_shear_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
covariance=covariance, zscale=zscale, npix=npix,
pixarea=pixarea, density=density, density_0=density_0, multiplier=multiplier,
ellip_sigma=ellip_sigma, noiseless=noiseless, order=order, gaussian=gaussian)
k = split_convergence_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, nest=nest,
gaussian=gaussian, covariance=covariance, zscale=zscale,
pixarea=pixarea, density=density, density_0=density_0,
multiplier=multiplier, npix=npix,
ellip_sigma=ellip_sigma, noiseless=noiseless, order=order)
return np.stack((g[0], g[1], k), axis=2)
    raise ValueError("Unknown config %r in deep_dss.utils.split_lensing_maps_by_val." % config)
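
# Channel-stacking sketch for split_lensing_maps_by_val (illustrative; the
# helper name is hypothetical and FLASK outputs for sigma8 must exist).
def _demo_split_lensing_shapes(sigma8=0.8):
    """Hypothetical demo: config="kg" stacks the shear pair plus convergence."""
    kg = split_lensing_maps_by_val(sigma8, config="kg")
    # expected shape: (12 * ORDER**2, (NSIDE // ORDER)**2, 3)
    return np.shape(kg)
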
def split_count_maps_by_vals(sigma8s, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT, field=0,
nest=True, pixarea=PIXEL_AREA, density=DENSITY_M, density_0=DENSITY_M,
multiplier=1.0, gaussian=False, free_bias=False, prior_low=0.94,
prior_high=2.86, bias=BIAS, normalize=False, noiseless=False, order=ORDER,
scramble=False, covariance=False, nside=NSIDE,
ground_truths=True, reshape_x=False, reshape_y=True, deepsphere_dataset=False):
"""
Generates stacked array of partial-sky Poisson-sampled maps for a list of $\\sigma_8$ values
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param nside: NSIDE parameter of HEALPIX maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map.
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param sigma8s: List of $\\sigma_8$ values
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
    :param nest: True for NEST pixelization, False for RING
    :param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless: Does not take Poisson draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
    :return: DeepSphere LabeledDataset if deepsphere_dataset=True, maps and labels if ground_truths=True,
        stacked Numpy array of split, (rescaled) Poisson-sampled maps otherwise
"""
num_partials = 12 * order ** 2
partial_size = (nside // order) ** 2
x = np.empty((0, partial_size))
for sigma8 in sigma8s:
m = split_count_maps_by_val(sigma8, name=name, path_to_output=path_to_output, field=field,
nest=nest, pixarea=pixarea, density=density, density_0=density_0,
multiplier=multiplier, gaussian=gaussian, covariance=covariance,
free_bias=free_bias, nside=nside,
prior_low=prior_low, prior_high=prior_high,
bias=bias, normalize=normalize, noiseless=noiseless, order=order)
x = np.vstack((x, m))
if reshape_x:
x = np.reshape(x, (len(sigma8s) * num_partials, partial_size, 1))
if free_bias:
biases = np.linspace(prior_low, prior_high, num=num_partials)
else:
biases = bias * np.ones(num_partials)
if ground_truths:
y = np.zeros((len(sigma8s) * num_partials, 2))
for i in range(len(sigma8s)):
y[i * num_partials:(i + 1) * num_partials, 0] = sigma8s[i]
y[i * num_partials:(i + 1) * num_partials, 1] = np.copy(biases)
if reshape_y:
y = np.reshape(y, (len(sigma8s) * num_partials, 2))
if scramble:
(x, y) = shuffle(x, y, random_state=0)
if deepsphere_dataset:
return LabeledDataset(x, y)
return {"x": x, "y": y}
if scramble:
x = shuffle(x, random_state=0)
return x
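
# Label-layout sketch for split_count_maps_by_vals (illustrative; the helper
# name is hypothetical and FLASK outputs for each sigma8 must exist).
def _demo_count_labels(sigma8s=(0.7, 0.8)):
    """Hypothetical demo: one (sigma8, bias) label row per partial map."""
    data = split_count_maps_by_vals(list(sigma8s))
    # data["x"]: (len(sigma8s) * 12 * ORDER**2, (NSIDE // ORDER)**2)
    # data["y"]: (len(sigma8s) * 12 * ORDER**2, 2), columns (sigma8, bias)
    return data["x"].shape, data["y"].shape
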
def lensing_channels(config):
"""
Returns number of channels associated with a given lensing config string
:param config: Lensing config string
:return: int number of channels
"""
if config == "g":
return 2
if config == "k":
return 1
if config == "kg":
return 3
if config == "":
return 0
    raise ValueError("Unknown config %r in deep_dss.utils.lensing_channels." % config)
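
# Quick reference for the config-string mapping above (illustrative):
#   lensing_channels("g")  -> 2  two shear components
#   lensing_channels("k")  -> 1  convergence only
#   lensing_channels("kg") -> 3  shear pair plus convergence
#   lensing_channels("")   -> 0  no lensing channels
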
def split_lensing_maps_by_vals(sigma8s, config="g", coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT, nest=True,
nside=NSIDE, gaussian=False, covariance=False,
pixarea=PIXEL_AREA, zscale=False, npix=NPIX,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless=False, order=ORDER, scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True, deepsphere_dataset=False):
"""
Generates stacked array of partial-sky Gaussian-sampled maps for a list of $\\sigma_8$ values
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param npix: Number of pixels in map
:param zscale: If True, rescales to r.m.s units.
:param sigma8s: List of $\\sigma_8$ values
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param nside: NSIDE parameter of HEALPIX maps.
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: DeepSphere LabeledDataset if deepsphere_dataset=True, maps and labels if ground_truths=True,
        stacked Numpy array of split, (rescaled) Gaussian-sampled maps otherwise
"""
channels = lensing_channels(config)
num_partials = 12 * order ** 2
partial_size = (nside // order) ** 2
if channels == 1:
x = np.empty((0, partial_size))
else:
x = np.empty((0, partial_size, channels))
for sigma8 in sigma8s:
kg = split_lensing_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output, npix=npix,
nest=nest, pixarea=pixarea, density=density, density_0=density_0, zscale=zscale,
multiplier=multiplier, ellip_sigma=ellip_sigma, covariance=covariance,
noiseless=noiseless, order=order, config=config, gaussian=gaussian)
x = np.vstack((x, kg))
if channels == 1 and reshape_x:
x = np.reshape(x, (len(sigma8s) * num_partials, partial_size, 1))
if ground_truths:
y = np.zeros(len(sigma8s) * num_partials)
for i in range(len(sigma8s)):
y[i * num_partials:(i + 1) * num_partials] = sigma8s[i]
if reshape_y:
y = np.reshape(y, (len(sigma8s) * num_partials, 1))
if scramble:
(x, y) = shuffle(x, y, random_state=0)
if deepsphere_dataset:
return LabeledDataset(x, y)
return {"x": x, "y": y}
if scramble:
x = shuffle(x, random_state=0)
return x
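
# Ground-truth layout sketch for split_lensing_maps_by_vals (illustrative;
# the helper name is hypothetical). Unlike the counts case, the labels carry
# sigma8 only, one value per partial map.
def _demo_lensing_labels(sigma8s=(0.7, 0.8)):
    """Hypothetical demo: x/y shapes for the shear-only config."""
    data = split_lensing_maps_by_vals(list(sigma8s), config="g")
    # data["x"]: (len(sigma8s) * 12 * ORDER**2, (NSIDE // ORDER)**2, 2)
    # data["y"]: (len(sigma8s) * 12 * ORDER**2, 1) with reshape_y=True
    return data["x"].shape, data["y"].shape
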
def split_count_and_lensing_maps_by_vals(sigma8s, config="g", name="map-f1z1.fits.gz",
path_to_output=PATH_TO_OUTPUT,
field=0, covariance=False, zscale=False,
nest=True, npix=NPIX, pixarea=PIXEL_AREA, density_m=DENSITY_M,
density_m_0=DENSITY_M, multiplier_m=1.0, gaussian=False, free_bias=False,
prior_low=0.94, prior_high=2.86, nside=NSIDE,
bias=BIAS, normalize=False, noiseless_m=False, coadd=True, corr=None,
density_kg=DENSITY_KG, density_kg_0=DENSITY_KG, multiplier_kg=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless_kg=False, order=ORDER, scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True,
deepsphere_dataset=False):
"""
Generates stacked array of partial-sky Poisson and Gaussian-sampled maps for a list of $\\sigma_8$ values
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param nside: NSIDE parameter of HEALPIX maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map
:param sigma8s: List of $\\sigma_8$ values
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear, with "c" added
to the beginning for counts
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixelization, False for RING
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density_m: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_m_0: Baseline tracer galaxy density, in arcmin^2, to scale distribution by
:param multiplier_m: Scale factor used to amplify shot noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless_m: Does not take Poisson draw if True
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param density_kg: Source galaxy density, in arcmin^2, to use for noise application
:param density_kg_0: Baseline source galaxy density, in arcmin^2, to scale distribution by
:param multiplier_kg: Scale factor used to amplify lensing noise distributions
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless_kg: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: DeepSphere LabeledDataset if deepsphere_dataset=True, maps and labels if ground_truths=True,
        stacked Numpy array of split, (rescaled) Poisson and Gaussian-sampled maps otherwise
"""
counts = 0
num_partials = 12 * order ** 2
partial_size = (nside // order) ** 2
if config[0] == "c":
counts = 1
channels = 1 + lensing_channels(config[1:])
if free_bias:
biases = np.linspace(prior_low, prior_high, num=num_partials)
else:
biases = bias * np.ones(num_partials)
else:
channels = lensing_channels(config)
if channels == 1:
x = np.empty((0, partial_size))
else:
x = np.empty((0, partial_size, channels))
for sigma8 in sigma8s:
if counts == 0:
kg = split_lensing_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output,
nest=nest, pixarea=pixarea, density=density_kg, zscale=zscale,
density_0=density_kg_0, covariance=covariance, npix=npix,
multiplier=multiplier_kg, ellip_sigma=ellip_sigma,
noiseless=noiseless_kg, order=order, config=config, gaussian=gaussian)
x = np.vstack((x, kg))
else:
c = split_count_maps_by_val(sigma8, name=name, path_to_output=path_to_output, field=field,
nest=nest, pixarea=pixarea, density=density_m,
density_0=density_m_0, covariance=covariance,
multiplier=multiplier_m, nside=nside,
bias=bias, normalize=normalize, noiseless=noiseless_m,
order=order, gaussian=gaussian, free_bias=free_bias,
prior_low=prior_low, prior_high=prior_high)
if channels == 1:
x = np.vstack((x, c))
else:
c = np.reshape(c, (num_partials, partial_size, 1))
kg = split_lensing_maps_by_val(sigma8, coadd=coadd, corr=corr, path_to_output=path_to_output,
nest=nest, pixarea=pixarea, density=density_kg, zscale=zscale,
density_0=density_kg_0, covariance=covariance, npix=npix,
multiplier=multiplier_kg, ellip_sigma=ellip_sigma,
noiseless=noiseless_kg, order=order, config=config[1:],
gaussian=gaussian)
if channels - counts == 1:
kg = np.reshape(kg, (num_partials, partial_size, 1))
x = np.vstack((x, np.concatenate((c, kg), axis=2)))
if channels == 1 and reshape_x:
x = np.reshape(x, (len(sigma8s) * num_partials, partial_size, 1))
if ground_truths:
if counts == 0:
y = np.zeros(len(sigma8s) * num_partials)
for i in range(len(sigma8s)):
y[i * num_partials:(i + 1) * num_partials] = sigma8s[i]
if counts == 1:
y = np.zeros((len(sigma8s) * num_partials, 2))
for i in range(len(sigma8s)):
y[i * num_partials:(i + 1) * num_partials, 0] = sigma8s[i]
y[i * num_partials:(i + 1) * num_partials, 1] = np.copy(biases)
if reshape_y and counts == 0:
y = np.reshape(y, (len(sigma8s) * num_partials, 1))
if scramble:
(x, y) = shuffle(x, y, random_state=0)
if deepsphere_dataset:
return LabeledDataset(x, y)
return {"x": x, "y": y}
if scramble:
x = shuffle(x, random_state=0)
return x
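
# Channel-concatenation sketch for split_count_and_lensing_maps_by_vals
# (illustrative; the helper name is hypothetical). A leading "c" in config
# prepends the count channel to the lensing channels along axis 2.
def _demo_count_and_lensing_channels(sigma8s=(0.8,)):
    """Hypothetical demo: config="cg" yields 1 count + 2 shear channels."""
    data = split_count_and_lensing_maps_by_vals(list(sigma8s), config="cg")
    # data["x"]: (len(sigma8s) * 12 * ORDER**2, (NSIDE // ORDER)**2, 3)
    # data["y"]: (len(sigma8s) * 12 * ORDER**2, 2), columns (sigma8, bias)
    return data["x"].shape, data["y"].shape
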
def split_count_maps_by_dataset(dataset, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT, field=0,
nest=True, pixarea=PIXEL_AREA, density=DENSITY_M, density_0=DENSITY_M,
multiplier=1.0, gaussian=False, free_bias=False, covariance=False,
prior_low=0.94, prior_high=2.86,
bias=BIAS, normalize=False, noiseless=False, order=ORDER, scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True, deepsphere_dataset=False):
"""
Generates stacked array of partial-sky Poisson-sampled maps for a given data set
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map
:param dataset: String name of data-set to be used
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixelization, False for RING
:param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless: Does not take Poisson draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: DeepSphere LabeledDataset if deepsphere_dataset=True, maps and labels if ground_truths=True,
        stacked Numpy array of split, (rescaled) Poisson-sampled maps otherwise
"""
return split_count_maps_by_vals(cosmologies_list(dataset), name=name, path_to_output=path_to_output,
field=field, covariance=covariance,
nest=nest, pixarea=pixarea, density=density, density_0=density_0,
multiplier=multiplier,
bias=bias, normalize=normalize, noiseless=noiseless, order=order,
scramble=scramble,
ground_truths=ground_truths, reshape_x=reshape_x, reshape_y=reshape_y,
deepsphere_dataset=deepsphere_dataset, gaussian=gaussian, free_bias=free_bias,
prior_low=prior_low, prior_high=prior_high)
def split_lensing_maps_by_dataset(dataset, config="g", coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT,
nest=True, covariance=False, zscale=False,
npix=NPIX, gaussian=False,
pixarea=PIXEL_AREA,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless=False, order=ORDER, scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True, deepsphere_dataset=False):
"""
Generates stacked array of partial-sky Gaussian-sampled maps for a given data-set
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param dataset: String name of data-set to be used
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: DeepSphere LabeledDataset if deepsphere_dataset=True, maps and labels if ground_truths=True,
        stacked Numpy array of split, (rescaled) Gaussian-sampled maps otherwise
"""
return split_lensing_maps_by_vals(cosmologies_list(dataset), config=config, coadd=coadd, corr=corr,
path_to_output=path_to_output, nest=nest,
npix=npix, covariance=covariance, zscale=zscale,
pixarea=pixarea,
density=density, density_0=density_0, multiplier=multiplier,
ellip_sigma=ellip_sigma, noiseless=noiseless, order=order, scramble=scramble,
ground_truths=ground_truths, reshape_x=reshape_x, reshape_y=reshape_y,
deepsphere_dataset=deepsphere_dataset, gaussian=gaussian)
def split_count_and_lensing_maps_by_dataset(dataset, config="g", name="map-f1z1.fits.gz",
path_to_output=PATH_TO_OUTPUT,
field=0, covariance=False, zscale=False,
nest=True, npix=NPIX, pixarea=PIXEL_AREA, density_m=DENSITY_M,
density_m_0=DENSITY_M,
multiplier_m=1.0, gaussian=False, free_bias=False,
prior_low=0.94, prior_high=2.86,
bias=BIAS, normalize=False, noiseless_m=False, coadd=True, corr=None,
density_kg=DENSITY_KG, density_kg_0=DENSITY_KG, multiplier_kg=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless_kg=False, order=ORDER,
scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True,
deepsphere_dataset=False):
"""
Generates stacked array of partial-sky Poisson and Gaussian-sampled maps for a given data-set
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map
:param dataset: String name of data-set to be used
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear,
with "c" added to the beginning for counts
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixelization, False for RING
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density_m: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_m_0: Baseline tracer galaxy density, in arcmin^2, to scale distribution by
:param multiplier_m: Scale factor used to amplify shot noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless_m: Does not take Poisson draw if True
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param density_kg: Source galaxy density, in arcmin^2, to use for noise application
:param density_kg_0: Baseline source galaxy density, in arcmin^2, to scale distribution by
:param multiplier_kg: Scale factor used to amplify lensing noise distributions
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless_kg: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: DeepSphere LabeledDataset if deepsphere_dataset=True, maps and labels if ground_truths=True,
        stacked Numpy array of split, (rescaled) Poisson and Gaussian-sampled maps otherwise
"""
return split_count_and_lensing_maps_by_vals(cosmologies_list(dataset), config=config, name=name,
path_to_output=path_to_output,
field=field, covariance=covariance, zscale=zscale,
nest=nest, npix=npix, pixarea=pixarea, density_m=density_m,
density_m_0=density_m_0,
multiplier_m=multiplier_m, gaussian=gaussian, free_bias=free_bias,
prior_low=prior_low, prior_high=prior_high,
bias=bias, normalize=normalize, noiseless_m=noiseless_m,
coadd=coadd,
corr=corr,
density_kg=density_kg, density_kg_0=density_kg_0,
multiplier_kg=multiplier_kg,
ellip_sigma=ellip_sigma, noiseless_kg=noiseless_kg, order=order,
scramble=scramble,
ground_truths=ground_truths, reshape_x=reshape_x,
reshape_y=reshape_y,
deepsphere_dataset=deepsphere_dataset)
def split_count_maps_by_datasets(val=False, name="map-f1z1.fits.gz", path_to_output=PATH_TO_OUTPUT,
field=0, covariance=False,
nest=True, npix=NPIX, pixarea=PIXEL_AREA, density=DENSITY_M, density_0=DENSITY_M,
multiplier=1.0,
bias=BIAS, gaussian=False, free_bias=False,
prior_low=0.94, prior_high=2.86, normalize=False, noiseless=False,
order=ORDER, scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True, deepsphere_dataset=False):
"""
Returns a data dictionary containing Poisson-sampled maps for each data-set
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map
:param val: If True, validation set is included in dataset_names()
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixelization, False for RING
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless: Does not take Poisson draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: Dictionary keyed by data-set name; each value is a DeepSphere LabeledDataset if
        deepsphere_dataset=True, maps and labels if ground_truths=True, or a stacked Numpy array of split,
        (rescaled) Poisson-sampled maps otherwise
"""
data = {}
for dataset in dataset_names(val=val):
data[dataset] = split_count_maps_by_dataset(dataset=dataset, name=name, path_to_output=path_to_output,
field=field, covariance=covariance,
nest=nest, npix=npix, pixarea=pixarea, density=density,
density_0=density_0, multiplier=multiplier, gaussian=gaussian,
                                                    free_bias=free_bias, prior_low=prior_low,
prior_high=prior_high,
bias=bias, normalize=normalize, noiseless=noiseless,
order=order,
scramble=scramble,
ground_truths=ground_truths, reshape_x=reshape_x,
reshape_y=reshape_y, deepsphere_dataset=deepsphere_dataset)
return data
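
# The *_by_datasets wrappers key the per-data-set results by the names
# yielded by dataset_names(val=...). A consumption sketch (illustrative;
# "train" is a hypothetical data-set name):
#   data = split_count_maps_by_datasets()
#   xs, ys = data["train"]["x"], data["train"]["y"]
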
def split_lensing_maps_by_datasets(val=False, config="g", coadd=True, corr=None, path_to_output=PATH_TO_OUTPUT,
nest=True, covariance=False,
npix=NPIX, zscale=False,
pixarea=PIXEL_AREA, gaussian=False,
density=DENSITY_KG, density_0=DENSITY_KG, multiplier=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless=False, order=ORDER, scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True, deepsphere_dataset=False):
"""
Returns a data dictionary containing Gaussian-sampled maps for each data-set
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param val: If True, validation set is included in dataset_names()
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param path_to_output: relative path to the FLASK output directory
:param nest: True for NEST pixelization, False for RING
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density: Source galaxy density, in arcmin^2, to use for noise application
:param density_0: Baseline galaxy density, in arcmin^2, to scale distribution by
:param multiplier: Scale factor used to amplify noise distribution
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: Dictionary keyed by data-set name; each value is a DeepSphere LabeledDataset if
        deepsphere_dataset=True, maps and labels if ground_truths=True, or a stacked Numpy array of split,
        (rescaled) Gaussian-sampled maps otherwise
"""
data = {}
for dataset in dataset_names(val=val):
data[dataset] = split_lensing_maps_by_dataset(dataset, config=config, coadd=coadd, corr=corr,
path_to_output=path_to_output, nest=nest,
npix=npix, covariance=covariance, zscale=zscale,
pixarea=pixarea, gaussian=gaussian,
density=density, density_0=density_0, multiplier=multiplier,
ellip_sigma=ellip_sigma, noiseless=noiseless, order=order,
scramble=scramble,
ground_truths=ground_truths, reshape_x=reshape_x,
reshape_y=reshape_y, deepsphere_dataset=deepsphere_dataset)
return data
def split_count_and_lensing_maps_by_datasets(val=False, config="g", name="map-f1z1.fits.gz",
path_to_output=PATH_TO_OUTPUT,
field=0, covariance=False, zscale=False,
nest=True, npix=NPIX, pixarea=PIXEL_AREA, density_m=DENSITY_M,
density_m_0=DENSITY_M,
multiplier_m=1.0, gaussian=False, free_bias=False,
prior_low=0.94, prior_high=2.86,
bias=BIAS, normalize=False, noiseless_m=False, coadd=True, corr=None,
density_kg=DENSITY_KG, density_kg_0=DENSITY_KG, multiplier_kg=1.0,
ellip_sigma=ELLIP_SIGMA, noiseless_kg=False, order=ORDER,
scramble=False,
ground_truths=True, reshape_x=False, reshape_y=True,
deepsphere_dataset=False):
"""
    Returns a data dictionary containing Poisson- and Gaussian-sampled maps for each data-set
:param gaussian: If True, returns Gaussian map. Returns log-normal if False.
:param covariance: If True, returns maps from covariance set of maps.
:param zscale: If True, rescales to r.m.s units.
:param prior_high: Upper limit of flat prior for linear bias. Used only if free_bias is True.
:param prior_low: Lower limit of flat prior for linear bias. Used only if free_bias is True.
:param free_bias: If True, applies random linear bias within a certain flat prior to raw map
:param val: If True, validation set is included in dataset_names()
:param config: "k" for convergence only, "g" for shear only, "kg" for convergence and shear,
with "c" added to the beginning for counts
:param name: name of the map
:param path_to_output: relative path to the FLASK output directory
:param field: field of the map (for lensing maps with multiple fields)
:param nest: True for NEST pixelization, False for RING
:param npix: Number of pixels in map
:param pixarea: Area of each pixel, in arcmin^2
:param density_m: Tracer galaxy density, in arcmin^2, to use for noise application
:param density_m_0: Baseline tracer galaxy density, in arcmin^2, to scale distribution by
:param multiplier_m: Scale factor used to amplify shot noise distribution
:param bias: Linear galaxy-matter bias
:param normalize: True if resulting noise should be made to reflect a linear galaxy-matter bias of 1
:param noiseless_m: Does not take Poisson draw if True
:param coadd: If True, coadds correlated and uncorrelated signals
:param corr: If coadd is False, determines whether correlated or uncorrelated signals are returned
:param density_kg: Source galaxy density, in arcmin^2, to use for noise application
:param density_kg_0: Baseline source galaxy density, in arcmin^2, to scale distribution by
:param multiplier_kg: Scale factor used to amplify lensing noise distributions
:param ellip_sigma: Standard deviation representing uncertainty in ellipticity measurements
:param noiseless_kg: Does not take Gaussian draw if True
:param order: ORDER giving the number of maps to split into (12*ORDER**2)
:param scramble: If True, randomly scrambles the maps out of order
:param ground_truths: True if corresponding labels should be returned as well
:param reshape_x: If True, reshapes xs to have shape (.., .., 1)
:param reshape_y: If True, reshapes ys to have shape (.., .., 1)
:param deepsphere_dataset: Returns a DeepSphere LabeledDataset if true
    :return: Dictionary keyed by data-set name; each value is a DeepSphere LabeledDataset if
        deepsphere_dataset=True, maps and labels if ground_truths=True, or a stacked Numpy array of split,
        (rescaled) Poisson and Gaussian-sampled maps otherwise
"""
data = {}
for dataset in dataset_names(val=val):
        data[dataset] = split_count_and_lensing_maps_by_dataset(dataset, config=config, name=name,
path_to_output=path_to_output,
field=field, covariance=covariance, zscale=zscale,
nest=nest, npix=npix, pixarea=pixarea,
density_m=density_m,
density_m_0=density_m_0,
multiplier_m=multiplier_m, gaussian=gaussian,
free_bias=free_bias,
prior_low=prior_low, prior_high=prior_high,
bias=bias, normalize=normalize,
noiseless_m=noiseless_m,
coadd=coadd,
corr=corr,
density_kg=density_kg, density_kg_0=density_kg_0,
multiplier_kg=multiplier_kg,
ellip_sigma=ellip_sigma, noiseless_kg=noiseless_kg,
order=order,
scramble=scramble,
ground_truths=ground_truths, reshape_x=reshape_x,
reshape_y=reshape_y,
deepsphere_dataset=deepsphere_dataset)
return data
def list_tracer_noise_scales(handpicked=False, num=6, noiseless=True):
"""
    Generate a list of noise levels (tracer galaxy densities, per arcmin^2) at which to evaluate model predictions.
    :param noiseless: If True, includes the noiseless case.
    :param num: Number of noise levels (excluding the noiseless case) to generate
    :param handpicked: If True, returns a handpicked list of noise levels; if False, uses np.geomspace values
:return: A list of noise levels (-1 for noiseless)
"""
if handpicked:
if noiseless:
return np.array([0.04377, 0.12, 0.3, 4, 50, -1])
return np.array([0.04377, 0.12, 1, 10, 100, 1000])
if noiseless:
return np.append(np.geomspace(0.04377, 100, num=num), -1)
return np.geomspace(0.04377, 1000, num=num)
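
# Default-output sketch (illustrative): with handpicked=False, num=6 and
# noiseless=True, np.geomspace(0.04377, 100, num=6) gives six log-spaced
# densities from 0.04377 to 100, and the appended -1 flags the noiseless
# case, for seven evaluation points in total.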

# ---- File: webapp/app/encryptioncontext/models.py (repo: aws-samples/aws-secrets-manager-credential-rotation-without-container-restart, license: MIT-0) ----
from django.db import models

# Create your models here.
class CustomerProfile(models.Model):
    account_number = models.CharField(max_length=8)
    userid = models.CharField(max_length=6)

# ---- File: hello.py (repo: souadg/cas-ads-doc-21, license: MIT) ----
# First programme
# Hello world
print("Hello world! Souad, you look lovely.")

# ---- File: dotbot/util/__init__.py (repo: indrajitr/dotbot, license: MIT) ----
from .common import shell_command

# ---- File: arcface/__init__.py (repo: Aitical/ADspeech2face, license: MIT) ----
from .model import arcface

# ---- File: model/__init__.py (repo: nachiket273/efficientnetv2, license: MIT) ----
from .model import efficientnetv2

# ---- File: best_ai_module.py (repo: nunchuk/best-ai, license: MIT) ----
#!/usr/bin/python3
# -*- coding: UTF-8 -*-
while True:
    # echo each input line in bright cyan, stripping any leading or trailing
    # '吗' / '?' / '?' characters and appending '!'
    print("\033[1;36;m", input().strip('吗??') + '!')
    # reset terminal colors
    print("\033[0m")

# ---- File: unifrac/tests/test_api.py (repo: iankrout/StripedUnifrac, license: BSD-3-Clause) ----
# ----------------------------------------------------------------------------
# Copyright (c) 2016-2017, UniFrac development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ----------------------------------------------------------------------------
import unittest
import os
from io import StringIO
from tempfile import gettempdir
import pkg_resources
import numpy as np
import numpy.testing as npt
from biom import Table, load_table
from biom.util import biom_open
from skbio import TreeNode
import skbio.diversity
from unifrac import ssu, faith_pd
class UnifracAPITests(unittest.TestCase):
package = 'unifrac.tests'
def get_data_path(self, filename):
# adapted from qiime2.plugin.testing.TestPluginBase
return pkg_resources.resource_filename(self.package,
'data/%s' % filename)
def test_unweighted_root_eval_issue_46(self):
tree = self.get_data_path('crawford.tre')
table = self.get_data_path('crawford.biom')
table_inmem = load_table(table)
tree_inmem = skbio.TreeNode.read(tree)
ids = table_inmem.ids()
otu_ids = table_inmem.ids(axis='observation')
cnts = table_inmem.matrix_data.astype(int).toarray().T
exp = skbio.diversity.beta_diversity('unweighted_unifrac', cnts,
ids=ids, otu_ids=otu_ids,
tree=tree_inmem)
obs = ssu(table, tree, 'unweighted', False, 1.0, False, 1)
npt.assert_almost_equal(obs.data, exp.data)
def test_meta_unifrac(self):
t1 = self.get_data_path('t1.newick')
e1 = self.get_data_path('e1.biom')
result = ssu(e1, t1, 'unweighted', False, 1.0, False, 1)
u1_distances = np.array([[0, 10 / 16., 8 / 13.],
[10 / 16., 0, 8 / 17.],
[8 / 13., 8 / 17., 0]])
npt.assert_almost_equal(u1_distances, result.data)
self.assertEqual(tuple('ABC'), result.ids)
def test_ssu_bad_tree(self):
e1 = self.get_data_path('e1.biom')
with self.assertRaisesRegex(IOError, "Tree file not found."):
ssu(e1, 'bad-file', 'unweighted', False, 1.0, False, 1)
def test_ssu_bad_table(self):
t1 = self.get_data_path('t1.newick')
with self.assertRaisesRegex(IOError, "Table file not found."):
ssu('bad-file', t1, 'unweighted', False, 1.0, False, 1)
def test_ssu_bad_method(self):
t1 = self.get_data_path('t1.newick')
e1 = self.get_data_path('e1.biom')
with self.assertRaisesRegex(ValueError, "Unknown method."):
ssu(e1, t1, 'unweightedfoo', False, 1.0, False, 1)
class EdgeCasesTests(unittest.TestCase):
# These tests were mostly ported from skbio's
# skbio/diversity/beta/tests/test_unifrac.py at SHA-256 ea901b3b6b0b
# note that not all tests were kept since the APIs are different.
#
# The test cases below only exercise unweighted, weighted and weighted
# normalized UniFrac. The C++ test suite verifies (against reference
# implementations) the variance adjusted and generalized variants of the
# algorithm.
package = 'unifrac.tests'
def _work(self, u_counts, v_counts, otu_ids, tree, method):
data = np.array([u_counts, v_counts]).T
bt = Table(data, otu_ids, ['u', 'v'])
ta = os.path.join(gettempdir(), 'table.biom')
        tr = os.path.join(gettempdir(), 'tree.nwk')
self.files_to_delete.append(ta)
self.files_to_delete.append(tr)
with biom_open(ta, 'w') as fhdf5:
bt.to_hdf5(fhdf5, 'Table for unit testing')
tree.write(tr)
# return value is a distance matrix, get the distance from u->v
return ssu(ta, tr, method, False, 1.0, False, 1)['u', 'v']
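
    # Note: _work round-trips the data through the filesystem (the counts
    # are written as an HDF5 BIOM table and the tree as newick), and ssu
    # returns a skbio DistanceMatrix indexable by sample id, hence the
    # ['u', 'v'] lookup above.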
def weighted_unifrac(self, u_counts, v_counts, otu_ids, tree,
normalized=False):
if normalized:
method = 'weighted_normalized'
else:
method = 'weighted_unnormalized'
return self._work(u_counts, v_counts, otu_ids, tree, method)
    def unweighted_unifrac(self, u_counts, v_counts, otu_ids, tree,
                           normalized=False):
        # normalized is accepted only for signature symmetry with
        # weighted_unifrac; it has no effect on the unweighted metric
        return self._work(u_counts, v_counts, otu_ids, tree, 'unweighted')
def setUp(self):
self.b1 = np.array(
[[1, 3, 0, 1, 0],
[0, 2, 0, 4, 4],
[0, 0, 6, 2, 1],
[0, 0, 1, 1, 1],
[5, 3, 5, 0, 0],
[0, 0, 0, 3, 5]])
self.sids1 = list('ABCDEF')
self.oids1 = ['OTU%d' % i for i in range(1, 6)]
self.t1 = TreeNode.read(
StringIO('(((((OTU1:0.5,OTU2:0.5):0.5,OTU3:1.0):1.0):0.0,(OTU4:'
'0.75,OTU5:0.75):1.25):0.0)root;'))
self.t1_w_extra_tips = TreeNode.read(
StringIO('(((((OTU1:0.5,OTU2:0.5):0.5,OTU3:1.0):1.0):0.0,(OTU4:'
'0.75,(OTU5:0.25,(OTU6:0.5,OTU7:0.5):0.5):0.5):1.25):0.0'
')root;'))
self.t2 = TreeNode.read(
StringIO('((OTU1:0.1, OTU2:0.2):0.3, (OTU3:0.5, OTU4:0.7):1.1)'
'root;'))
self.oids2 = ['OTU%d' % i for i in range(1, 5)]
self.files_to_delete = []
def tearDown(self):
for f in self.files_to_delete:
try:
os.remove(f)
except OSError:
pass
def test_ssu_table_not_subset_tree(self):
tree = TreeNode.read(StringIO('((OTU1:0.5,OTU3:1.0):1.0)root;'))
expected_message = "The table does not appear to be completely "\
"represented by the phylogeny."
with self.assertRaisesRegex(ValueError, expected_message):
self.unweighted_unifrac(self.b1[0], self.b1[1], self.oids1, tree)
def test_unweighted_otus_out_of_order(self):
# UniFrac API does not assert the observations are in tip order of the
# input tree
shuffled_ids = self.oids1[:]
shuffled_b1 = self.b1.copy()
shuffled_ids[0], shuffled_ids[-1] = shuffled_ids[-1], shuffled_ids[0]
shuffled_b1[:, [0, -1]] = shuffled_b1[:, [-1, 0]]
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.unweighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1)
expected = self.unweighted_unifrac(
shuffled_b1[i], shuffled_b1[j], shuffled_ids, self.t1)
self.assertAlmostEqual(actual, expected)
def test_weighted_otus_out_of_order(self):
# UniFrac API does not assert the observations are in tip order of the
# input tree
shuffled_ids = self.oids1[:]
shuffled_b1 = self.b1.copy()
shuffled_ids[0], shuffled_ids[-1] = shuffled_ids[-1], shuffled_ids[0]
shuffled_b1[:, [0, -1]] = shuffled_b1[:, [-1, 0]]
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.weighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1)
expected = self.weighted_unifrac(
shuffled_b1[i], shuffled_b1[j], shuffled_ids, self.t1)
self.assertAlmostEqual(actual, expected)
def test_unweighted_extra_tips(self):
# UniFrac values are the same despite unobserved tips in the tree
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.unweighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1_w_extra_tips)
expected = self.unweighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1)
self.assertAlmostEqual(actual, expected)
def test_weighted_extra_tips(self):
# UniFrac values are the same despite unobserved tips in the tree
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.weighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1_w_extra_tips)
expected = self.weighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1)
self.assertAlmostEqual(actual, expected)
def test_unweighted_minimal_trees(self):
# two tips
tree = TreeNode.read(StringIO('(OTU1:0.25, OTU2:0.25)root;'))
actual = self.unweighted_unifrac([1, 0], [0, 0], ['OTU1', 'OTU2'],
tree)
expected = 1.0
self.assertEqual(actual, expected)
def test_unweighted_root_not_observed(self):
# expected values computed with QIIME 1.9.1 and by hand
# root node not observed, but branch between (OTU1, OTU2) and root
# is considered shared
actual = self.unweighted_unifrac([1, 1, 0, 0], [1, 0, 0, 0],
self.oids2, self.t2)
# for clarity of what I'm testing, compute expected as it would
# based on the branch lengths. the values that compose shared was
# a point of confusion for me here, so leaving these in for
# future reference
expected = 0.2 / (0.1 + 0.2 + 0.3) # 0.3333333333
self.assertAlmostEqual(actual, expected)
# root node not observed, but branch between (OTU3, OTU4) and root
# is considered shared
actual = self.unweighted_unifrac([0, 0, 1, 1], [0, 0, 1, 0],
self.oids2, self.t2)
# for clarity of what is being tested, compute the expected value
# directly from the branch lengths. the branches that count as
# shared were a point of confusion here, so the derivation is left
# in for future reference
expected = 0.7 / (1.1 + 0.5 + 0.7) # 0.3043478261
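# hand check: [0, 0, 1, 1] vs [0, 0, 1, 0] observe OTU3 (0.5),
# OTU4 (0.7) and the clade-to-root branch (1.1); only OTU4 (0.7) is
# unique to the first community, giving 0.7 / 2.3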
self.assertAlmostEqual(actual, expected)
def test_weighted_root_not_observed(self):
# expected values computed by hand; these disagree with QIIME 1.9.1
# root node not observed, but branch between (OTU1, OTU2) and root
# is considered shared
actual = self.weighted_unifrac([1, 0, 0, 0], [1, 1, 0, 0],
self.oids2, self.t2)
expected = 0.15
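# hand check (assuming unnormalized weighted UniFrac =
# sum(branch_length * |pA - pB|) over branches, with relative
# abundances): 0.1*|1.0-0.5| + 0.2*|0.0-0.5| + 0.3*|1.0-1.0|
# = 0.05 + 0.10 + 0.0 = 0.15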
self.assertAlmostEqual(actual, expected)
# root node not observed, but branch between (OTU3, OTU4) and root
# is considered shared
actual = self.weighted_unifrac([0, 0, 1, 1], [0, 0, 1, 0],
self.oids2, self.t2)
expected = 0.6
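# hand check: 0.5*|0.5-1.0| + 0.7*|0.5-0.0| + 1.1*|1.0-1.0|
# = 0.25 + 0.35 + 0.0 = 0.6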
self.assertAlmostEqual(actual, expected)
def test_weighted_normalized_root_not_observed(self):
# expected values computed by hand; these disagree with QIIME 1.9.1
# root node not observed, but branch between (OTU1, OTU2) and root
# is considered shared
actual = self.weighted_unifrac([1, 0, 0, 0], [1, 1, 0, 0],
self.oids2, self.t2, normalized=True)
expected = 0.1764705882
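# hand check (assuming the normalizer is sum(tip_to_root_distance *
# (pA + pB)) over tips): distances are OTU1=0.4 and OTU2=0.5, so the
# normalizer is 0.4*(1.0+0.5) + 0.5*(0.0+0.5) = 0.85 and the raw 0.15
# above normalizes to 0.15 / 0.85 = 0.1764705882...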
self.assertAlmostEqual(actual, expected)
# root node not observed, but branch between (OTU3, OTU4) and root
# is considered shared
actual = self.weighted_unifrac([0, 0, 1, 1], [0, 0, 1, 0],
self.oids2, self.t2, normalized=True)
expected = 0.1818181818
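# hand check: distances are OTU3=1.6 and OTU4=1.8, so the normalizer
# is 1.6*(0.5+1.0) + 1.8*(0.5+0.0) = 3.3 and 0.6 / 3.3 = 0.1818181818...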
self.assertAlmostEqual(actual, expected)
def test_unweighted_unifrac_identity(self):
for i in range(len(self.b1)):
actual = self.unweighted_unifrac(
self.b1[i], self.b1[i], self.oids1, self.t1)
expected = 0.0
self.assertAlmostEqual(actual, expected)
def test_unweighted_unifrac_symmetry(self):
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.unweighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1)
expected = self.unweighted_unifrac(
self.b1[j], self.b1[i], self.oids1, self.t1)
self.assertAlmostEqual(actual, expected)
def test_unweighted_unifrac_non_overlapping(self):
# these communities only share the root node
actual = self.unweighted_unifrac(
self.b1[4], self.b1[5], self.oids1, self.t1)
expected = 1.0
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
[1, 1, 1, 0, 0], [0, 0, 0, 1, 1], self.oids1, self.t1)
expected = 1.0
self.assertAlmostEqual(actual, expected)
def test_unweighted_unifrac(self):
# expected results derived from QIIME 1.9.1, which is a completely
# different implementation from skbio's initial unweighted UniFrac
# implementation
# sample A versus all
actual = self.unweighted_unifrac(
self.b1[0], self.b1[1], self.oids1, self.t1)
expected = 0.238095238095
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[0], self.b1[2], self.oids1, self.t1)
expected = 0.52
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[0], self.b1[3], self.oids1, self.t1)
expected = 0.52
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[0], self.b1[4], self.oids1, self.t1)
expected = 0.545454545455
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[0], self.b1[5], self.oids1, self.t1)
expected = 0.619047619048
self.assertAlmostEqual(actual, expected)
# sample B versus remaining
actual = self.unweighted_unifrac(
self.b1[1], self.b1[2], self.oids1, self.t1)
expected = 0.347826086957
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[1], self.b1[3], self.oids1, self.t1)
expected = 0.347826086957
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[1], self.b1[4], self.oids1, self.t1)
expected = 0.68
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[1], self.b1[5], self.oids1, self.t1)
expected = 0.421052631579
self.assertAlmostEqual(actual, expected)
# sample C versus remaining
actual = self.unweighted_unifrac(
self.b1[2], self.b1[3], self.oids1, self.t1)
expected = 0.0
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[2], self.b1[4], self.oids1, self.t1)
expected = 0.68
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[2], self.b1[5], self.oids1, self.t1)
expected = 0.421052631579
self.assertAlmostEqual(actual, expected)
# sample D versus remaining
actual = self.unweighted_unifrac(
self.b1[3], self.b1[4], self.oids1, self.t1)
expected = 0.68
self.assertAlmostEqual(actual, expected)
actual = self.unweighted_unifrac(
self.b1[3], self.b1[5], self.oids1, self.t1)
expected = 0.421052631579
self.assertAlmostEqual(actual, expected)
# sample E versus remaining
actual = self.unweighted_unifrac(
self.b1[4], self.b1[5], self.oids1, self.t1)
expected = 1.0
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_identity(self):
for i in range(len(self.b1)):
actual = self.weighted_unifrac(
self.b1[i], self.b1[i], self.oids1, self.t1)
expected = 0.0
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_symmetry(self):
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.weighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1)
expected = self.weighted_unifrac(
self.b1[j], self.b1[i], self.oids1, self.t1)
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_non_overlapping(self):
# expected results derived from QIIME 1.9.1, which is a completely
# different implementation from skbio's initial weighted UniFrac
# implementation
# these communities only share the root node
actual = self.weighted_unifrac(
self.b1[4], self.b1[5], self.oids1, self.t1)
expected = 4.0
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac(self):
# expected results derived from QIIME 1.9.1, which is a completely
# different implementation from skbio's initial weighted UniFrac
# implementation
actual = self.weighted_unifrac(
self.b1[0], self.b1[1], self.oids1, self.t1)
expected = 2.4
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[2], self.oids1, self.t1)
expected = 1.86666666667
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[3], self.oids1, self.t1)
expected = 2.53333333333
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[4], self.oids1, self.t1)
expected = 1.35384615385
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[5], self.oids1, self.t1)
expected = 3.2
self.assertAlmostEqual(actual, expected)
# sample B versus remaining
actual = self.weighted_unifrac(
self.b1[1], self.b1[2], self.oids1, self.t1)
expected = 2.26666666667
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[1], self.b1[3], self.oids1, self.t1)
expected = 0.933333333333
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[1], self.b1[4], self.oids1, self.t1)
expected = 3.2
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[1], self.b1[5], self.oids1, self.t1)
expected = 0.8375
self.assertAlmostEqual(actual, expected)
# sample C versus remaining
actual = self.weighted_unifrac(
self.b1[2], self.b1[3], self.oids1, self.t1)
expected = 1.33333333333
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[2], self.b1[4], self.oids1, self.t1)
expected = 1.89743589744
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[2], self.b1[5], self.oids1, self.t1)
expected = 2.66666666667
self.assertAlmostEqual(actual, expected)
# sample D versus remaining
actual = self.weighted_unifrac(
self.b1[3], self.b1[4], self.oids1, self.t1)
expected = 2.66666666667
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[3], self.b1[5], self.oids1, self.t1)
expected = 1.33333333333
self.assertAlmostEqual(actual, expected)
# sample E versus remaining
actual = self.weighted_unifrac(
self.b1[4], self.b1[5], self.oids1, self.t1)
expected = 4.0
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_identity_normalized(self):
for i in range(len(self.b1)):
actual = self.weighted_unifrac(
self.b1[i], self.b1[i], self.oids1, self.t1, normalized=True)
expected = 0.0
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_symmetry_normalized(self):
for i in range(len(self.b1)):
for j in range(len(self.b1)):
actual = self.weighted_unifrac(
self.b1[i], self.b1[j], self.oids1, self.t1,
normalized=True)
expected = self.weighted_unifrac(
self.b1[j], self.b1[i], self.oids1, self.t1,
normalized=True)
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_non_overlapping_normalized(self):
# these communities only share the root node
actual = self.weighted_unifrac(
self.b1[4], self.b1[5], self.oids1, self.t1, normalized=True)
expected = 1.0
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
[1, 1, 1, 0, 0], [0, 0, 0, 1, 1], self.oids1, self.t1,
normalized=True)
expected = 1.0
self.assertAlmostEqual(actual, expected)
def test_weighted_unifrac_normalized(self):
# expected results derived from QIIME 1.9.1, which is a completely
# different implementation from skbio's initial weighted UniFrac
# implementation
actual = self.weighted_unifrac(
self.b1[0], self.b1[1], self.oids1, self.t1, normalized=True)
expected = 0.6
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[2], self.oids1, self.t1, normalized=True)
expected = 0.466666666667
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[3], self.oids1, self.t1, normalized=True)
expected = 0.633333333333
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[4], self.oids1, self.t1, normalized=True)
expected = 0.338461538462
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[0], self.b1[5], self.oids1, self.t1, normalized=True)
expected = 0.8
self.assertAlmostEqual(actual, expected)
# sample B versus remaining
actual = self.weighted_unifrac(
self.b1[1], self.b1[2], self.oids1, self.t1, normalized=True)
expected = 0.566666666667
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[1], self.b1[3], self.oids1, self.t1, normalized=True)
expected = 0.233333333333
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[1], self.b1[4], self.oids1, self.t1, normalized=True)
expected = 0.8
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[1], self.b1[5], self.oids1, self.t1, normalized=True)
expected = 0.209375
self.assertAlmostEqual(actual, expected)
# sample C versus remaining
actual = self.weighted_unifrac(
self.b1[2], self.b1[3], self.oids1, self.t1, normalized=True)
expected = 0.333333333333
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[2], self.b1[4], self.oids1, self.t1, normalized=True)
expected = 0.474358974359
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[2], self.b1[5], self.oids1, self.t1, normalized=True)
expected = 0.666666666667
self.assertAlmostEqual(actual, expected)
# sample D versus remaining
actual = self.weighted_unifrac(
self.b1[3], self.b1[4], self.oids1, self.t1, normalized=True)
expected = 0.666666666667
self.assertAlmostEqual(actual, expected)
actual = self.weighted_unifrac(
self.b1[3], self.b1[5], self.oids1, self.t1, normalized=True)
expected = 0.333333333333
self.assertAlmostEqual(actual, expected)
# sample E versus remaining
actual = self.weighted_unifrac(
self.b1[4], self.b1[5], self.oids1, self.t1, normalized=True)
expected = 1.0
self.assertAlmostEqual(actual, expected)
class FaithPDEdgeCasesTests(unittest.TestCase):
# These tests were mostly ported from skbio's
# skbio/diversity/alpha/tests/test_faith_pd.py at commit a8c086b;
# note that not all tests were kept since the APIs are different.
package = 'unifrac.tests'
def write_table_tree(self, u_counts, otu_ids, sample_ids, tree):
data = np.array([u_counts]).T
bt = Table(data, otu_ids, sample_ids)
ta = os.path.join(gettempdir(), 'table.biom')
tr = os.path.join(gettempdir(), 'tree.nwk')
self.files_to_delete.append(ta)
self.files_to_delete.append(tr)
with biom_open(ta, 'w') as fhdf5:
bt.to_hdf5(fhdf5, 'Table for unit testing')
tree.write(tr)
return ta, tr
def faith_pd_work(self, u_counts, otu_ids, sample_ids, tree):
ta, tr = self.write_table_tree(u_counts, otu_ids, sample_ids, tree)
return faith_pd(ta, tr)
def setUp(self):
self.counts = np.array([0, 1, 1, 4, 2, 5, 2, 4, 1, 2])
self.b1 = np.array([[1, 3, 0, 1, 0],
[0, 2, 0, 4, 4],
[0, 0, 6, 2, 1],
[0, 0, 1, 1, 1]])
self.sids1 = list('ABCD')
self.oids1 = ['OTU%d' % i for i in range(1, 6)]
self.t1 = TreeNode.read(StringIO(
'(((((OTU1:0.5,OTU2:0.5):0.5,OTU3:1.0):1.0):'
'0.0,(OTU4:0.75,OTU5:0.75):1.25):0.0)root;'))
self.t1_w_extra_tips = TreeNode.read(
StringIO('(((((OTU1:0.5,OTU2:0.5):0.5,OTU3:1.0):1.0):0.0,(OTU4:'
'0.75,(OTU5:0.25,(OTU6:0.5,OTU7:0.5):0.5):0.5):1.25):0.0'
')root;'))
self.files_to_delete = []
def tearDown(self):
for f in self.files_to_delete:
try:
os.remove(f)
except OSError:
pass
def test_faith_pd_zero_branches_omitted(self):
# the branch length was also deleted for the (OTU1, OTU2) clade
t2 = TreeNode.read(StringIO(
'((OTU1:0.5,OTU2:0.5),(OTU3:1.0,(OTU4:0.5,'
'OTU5:0.75):1.0):1.0)root;'
))
actual = self.faith_pd_work([1, 1, 0, 0, 0], self.oids1, ['foo'], t2)
expected = 1.0
self.assertAlmostEqual(actual[0], expected)
def test_faith_pd_none_observed(self):
actual = self.faith_pd_work([0, 0, 0, 0, 0], self.oids1, ['foo'],
self.t1)
expected = 0.0
self.assertAlmostEqual(actual.values, expected)
def test_faith_pd_biom_table_empty(self):
table, tree = self.write_table_tree([], [], [],
self.t1)
self.assertRaises(ValueError, faith_pd, table, tree)
def test_faith_pd_table_not_subset_tree(self):
tree = TreeNode.read(StringIO('((OTU1:0.5,OTU3:1.0):1.0)root;'))
table_ids = ['OTU1', 'OTU2']
table, tree = self.write_table_tree([1, 0], table_ids, ['foo'],
tree)
expected_message = "The table does not appear to be completely "\
"represented by the phylogeny."
with self.assertRaisesRegex(ValueError, expected_message):
faith_pd(table, tree)
def test_faith_pd_all_observed(self):
actual = self.faith_pd_work([1, 1, 1, 1, 1], self.oids1, ['foo'],
self.t1)
expected = sum(n.length for n in self.t1.traverse()
if n.length is not None)
self.assertAlmostEqual(actual.values, expected)
actual = self.faith_pd_work([1, 2, 3, 4, 5], self.oids1, ['foo'],
self.t1)
expected = sum(n.length for n in self.t1.traverse()
if n.length is not None)
self.assertAlmostEqual(actual.values, expected)
def test_faith_pd(self):
# expected results derived from QIIME 1.9.1, which is a completely
# different implementation from unifrac's initial phylogenetic
# diversity implementation
actual = self.faith_pd_work(self.b1[0], self.oids1, [self.sids1[0]],
self.t1)
expected = 4.5
self.assertAlmostEqual(actual.values, expected)
actual = self.faith_pd_work(self.b1[1], self.oids1, [self.sids1[1]],
self.t1)
expected = 4.75
self.assertAlmostEqual(actual.values, expected)
actual = self.faith_pd_work(self.b1[2], self.oids1, [self.sids1[2]],
self.t1)
expected = 4.75
self.assertAlmostEqual(actual.values, expected)
actual = self.faith_pd_work(self.b1[3], self.oids1, [self.sids1[3]],
self.t1)
expected = 4.75
self.assertAlmostEqual(actual.values, expected)
def test_faith_pd_extra_tips(self):
# results are the same despite the presence of unobserved tips in the tree
actual = self.faith_pd_work(self.b1[0], self.oids1, [self.sids1[0]],
self.t1_w_extra_tips)
expected = self.faith_pd_work(self.b1[0], self.oids1, [self.sids1[0]],
self.t1)
self.assertAlmostEqual(actual.values, expected.values)
actual = self.faith_pd_work(self.b1[1], self.oids1, [self.sids1[1]],
self.t1_w_extra_tips)
expected = self.faith_pd_work(self.b1[1], self.oids1, [self.sids1[1]],
self.t1)
self.assertAlmostEqual(actual.values, expected.values)
actual = self.faith_pd_work(self.b1[2], self.oids1, [self.sids1[2]],
self.t1_w_extra_tips)
expected = self.faith_pd_work(self.b1[2], self.oids1, [self.sids1[2]],
self.t1)
self.assertAlmostEqual(actual.values, expected.values)
actual = self.faith_pd_work(self.b1[3], self.oids1, [self.sids1[3]],
self.t1_w_extra_tips)
expected = self.faith_pd_work(self.b1[3], self.oids1, [self.sids1[3]],
self.t1)
self.assertAlmostEqual(actual.values, expected.values)
def test_faith_pd_minimal(self):
# two tips
tree = TreeNode.read(StringIO('(OTU1:0.25, OTU2:0.25)root;'))
actual = self.faith_pd_work([1, 0], ['OTU1', 'OTU2'], ['foo'], tree)
expected = 0.25
self.assertEqual(actual.values, expected)
def test_faith_pd_series_name(self):
tree = TreeNode.read(StringIO('(OTU1:0.25, OTU2:0.25)root;'))
actual = self.faith_pd_work([1, 0], ['OTU1', 'OTU2'], ['foo'], tree)
self.assertEqual("faith_pd", actual.name)
def test_faith_pd_root_not_observed(self):
# expected values computed by hand
tree = TreeNode.read(
StringIO('((OTU1:0.1, OTU2:0.2):0.3, (OTU3:0.5, OTU4:0.7):1.1)'
'root;'))
otu_ids = ['OTU%d' % i for i in range(1, 5)]
# root node not observed, but branch between (OTU1, OTU2) and root
# is considered observed
actual = self.faith_pd_work([1, 1, 0, 0], otu_ids, ['foo'], tree)
expected = 0.6
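# hand check: Faith's PD sums the observed branch lengths, i.e.
# 0.1 (OTU1) + 0.2 (OTU2) + 0.3 (clade-to-root branch) = 0.6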
self.assertAlmostEqual(actual[0], expected)
# root node not observed, but branch between (OTU3, OTU4) and root
# is considered observed
actual = self.faith_pd_work([0, 0, 1, 1], otu_ids, ['foo'], tree)
expected = 2.3
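# hand check: 0.5 (OTU3) + 0.7 (OTU4) + 1.1 (clade-to-root branch) = 2.3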
self.assertAlmostEqual(actual[0], expected)
def test_faith_pd_invalid_input(self):
# these tests are based on skbio's tests; checks for duplicate ids and
# negative counts are not included here but should be incorporated.
# build a table and tree, then check that missing files raise IOError
tree = TreeNode.read(
StringIO('((OTU1:0.1, OTU2:0.2):0.3, (OTU3:0.5, OTU4:0.7):1.1)'
'root;'))
otu_ids = ['OTU%d' % i for i in range(1, 5)]
u_counts = [1, 1, 0, 0]
data = np.array([u_counts]).T
bt = Table(data, otu_ids, ['u'])
ta = os.path.join(gettempdir(), 'table.biom')
tr = os.path.join(gettempdir(), 'tree.nwk')
self.files_to_delete.append(ta)
self.files_to_delete.append(tr)
with biom_open(ta, 'w') as fhdf5:
bt.to_hdf5(fhdf5, 'Table for unit testing')
tree.write(tr)
self.assertRaises(IOError, faith_pd, 'dne.biom', tr)
self.assertRaises(IOError, faith_pd, ta, 'dne.tre')
if __name__ == "__main__":
unittest.main()
| 42.450578 | 78 | 0.583931 | 4,263 | 33,069 | 4.425053 | 0.083509 | 0.051209 | 0.114504 | 0.122455 | 0.844943 | 0.834129 | 0.821883 | 0.803753 | 0.784245 | 0.755725 | 0 | 0.068072 | 0.294112 | 33,069 | 778 | 79 | 42.505141 | 0.740051 | 0.122804 | 0 | 0.673504 | 0 | 0.018803 | 0.05188 | 0.018365 | 0 | 0 | 0 | 0 | 0.160684 | 1 | 0.080342 | false | 0.003419 | 0.020513 | 0.003419 | 0.121368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ac7281c7bf18dcb514e30b258e38351663b81cf7 | 39 | py | Python | ark_search/recall/__init__.py | DataArk/ArkSearch | d4de73786654db72a689a1bbef1dfa3ee5d73d1d | [
"Apache-2.0"
] | 2 | 2021-12-30T06:10:50.000Z | 2021-12-30T06:12:36.000Z | ark_search/recall/__init__.py | DataArk/ArkSearch | d4de73786654db72a689a1bbef1dfa3ee5d73d1d | [
"Apache-2.0"
] | null | null | null | ark_search/recall/__init__.py | DataArk/ArkSearch | d4de73786654db72a689a1bbef1dfa3ee5d73d1d | [
"Apache-2.0"
] | null | null | null | from ark_search.recall.bm25 import BM25 | 39 | 39 | 0.871795 | 7 | 39 | 4.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.076923 | 39 | 1 | 39 | 39 | 0.805556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ac7907ad06d31dbf086bfca4609b483f3cf86904 | 42 | py | Python | tflib/distribute/__init__.py | AlexBlack2202/EigenGAN-Tensorflow | 9668738852abdcd7161b64b7e6a074c7ebfea055 | [
"MIT"
] | 302 | 2021-04-27T02:15:47.000Z | 2022-03-13T07:51:07.000Z | tflib/distribute/__init__.py | gokulsg/EigenGAN-Tensorflow | 86b21a47a824a2bb04a088c3e78b03d03a53735c | [
"MIT"
] | 7 | 2021-05-26T05:44:46.000Z | 2021-12-28T02:38:47.000Z | tflib/distribute/__init__.py | gokulsg/EigenGAN-Tensorflow | 86b21a47a824a2bb04a088c3e78b03d03a53735c | [
"MIT"
] | 34 | 2021-04-27T02:16:04.000Z | 2022-01-28T12:18:17.000Z | from tflib.distribute.distribute import *
| 21 | 41 | 0.833333 | 5 | 42 | 7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3bcb8659237db7aeeaf1370f6822ee2753e63421 | 19 | py | Python | Local/__init__.py | FurmanCenter/ACSDownloader | 918afc0c7baa8814da98c2e3ee11352af68c027e | [
"Apache-2.0"
] | 1 | 2020-04-15T15:40:18.000Z | 2020-04-15T15:40:18.000Z | Local/__init__.py | FurmanCenter/ACSDownloader | 918afc0c7baa8814da98c2e3ee11352af68c027e | [
"Apache-2.0"
] | null | null | null | Local/__init__.py | FurmanCenter/ACSDownloader | 918afc0c7baa8814da98c2e3ee11352af68c027e | [
"Apache-2.0"
] | null | null | null | from Local import * | 19 | 19 | 0.789474 | 3 | 19 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3bd57332470a072bc1139febc309c622c0dcb534 | 48 | py | Python | data/micro-benchmark/classes/imported_call_without_init/to_import.py | vitsalis/pycg-evaluation | ce37eb5668465b0c17371914e863d699826447ee | [
"Apache-2.0"
] | 121 | 2020-12-16T20:31:37.000Z | 2022-03-21T20:32:43.000Z | data/micro-benchmark/classes/imported_call_without_init/to_import.py | vitsalis/pycg-evaluation | ce37eb5668465b0c17371914e863d699826447ee | [
"Apache-2.0"
] | 24 | 2021-03-13T00:04:00.000Z | 2022-03-21T17:28:11.000Z | data/micro-benchmark/classes/imported_call_without_init/to_import.py | vitsalis/pycg-evaluation | ce37eb5668465b0c17371914e863d699826447ee | [
"Apache-2.0"
] | 19 | 2021-03-23T10:58:47.000Z | 2022-03-24T19:46:50.000Z | class MyClass:
def func(self):
pass
| 12 | 19 | 0.5625 | 6 | 48 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.354167 | 48 | 3 | 20 | 16 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
cbf3717c830b6844dac4bd189f88a0ca6f9b3c6d | 51 | py | Python | fission/examples/pythontest/hello.py | fadams/serverless | 3b8298d89166b7edbf456070a167887cd1c90db1 | [
"Apache-2.0"
] | 1 | 2019-05-07T09:14:12.000Z | 2019-05-07T09:14:12.000Z | fission/examples/pythontest/hello.py | fadams/serverless | 3b8298d89166b7edbf456070a167887cd1c90db1 | [
"Apache-2.0"
] | null | null | null | fission/examples/pythontest/hello.py | fadams/serverless | 3b8298d89166b7edbf456070a167887cd1c90db1 | [
"Apache-2.0"
] | 1 | 2019-05-06T20:50:48.000Z | 2019-05-06T20:50:48.000Z | def main():
return "Hello, world! From Python"
| 17 | 38 | 0.647059 | 7 | 51 | 4.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215686 | 51 | 2 | 39 | 25.5 | 0.825 | 0 | 0 | 0 | 0 | 0 | 0.490196 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0207a70ab13924653a2ad1bfe0a375de98e5f3b1 | 29 | py | Python | skillset/__init__.py | sofwerx/dataglove | e49d72bef23fcba840e67fabc2fb81ce9f91b775 | [
"MIT"
] | 5 | 2019-05-07T17:28:20.000Z | 2020-06-18T15:08:04.000Z | skillset/__init__.py | sofwerx/dataglove | e49d72bef23fcba840e67fabc2fb81ce9f91b775 | [
"MIT"
] | 1 | 2019-08-29T22:54:07.000Z | 2019-08-29T23:03:57.000Z | skillset/__init__.py | sofwerx/dataglove | e49d72bef23fcba840e67fabc2fb81ce9f91b775 | [
"MIT"
] | 2 | 2019-05-28T13:11:09.000Z | 2019-06-05T17:47:28.000Z | from .com_link import ComLink | 29 | 29 | 0.862069 | 5 | 29 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
022ee5502a094443f541667910a2c4187341322d | 28,611 | py | Python | fortlab/vargen/plugins/gencore/gen_typedecl_in_module.py | grnydawn/fortlab | 524daa6dd7c99c1ca4bf6088a8ba3e1bcd096d5d | [
"MIT"
] | null | null | null | fortlab/vargen/plugins/gencore/gen_typedecl_in_module.py | grnydawn/fortlab | 524daa6dd7c99c1ca4bf6088a8ba3e1bcd096d5d | [
"MIT"
] | 1 | 2021-03-29T14:54:22.000Z | 2021-03-29T14:54:51.000Z | fortlab/vargen/plugins/gencore/gen_typedecl_in_module.py | grnydawn/fortlab | 524daa6dd7c99c1ca4bf6088a8ba3e1bcd096d5d | [
"MIT"
] | null | null | null | # gen_write_typedecl_in_module.py
from __future__ import absolute_import
from collections import OrderedDict
from fortlab.resolver import statements, block_statements, typedecl_statements
from fortlab.kgplugin import Kgen_Plugin
from .gencore_utils import get_topname, get_typedecl_writename, get_dtype_writename, get_module_in_writename, STATE_PBLOCK_WRITE_IN_EXTERNS, \
STATE_PBLOCK_USE_PART, kernel_gencore_contains, state_gencore_contains, get_typedecl_readname, get_dtype_readname, get_module_in_readname, \
KERNEL_PBLOCK_USE_PART, DRIVER_READ_IN_EXTERNS, process_spec_stmts, get_module_out_writename, get_module_out_readname, \
KERNEL_PBLOCK_READ_OUT_EXTERNS, STATE_PBLOCK_WRITE_OUT_EXTERNS, gen_write_istrue, gen_read_istrue, is_excluded, \
is_remove_state, is_zero_array, DRIVER_USE_PART, check_class_derived, modreadsubrs, modwritesubrs, varstr
class Gen_Typedecl_In_Module(Kgen_Plugin):
def __init__(self):
self.frame_msg = None
self.state_externs_subrs = OrderedDict()
self.kernel_externs_subrs = OrderedDict()
self.state_callsite_use_stmts = []
self.kernel_callsite_use_stmts = []
self.state_callsite_call_stmts = []
self.kernel_callsite_call_stmts = []
self.state_created_subrs = []
self.kernel_created_subrs = []
self.state_extern_writes = []
self.kernel_extern_reads = []
setinfo("modreadsubrs", modreadsubrs)
setinfo("modwritesubrs", modwritesubrs)
# registration
def register(self, msg):
self.frame_msg = msg
# register initial events
self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.STATE, GENERATION_STAGE.NODE_CREATED, \
block_statements.Module, self.has_externs_in_module, self.create_state_module_parts)
#self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.STATE, GENERATION_STAGE.NODE_CREATED, \
# block_statements.Module, None, self.use_ieee_module)
self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.KERNEL, GENERATION_STAGE.NODE_CREATED, \
block_statements.Module, self.has_externs_in_module, self.create_kernel_module_parts)
#self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.KERNEL, GENERATION_STAGE.NODE_CREATED, \
# block_statements.Module, None, self.add_default_stmts)
#self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.KERNEL, GENERATION_STAGE.BEGIN_PROCESS, \
# block_statements.Module, self.has_specstmts_in_module, self.process_specstmts_in_module)
#self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.KERNEL, GENERATION_STAGE.BEGIN_PROCESS, \
# statements.Use, None, self.process_use_in_module)
def has_externs_in_module(self, node):
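# a module needs extern state subroutines only if it declares at
# least one KGen-tracked (geninfo) variable that is not a parameter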
for stmt in node.kgen_stmt.content:
if isinstance(stmt, typedecl_statements.TypeDeclarationStatement) and \
"parameter" not in stmt.attrspec and hasattr(stmt, 'geninfo') and \
any(len(v) > 0 for v in stmt.geninfo.values()):
for entity_name in [ get_entity_name(decl) for decl in stmt.entity_decls ]:
var = stmt.get_variable(entity_name)
if not var.is_parameter():
return True
return False
def has_specstmts_in_module(self, node):
if not node.kgen_stmt: return False
if hasattr(node.kgen_stmt, 'spec_stmts'): return True
else: return False
def is_extern_in_kernel_module(self, node):
if node.kgen_stmt and hasattr(node.kgen_stmt, 'geninfo') and any(len(v) > 0 for v in node.kgen_stmt.geninfo.values()) and \
isinstance(node.kgen_parent.kgen_stmt, block_statements.Module) and 'parameter' not in node.kgen_stmt.attrspec:
for entity_name in [ get_entity_name(decl) for decl in node.kgen_stmt.entity_decls ]:
var = node.kgen_stmt.get_variable(entity_name)
if not var.is_parameter():
return True
return False
def is_extern_in_state_module(self, node):
if node.kgen_stmt and hasattr(node.kgen_stmt, 'geninfo') and any(len(v) > 0 for v in node.kgen_stmt.geninfo.values()) and \
isinstance(node.kgen_parent.kgen_stmt, block_statements.Module) and 'parameter' not in node.kgen_stmt.attrspec:
for entity_name in [ get_entity_name(decl) for decl in node.kgen_stmt.entity_decls ]:
var = node.kgen_stmt.get_variable(entity_name)
if not var.is_parameter():
return True
return False
def process_specstmts_in_module(self, node):
process_spec_stmts(node.kgen_stmt)
def process_use_in_module(self, node):
if not node.kgen_isvalid: return
if not node.kgen_stmt: return
if not hasattr(node.kgen_stmt, 'geninfo'):
node.kgen_isvalid = False
return
new_items = []
unames = list(set([ uname.firstpartname() for uname, req in KGGenType.get_state(node.kgen_stmt.geninfo) ]))
for item in node.kgen_stmt.items:
if item.split('=>')[0].strip() in unames:
new_items.append(item)
node.items = new_items
node.nature = node.kgen_stmt.nature
node.isonly = node.kgen_stmt.isonly
node.kgen_use_tokgen = True
# def use_ieee_module(self, node):
#
# attrs = {'name':'IEEE_ARITHMETIC', 'nature': 'INTRINSIC', 'isonly': True, 'items':['ieee_is_normal']}
# part_append_gensnode(node, USE_PART, statements.Use, attrs=attrs)
def add_default_stmts(self, node):
attrs = {'name':'kgen_utils_mod', 'isonly': True, 'items':['kgen_dp', 'kgen_array_sumcheck']}
part_append_genknode(node, USE_PART, statements.Use, attrs=attrs)
#attrs = {'name':'tprof_mod', 'isonly': True, 'items':['tstart', 'tstop', 'tnull', 'tprnt']}
#part_append_genknode(node, USE_PART, statements.Use, attrs=attrs)
#attrs = {'name':'IEEE_ARITHMETIC', 'nature': 'INTRINSIC', 'isonly': True, 'items':['ieee_is_normal']}
#part_append_genknode(node, USE_PART, statements.Use, attrs=attrs)
def create_kernel_module_parts(self, node):
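# build the kernel-side module read subroutines: an 'in' subroutine
# that reads saved module state from kgen_unit and, when the module
# has state-out variables, an 'out' subroutine that reads the
# reference values used for verification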
in_subrobj = None
in_subrname = get_module_in_readname(node.kgen_stmt)
checks = lambda n: isinstance(n.kgen_stmt, block_statements.Subroutine) and n.name==in_subrname
if not part_has_node(node, SUBP_PART, checks):
attrs = {'name': in_subrname, 'args': ['kgen_unit']}
#in_subrobj = part_append_genknode(node, SUBP_PART, block_statements.Subroutine, attrs=attrs)
in_subrobj = genkobj(node, block_statements.Subroutine, node.kgen_kernel_id, attrs=attrs)
###### VARLIST
modreadsubrs[node] = in_subrobj
part_append_comment(node, SUBP_PART, '')
# kgen_unit
attrs = {'type_spec': 'INTEGER', 'attrspec': ['INTENT(IN)'], 'entity_decls': ['kgen_unit']}
part_append_genknode(in_subrobj, DECL_PART, typedecl_statements.Integer, attrs=attrs)
# kgen_istrue
attrs = {'type_spec': 'LOGICAL', 'entity_decls': ['kgen_istrue']}
part_append_genknode(in_subrobj, DECL_PART, typedecl_statements.Logical, attrs=attrs)
attrs = {'type_spec': 'REAL', 'entity_decls': ['kgen_array_sum'], 'selector': (None, '8')}
part_append_genknode(in_subrobj, DECL_PART, typedecl_statements.Real, attrs=attrs)
part_append_comment(in_subrobj, DECL_PART, '')
out_subrobj = None
if hasattr(node.kgen_stmt, 'geninfo') and KGGenType.has_state_out(node.kgen_stmt.geninfo):
out_subrname = get_module_out_readname(node.kgen_stmt)
checks = lambda n: isinstance(n.kgen_stmt, block_statements.Subroutine) and n.name==out_subrname
if not part_has_node(node, SUBP_PART, checks):
attrs = {'name': out_subrname, 'args': ['kgen_unit']}
#out_subrobj = part_append_genknode(node, SUBP_PART, block_statements.Subroutine, attrs=attrs)
out_subrobj = genkobj(node, block_statements.Subroutine, node.kgen_kernel_id, attrs=attrs)
part_append_comment(node, SUBP_PART, '')
# kgen_unit
attrs = {'type_spec': 'INTEGER', 'attrspec': ['INTENT(IN)'], 'entity_decls': ['kgen_unit']}
part_append_genknode(out_subrobj, DECL_PART, typedecl_statements.Integer, attrs=attrs)
part_append_comment(out_subrobj, DECL_PART, '')
# kgen_istrue
attrs = {'type_spec': 'LOGICAL', 'entity_decls': ['kgen_istrue']}
part_append_genknode(out_subrobj, DECL_PART, typedecl_statements.Logical, attrs=attrs)
attrs = {'type_spec': 'REAL', 'entity_decls': ['kgen_array_sum'], 'selector': (None, '8')}
part_append_genknode(out_subrobj, DECL_PART, typedecl_statements.Real, attrs=attrs)
if in_subrobj or out_subrobj:
self.kernel_externs_subrs[node] = ( in_subrobj, out_subrobj )
# register event per typedecl
self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.KERNEL, GENERATION_STAGE.BEGIN_PROCESS, \
typedecl_statements.TypeDeclarationStatement, self.is_extern_in_kernel_module, self.create_subr_read_typedecl_in_module)
# register event per module
#self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.KERNEL, GENERATION_STAGE.BEGIN_PROCESS, \
# block_statements.Module, self.has_externs_in_module, self.create_kernel_stmts_in_callsite)
def create_state_module_parts(self, node):
in_subrname = get_module_in_writename(node.kgen_stmt)
in_subrobj = None
checks = lambda n: isinstance(n.kgen_stmt, block_statements.Subroutine) and n.name==in_subrname
if not part_has_node(node, SUBP_PART, checks):
attrs = {'name': in_subrname, 'args': ['kgen_unit']}
part_append_comment(node, SUBP_PART, 'write in state subroutine for %s'%in_subrname)
in_subrobj = part_append_gensnode(node, SUBP_PART, block_statements.Subroutine, attrs=attrs)
part_append_comment(node, SUBP_PART, '')
# kgen_unit
attrs = {'type_spec': 'INTEGER', 'attrspec': ['INTENT(IN)'], 'entity_decls': ['kgen_unit']}
part_append_gensnode(in_subrobj, DECL_PART, typedecl_statements.Integer, attrs=attrs)
# kgen_istrue
attrs = {'type_spec': 'LOGICAL', 'entity_decls': ['kgen_istrue']}
part_append_gensnode(in_subrobj, DECL_PART, typedecl_statements.Logical, attrs=attrs)
attrs = {'type_spec': 'REAL', 'entity_decls': ['kgen_array_sum'], 'selector': (None, '8')}
part_append_gensnode(in_subrobj, DECL_PART, typedecl_statements.Real, attrs=attrs)
part_append_comment(in_subrobj, DECL_PART, '')
out_subrobj = None
if hasattr(node.kgen_stmt, 'geninfo') and KGGenType.has_state_out(node.kgen_stmt.geninfo):
out_subrname = get_module_out_writename(node.kgen_stmt)
checks = lambda n: isinstance(n.kgen_stmt, block_statements.Subroutine) and n.name==out_subrname
if not part_has_node(node, SUBP_PART, checks):
attrs = {'name': out_subrname, 'args': ['kgen_unit']}
part_append_comment(node, SUBP_PART, 'write out state subroutine for %s'%out_subrname)
#out_subrobj = part_append_gensnode(node, SUBP_PART, block_statements.Subroutine, attrs=attrs)
out_subrobj = gensobj(node, block_statements.Subroutine, node.kgen_kernel_id, attrs=attrs)
part_append_comment(node, SUBP_PART, '')
modwritesubrs[node] = out_subrobj
# kgen_unit
attrs = {'type_spec': 'INTEGER', 'attrspec': ['INTENT(IN)'], 'entity_decls': ['kgen_unit']}
part_append_gensnode(out_subrobj, DECL_PART, typedecl_statements.Integer, attrs=attrs)
# kgen_istrue
attrs = {'type_spec': 'LOGICAL', 'entity_decls': ['kgen_istrue']}
part_append_gensnode(out_subrobj, DECL_PART, typedecl_statements.Logical, attrs=attrs)
attrs = {'type_spec': 'REAL', 'entity_decls': ['kgen_array_sum'], 'selector': (None, '8')}
part_append_gensnode(out_subrobj, DECL_PART, typedecl_statements.Real, attrs=attrs)
part_append_comment(out_subrobj, DECL_PART, '')
if in_subrobj or out_subrobj:
self.state_externs_subrs[node] = (in_subrobj, out_subrobj)
node.kgen_stmt.top.used4genstate = True
# register event per typedecl
self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.STATE, GENERATION_STAGE.BEGIN_PROCESS, \
typedecl_statements.TypeDeclarationStatement, self.is_extern_in_state_module, self.create_subr_write_typedecl_in_module)
# register event per module
#self.frame_msg.add_event(KERNEL_SELECTION.ALL, FILE_TYPE.STATE, GENERATION_STAGE.BEGIN_PROCESS, \
# block_statements.Module, self.has_externs_in_module, self.create_state_stmts_in_callsite)
else:
raise Exception('Duplicated state extern subroutine name for module: %s. Please ensure that the KGen-generated source file is NOT re-used.'%node.name)
def create_kernel_stmts_in_callsite(self, node):
if not self.kernel_externs_subrs[node][0] in self.kernel_callsite_use_stmts:
attrs = {'name':node.name, 'isonly': True, 'items':[self.kernel_externs_subrs[node][0].name]}
namedpart_append_genknode(node.kgen_kernel_id, DRIVER_USE_PART, statements.Use, attrs=attrs)
self.kernel_callsite_use_stmts.append(self.kernel_externs_subrs[node][0])
if not self.kernel_externs_subrs[node][0] in self.kernel_callsite_call_stmts:
attrs = {'designator': self.kernel_externs_subrs[node][0].name, 'items': ['kgen_unit']}
namedpart_append_genknode(node.kgen_kernel_id, DRIVER_READ_IN_EXTERNS, statements.Call, attrs=attrs)
self.kernel_callsite_call_stmts.append(self.kernel_externs_subrs[node][0])
if hasattr(node.kgen_stmt, 'geninfo') and KGGenType.has_state_out(node.kgen_stmt.geninfo):
if not self.kernel_externs_subrs[node][1] in self.kernel_callsite_use_stmts and node.name!=getinfo('topblock_stmt').name:
attrs = {'name':node.name, 'isonly': True, 'items':[self.kernel_externs_subrs[node][1].name]}
namedpart_append_genknode(node.kgen_kernel_id, KERNEL_PBLOCK_USE_PART, statements.Use, attrs=attrs)
self.kernel_callsite_use_stmts.append(self.kernel_externs_subrs[node][1])
if not self.kernel_externs_subrs[node][1] in self.kernel_callsite_call_stmts:
attrs = {'designator': self.kernel_externs_subrs[node][1].name, 'items': ['kgen_unit']}
namedpart_append_genknode(node.kgen_kernel_id, KERNEL_PBLOCK_READ_OUT_EXTERNS, statements.Call, attrs=attrs)
self.kernel_callsite_call_stmts.append(self.kernel_externs_subrs[node][1])
#
# if not self.kernel_externs_subrs[node][0] in self.kernel_callsite_use_stmts and node.name!=getinfo('topblock_stmt').name:
# attrs = {'name':node.name, 'isonly': True, 'items':[self.kernel_externs_subrs[node][0].name]}
# namedpart_append_genknode(node.kgen_kernel_id, KERNEL_PBLOCK_USE_PART, statements.Use, attrs=attrs)
# self.kernel_callsite_use_stmts.append(self.kernel_externs_subrs[node][0])
#
# if not self.kernel_externs_subrs[node][0] in self.kernel_callsite_call_stmts:
# attrs = {'designator': self.kernel_externs_subrs[node][0].name, 'items': ['kgen_unit']}
# namedpart_append_genknode(node.kgen_kernel_id, KERNEL_PBLOCK_READ_IN_EXTERNS, statements.Call, attrs=attrs)
# self.kernel_callsite_call_stmts.append(self.kernel_externs_subrs[node][0])
#
# if hasattr(node.kgen_stmt, 'geninfo') and KGGenType.has_state_out(node.kgen_stmt.geninfo):
# if not self.kernel_externs_subrs[node][1] in self.kernel_callsite_use_stmts and node.name!=getinfo('topblock_stmt').name:
# attrs = {'name':node.name, 'isonly': True, 'items':[self.kernel_externs_subrs[node][1].name]}
# namedpart_append_genknode(node.kgen_kernel_id, KERNEL_PBLOCK_USE_PART, statements.Use, attrs=attrs)
# self.kernel_callsite_use_stmts.append(self.kernel_externs_subrs[node][1])
#
# if not self.kernel_externs_subrs[node][1] in self.kernel_callsite_call_stmts:
# attrs = {'designator': self.kernel_externs_subrs[node][1].name, 'items': ['kgen_unit']}
# namedpart_append_genknode(node.kgen_kernel_id, KERNEL_PBLOCK_READ_OUT_EXTERNS, statements.Call, attrs=attrs)
# self.kernel_callsite_call_stmts.append(self.kernel_externs_subrs[node][1])
def create_state_stmts_in_callsite(self, node):
kgenunit = 'kgen_unit'
if not self.state_externs_subrs[node][0] in self.state_callsite_use_stmts and node.name!=getinfo('topblock_stmt').name:
attrs = {'name':node.name, 'isonly': True, 'items':[self.state_externs_subrs[node][0].name]}
namedpart_append_gensnode(node.kgen_kernel_id, STATE_PBLOCK_USE_PART, statements.Use, attrs=attrs)
self.state_callsite_use_stmts.append(self.state_externs_subrs[node][0])
if not self.state_externs_subrs[node][0] in self.state_callsite_call_stmts:
attrs = {'designator': self.state_externs_subrs[node][0].name, 'items': [kgenunit]}
namedpart_append_gensnode(node.kgen_kernel_id, STATE_PBLOCK_WRITE_IN_EXTERNS, statements.Call, attrs=attrs)
self.state_callsite_call_stmts.append(self.state_externs_subrs[node][0])
if hasattr(node.kgen_stmt, 'geninfo') and KGGenType.has_state_out(node.kgen_stmt.geninfo):
if not self.state_externs_subrs[node][1] in self.state_callsite_use_stmts and node.name!=getinfo('topblock_stmt').name:
attrs = {'name':node.name, 'isonly': True, 'items':[self.state_externs_subrs[node][1].name]}
namedpart_append_gensnode(node.kgen_kernel_id, STATE_PBLOCK_USE_PART, statements.Use, attrs=attrs)
self.state_callsite_use_stmts.append(self.state_externs_subrs[node][1])
if not self.state_externs_subrs[node][1] in self.state_callsite_call_stmts:
attrs = {'designator': self.state_externs_subrs[node][1].name, 'items': [kgenunit]}
namedpart_append_gensnode(node.kgen_kernel_id, STATE_PBLOCK_WRITE_OUT_EXTERNS, statements.Call, attrs=attrs)
self.state_callsite_call_stmts.append(self.state_externs_subrs[node][1])
def create_subr_read_typedecl_in_module(self, node):
parent = node.kgen_parent
stmt = node.kgen_stmt
raw_entity_names = set([ uname.firstpartname() for uname, req in KGGenType.get_state(stmt.geninfo)])
entity_names = [ e for e in raw_entity_names if not stmt.get_variable(e).is_parameter() ]
raw_out_entity_names = set([ uname.firstpartname() for uname, req in KGGenType.get_state_out(stmt.geninfo)])
out_entity_names = [ e for e in raw_out_entity_names if not stmt.get_variable(e).is_parameter() ]
def get_attrs(attrspec, allowed_attrs):
# keep only the attributes whose prefix matches one of allowed_attrs
filtered_attrs = []
for attr in attrspec:
if any( attr.startswith(allowed_attr) for allowed_attr in allowed_attrs):
filtered_attrs.append(attr)
return filtered_attrs
def get_decls(names, decls, prefix=''):
# collect the entity declarations whose entity name is in names,
# optionally prefixing each declaration (e.g. with 'kgenref_')
import re
entity_decls = []
for decl in decls:
ename = re.split(r'\(|\*|=', decl)[0].strip()
if ename in names:
entity_decls.append(prefix+decl)
return entity_decls
if len(entity_names)==0:
node.kgen_forced_line = False
elif len(entity_names)!=len(stmt.entity_decls):
attrspec = get_attrs(stmt.attrspec, ['pointer', 'allocatable', 'dimension', 'public', 'target'])
entity_decls = get_decls(entity_names, stmt.entity_decls)
attrs = {'type_spec': stmt.__class__.__name__.upper(), 'attrspec': attrspec, \
'selector':stmt.selector, 'entity_decls': entity_decls}
if stmt.is_derived():
node.type_spec = 'TYPE'
else:
node.type_spec = stmt.__class__.__name__.upper()
node.attrspec = attrspec
node.selector = stmt.selector
node.entity_decls = entity_decls
node.kgen_use_tokgen = True
#part_append_genknode(node.kgen_parent, DECL_PART, stmt.__class__, attrs=attrs)
#part_append_genknode(node.kgen_parent, DECL_PART, stmt.__class__, attrs=attrs)
if len(out_entity_names)>0:
attrspec = get_attrs(stmt.attrspec, ['pointer', 'allocatable', 'dimension'])
entity_decls = get_decls(out_entity_names, stmt.entity_decls, prefix='kgenref_')
#attrs = {'type_spec': stmt.__class__.__name__.upper(), 'attrspec': attrspec, \
# 'selector':stmt.selector, 'entity_decls': entity_decls}
#part_append_genknode(node.kgen_parent, DECL_PART, stmt.__class__, attrs=attrs)
is_class_derived = check_class_derived(stmt)
for entity_name, entity_decl in zip(entity_names, stmt.entity_decls):
if node.kgen_parent.name+entity_name in self.kernel_extern_reads: continue
if is_remove_state(entity_name, stmt): continue
self.kernel_extern_reads.append(node.kgen_parent.name+entity_name)
var = stmt.get_variable(entity_name)
subrname = get_typedecl_readname(stmt, entity_name)
if var.is_array():
if is_zero_array(var, stmt): continue
if stmt.is_derived() or is_class_derived:
part_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, varstr(entity_name, "derived array", at=node.kgen_parent.name))
else: # intrinsic type
if var.is_explicit_shape_array():
part_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, varstr(entity_name, "explicit array", at=node.kgen_parent.name))
else: # implicit array
part_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, varstr(entity_name, "intrinsic array", at=node.kgen_parent.name))
else: # scalar
if stmt.is_derived() or is_class_derived:
if var.is_allocatable() or var.is_pointer():
part_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, varstr(entity_name, "derived allocatable(or pointer)", at=node.kgen_parent.name))
else:
subrname = None
for uname, req in stmt.unknowns.items():
if ( is_class_derived and uname.firstpartname()==stmt.selector[1]) or uname.firstpartname()==stmt.name:
#if uname.firstpartname()==stmt.name:
if len(req.res_stmts)>0:
res = req.res_stmts[0]
subrname = get_dtype_readname(res)
break
if subrname is None:
print('WARNING: Cannot find a type resolver for %s'%stmt.name)
namedpart_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, \
'ERROR: "%s" is not resolved. Call statements to read "%s" are not created here.'%\
(stmt.name, stmt.name))
else:
part_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, varstr(entity_name, "derived", at=node.kgen_parent.name))
else: # intrinsic type
part_append_comment(self.kernel_externs_subrs[node.kgen_parent][0], EXEC_PART, varstr(entity_name, "intrinsic", at=node.kgen_parent.name))
def create_subr_write_typedecl_in_module(self, node):
parent = node.kgen_parent
stmt = node.kgen_stmt
raw_entity_names = set([ uname.firstpartname() for uname, req in KGGenType.get_state(stmt.geninfo)])
entity_names = [ e for e in raw_entity_names if not stmt.get_variable(e).is_parameter() ]
raw_out_entity_names = set([ uname.firstpartname() for uname, req in KGGenType.get_state_out(stmt.geninfo)])
out_entity_names = [ e for e in raw_out_entity_names if not stmt.get_variable(e).is_parameter() ]
#entity_names = set([ uname.firstpartname() for uname, req in KGGenType.get_state(stmt.geninfo)])
#out_entity_names = set([ uname.firstpartname() for uname, req in KGGenType.get_state_out(stmt.geninfo)])
is_class_derived = check_class_derived(stmt)
for entity_name, entity_decl in zip(entity_names, stmt.entity_decls):
if node.kgen_parent.name+entity_name in self.state_extern_writes: continue
if is_remove_state(entity_name, stmt): continue
self.state_extern_writes.append(node.kgen_parent.name+entity_name)
var = stmt.get_variable(entity_name)
subrname = get_typedecl_writename(stmt, entity_name)
if var.is_array():
if is_zero_array(var, stmt): continue
if stmt.is_derived() or is_class_derived:
if entity_name in out_entity_names:
part_append_comment(self.state_externs_subrs[node.kgen_parent][1], EXEC_PART, varstr(entity_name, "derived array", at=node.kgen_parent.name))
else: # intrinsic type
if var.is_explicit_shape_array():
if entity_name in out_entity_names:
part_append_comment(self.state_externs_subrs[node.kgen_parent][1], EXEC_PART, varstr(entity_name, "explicit array", at=node.kgen_parent.name))
else: # implicit array
if entity_name in out_entity_names:
part_append_comment(self.state_externs_subrs[node.kgen_parent][1], EXEC_PART, varstr(entity_name, "implicit array", at=node.kgen_parent.name))
else: # scalar
if stmt.is_derived() or is_class_derived:
if var.is_allocatable() or var.is_pointer():
if entity_name in out_entity_names:
part_append_comment(self.state_externs_subrs[node.kgen_parent][1], EXEC_PART, varstr(entity_name, "derived allocatable (or pointer)", at=node.kgen_parent.name))
else:
subrname = None
for uname, req in stmt.unknowns.items():
if ( is_class_derived and uname.firstpartname()==stmt.selector[1]) or uname.firstpartname()==stmt.name:
#if uname.firstpartname()==stmt.name:
if len(req.res_stmts)>0:
res = req.res_stmts[0]
subrname = get_dtype_writename(res)
break
if subrname is None:
print('WARNING: Cannot find a type resolver for %s'%stmt.name)
namedpart_append_comment(self.state_externs_subrs[node.kgen_parent][0], EXEC_PART, \
'ERROR: "%s" is not resolved. Call statements to write "%s" are not created here.'%\
(stmt.name, stmt.name))
else:
if entity_name in out_entity_names:
part_append_comment(self.state_externs_subrs[node.kgen_parent][1], EXEC_PART, varstr(entity_name, "derived", at=node.kgen_parent.name))
else: # intrinsic type
if entity_name in out_entity_names:
part_append_comment(self.state_externs_subrs[node.kgen_parent][1], EXEC_PART, varstr(entity_name, "intrinsic", at=node.kgen_parent.name))
| 58.509202 | 188 | 0.65723 | 3,604 | 28,611 | 4.876526 | 0.062431 | 0.043698 | 0.04734 | 0.041309 | 0.847397 | 0.811209 | 0.799374 | 0.791807 | 0.779175 | 0.768706 | 0 | 0.003132 | 0.241201 | 28,611 | 488 | 189 | 58.629098 | 0.806403 | 0.162315 | 0 | 0.421384 | 0 | 0.009434 | 0.07206 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053459 | false | 0 | 0.018868 | 0 | 0.103774 | 0.006289 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0287339c13bf38060ea681f02e33f37caa7e08f9 | 146 | py | Python | FaceExtractor/ExtractFacesFromImages.py | UtkarshPandey55557/Face_detcetion | d95d8cc6ea00a66f39cd94c91bf1a6ac3e97c31d | [
"Apache-2.0"
] | null | null | null | FaceExtractor/ExtractFacesFromImages.py | UtkarshPandey55557/Face_detcetion | d95d8cc6ea00a66f39cd94c91bf1a6ac3e97c31d | [
"Apache-2.0"
] | null | null | null | FaceExtractor/ExtractFacesFromImages.py | UtkarshPandey55557/Face_detcetion | d95d8cc6ea00a66f39cd94c91bf1a6ac3e97c31d | [
"Apache-2.0"
] | null | null | null | from FaceExtractor.image_face_detector_by_folder import face_extractor
face_extractor_obj = face_extractor()
face_extractor_obj.find_faces()
| 29.2 | 71 | 0.863014 | 20 | 146 | 5.75 | 0.6 | 0.452174 | 0.295652 | 0.452174 | 0.504348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089041 | 146 | 4 | 72 | 36.5 | 0.864662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0288f5e202c4f8790fa0281476c0fcae8cbdaccf | 22,960 | py | Python | networkx/algorithms/connectivity/cuts.py | argriffing/networkx | 5a3d000e605be2ca567f69a4694afcba3b8acb54 | [
"BSD-3-Clause"
] | null | null | null | networkx/algorithms/connectivity/cuts.py | argriffing/networkx | 5a3d000e605be2ca567f69a4694afcba3b8acb54 | [
"BSD-3-Clause"
] | null | null | null | networkx/algorithms/connectivity/cuts.py | argriffing/networkx | 5a3d000e605be2ca567f69a4694afcba3b8acb54 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Flow based cut algorithms
"""
import itertools
import networkx as nx
# Define the default maximum flow function to use in all flow based
# cut algorithms.
from networkx.algorithms.flow import edmonds_karp, shortest_augmenting_path
from networkx.algorithms.flow import build_residual_network
default_flow_func = edmonds_karp
from .utils import (build_auxiliary_node_connectivity,
build_auxiliary_edge_connectivity)
__author__ = '\n'.join(['Jordi Torrents <jtorrents@milnou.net>'])
__all__ = ['minimum_st_node_cut',
'minimum_node_cut',
'minimum_st_edge_cut',
'minimum_edge_cut']
def minimum_st_edge_cut(G, s, t, flow_func=None, auxiliary=None,
residual=None):
"""Returns the edges of the cut-set of a minimum (s, t)-cut.
This function returns the set of edges of minimum cardinality that,
if removed, would destroy all paths between source and target in G.
Edge weights are not considered.
Parameters
----------
G : NetworkX graph
Edges of the graph are expected to have an attribute called
'capacity'. If this attribute is not present, the edge is
considered to have infinite capacity.
s : node
Source node for the flow.
t : node
Sink node for the flow.
auxiliary : NetworkX DiGraph
Auxiliary digraph to compute flow based node connectivity. It has
to have a graph attribute called mapping with a dictionary mapping
node names in G and in the auxiliary digraph. If provided
it will be reused instead of recreated. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node. And return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See :meth:`node_connectivity` for
details. The choice of the default function may change from version
to version and should not be relied on. Default value: None.
residual : NetworkX DiGraph
Residual network to compute maximum flow. If provided it will be
reused instead of recreated. Default value: None.
Returns
-------
cutset : set
Set of edges that, if removed from the graph, will disconnect it.
See also
--------
:meth:`minimum_cut`
:meth:`minimum_node_cut`
:meth:`minimum_edge_cut`
:meth:`stoer_wagner`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
Examples
--------
This function is not imported in the base NetworkX namespace, so you
have to explicitly import it from the connectivity package:
>>> from networkx.algorithms.connectivity import minimum_st_edge_cut
We use in this example the platonic icosahedral graph, which has edge
connectivity 5.
>>> G = nx.icosahedral_graph()
>>> len(minimum_st_edge_cut(G, 0, 6))
5
If you need to compute local edge cuts on several pairs of
nodes in the same graph, it is recommended that you reuse the
data structures that NetworkX uses in the computation: the
auxiliary digraph for edge connectivity, and the residual
network for the underlying maximum flow computation.
Example of how to compute local edge cuts among all pairs of
nodes of the platonic icosahedral graph reusing the data
structures.
>>> import itertools
>>> # You also have to explicitly import the function for
>>> # building the auxiliary digraph from the connectivity package
>>> from networkx.algorithms.connectivity import (
... build_auxiliary_edge_connectivity)
>>> H = build_auxiliary_edge_connectivity(G)
>>> # And the function for building the residual network from the
>>> # flow package
>>> from networkx.algorithms.flow import build_residual_network
>>> # Note that the auxiliary digraph has an edge attribute named capacity
>>> R = build_residual_network(H, 'capacity')
>>> result = {n: dict() for n in G}
>>> # Reuse the auxiliary digraph and the residual network by passing them
>>> # as parameters
>>> for u, v in itertools.combinations(G, 2):
... k = len(minimum_st_edge_cut(G, u, v, auxiliary=H, residual=R))
... result[u][v] = k
>>> all(result[u][v] == 5 for u, v in itertools.combinations(G, 2))
True
You can also use alternative flow algorithms for computing edge
cuts. For instance, in dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better than
the default :meth:`edmonds_karp` which is faster for sparse
networks with highly skewed degree distributions. Alternative flow
functions have to be explicitly imported from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> len(minimum_st_edge_cut(G, 0, 6, flow_func=shortest_augmenting_path))
5
"""
if flow_func is None:
flow_func = default_flow_func
if auxiliary is None:
H = build_auxiliary_edge_connectivity(G)
else:
H = auxiliary
kwargs = dict(capacity='capacity', flow_func=flow_func, residual=residual)
cut_value, partition = nx.minimum_cut(H, s, t, **kwargs)
reachable, non_reachable = partition
# Any edge in the original graph linking the two sets in the
# partition is part of the edge cutset
cutset = set()
for u, nbrs in ((n, G[n]) for n in reachable):
cutset.update((u, v) for v in nbrs if v in non_reachable)
return cutset
def minimum_st_node_cut(G, s, t, flow_func=None, auxiliary=None, residual=None):
r"""Returns a set of nodes of minimum cardinality that disconnect source
from target in G.
This function returns the set of nodes of minimum cardinality that,
if removed, would destroy all paths between source and target in G.
Parameters
----------
G : NetworkX graph
s : node
Source node.
t : node
Target node.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and it must return a residual
network that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The choice
of the default function may change from version to version and
should not be relied on. Default value: None.
auxiliary : NetworkX DiGraph
Auxiliary digraph to compute flow based node connectivity. It has
to have a graph attribute called mapping with a dictionary mapping
node names in G to node names in the auxiliary digraph. If provided
it will be reused instead of recreated. Default value: None.
residual : NetworkX DiGraph
Residual network to compute maximum flow. If provided it will be
reused instead of recreated. Default value: None.
Returns
-------
cutset : set
Set of nodes that, if removed, would destroy all paths between
source and target in G.
Examples
--------
This function is not imported in the base NetworkX namespace, so you
have to explicitly import it from the connectivity package:
>>> from networkx.algorithms.connectivity import minimum_st_node_cut
In this example we use the Platonic icosahedral graph, which has node
connectivity 5.
>>> G = nx.icosahedral_graph()
>>> len(minimum_st_node_cut(G, 0, 6))
5
If you need to compute local st cuts between several pairs of
nodes in the same graph, it is recommended that you reuse the
data structures that NetworkX uses in the computation: the
auxiliary digraph for node connectivity and node cuts, and the
residual network for the underlying maximum flow computation.
Example of how to compute local st node cuts reusing the data
structures:
>>> # You also have to explicitly import the function for
>>> # building the auxiliary digraph from the connectivity package
>>> from networkx.algorithms.connectivity import (
... build_auxiliary_node_connectivity)
>>> H = build_auxiliary_node_connectivity(G)
>>> # And the function for building the residual network from the
>>> # flow package
>>> from networkx.algorithms.flow import build_residual_network
>>> # Note that the auxiliary digraph has an edge attribute named capacity
>>> R = build_residual_network(H, 'capacity')
>>> # Reuse the auxiliary digraph and the residual network by passing them
>>> # as parameters
>>> len(minimum_st_node_cut(G, 0, 6, auxiliary=H, residual=R))
5
You can also use alternative flow algorithms for computing minimum st
node cuts. For instance, in dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better than
the default :meth:`edmonds_karp` which is faster for sparse
networks with highly skewed degree distributions. Alternative flow
functions have to be explicitly imported from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> len(minimum_st_node_cut(G, 0, 6, flow_func=shortest_augmenting_path))
5
Notes
-----
This is a flow based implementation of minimum node cut. The algorithm
is based on solving a number of maximum flow computations to determine
the capacity of the minimum cut on an auxiliary directed network that
corresponds to the minimum node cut of G. It handles both directed
and undirected graphs. This implementation is based on algorithm 11
in [1]_.
See also
--------
:meth:`minimum_node_cut`
:meth:`minimum_edge_cut`
:meth:`stoer_wagner`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
"""
if auxiliary is None:
H = build_auxiliary_node_connectivity(G)
else:
H = auxiliary
mapping = H.graph.get('mapping', None)
if mapping is None:
raise nx.NetworkXError('Invalid auxiliary digraph.')
if G.has_edge(s, t) or G.has_edge(t, s):
# Adjacent nodes cannot be disconnected by removing other nodes;
# return an empty set to match the documented return type.
return set()
kwargs = dict(flow_func=flow_func, residual=residual, auxiliary=H)
# The edge cut in the auxiliary digraph corresponds to the node cut in the
# original graph.
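# In the auxiliary digraph H every node v of G is split into two nodes,
# vA and vB, joined by a unit-capacity arc (vA, vB), while original edges
# become arcs between B and A copies with effectively unlimited capacity.
# A maximum flow from sB to tA therefore saturates exactly one unit arc
# per node of a minimum vertex cut, which is why the edge cut computed
# below can be mapped back to nodes of G.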
edge_cut = minimum_st_edge_cut(H, '%sB' % mapping[s], '%sA' % mapping[t],
**kwargs)
# Each node in the original graph maps to two nodes of the auxiliary graph
node_cut = set(H.node[node]['id'] for edge in edge_cut for node in edge)
return node_cut - set([s, t])
def minimum_node_cut(G, s=None, t=None, flow_func=None):
r"""Returns a set of nodes of minimum cardinality that disconnects G.
If source and target nodes are provided, this function returns the
set of nodes of minimum cardinality that, if removed, would destroy
all paths between source and target in G. If not, it returns a set
of nodes of minimum cardinality that disconnects G.
Parameters
----------
G : NetworkX graph
s : node
Source node. Optional. Default value: None.
t : node
Target node. Optional. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and it must return a residual
network that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
Returns
-------
cutset : set
Set of nodes that, if removed, would disconnect G. If source
and target nodes are provided, the set contains the nodes that,
if removed, would destroy all paths between source and target.
Examples
--------
>>> # Platonic icosahedral graph has node connectivity 5
>>> G = nx.icosahedral_graph()
>>> node_cut = nx.minimum_node_cut(G)
>>> len(node_cut)
5
You can use alternative flow algorithms for the underlying maximum
flow computation. In dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better
than the default :meth:`edmonds_karp`, which is faster for
sparse networks with highly skewed degree distributions. Alternative
flow functions have to be explicitly imported from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> node_cut == nx.minimum_node_cut(G, flow_func=shortest_augmenting_path)
True
If you specify a pair of nodes (source and target) as parameters,
this function returns a local st node cut.
>>> len(nx.minimum_node_cut(G, 3, 7))
5
If you need to perform several local st cuts among different
pairs of nodes on the same graph, it is recommended that you reuse
the data structures used in the maximum flow computations. See
:meth:`minimum_st_node_cut` for details.
Notes
-----
This is a flow based implementation of minimum node cut. The algorithm
is based on solving a number of maximum flow computations to determine
the capacity of the minimum cut on an auxiliary directed network that
corresponds to the minimum node cut of G. It handles both directed
and undirected graphs. This implementation is based on algorithm 11
in [1]_.
See also
--------
:meth:`minimum_st_node_cut`
:meth:`minimum_cut`
:meth:`minimum_edge_cut`
:meth:`stoer_wagner`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
"""
if (s is not None and t is None) or (s is None and t is not None):
raise nx.NetworkXError('Both source and target must be specified.')
# Local minimum node cut.
if s is not None and t is not None:
if s not in G:
raise nx.NetworkXError('node %s not in graph' % s)
if t not in G:
raise nx.NetworkXError('node %s not in graph' % t)
return minimum_st_node_cut(G, s, t, flow_func=flow_func)
# Global minimum node cut.
# Analogous to algorithm 11 for global node connectivity in [1].
if G.is_directed():
if not nx.is_weakly_connected(G):
raise nx.NetworkXError('Input graph is not connected')
iter_func = itertools.permutations
def neighbors(v):
return itertools.chain.from_iterable([G.predecessors(v),
G.successors(v)])
else:
if not nx.is_connected(G):
raise nx.NetworkXError('Input graph is not connected')
iter_func = itertools.combinations
neighbors = G.neighbors
# Reuse the auxiliary digraph and the residual network.
H = build_auxiliary_node_connectivity(G)
R = build_residual_network(H, 'capacity')
kwargs = dict(flow_func=flow_func, auxiliary=H, residual=R)
# Choose a node with minimum degree.
v = min(G, key=G.degree)
# Initial node cutset is all neighbors of the node with minimum degree.
min_cut = set(G[v])
# Compute st node cuts between v and all its non-neighbor nodes in G.
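# Correctness sketch: let C be a minimum node cut. If v is not in C,
# then C separates v from some non-neighbor w, so the loop below finds
# it. If v is in C, then v has neighbors in two different components of
# G - C (every vertex of a minimal cut does), and those two neighbors
# are non-adjacent, so the second loop covers that case.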
for w in set(G) - set(neighbors(v)) - set([v]):
this_cut = minimum_st_node_cut(G, v, w, **kwargs)
if len(min_cut) >= len(this_cut):
min_cut = this_cut
# Also compute cuts for non-adjacent pairs of neighbors of v.
for x, y in iter_func(neighbors(v), 2):
if y in G[x]:
continue
this_cut = minimum_st_node_cut(G, x, y, **kwargs)
if len(min_cut) >= len(this_cut):
min_cut = this_cut
return min_cut
def minimum_edge_cut(G, s=None, t=None, flow_func=None):
r"""Returns a set of edges of minimum cardinality that disconnects G.
If source and target nodes are provided, this function returns the
set of edges of minimum cardinality that, if removed, would break
all paths between source and target in G. If not, it returns a set of
edges of minimum cardinality that disconnects G.
Parameters
----------
G : NetworkX graph
s : node
Source node. Optional. Default value: None.
t : node
Target node. Optional. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and it must return a residual
network that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
Returns
-------
cutset : set
Set of edges that, if removed, would disconnect G. If source
and target nodes are provided, the set contains the edges that,
if removed, would destroy all paths between source and target.
Examples
--------
>>> # Platonic icosahedral graph has edge connectivity 5
>>> G = nx.icosahedral_graph()
>>> len(nx.minimum_edge_cut(G))
5
You can use alternative flow algorithms for the underlying
maximum flow computation. In dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better
than the default :meth:`edmonds_karp`, which is faster for
sparse networks with highly skewed degree distributions.
Alternative flow functions have to be explicitly imported
from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> len(nx.minimum_edge_cut(G, flow_func=shortest_augmenting_path))
5
If you specify a pair of nodes (source and target) as parameters,
this function returns the value of local edge connectivity.
>>> nx.edge_connectivity(G, 3, 7)
5
If you need to perform several local computations among different
pairs of nodes on the same graph, it is recommended that you reuse
the data structures used in the maximum flow computations. See
:meth:`local_edge_connectivity` for details.
Notes
-----
This is a flow based implementation of minimum edge cut. For
undirected graphs the algorithm works by finding a 'small' dominating
set of nodes of G (see algorithm 7 in [1]_) and computing the maximum
flow between an arbitrary node in the dominating set and the rest of
the nodes in it. This is an implementation of algorithm 6 in [1]_. For
directed graphs, the algorithm makes n calls to the max flow function.
It is an implementation of algorithm 8 in [1]_.
See also
--------
:meth:`minimum_st_edge_cut`
:meth:`minimum_node_cut`
:meth:`stoer_wagner`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
"""
if (s is not None and t is None) or (s is None and t is not None):
raise nx.NetworkXError('Both source and target must be specified.')
# reuse auxiliary digraph and residual network
H = build_auxiliary_edge_connectivity(G)
R = build_residual_network(H, 'capacity')
kwargs = dict(flow_func=flow_func, residual=R, auxiliary=H)
# Local minimum edge cut if s and t are not None
if s is not None and t is not None:
if s not in G:
raise nx.NetworkXError('node %s not in graph' % s)
if t not in G:
raise nx.NetworkXError('node %s not in graph' % t)
return minimum_st_edge_cut(H, s, t, **kwargs)
# Global minimum edge cut
# Analogous to the algorithm for global edge connectivity
if G.is_directed():
# Based on algorithm 8 in [1]
if not nx.is_weakly_connected(G):
raise nx.NetworkXError('Input graph is not connected')
# Initial cutset is all edges of a node with minimum degree
node = min(G, key=G.degree)
min_cut = set(G.edges(node))
nodes = list(G)
n = len(nodes)
for i in range(n):
# Wrap around from the last node back to the first instead of
# relying on an IndexError.
this_cut = minimum_st_edge_cut(H, nodes[i], nodes[(i + 1) % n], **kwargs)
if len(this_cut) <= len(min_cut):
min_cut = this_cut
return min_cut
else: # undirected
# Based on algorithm 6 in [1]
if not nx.is_connected(G):
raise nx.NetworkXError('Input graph is not connected')
# Initial cutset is all edges of a node with minimum degree
node = min(G, key=G.degree)
min_cut = set(G.edges(node))
# A dominating set is \lambda-covering
# We need a dominating set with at least two nodes
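# Why a dominating set suffices (algorithms 6 and 7 in [1]): if the
# minimum edge cut is smaller than the minimum degree, both sides of the
# cut contain more than min-degree-many vertices, which forces any
# dominating set to have a vertex on each side; the smallest of the
# local cuts between v and the other members of D is then the global
# minimum cut. Otherwise the initial cutset (all edges of a node with
# minimum degree) is already optimal.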
for node in G:
D = nx.dominating_set(G, start_with=node)
v = D.pop()
if D:
break
else:
# In complete graphs the dominating set is always a single node, so
# return min_cut, which still holds the edges of a node with minimum
# degree.
return min_cut
for w in D:
this_cut = minimum_st_edge_cut(H, v, w, **kwargs)
if len(this_cut) <= len(min_cut):
min_cut = this_cut
return min_cut
| 38.013245 | 81 | 0.673868 | 3,267 | 22,960 | 4.632997 | 0.093664 | 0.014799 | 0.024709 | 0.012685 | 0.807149 | 0.775899 | 0.75958 | 0.736786 | 0.720468 | 0.700581 | 0 | 0.003843 | 0.252091 | 22,960 | 603 | 82 | 38.076285 | 0.877591 | 0.696995 | 0 | 0.445313 | 0 | 0 | 0.080939 | 0.003975 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039063 | false | 0 | 0.039063 | 0.007813 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a35191c6240ea998ffa4b8d63337f63b8b007b1 | 2,383 | py | Python | rlkit/torch/sets/mmd.py | Asap7772/railrl_evalsawyer | baba8ce634d32a48c7dfe4dc03b123e18e96e0a3 | [
"MIT"
] | null | null | null | rlkit/torch/sets/mmd.py | Asap7772/railrl_evalsawyer | baba8ce634d32a48c7dfe4dc03b123e18e96e0a3 | [
"MIT"
] | null | null | null | rlkit/torch/sets/mmd.py | Asap7772/railrl_evalsawyer | baba8ce634d32a48c7dfe4dc03b123e18e96e0a3 | [
"MIT"
] | null | null | null | """
"""
import torch
import rlkit.torch.pytorch_util as ptu
def mmd_distance(
X: torch.Tensor,
Y: torch.Tensor,
kernel='imq',
p_z_stddev=None,
):
if kernel == 'imq':
return imq_kernel(X, Y, p_z_stddev)
elif kernel == 'rbf':
return rbf_kernel(X, Y, p_z_stddev)
else:
raise NotImplementedError(kernel)
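# A minimal usage sketch (hypothetical tensors; a CUDA device is assumed
# because both kernels move an identity mask to the GPU via .cuda()):
#
# z_posterior = torch.randn(128, 16).cuda()
# z_prior = torch.randn(128, 16).cuda()
# loss = mmd_distance(z_posterior, z_prior, kernel='imq', p_z_stddev=1.0)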
def imq_kernel(
X: torch.Tensor,
Y: torch.Tensor,
p_z_stddev,
):
X = X / p_z_stddev
Y = Y / p_z_stddev
batch_size = X.size(0)
h_dim = X.size(1)
norms_x = X.pow(2).sum(1, keepdim=True) # batch_size x 1
prods_x = torch.mm(X, X.t()) # batch_size x batch_size
dists_x = norms_x + norms_x.t() - 2 * prods_x
norms_y = Y.pow(2).sum(1, keepdim=True) # batch_size x 1
prods_y = torch.mm(Y, Y.t()) # batch_size x batch_size
dists_y = norms_y + norms_y.t() - 2 * prods_y
dot_prd = torch.mm(X, Y.t())
dists_c = norms_x + norms_y.t() - 2 * dot_prd
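# The three distance matrices above use the batched expansion
# ||a - b||^2 = ||a||^2 + ||b||^2 - 2<a, b>, one matrix product per pair
# of sample sets. The loop below accumulates an MMD^2-style statistic
# over a mixture of inverse multiquadric kernels k(a, b) = C / (C + d^2):
# res1 sums the within-sample kernel values with the diagonal masked out,
# and res2 sums the cross-sample terms.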
stats = 0
for scale in [.1, .2, .5, 1., 2., 5., 10.]:
C = 2 * h_dim * 1.0 * scale
res1 = C / (C + dists_x) + C / (C + dists_y)
res1 = (1 - ptu.eye(batch_size).cuda()) * res1
res1 = res1.sum() / (batch_size - 1)
res2 = C / (C + dists_c)
res2 = res2.sum() * 2. / batch_size
stats = stats + res1 - res2
return stats
def rbf_kernel(
X: torch.Tensor,
Y: torch.Tensor,
p_z_stddev,
):
X = X / p_z_stddev
Y = Y / p_z_stddev
batch_size = X.size(0)
h_dim = X.size(1)
norms_x = X.pow(2).sum(1, keepdim=True) # batch_size x 1
prods_x = torch.mm(X, X.t()) # batch_size x batch_size
dists_x = norms_x + norms_x.t() - 2 * prods_x
norms_y = Y.pow(2).sum(1, keepdim=True) # batch_size x 1
prods_y = torch.mm(Y, Y.t()) # batch_size x batch_size
dists_y = norms_y + norms_y.t() - 2 * prods_y
dot_prd = torch.mm(X, Y.t())
dists_c = norms_x + norms_y.t() - 2 * dot_prd
stats = 0
for scale in [.01, .1, 1., 10., 100]:
C = 2 * h_dim * 1.0 / scale
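# Same statistic as in imq_kernel but with Gaussian kernels
# k(a, b) = exp(-C * d^2); note the scale divides C here, whereas it
# multiplies C in the IMQ variant.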
res1 = torch.exp(-C * dists_x) + torch.exp(-C * dists_y)
res1 = (1 - ptu.eye(batch_size).cuda()) * res1
res1 = res1.sum() / (batch_size - 1)
res2 = torch.exp(-C * dists_c)
res2 = res2.sum() * 2. / batch_size
stats = stats + res1 - res2
return stats | 26.477778 | 64 | 0.549727 | 407 | 2,383 | 3.002457 | 0.142506 | 0.1473 | 0.081833 | 0.02946 | 0.816694 | 0.816694 | 0.770867 | 0.770867 | 0.743044 | 0.743044 | 0 | 0.043923 | 0.30256 | 2,383 | 90 | 65 | 26.477778 | 0.691336 | 0.065044 | 0 | 0.691176 | 0 | 0 | 0.004069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044118 | false | 0 | 0.029412 | 0 | 0.132353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ce6f3118ccf0a5c371b7522e31e43ece16483b32 | 149 | py | Python | Python/6.Modules_and_Packages/myprogram.py | razorstun/legendary-parakeet | e573a566446d2f1190d15ebb7becf77d28673e1b | [
"Apache-2.0"
] | 1 | 2021-05-05T07:31:46.000Z | 2021-05-05T07:31:46.000Z | Python/6.Modules_and_Packages/myprogram.py | razorstun/legendary-parakeet | e573a566446d2f1190d15ebb7becf77d28673e1b | [
"Apache-2.0"
] | null | null | null | Python/6.Modules_and_Packages/myprogram.py | razorstun/legendary-parakeet | e573a566446d2f1190d15ebb7becf77d28673e1b | [
"Apache-2.0"
] | null | null | null | from Mymainpackage import some_main_script
from Mymainpackage.Subpackage import mysubscript
some_main_script.report_main()
mysubscript.sub_report() | 24.833333 | 48 | 0.879195 | 19 | 149 | 6.578947 | 0.526316 | 0.272 | 0.224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073826 | 149 | 6 | 49 | 24.833333 | 0.905797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ceaa1315c65dac54661e22949fcb5d420cd84b43 | 58 | py | Python | tests/func/test_version.py | jmabry/pyaf | afbc15a851a2445a7824bf255af612dc429265af | [
"BSD-3-Clause"
] | 377 | 2016-10-13T20:52:44.000Z | 2022-03-29T18:04:14.000Z | tests/func/test_version.py | jmabry/pyaf | afbc15a851a2445a7824bf255af612dc429265af | [
"BSD-3-Clause"
] | 160 | 2016-10-13T16:11:53.000Z | 2022-03-28T04:21:34.000Z | tests/func/test_version.py | jmabry/pyaf | afbc15a851a2445a7824bf255af612dc429265af | [
"BSD-3-Clause"
] | 63 | 2017-03-09T14:51:18.000Z | 2022-03-27T20:52:57.000Z | import pyaf
print("PyAF version : " , pyaf.__version__)
| 11.6 | 43 | 0.706897 | 7 | 58 | 5.285714 | 0.571429 | 0.594595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 58 | 4 | 44 | 14.5 | 0.770833 | 0 | 0 | 0 | 0 | 0 | 0.263158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
0cafd4fc41a04a23194348650370a0155c45c929 | 25 | py | Python | tef/python/tef/training/__init__.py | jony0917/tensorflow-extend-framework | 5db3a0ed373173fd799d292bfcc7e5544882e9d0 | [
"Apache-2.0"
] | 12 | 2020-02-04T04:06:03.000Z | 2021-09-18T12:14:04.000Z | tef/python/tef/training/__init__.py | jony0917/tensorflow-extend-framework | 5db3a0ed373173fd799d292bfcc7e5544882e9d0 | [
"Apache-2.0"
] | null | null | null | tef/python/tef/training/__init__.py | jony0917/tensorflow-extend-framework | 5db3a0ed373173fd799d292bfcc7e5544882e9d0 | [
"Apache-2.0"
] | null | null | null |
from optimizer import *
| 8.333333 | 23 | 0.76 | 3 | 25 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 25 | 2 | 24 | 12.5 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0cc43edd459aada8fc01d0fe37d6f6baffd1b4e8 | 40 | py | Python | conll_iterator/__init__.py | nicolaCirillo/conll_iterator | bcc3905ad71d566a90692e6098c5372558034fa1 | [
"MIT"
] | null | null | null | conll_iterator/__init__.py | nicolaCirillo/conll_iterator | bcc3905ad71d566a90692e6098c5372558034fa1 | [
"MIT"
] | null | null | null | conll_iterator/__init__.py | nicolaCirillo/conll_iterator | bcc3905ad71d566a90692e6098c5372558034fa1 | [
"MIT"
] | null | null | null | from .ConllIterator import ConllIterator | 40 | 40 | 0.9 | 4 | 40 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0cd222d0a16bc5c9a6943a5f50046be042b7fab2 | 131 | py | Python | 06_import_order/tyokalut/tilastotiede/keskiarvot.py | PythonVinkit/youtube-series | 7f5c6939e0e83d05798b7b86ac1683593f41288b | [
"MIT"
] | 2 | 2021-09-16T21:06:24.000Z | 2021-09-30T12:43:27.000Z | 06_import_order/tyokalut/tilastotiede/keskiarvot.py | PythonVinkit/youtube-series | 7f5c6939e0e83d05798b7b86ac1683593f41288b | [
"MIT"
] | null | null | null | 06_import_order/tyokalut/tilastotiede/keskiarvot.py | PythonVinkit/youtube-series | 7f5c6939e0e83d05798b7b86ac1683593f41288b | [
"MIT"
] | null | null | null | from statistics import mean, median
def keskiarvo(arvot):
return mean(arvot)
def mediaani(arvot):
return median(arvot)
| 13.1 | 35 | 0.725191 | 17 | 131 | 5.588235 | 0.588235 | 0.231579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19084 | 131 | 9 | 36 | 14.555556 | 0.896226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0b3012f3e08c50e9925aa376a5108a52dfac8b7d | 80 | py | Python | kvuilder/data/templates/libs/screens/wizard/view.py | aorizondo/kvuilder | a4c4f7d017c364a4f39044c63c3594e6bd18ef01 | [
"MIT"
] | 1 | 2022-01-26T01:56:59.000Z | 2022-01-26T01:56:59.000Z | kvuilder/data/templates/libs/screens/wizard/view.py | aorizondo/kvuilder | a4c4f7d017c364a4f39044c63c3594e6bd18ef01 | [
"MIT"
] | null | null | null | kvuilder/data/templates/libs/screens/wizard/view.py | aorizondo/kvuilder | a4c4f7d017c364a4f39044c63c3594e6bd18ef01 | [
"MIT"
] | 2 | 2021-04-29T21:24:36.000Z | 2022-01-26T01:57:01.000Z | from kivy.uix.screenmanager import Screen
class WizardScreen(Screen):
pass
| 16 | 41 | 0.7875 | 10 | 80 | 6.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 80 | 4 | 42 | 20 | 0.926471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0b42518da35f2b7786737b440bcca8bb6465f701 | 5,182 | py | Python | flattentool/tests/test_unflatten.py | OpenDataServices/flatten-tool | 11bded4bfcd110416c664918a284c5bb9e11cce4 | [
"MIT"
] | 86 | 2015-07-16T10:23:47.000Z | 2022-03-29T08:11:40.000Z | flattentool/tests/test_unflatten.py | OpenDataServices/flatten-tool | 11bded4bfcd110416c664918a284c5bb9e11cce4 | [
"MIT"
] | 275 | 2015-03-31T14:51:31.000Z | 2022-03-07T14:54:05.000Z | flattentool/tests/test_unflatten.py | OpenDataServices/flatten-tool | 11bded4bfcd110416c664918a284c5bb9e11cce4 | [
"MIT"
] | 16 | 2015-11-06T15:41:30.000Z | 2021-07-16T00:18:32.000Z | import json
import os
import pytest
from flattentool import unflatten
def test_360_main_sheetname_insensitive(tmpdir):
input_name = "flattentool/tests/fixtures/xlsx/fundingproviders-grants_2_grants.xlsx"
unflatten(
input_name=input_name,
output_name=tmpdir.join("output_grant.json").strpath,
input_format="xlsx",
schema="flattentool/tests/fixtures/360-giving-schema.json",
main_sheet_name="grants",
root_list_path="grants",
root_id="",
convert_titles=True,
)
output_json_grants = json.load(tmpdir.join("output_grant.json"))
input_name = "flattentool/tests/fixtures/xlsx/fundingproviders-grants_2_grants_sheet_title_case.xlsx"
unflatten(
input_name=input_name,
output_name=tmpdir.join("output_grant_sheet_title_case.json").strpath,
input_format="xlsx",
schema="flattentool/tests/fixtures/360-giving-schema.json",
main_sheet_name="grants",
root_list_path="grants",
root_id="",
convert_titles=True,
)
output_json_Grants = json.load(tmpdir.join("output_grant_sheet_title_case.json"))
assert output_json_grants == output_json_Grants
def test_360_fields_case_insensitive(tmpdir):
input_name = "flattentool/tests/fixtures/xlsx/fundingproviders-grants_2_grants.xlsx"
unflatten(
input_name=input_name,
output_name=tmpdir.join("output_grant.json").strpath,
input_format="xlsx",
schema="flattentool/tests/fixtures/360-giving-schema.json",
main_sheet_name="grants",
root_list_path="grants",
root_id="",
convert_titles=True,
)
output_json_grants = json.load(tmpdir.join("output_grant.json"))
input_name = "flattentool/tests/fixtures/xlsx/fundingproviders-grants_2_grants_title_space_case.xlsx"
unflatten(
input_name=input_name,
output_name=tmpdir.join("output_space_case.json").strpath,
input_format="xlsx",
schema="flattentool/tests/fixtures/360-giving-schema.json",
main_sheet_name="grants",
root_list_path="grants",
root_id="",
convert_titles=True,
)
output_json_space_case = json.load(tmpdir.join("output_space_case.json"))
assert output_json_grants == output_json_space_case
@pytest.mark.parametrize(
"dirname,input_format",
[
("examples/iati", "csv"),
("examples/iati", "ods"),
("examples/iati", "xlsx"),
("examples/iati_multilang", "csv"),
],
)
def test_unflatten_xml(tmpdir, dirname, input_format):
schema_path = "examples/iati"
schemas = ["iati-activities-schema.xsd", "iati-common.xsd"]
schema_filepaths = ["{}/{}".format(schema_path, schema) for schema in schemas]
unflatten(
input_name=dirname
+ (".{}".format(input_format) if input_format != "csv" else ""),
output_name=tmpdir.join("output.xml").strpath,
input_format=input_format,
root_list_path="iati-activity",
id_name="iati-identifier",
xml=True,
xml_schemas=schema_filepaths,
)
assert (
open(os.path.join(dirname, "expected.xml")).read()
== tmpdir.join("output.xml").read()
)
@pytest.mark.parametrize("dirname", ["examples/iati_xml_comment"])
def test_unflatten_xml_comment(tmpdir, dirname):
"""
Edit default xml comment 'XML generated by flatten-tool' by 'XML generated by ODS'
"""
schema_path = "examples/iati"
schemas = ["iati-activities-schema.xsd", "iati-common.xsd"]
schema_filepaths = ["{}/{}".format(schema_path, schema) for schema in schemas]
unflatten(
input_name=dirname,
output_name=tmpdir.join("output.xml").strpath,
input_format="csv",
root_list_path="iati-activity",
id_name="iati-identifier",
xml=True,
xml_schemas=schema_filepaths,
xml_comment="XML generated by ODS",
)
assert (
open(os.path.join(dirname, "expected.xml")).read()
== tmpdir.join("output.xml").read()
)
@pytest.mark.parametrize("input_format", ["xlsx", "ods"])
def test_unflatten_org_xml_xlsx(tmpdir, input_format):
unflatten(
input_name="flattentool/tests/fixtures/{}/iati-org.{}".format(
input_format, input_format
),
output_name=tmpdir.join("output.xml").strpath,
input_format=input_format,
id_name="organisation-identifier",
xml=True,
metatab_name="Meta",
)
assert (
open("flattentool/tests/fixtures/iati-org.xml").read()
== tmpdir.join("output.xml").read()
)
@pytest.mark.parametrize("input_format", ["xlsx", "ods"])
def test_unflatten_empty_column_header(tmpdir, input_format):
unflatten(
input_name="flattentool/tests/fixtures/{}/empty_column_header.{}".format(
input_format, input_format
),
output_name=tmpdir.join("output.json").strpath,
input_format=input_format,
)
assert (
tmpdir.join("output.json").read()
== """{
"main": [
{
"colA": "cell1"
},
{
"colA": "cell3"
}
]
}"""
)
| 31.987654 | 105 | 0.650328 | 602 | 5,182 | 5.325581 | 0.137874 | 0.078915 | 0.07985 | 0.049906 | 0.80942 | 0.771054 | 0.762944 | 0.762944 | 0.731753 | 0.680287 | 0 | 0.005894 | 0.214203 | 5,182 | 161 | 106 | 32.186335 | 0.781434 | 0.015824 | 0 | 0.557143 | 0 | 0 | 0.295633 | 0.171715 | 0 | 0 | 0 | 0 | 0.042857 | 1 | 0.042857 | false | 0 | 0.028571 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b6af93c9c2adbcd4a1d3a59745cb5d4cf3add39 | 5,334 | py | Python | tests/unit/html_sections_test_io/main_test_io.py | MickyHCorbett/MorfLess | 9761197d7767c250cc27262e1ab41adf21c59333 | [
"MIT"
] | null | null | null | tests/unit/html_sections_test_io/main_test_io.py | MickyHCorbett/MorfLess | 9761197d7767c250cc27262e1ab41adf21c59333 | [
"MIT"
] | null | null | null | tests/unit/html_sections_test_io/main_test_io.py | MickyHCorbett/MorfLess | 9761197d7767c250cc27262e1ab41adf21c59333 | [
"MIT"
] | null | null | null | # test values of main.polimorf_add_main
import libraries.constants as ct
import libraries.globals as gb
# main and sidebar data are arrays of strings
test_values = [\
{ 'remark': 'Test Case 1:polimorf_add_main - No sidebar, not template, not search, meta present, fileroot = myfile',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': [ct.PCOM_NO_ENTRY],
'meta_present': True,
'wrap': False,
'fileroot': 'myfile',
'is_template': False,
'is_search': False },
'assertIn': ['<div class="main-out">'],
'assertNotIn': ['main-outer', 'main-inner', 'with-sidebar'] },
{ 'remark': 'Test Case 2:polimorf_add_main - No sidebar, not template, not search, NO meta present, fileroot = myfile',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': [ct.PCOM_NO_ENTRY],
'meta_present': False,
'wrap': False,
'fileroot': 'myfile',
'is_template': False,
'is_search': False },
'assertIn': [ct.PCOM_NO_ENTRY],
'assertNotIn': ['main-outer', 'main-inner', 'with-sidebar', '<div class="main-out">'] },
{ 'remark': 'Test Case 3:polimorf_add_main - Sidebar data, not template, not search, meta present, fileroot = myfile',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': ['<div class="sidebar-out"></div>'],
'meta_present': True,
'wrap': False,
'fileroot': 'myfile',
'is_template': False,
'is_search': False },
'assertIn': [\
'main-outer',
'main-inner',
'<div class="main-out">',
'<div class="sidebar-out">',
'with-sidebar-main',
'with-sidebar-sidebar'],
'assertNotIn': [] },
{ 'remark': 'Test Case 4:polimorf_add_main - Sidebar data, template, not search, meta present, fileroot = posts',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': ['<div class="sidebar-out"></div>'],
'meta_present': True,
'wrap': False,
'fileroot': 'posts',
'is_template': True,
'is_search': False },
'assertIn': [\
'data-postlist-name="postlist-template',
'pm-postlist-entries',
'pm-postlist-pagination',
'pm-post-list-custom',
'main-outer',
'main-inner',
'<div class="main-out">',
'<div class="sidebar-out">',
'with-sidebar-main',
'with-sidebar-sidebar'],
'assertNotIn': ['data-postlist-name="postlist-posts'] },
{ 'remark': 'Test Case 5:polimorf_add_main - Sidebar data, template, search, meta present, fileroot = posts',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': ['<div class="sidebar-out"></div>'],
'meta_present': True,
'wrap': False,
'fileroot': 'posts',
'is_template': True,
'is_search': True },
'assertIn': [\
'class="pm-searchbar pm-search-page clearfix-small',
'id="search-submit" class="pm-search-button"',
'pm-post-list-custom',
'main-outer',
'main-inner',
'<div class="main-out">',
'<div class="sidebar-out">',
'with-sidebar-main',
'with-sidebar-sidebar'],
'assertNotIn': [\
'data-postlist-name="postlist-posts',
'data-postlist-name="postlist-template',
'pm-postlist-entries',
'pm-postlist-pagination'] },
{ 'remark': 'Test Case 6:polimorf_add_main - Sidebar data but not array, not template, not search, meta present, fileroot = posts',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': 'ANO',
'meta_present': True,
'wrap': False,
'fileroot': 'posts',
'is_template': False,
'is_search': False },
'assertIn': [\
'A\n',
'N\n',
'O\n',
'main-outer',
'main-inner',
'<div class="main-out">',
'with-sidebar-main',
'with-sidebar-sidebar'],
'assertNotIn': ['ANO'] },
{ 'remark': 'Test Case 7:polimorf_add_main - No sidebar, not template, not search, meta present, fileroot = myfile wrap= True',
'inputs': { 'main_data': ['<div class="main-out"></div>'],
'sidebar_data': [ct.PCOM_NO_ENTRY],
'meta_present': True,
'wrap': True,
'fileroot': 'myfile',
'is_template': False,
'is_search': False },
'assertIn': ['<div class="main-out">','section id="main'],
'assertNotIn': ['main-outer', 'main-inner', 'with-sidebar'] },
]
| 43.721311 | 134 | 0.483315 | 519 | 5,334 | 4.851638 | 0.142582 | 0.063542 | 0.06672 | 0.0834 | 0.833995 | 0.813344 | 0.789515 | 0.730342 | 0.699762 | 0.663622 | 0 | 0.002023 | 0.351331 | 5,334 | 121 | 135 | 44.082645 | 0.725723 | 0.015186 | 0 | 0.720721 | 0 | 0.045045 | 0.508 | 0.085524 | 0 | 0 | 0 | 0 | 0.126126 | 1 | 0 | false | 0 | 0.018018 | 0 | 0.018018 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0b6b926119c97f49f5db7e498243bf23b92aa6c5 | 23 | py | Python | ezbasti/__init__.py | dinilbose/ezbasti | a0fa1b68e31f5f60924420cbf256f0bca8aa9725 | [
"MIT"
] | null | null | null | ezbasti/__init__.py | dinilbose/ezbasti | a0fa1b68e31f5f60924420cbf256f0bca8aa9725 | [
"MIT"
] | null | null | null | ezbasti/__init__.py | dinilbose/ezbasti | a0fa1b68e31f5f60924420cbf256f0bca8aa9725 | [
"MIT"
] | null | null | null | from .ezbasti import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0bb2616ca07cb7ce5209991dedc944cd7cbb0ffa | 12 | py | Python | v0.1/test1.py | strickyak/pythonine | 7428f4a625c228ebcc582cad4f7f057f625a0561 | [
"MIT"
] | null | null | null | v0.1/test1.py | strickyak/pythonine | 7428f4a625c228ebcc582cad4f7f057f625a0561 | [
"MIT"
] | null | null | null | v0.1/test1.py | strickyak/pythonine | 7428f4a625c228ebcc582cad4f7f057f625a0561 | [
"MIT"
] | null | null | null | print 3 + 4
| 6 | 11 | 0.583333 | 3 | 12 | 2.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.333333 | 12 | 1 | 12 | 12 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0bc0c52ebca22158acefebe773acd872eeac731a | 177 | py | Python | learn/jsonrpc/client.py | guoxiaoyong/simple-useful | 63f483250cc5e96ef112aac7499ab9e3a35572a8 | [
"CC0-1.0"
] | null | null | null | learn/jsonrpc/client.py | guoxiaoyong/simple-useful | 63f483250cc5e96ef112aac7499ab9e3a35572a8 | [
"CC0-1.0"
] | null | null | null | learn/jsonrpc/client.py | guoxiaoyong/simple-useful | 63f483250cc5e96ef112aac7499ab9e3a35572a8 | [
"CC0-1.0"
] | null | null | null | import jsonrpclib
server = jsonrpclib.Server('http://localhost:8080')
print server.add(12345, 23456)
print server.ping('first string')
print server.ping_later('second string')
| 25.285714 | 51 | 0.779661 | 24 | 177 | 5.708333 | 0.625 | 0.240876 | 0.218978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08642 | 0.084746 | 177 | 6 | 52 | 29.5 | 0.759259 | 0 | 0 | 0 | 0 | 0 | 0.259887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.6 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e7ebcbd0d50be0726a9e8919183965b86ec3baa6 | 35,274 | py | Python | server/ahj_app/tests/test_edit_views.py | reepoi/ahj-registry | d4498bccfe114b19acca4f931d29f30fbc65a803 | [
"MIT"
] | null | null | null | server/ahj_app/tests/test_edit_views.py | reepoi/ahj-registry | d4498bccfe114b19acca4f931d29f30fbc65a803 | [
"MIT"
] | null | null | null | server/ahj_app/tests/test_edit_views.py | reepoi/ahj-registry | d4498bccfe114b19acca4f931d29f30fbc65a803 | [
"MIT"
] | null | null | null | import pdb
import uuid
from decimal import Decimal
from django.apps import apps
from ahj_app.models import User, Edit, Comment, AHJInspection, Contact, Address, Location, AHJ, AHJUserMaintains
from django.urls import reverse
from django.utils import timezone
import pytest
import datetime
from fixtures import create_user, ahj_obj, generate_client_with_webpage_credentials, api_client, create_minimal_obj, \
set_obj_field, get_obj_field, get_value_or_enum_row
from ahj_app.models_field_enums import RequirementLevel, LocationDeterminationMethod
from ahj_app import views_edits
@pytest.fixture
def user_obj(create_user):
user = create_user(Username='someone')
return user
@pytest.fixture
def add_enums():
RequirementLevel.objects.create(Value='ConditionallyRequired')
RequirementLevel.objects.create(Value='Required')
RequirementLevel.objects.create(Value='Optional')
LocationDeterminationMethod.objects.create(Value='AddressGeocoding')
LocationDeterminationMethod.objects.create(Value='GPS')
def edit_is_pending(edit):
return edit.ReviewStatus == 'P' and edit.ApprovedBy is None and edit.DateEffective is None and edit.IsApplied is False
def filter_to_edit(edit_dict):
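# An edit's stored timestamps include the exact creation time, which
# differs slightly from the datetimes the test holds, so DateRequested
# and DateEffective are compared by calendar date via Django's __date
# lookup.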
search_dict = {k: v for k, v in edit_dict.items()}
search_dict['DateRequested__date'] = search_dict.pop('DateRequested')
search_dict['DateEffective__date'] = search_dict.pop('DateEffective')
return Edit.objects.filter(**search_dict)
def check_edit_exists(edit_dict):
return filter_to_edit(edit_dict).exists()
@pytest.mark.parametrize(
'user_type', [
'Admin',
'AHJOfficial'
]
)
@pytest.mark.django_db
def test_edit_review__authenticated_normal_use(user_type, generate_client_with_webpage_credentials, ahj_obj):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
if user_type == 'Admin':
user.is_superuser = True
user.save()
elif user_type == 'AHJOfficial':
AHJUserMaintains.objects.create(UserID=user, AHJPK=ahj_obj, MaintainerStatus=True)
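# Only superusers and AHJ officials who maintain the AHJ may review its
# edits; any other user gets a 403, as the no_auth test below verifies.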
edit_dict = {'ChangedBy': user, 'ApprovedBy': None,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': 'oldname', 'NewValue': 'newname',
'DateRequested': timezone.now(), 'DateEffective': None,
'ReviewStatus': 'P', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
url = reverse('edit-review')
response = client.post(url, {'EditID': edit.EditID, 'Status': 'A'})
assert response.status_code == 200
edit = Edit.objects.get(EditID=edit.EditID)
assert edit.ReviewStatus == 'A'
assert edit.ApprovedBy == user
tomorrow = timezone.now() + datetime.timedelta(days=1)
assert edit.DateEffective.date() == tomorrow.date()
@pytest.mark.django_db
def test_edit_review__no_auth_normal_use(generate_client_with_webpage_credentials, ahj_obj):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
edit_dict = {'ChangedBy': user, 'ApprovedBy': None,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': 'oldname', 'NewValue': 'newname',
'DateRequested': timezone.now(), 'DateEffective': None,
'ReviewStatus': 'P', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
url = reverse('edit-review')
response = client.post(url, {'EditID': edit.EditID, 'Status': 'A'})
assert response.status_code == 403
@pytest.mark.django_db
def test_edit_review__invalid_status(generate_client_with_webpage_credentials, ahj_obj):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
edit_dict = {'ChangedBy': user, 'ApprovedBy': None,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': 'oldname', 'NewValue': 'newname',
'DateRequested': timezone.now(), 'DateEffective': None,
'ReviewStatus': 'P', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
url = reverse('edit-review')
response = client.post(url, {'EditID': edit.EditID, 'Status': 'Z'})
assert response.status_code == 400
@pytest.mark.django_db
def test_edit_review__edit_does_not_exist(generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
url = reverse('edit-review')
response = client.post(url, {'EditID': 0, 'Status': 'A'})
assert response.status_code == 400
@pytest.mark.django_db
@pytest.mark.parametrize(
'params', [
({}),
({'EditID': '1'}),
({'Status': 'A'}),
]
)
def test_edit_review__missing_param(params, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
url = reverse('edit-review')
response = client.post(url, params)
assert response.status_code == 400
@pytest.mark.django_db
def test_edit_addition__normal_use(ahj_obj, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
AHJInspection.objects.create(AHJPK=ahj_obj, AHJInspectionName='Inspection1', TechnicianRequired=1, InspectionStatus=True)
url = reverse('edit-addition')
response = client.post(url, {
'SourceTable': 'AHJInspection',
'AHJPK': ahj_obj.AHJPK,
'ParentTable': 'AHJ',
'ParentID': ahj_obj.AHJPK,
'Value': [
{ 'AHJInspectionName': 'NewName'}
]}, format='json')
assert response.status_code == 200
assert response.data[0]['AHJInspectionName']['Value'] == 'NewName' # confirm returned AHJInspection was updated
edit = Edit.objects.get(AHJPK=ahj_obj.AHJPK)
assert edit.EditType == 'A'
assert edit.NewValue == 'True'
assert edit.SourceRow == response.data[0]['InspectionID']['Value']
@pytest.mark.django_db
@pytest.mark.parametrize(
'params', [
({'SourceTable': 'AHJ', 'ParentID': '1', 'ParentTable': 'AHJ'}),
({'AHJPK': '1', 'ParentID': '1', 'ParentTable': 'AHJ'}),
({'SourceTable': 'AHJ', 'AHJPK': '1', 'ParentTable': 'AHJ'}),
({'SourceTable': 'AHJ', 'AHJPK': '1', 'ParentID': '1'})
]
)
def test_edit_addition__missing_param(params, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
url = reverse('edit-addition')
response = client.post(url, params)
assert response.status_code == 400
@pytest.mark.django_db
def test_edit_deletion__normal_use(ahj_obj, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
inspection = AHJInspection.objects.create(AHJPK=ahj_obj, AHJInspectionName='Inspection1', TechnicianRequired=1, InspectionStatus=True)
url = reverse('edit-deletion')
response = client.post(url, {
'SourceTable': 'AHJInspection',
'AHJPK': ahj_obj.AHJPK,
'ParentTable': 'AHJ',
'ParentID': ahj_obj.AHJPK,
'Value': [
inspection.InspectionID
]}, format='json')
assert response.status_code == 200
edit = Edit.objects.get(AHJPK=ahj_obj.AHJPK)
assert edit.EditType == 'D'
assert edit.NewValue == 'False'
assert edit.SourceRow == response.data[0]['InspectionID']['Value']
@pytest.mark.django_db
@pytest.mark.parametrize(
'params', [
({'SourceTable': 'AHJ'}),
({'AHJPK': '1'}),
]
)
def test_edit_deletion__missing_param(params, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
url = reverse('edit-deletion')
response = client.post(url, params)
assert response.status_code == 400
@pytest.mark.parametrize(
'ReviewStatus, DateEffective', [
('A', timezone.now()),
('A', timezone.now() - datetime.timedelta(days=1)),
('A', timezone.now() + datetime.timedelta(days=1)),
('A', None),
('P', timezone.now()),
('D', timezone.now())
]
)
@pytest.mark.django_db
def test_apply_edits(ReviewStatus, DateEffective, create_user, ahj_obj):
field_name = 'AHJName'
old_value = 'oldname'
new_value = 'newname'
user = create_user()
set_obj_field(ahj_obj, field_name, old_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user if DateEffective is not None else None,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': field_name,
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': DateEffective,
'ReviewStatus': ReviewStatus, 'IsApplied': False, 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
views_edits.apply_edits()
ahj = AHJ.objects.get(AHJPK=ahj_obj.AHJPK)
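# Per the parametrization, apply_edits should apply only edits that are
# approved and whose DateEffective falls on the current date; pending,
# rejected, past-dated, future-dated, and undated edits stay unapplied.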
is_date_effective = (DateEffective.date() == datetime.date.today()) if DateEffective is not None else False
edit_should_apply = is_date_effective and ReviewStatus == 'A'
edit_is_applied = getattr(ahj, field_name) == new_value
assert edit_is_applied == edit_should_apply
edit = Edit.objects.get(EditID=edit.EditID)
assert edit.IsApplied == edit_should_apply
@pytest.mark.django_db
def test_edit_update__normal_use(ahj_obj, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
inspection = AHJInspection.objects.create(AHJPK=ahj_obj, AHJInspectionName='Inspection1', TechnicianRequired=1, InspectionStatus=True)
url = reverse('edit-update')
edit_data = [
{
'AHJPK': ahj_obj.AHJPK,
'SourceTable': 'AHJInspection',
'SourceRow': inspection.pk,
'SourceColumn': 'AHJInspectionName',
'NewValue': 'NewName'
}
]
response = client.post(url, edit_data, format='json')
assert response.status_code == 200
edit = Edit.objects.get(AHJPK=ahj_obj.AHJPK)  # Fetch the newly created edit so it can be marked approved
edit.ReviewStatus = 'A'
edit.DateEffective = timezone.now()
edit.ApprovedBy = user
edit.save()
views_edits.apply_edits() # Now that it's approved, apply edits will apply it.
Inspection = AHJInspection.objects.get(AHJPK=ahj_obj)
assert Inspection.AHJInspectionName == 'NewName'
@pytest.mark.django_db
@pytest.mark.parametrize(
'params', [
({'SourceTable': 'AHJ'}),
({'AHJPK': '1', 'SourceTable': 'AHJ', 'SourceRow': 'row', 'SourceColumn': 'column'}),
]
)
def test_edit_update__missing_param(params, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
url = reverse('edit-deletion')
response = client.post(url, params)
assert response.status_code == 400
@pytest.mark.django_db
def test_edit_list__normal_use(ahj_obj, generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
user = User.objects.get(Username='someone')
Edit.objects.create(EditID=1, AHJPK=ahj_obj, ChangedBy=user, EditType='A', SourceTable='AHJ', SourceColumn='BuildingCode', SourceRow='2118', DateRequested=timezone.now())
Edit.objects.create(EditID=2, AHJPK=ahj_obj, ChangedBy=user, EditType='A', SourceTable='AHJ', SourceColumn='BuildingCode', SourceRow='2118', DateRequested=timezone.now())
url = reverse('edit-list')
response = client.get(url, {'AHJPK':'1'})
assert response.status_code == 200
assert len(response.data) == 2
@pytest.mark.django_db
def test_edit_list__missing_param(generate_client_with_webpage_credentials):
client = generate_client_with_webpage_credentials(Username='someone')
url = reverse('edit-list')
response = client.get(url)
assert response.status_code == 200
assert len(response.data) == 0
@pytest.mark.parametrize(
'model_name, field_name, old_value, new_value, expected_value', [
('AHJ', 'AHJName', 'oldname', 'newname', 'old_value'),
('Contact', 'FirstName', 'oldname', 'newname', 'old_value'),
('Address', 'Country', 'oldcountry', 'newcountry', 'old_value'),
('Location', 'Elevation', Decimal('0.00000000'), Decimal('10000.00000000'), 'old_value'),
('Location', 'LocationDeterminationMethod', '', 'AddressGeocoding', None),
('Location', 'LocationDeterminationMethod', 'AddressGeocoding', '', 'old_value'),
('EngineeringReviewRequirement', 'RequirementLevel', 'ConditionallyRequired', 'Required', 'old_value'),
('AHJInspection', 'FileFolderURL', 'oldurl', 'newurl', 'old_value'),
('FeeStructure', 'FeeStructureID', str(uuid.uuid4()), str(uuid.uuid4()), 'old_value')
]
)
@pytest.mark.django_db
def test_edit_revert__edit_update(model_name, field_name, old_value, new_value, create_user, ahj_obj, expected_value, create_minimal_obj, add_enums):
user = create_user()
obj = create_minimal_obj(model_name)
set_obj_field(obj, field_name, new_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': model_name, 'SourceRow': obj.pk, 'SourceColumn': field_name,
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert views_edits.revert_edit(user, edit)
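# Reverting does not delete the original edit: it creates and applies a
# new edit with OldValue and NewValue swapped, which the swapped
# expectations below verify.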
edit_dict['OldValue'], edit_dict['NewValue'] = edit.NewValue, edit.OldValue
if expected_value:
expected_value = get_value_or_enum_row(field_name, old_value)
assert get_obj_field(obj, field_name) == expected_value
assert check_edit_exists(edit_dict)
@pytest.mark.django_db
def test_edit_revert__edit_pending_do_nothing(create_user, ahj_obj):
user = create_user()
old_value = 'oldname'
new_value = 'newname'
set_obj_field(ahj_obj, 'AHJName', old_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': None,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': None,
'ReviewStatus': 'P', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert not views_edits.revert_edit(user, edit)
edit_dict['OldValue'], edit_dict['NewValue'] = old_value, edit_dict['OldValue']
edit_dict['ReviewStatus'] = 'A'
edit_dict['ApprovedBy'], edit_dict['DateEffective'] = user, timezone.now()
assert not check_edit_exists(edit_dict)
assert Edit.objects.all().count() == 1
@pytest.mark.django_db
def test_edit_revert__current_value_is_old_value_do_nothing(create_user, ahj_obj):
user = create_user()
old_value = 'oldname'
new_value = 'newname'
set_obj_field(ahj_obj, 'AHJName', old_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert not views_edits.revert_edit(user, edit)
edit_dict['OldValue'], edit_dict['NewValue'] = old_value, edit_dict['OldValue']
assert not check_edit_exists(edit_dict)
assert Edit.objects.all().count() == 1
@pytest.mark.django_db
def test_edit_revert__revert_edit_old_value_uses_current_row_value(create_user, ahj_obj):
user = create_user()
old_value = 'oldname'
middle_value = 'newername'
new_value = 'newestname'
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': old_value, 'NewValue': middle_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
edit_dict['OldValue'], edit_dict['NewValue'] = edit_dict['NewValue'], new_value
setattr(ahj_obj, 'AHJName', new_value)
ahj_obj.save()
newer_edit = Edit.objects.create(**edit_dict)
assert views_edits.revert_edit(user, edit)
edit_dict['OldValue'], edit_dict['NewValue'] = edit_dict['NewValue'], old_value
reverting_edit = filter_to_edit(edit_dict)
assert reverting_edit.exists()
assert reverting_edit.first().OldValue == new_value
assert get_obj_field(ahj_obj, 'AHJName') == old_value
@pytest.mark.parametrize(
'parent_model_name, model_name', [
('AHJ', 'Contact'),
('AHJInspection', 'Contact'),
('AHJ', 'EngineeringReviewRequirement'),
('AHJ', 'AHJInspection'),
('AHJ', 'DocumentSubmissionMethod'),
('AHJ', 'PermitIssueMethod'),
('AHJ', 'FeeStructure')
]
)
@pytest.mark.django_db
def test_edit_revert__edit_addition(parent_model_name, model_name, create_user, create_minimal_obj, ahj_obj):
user = create_user()
parent_obj = create_minimal_obj(parent_model_name)
obj = create_minimal_obj(model_name)
relation = obj.create_relation_to(parent_obj)
set_obj_field(relation, relation.get_relation_status_field(), True)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': relation.__class__.__name__, 'SourceRow': relation.pk, 'SourceColumn': relation.get_relation_status_field(),
'OldValue': None, 'NewValue': True,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'A', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert views_edits.revert_edit(user, edit)
edit_dict['OldValue'], edit_dict['NewValue'] = edit_dict['NewValue'], False
assert check_edit_exists(edit_dict)
assert get_obj_field(relation, relation.get_relation_status_field()) == edit_dict['NewValue']
@pytest.mark.parametrize(
'parent_model_name, model_name', [
('AHJ', 'Contact'),
('AHJInspection', 'Contact'),
('AHJ', 'EngineeringReviewRequirement'),
('AHJ', 'AHJInspection'),
('AHJ', 'DocumentSubmissionMethod'),
('AHJ', 'PermitIssueMethod'),
('AHJ', 'FeeStructure')
]
)
@pytest.mark.django_db
def test_edit_revert__edit_deletion(parent_model_name, model_name, create_user, create_minimal_obj, ahj_obj):
user = create_user()
parent_obj = create_minimal_obj(parent_model_name)
obj = create_minimal_obj(model_name)
relation = obj.create_relation_to(parent_obj)
set_obj_field(relation, relation.get_relation_status_field(), False)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': relation.__class__.__name__, 'SourceRow': relation.pk, 'SourceColumn': relation.get_relation_status_field(),
'OldValue': True, 'NewValue': False,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'D', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert views_edits.revert_edit(user, edit)
edit_dict['OldValue'], edit_dict['NewValue'] = edit_dict['NewValue'], edit_dict['OldValue']
assert check_edit_exists(edit_dict)
assert get_obj_field(relation, relation.get_relation_status_field()) == edit_dict['NewValue']
@pytest.mark.parametrize(
'edit_status1, is_applied1, is_applied2, expected_outcome', [
# Rejected edits are resettable.
('R', False, True, True),
# Approved, but not yet applied, edits are resettable.
('A', False, False, True),
('A', False, True, True),
# Approved and applied edits that are the most recently applied edit are resettable.
('A', True, False, True),
# Approved and applied edits superseded by a later applied edit are not resettable.
('A', True, True, False)
]
)
@pytest.mark.django_db
def test_edit_is_resettable(edit_status1, is_applied1, is_applied2, expected_outcome, create_user, ahj_obj):
user = create_user()
new_value = 'newname'
old_value = 'oldname'
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': edit_status1, 'IsApplied': is_applied1, 'EditType': 'U', 'AHJPK': ahj_obj}
edit_to_reset = Edit.objects.create(**edit_dict)
tomorrow = timezone.now() + datetime.timedelta(days=1)
edit_dict['DateRequested'], edit_dict['DateEffective'] = tomorrow, tomorrow
edit_dict['ReviewStatus'], edit_dict['IsApplied'] = 'A', is_applied2
later_edit = Edit.objects.create(**edit_dict)
assert expected_outcome == views_edits.edit_is_resettable(edit_to_reset)
@pytest.mark.django_db
def test_edit_make_pending(create_user, ahj_obj):
user = create_user()
set_obj_field(ahj_obj, 'AHJName', 'newername')
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': 'oldname', 'NewValue': 'newname',
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'R', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
views_edits.edit_make_pending(edit)
edit = Edit.objects.get(EditID=edit.EditID)
assert edit_is_pending(edit)
@pytest.mark.parametrize(
'model_name, field_name, old_value, new_value', [
('AHJ', 'AHJName', 'oldname', 'newname'),
('Contact', 'FirstName', 'oldname', 'newname'),
('Address', 'Country', 'oldcountry', 'newcountry'),
('Location', 'Elevation', Decimal('0.00000000'), Decimal('10000.00000000')),
('Location', 'LocationDeterminationMethod', '', 'AddressGeocoding'),
('Location', 'LocationDeterminationMethod', 'AddressGeocoding', ''),
('EngineeringReviewRequirement', 'RequirementLevel', 'ConditionallyRequired', 'Required'),
('AHJInspection', 'FileFolderURL', 'oldurl', 'newurl'),
('FeeStructure', 'FeeStructureID', str(uuid.uuid4()), str(uuid.uuid4()))
]
)
@pytest.mark.django_db
def test_edit_update_old_value(model_name, field_name, old_value, new_value, create_user, ahj_obj, create_minimal_obj, add_enums):
user = create_user()
obj = create_minimal_obj(model_name)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': model_name, 'SourceRow': obj.pk, 'SourceColumn': field_name,
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
views_edits.apply_edits(ready_edits=[edit])
views_edits.edit_update_old_value(edit)
edit = Edit.objects.get(EditID=edit.EditID)
assert edit.OldValue == str(new_value)
@pytest.mark.parametrize(
'model_name, field_name, old_value, new_value', [
('AHJ', 'AHJName', 'oldname', 'newname'),
('Contact', 'FirstName', 'oldname', 'newname'),
('Address', 'Country', 'oldcountry', 'newcountry'),
('Location', 'Elevation', Decimal('0.00000000'), Decimal('10000.00000000')),
('Location', 'LocationDeterminationMethod', '', 'AddressGeocoding'),
('Location', 'LocationDeterminationMethod', 'AddressGeocoding', ''),
('EngineeringReviewRequirement', 'RequirementLevel', 'ConditionallyRequired', 'Required'),
('AHJInspection', 'FileFolderURL', 'oldurl', 'newurl'),
('FeeStructure', 'FeeStructureID', str(uuid.uuid4()), str(uuid.uuid4()))
]
)
@pytest.mark.django_db
def test_edit_update_old_value_all_awaiting_apply_or_review(model_name, field_name, old_value, new_value, create_user, ahj_obj, create_minimal_obj, add_enums):
user = create_user()
obj = create_minimal_obj(model_name)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': model_name, 'SourceRow': obj.pk, 'SourceColumn': field_name,
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'IsApplied': True, 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
edit_dict['IsApplied'] = False
approved_edit = Edit.objects.create(**edit_dict)
edit_dict['ReviewStatus'] = 'P'
pending_edit = Edit.objects.create(**edit_dict)
views_edits.apply_edits(ready_edits=[edit])
views_edits.edit_update_old_value_all_awaiting_apply_or_review(edit)
approved_edit = Edit.objects.get(EditID=approved_edit.EditID)
pending_edit = Edit.objects.get(EditID=pending_edit.EditID)
assert approved_edit.OldValue == str(new_value)
assert pending_edit.OldValue == str(new_value)
@pytest.mark.parametrize(
'model_name, field_name, old_value, new_value, expected_value', [
('AHJ', 'AHJName', 'oldname', 'newname', 'old_value'),
('Contact', 'FirstName', 'oldname', 'newname', 'old_value'),
('Address', 'Country', 'oldcountry', 'newcountry', 'old_value'),
('Location', 'Elevation', Decimal('0.00000000'), Decimal('10000.00000000'), 'old_value'),
('Location', 'LocationDeterminationMethod', '', 'AddressGeocoding', None),
('Location', 'LocationDeterminationMethod', 'AddressGeocoding', '', 'old_value'),
('EngineeringReviewRequirement', 'RequirementLevel', 'ConditionallyRequired', 'Required', 'old_value'),
('AHJInspection', 'FileFolderURL', 'oldurl', 'newurl', 'old_value'),
('FeeStructure', 'FeeStructureID', str(uuid.uuid4()), str(uuid.uuid4()), 'old_value')
]
)
@pytest.mark.django_db
def test_edit_undo_apply(model_name, field_name, old_value, new_value, create_user, ahj_obj, expected_value, create_minimal_obj, add_enums):
user = create_user()
obj = create_minimal_obj(model_name)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': model_name, 'SourceRow': obj.pk, 'SourceColumn': field_name,
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
views_edits.apply_edits(ready_edits=[edit])
views_edits.edit_undo_apply(edit)
if expected_value == 'old_value':
expected_value = get_value_or_enum_row(field_name, old_value)
assert get_obj_field(obj, field_name) == expected_value
@pytest.mark.parametrize(
'model_name, field_name, old_value, new_value, expected_value', [
('AHJ', 'AHJName', 'oldname', 'newname', 'old_value'),
('Contact', 'FirstName', 'oldname', 'newname', 'old_value'),
('Address', 'Country', 'oldcountry', 'newcountry', 'old_value'),
('Location', 'Elevation', Decimal('0.00000000'), Decimal('10000.00000000'), 'old_value'),
('Location', 'LocationDeterminationMethod', '', 'AddressGeocoding', None),
('Location', 'LocationDeterminationMethod', 'AddressGeocoding', '', 'old_value'),
('EngineeringReviewRequirement', 'RequirementLevel', 'ConditionallyRequired', 'Required', 'old_value'),
('AHJInspection', 'FileFolderURL', 'oldurl', 'newurl', 'old_value'),
('FeeStructure', 'FeeStructureID', str(uuid.uuid4()), str(uuid.uuid4()), 'old_value')
]
)
@pytest.mark.django_db
def test_edit_reset__edit_update(model_name, field_name, old_value, new_value, create_user, ahj_obj, create_minimal_obj, expected_value, add_enums):
user = create_user()
obj = create_minimal_obj(model_name)
set_obj_field(obj, field_name, new_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': model_name, 'SourceRow': obj.pk, 'SourceColumn': field_name,
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'IsApplied': True, 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert views_edits.reset_edit(user, edit)
assert edit_is_pending(edit)
if expected_value == 'old_value':
expected_value = get_value_or_enum_row(field_name, old_value)
assert get_obj_field(obj, field_name) == expected_value
@pytest.mark.parametrize(
'parent_model_name, model_name, review_status', [
('AHJ', 'Contact', 'A'),
('AHJInspection', 'Contact', 'A'),
('AHJ', 'EngineeringReviewRequirement', 'A'),
('AHJ', 'AHJInspection', 'A'),
('AHJ', 'DocumentSubmissionMethod', 'A'),
('AHJ', 'PermitIssueMethod', 'A'),
('AHJ', 'FeeStructure', 'A'),
('AHJ', 'Contact', 'R'),
('AHJInspection', 'Contact', 'R'),
('AHJ', 'EngineeringReviewRequirement', 'R'),
('AHJ', 'AHJInspection', 'R'),
('AHJ', 'DocumentSubmissionMethod', 'R'),
('AHJ', 'PermitIssueMethod', 'R'),
('AHJ', 'FeeStructure', 'R')
]
)
@pytest.mark.django_db
def test_edit_reset__edit_addition(parent_model_name, model_name, review_status, create_user, create_minimal_obj, ahj_obj):
user = create_user()
parent_obj = create_minimal_obj(parent_model_name)
obj = create_minimal_obj(model_name)
relation = obj.create_relation_to(parent_obj)
set_obj_field(relation, relation.get_relation_status_field(), review_status == 'A')
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': relation.__class__.__name__, 'SourceRow': relation.pk, 'SourceColumn': relation.get_relation_status_field(),
'OldValue': None, 'NewValue': True,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': review_status, 'IsApplied': review_status == 'A', 'EditType': 'A', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert views_edits.reset_edit(user, edit)
assert edit_is_pending(edit)
assert get_obj_field(relation, relation.get_relation_status_field()) == edit_dict['OldValue']
@pytest.mark.parametrize(
'parent_model_name, model_name, review_status', [
('AHJ', 'Contact', 'A'),
('AHJInspection', 'Contact', 'A'),
('AHJ', 'EngineeringReviewRequirement', 'A'),
('AHJ', 'AHJInspection', 'A'),
('AHJ', 'DocumentSubmissionMethod', 'A'),
('AHJ', 'PermitIssueMethod', 'A'),
('AHJ', 'FeeStructure', 'A'),
('AHJ', 'Contact', 'R'),
('AHJInspection', 'Contact', 'R'),
('AHJ', 'EngineeringReviewRequirement', 'R'),
('AHJ', 'AHJInspection', 'R'),
('AHJ', 'DocumentSubmissionMethod', 'R'),
('AHJ', 'PermitIssueMethod', 'R'),
('AHJ', 'FeeStructure', 'R')
]
)
@pytest.mark.django_db
def test_edit_reset__edit_deletion(parent_model_name, model_name, review_status, create_user, create_minimal_obj, ahj_obj):
user = create_user()
parent_obj = create_minimal_obj(parent_model_name)
obj = create_minimal_obj(model_name)
relation = obj.create_relation_to(parent_obj)
set_obj_field(relation, relation.get_relation_status_field(), review_status != 'A')
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': relation.__class__.__name__, 'SourceRow': relation.pk, 'SourceColumn': relation.get_relation_status_field(),
'OldValue': True, 'NewValue': False,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
                 'ReviewStatus': review_status, 'IsApplied': review_status == 'A', 'EditType': 'D', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert views_edits.reset_edit(user, edit)
edit = Edit.objects.get(EditID=edit.EditID)
assert edit_is_pending(edit)
assert get_obj_field(relation, relation.get_relation_status_field()) == edit_dict['OldValue']
@pytest.mark.django_db
def test_edit_reset__edit_pending_do_nothing(create_user, ahj_obj):
user = create_user()
old_value = 'oldname'
new_value = 'newname'
set_obj_field(ahj_obj, 'AHJName', old_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': None,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': None,
'ReviewStatus': 'P', 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
assert not views_edits.reset_edit(user, edit)
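    # Build the approval edit a successful reset would have produced, then confirm it was never created.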
edit_dict['OldValue'], edit_dict['NewValue'] = old_value, edit_dict['OldValue']
edit_dict['ReviewStatus'] = 'A'
edit_dict['ApprovedBy'], edit_dict['DateEffective'] = user, timezone.now()
assert not check_edit_exists(edit_dict)
assert Edit.objects.all().count() == 1
@pytest.mark.parametrize(
'force_resettable, skip_undo', [
(True, False),
(True, True)
]
)
@pytest.mark.django_db
def test_edit_reset__kwargs(force_resettable, skip_undo, create_user, ahj_obj):
user = create_user()
old_value = 'oldname'
new_value = 'newname'
later_value = 'newname_later'
set_obj_field(ahj_obj, 'AHJName', later_value)
edit_dict = {'ChangedBy': user, 'ApprovedBy': user,
'SourceTable': 'AHJ', 'SourceRow': ahj_obj.pk, 'SourceColumn': 'AHJName',
'OldValue': old_value, 'NewValue': new_value,
'DateRequested': timezone.now(), 'DateEffective': timezone.now(),
'ReviewStatus': 'A', 'IsApplied': True, 'EditType': 'U', 'AHJPK': ahj_obj}
edit = Edit.objects.create(**edit_dict)
edit_dict['OldValue'], edit_dict['NewValue'] = edit_dict['NewValue'], later_value
later_edit = Edit.objects.create(**edit_dict)
assert views_edits.reset_edit(user, edit, force_resettable=force_resettable, skip_undo=skip_undo)
edit = Edit.objects.get(EditID=edit.EditID)
if force_resettable and not skip_undo:
assert get_obj_field(ahj_obj, 'AHJName') == old_value
elif force_resettable and skip_undo:
assert get_obj_field(ahj_obj, 'AHJName') == later_value
assert edit.OldValue == later_value
assert edit.NewValue == new_value
assert edit_is_pending(edit)
| 47.41129 | 174 | 0.673726 | 3,984 | 35,274 | 5.683484 | 0.061998 | 0.034978 | 0.023186 | 0.023848 | 0.835755 | 0.821623 | 0.809036 | 0.786866 | 0.755598 | 0.736122 | 0 | 0.007162 | 0.180671 | 35,274 | 743 | 175 | 47.475101 | 0.776305 | 0.011085 | 0 | 0.660209 | 0 | 0 | 0.224774 | 0.024315 | 0 | 0 | 0 | 0 | 0.105812 | 1 | 0.052161 | false | 0 | 0.017884 | 0.002981 | 0.076006 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b2fef8098e5738ff2bac12e615b8868ea60ebe90 | 794 | py | Python | stepIndex.py | Kantheesh/Learning-Python | d2dc9f1b9f652e6a6d84028e86a1daf77551eb5f | [
"MIT"
] | null | null | null | stepIndex.py | Kantheesh/Learning-Python | d2dc9f1b9f652e6a6d84028e86a1daf77551eb5f | [
"MIT"
] | null | null | null | stepIndex.py | Kantheesh/Learning-Python | d2dc9f1b9f652e6a6d84028e86a1daf77551eb5f | [
"MIT"
] | null | null | null | inp="ABC1010567"
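# Slicing refresher: s[start:stop:step]; omitted start/stop default to the
# string's ends (swapped for a negative step), and a negative step walks
# right-to-left. Each print below exercises one combination.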
print(inp[1:500]) #1
print("-----------------------")
inp="ABC1010567"
print(inp[1::1]) #2
print("-----------------------")
inp="ABC1010567"
print(inp[::-1]) #3
print("-----------------------")
print(inp[::1])#4
print("-----------------------")
print(inp[1::1])#5
print("-----------------------")
print(inp[1::-1])#6
print("-----------------------")
print(inp[-1::1])#7
print("-----------------------")
print(inp[-1::-1])#8
print("-----------------------")
print(inp[:1:1])#9
print("-----------------------")
print(inp[:1:-1])#10
print("-----------------------")
print(inp[:-1:1])#11
print("-----------------------")
print(inp[:-1:-1])#12
print("-----------------------")
print(inp[1:9:2])#13
print("-----------------------")
print(inp[-4:2:-1])#14 | 26.466667 | 33 | 0.333753 | 89 | 794 | 2.977528 | 0.202247 | 0.483019 | 0.441509 | 0.528302 | 0.739623 | 0.203774 | 0 | 0 | 0 | 0 | 0 | 0.095498 | 0.076826 | 794 | 30 | 34 | 26.466667 | 0.26603 | 0.023929 | 0 | 0.6 | 0 | 0 | 0.44884 | 0.407913 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.9 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
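# Expected outputs for reference (appended note; verified against inp = "ABC1010567"):
#  1) BC1010567   2) BC1010567   3) 7650101CBA   4) ABC1010567   5) BC1010567
#  6) BA          7) 7           8) 7650101CBA   9) A           10) 7650101C
# 11) ABC101056  12) (empty)    13) B115        14) 0101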
65363adbfa5c4c8ce610713982f421978b5a5bed | 13,276 | py | Python | test/azure/Expected/AcceptanceTests/AzureParameterGrouping/azureparametergrouping/operations/parameter_grouping_operations.py | iscai-msft/autorest.python | a9f38dd762fbc046ce6197bfabea2f56045d2957 | [
"MIT"
] | null | null | null | test/azure/Expected/AcceptanceTests/AzureParameterGrouping/azureparametergrouping/operations/parameter_grouping_operations.py | iscai-msft/autorest.python | a9f38dd762fbc046ce6197bfabea2f56045d2957 | [
"MIT"
] | null | null | null | test/azure/Expected/AcceptanceTests/AzureParameterGrouping/azureparametergrouping/operations/parameter_grouping_operations.py | iscai-msft/autorest.python | a9f38dd762fbc046ce6197bfabea2f56045d2957 | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
import uuid
from msrest.pipeline import ClientRawResponse
from .. import models
class ParameterGroupingOperations(object):
"""ParameterGroupingOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.config = config
def post_required(
self, parameter_grouping_post_required_parameters, custom_headers=None, raw=False, **operation_config):
"""Post a bunch of required parameters grouped.
:param parameter_grouping_post_required_parameters: Additional
parameters for the operation
:type parameter_grouping_post_required_parameters:
~azureparametergrouping.models.ParameterGroupingPostRequiredParameters
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`ErrorException<azureparametergrouping.models.ErrorException>`
"""
body = None
if parameter_grouping_post_required_parameters is not None:
body = parameter_grouping_post_required_parameters.body
custom_header = None
if parameter_grouping_post_required_parameters is not None:
custom_header = parameter_grouping_post_required_parameters.custom_header
query = None
if parameter_grouping_post_required_parameters is not None:
query = parameter_grouping_post_required_parameters.query
path = None
if parameter_grouping_post_required_parameters is not None:
path = parameter_grouping_post_required_parameters.path
# Construct URL
url = self.post_required.metadata['url']
path_format_arguments = {
'path': self._serialize.url("path", path, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if query is not None:
query_parameters['query'] = self._serialize.query("query", query, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
if custom_header is not None:
header_parameters['customHeader'] = self._serialize.header("custom_header", custom_header, 'str')
# Construct body
body_content = self._serialize.body(body, 'int')
# Construct and send request
request = self._client.post(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
raise models.ErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
post_required.metadata = {'url': '/parameterGrouping/postRequired/{path}'}
def post_optional(
self, parameter_grouping_post_optional_parameters=None, custom_headers=None, raw=False, **operation_config):
"""Post a bunch of optional parameters grouped.
:param parameter_grouping_post_optional_parameters: Additional
parameters for the operation
:type parameter_grouping_post_optional_parameters:
~azureparametergrouping.models.ParameterGroupingPostOptionalParameters
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`ErrorException<azureparametergrouping.models.ErrorException>`
"""
custom_header = None
if parameter_grouping_post_optional_parameters is not None:
custom_header = parameter_grouping_post_optional_parameters.custom_header
query = None
if parameter_grouping_post_optional_parameters is not None:
query = parameter_grouping_post_optional_parameters.query
# Construct URL
url = self.post_optional.metadata['url']
# Construct parameters
query_parameters = {}
if query is not None:
query_parameters['query'] = self._serialize.query("query", query, 'int')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
if custom_header is not None:
header_parameters['customHeader'] = self._serialize.header("custom_header", custom_header, 'str')
# Construct and send request
request = self._client.post(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
raise models.ErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
post_optional.metadata = {'url': '/parameterGrouping/postOptional'}
def post_multi_param_groups(
self, first_parameter_group=None, parameter_grouping_post_multi_param_groups_second_param_group=None, custom_headers=None, raw=False, **operation_config):
"""Post parameters from multiple different parameter groups.
:param first_parameter_group: Additional parameters for the operation
:type first_parameter_group:
~azureparametergrouping.models.FirstParameterGroup
:param parameter_grouping_post_multi_param_groups_second_param_group:
Additional parameters for the operation
:type parameter_grouping_post_multi_param_groups_second_param_group:
~azureparametergrouping.models.ParameterGroupingPostMultiParamGroupsSecondParamGroup
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`ErrorException<azureparametergrouping.models.ErrorException>`
"""
header_one = None
if first_parameter_group is not None:
header_one = first_parameter_group.header_one
query_one = None
if first_parameter_group is not None:
query_one = first_parameter_group.query_one
header_two = None
if parameter_grouping_post_multi_param_groups_second_param_group is not None:
header_two = parameter_grouping_post_multi_param_groups_second_param_group.header_two
query_two = None
if parameter_grouping_post_multi_param_groups_second_param_group is not None:
query_two = parameter_grouping_post_multi_param_groups_second_param_group.query_two
# Construct URL
url = self.post_multi_param_groups.metadata['url']
# Construct parameters
query_parameters = {}
if query_one is not None:
query_parameters['query-one'] = self._serialize.query("query_one", query_one, 'int')
if query_two is not None:
query_parameters['query-two'] = self._serialize.query("query_two", query_two, 'int')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
if header_one is not None:
header_parameters['header-one'] = self._serialize.header("header_one", header_one, 'str')
if header_two is not None:
header_parameters['header-two'] = self._serialize.header("header_two", header_two, 'str')
# Construct and send request
request = self._client.post(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
raise models.ErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
post_multi_param_groups.metadata = {'url': '/parameterGrouping/postMultipleParameterGroups'}
def post_shared_parameter_group_object(
self, first_parameter_group=None, custom_headers=None, raw=False, **operation_config):
"""Post parameters with a shared parameter group object.
:param first_parameter_group: Additional parameters for the operation
:type first_parameter_group:
~azureparametergrouping.models.FirstParameterGroup
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`ErrorException<azureparametergrouping.models.ErrorException>`
"""
header_one = None
if first_parameter_group is not None:
header_one = first_parameter_group.header_one
query_one = None
if first_parameter_group is not None:
query_one = first_parameter_group.query_one
# Construct URL
url = self.post_shared_parameter_group_object.metadata['url']
# Construct parameters
query_parameters = {}
if query_one is not None:
query_parameters['query-one'] = self._serialize.query("query_one", query_one, 'int')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
if header_one is not None:
header_parameters['header-one'] = self._serialize.header("header_one", header_one, 'str')
# Construct and send request
request = self._client.post(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
raise models.ErrorException(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
post_shared_parameter_group_object.metadata = {'url': '/parameterGrouping/sharedParameterGroupObject'}
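# Hedged usage sketch (appended; not part of the generated file): a client
# built by AutoRest would drive these operations roughly as below. The client
# class name and the model's keyword arguments are assumptions inferred from
# the attributes read in post_required (body/custom_header/query/path).
#
#   from azureparametergrouping import AutoRestParameterGroupingTestService
#   from azureparametergrouping.models import ParameterGroupingPostRequiredParameters
#
#   client = AutoRestParameterGroupingTestService(credentials, base_url="http://localhost:3000")
#   group = ParameterGroupingPostRequiredParameters(
#       body=1234, path="path", custom_header="custom", query=21)
#   client.parameter_grouping.post_required(group)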
| 46.41958 | 166 | 0.692302 | 1,450 | 13,276 | 6.073103 | 0.106897 | 0.014763 | 0.026573 | 0.032705 | 0.816716 | 0.773223 | 0.745969 | 0.735635 | 0.728481 | 0.692482 | 0 | 0.001745 | 0.222808 | 13,276 | 285 | 167 | 46.582456 | 0.851715 | 0.296249 | 0 | 0.668919 | 0 | 0 | 0.08214 | 0.040445 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033784 | false | 0 | 0.02027 | 0 | 0.094595 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
65461b7e503c33474ec3c980cc20a762664d3f80 | 78 | py | Python | tutorials/02. easter eggs/00. hello.py | kctzstyle/my-python-tutorial | 1af9195741ea744ad70de546e46bd6ca8b9c03ab | [
"MIT"
] | null | null | null | tutorials/02. easter eggs/00. hello.py | kctzstyle/my-python-tutorial | 1af9195741ea744ad70de546e46bd6ca8b9c03ab | [
"MIT"
] | null | null | null | tutorials/02. easter eggs/00. hello.py | kctzstyle/my-python-tutorial | 1af9195741ea744ad70de546e46bd6ca8b9c03ab | [
"MIT"
] | null | null | null |
# Hello, world!
import __hello__
# Explanation
# When this module is imported, 'Hello world!' is printed right away.
| 9.75 | 36 | 0.641026 | 12 | 78 | 3.833333 | 0.75 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24359 | 78 | 7 | 37 | 11.142857 | 0.779661 | 0.653846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3354c1a106fdaf2fe1b14ec0413d7c7e06416ac8 | 31 | py | Python | KD_Lib/Quantization/qat/__init__.py | PiaCuk/KD_Lib | 153299d484e4c6b33793749709dbb0f33419f190 | [
"MIT"
] | 360 | 2020-05-11T08:18:20.000Z | 2022-03-31T01:48:43.000Z | KD_Lib/Quantization/qat/__init__.py | PiaCuk/KD_Lib | 153299d484e4c6b33793749709dbb0f33419f190 | [
"MIT"
] | 91 | 2020-05-11T08:14:56.000Z | 2022-03-30T05:29:03.000Z | KD_Lib/Quantization/qat/__init__.py | PiaCuk/KD_Lib | 153299d484e4c6b33793749709dbb0f33419f190 | [
"MIT"
] | 39 | 2020-05-11T08:06:47.000Z | 2022-03-29T05:11:18.000Z | from .qat import QAT_Quantizer
| 15.5 | 30 | 0.83871 | 5 | 31 | 5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
68927d6eb694fb6e3bdd4da7997afa57f3eb9602 | 21 | py | Python | vision/src/temp/toplevel.py | nagneeve/ecen490 | 260805b87f3d890cbcb892121261baa5038e65c8 | [
"MIT"
] | null | null | null | vision/src/temp/toplevel.py | nagneeve/ecen490 | 260805b87f3d890cbcb892121261baa5038e65c8 | [
"MIT"
] | null | null | null | vision/src/temp/toplevel.py | nagneeve/ecen490 | 260805b87f3d890cbcb892121261baa5038e65c8 | [
"MIT"
] | null | null | null | import roboclaw.py
| 5.25 | 18 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 3 | 19 | 7 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0bc8d3f1f3bb7d019dc6b418345de0639ae54bfd | 47 | py | Python | pywebby/__init__.py | TomekPulkiewicz/pywebby | a7f8bd22a697ee4a4f09612e8a4384941ad08074 | [
"MIT"
] | 4 | 2021-05-26T08:00:18.000Z | 2021-05-26T10:12:37.000Z | pywebby/__init__.py | TomekPulkiewicz/pywebby | a7f8bd22a697ee4a4f09612e8a4384941ad08074 | [
"MIT"
] | null | null | null | pywebby/__init__.py | TomekPulkiewicz/pywebby | a7f8bd22a697ee4a4f09612e8a4384941ad08074 | [
"MIT"
] | 1 | 2021-05-26T12:56:27.000Z | 2021-05-26T12:56:27.000Z | from .lib import WebServer
from . import types_ | 23.5 | 26 | 0.808511 | 7 | 47 | 5.285714 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 47 | 2 | 27 | 23.5 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
043ed71e6994428fee1e4c2d3acfa66573091871 | 6,641 | py | Python | pLaplace_eqs1d.py | xuzhiqin1990/MSDNN2ellipticPDEs | ddaee034474c18bc23b51824fb6a00539c07d52c | [
"MIT"
] | 1 | 2021-12-23T07:40:04.000Z | 2021-12-23T07:40:04.000Z | pLaplace_eqs1d.py | xuzhiqin1990/MSDNN2ellipticPDEs | ddaee034474c18bc23b51824fb6a00539c07d52c | [
"MIT"
] | null | null | null | pLaplace_eqs1d.py | xuzhiqin1990/MSDNN2ellipticPDEs | ddaee034474c18bc23b51824fb6a00539c07d52c | [
"MIT"
] | 1 | 2021-12-31T10:57:17.000Z | 2021-12-31T10:57:17.000Z | import tensorflow as tf
import numpy as np
def get_infos_2laplace(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
f = lambda x: tf.ones_like(x)
aeps = lambda x: 1.0 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
def get_infos_3laplace(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
f = lambda x: abs(2 * x - 1) * (
4 * eps + 2 * eps * tf.cos(2 * np.pi * x / eps) + np.pi * (1 - 2 * x) * tf.sin(2 * np.pi * x / eps)) / (
2 * eps)
aeps = lambda x: 1.0 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
def get_infos_4laplace(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
f = lambda x: ((1-2*x)**2) * (2+tf.cos(2*np.pi*x/eps))*(
6 * eps + 3 * eps * tf.cos(2 * np.pi * x / eps) - 2*np.pi * (2 * x-1) * tf.sin(2 * np.pi * x / eps)) / (
4 * eps)
aeps = lambda x: 1.0 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
def get_infos_5laplace(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
# f = lambda x: ((2 * x - 1) ** 3) * ((2 + tf.cos(2 * np.pi * x / eps)) ** 2) * (
# 3 * np.pi * (2 * x - 1) * tf.sin(2 * np.pi * x / eps) - 4 * eps * tf.cos(2 * np.pi * x / eps) - 8 * eps) / (
# 8 * eps)
# f = lambda x: ((1-2 * x ) ** 3) * ((2 + tf.cos(2 * np.pi * x / eps)) ** 2) * (
# 3 * np.pi * (2 * x - 1) * tf.sin(2 * np.pi * x / eps) - 4 * eps * tf.cos(2 * np.pi * x / eps) - 8 * eps) / (
# 8 * eps)
f = lambda x: -1.0*abs((2 * x - 1) ** 3) * ((2 + tf.cos(2 * np.pi * x / eps))**2) * (
3 * np.pi * (2 * x - 1) * tf.sin(2 * np.pi * x / eps) - 4 * eps * tf.cos(2 * np.pi * x / eps) - 8*eps) / (
8 * eps)
aeps = lambda x: 1.0 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
def get_infos_8laplace(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
f = lambda x: ((1 - 2 * x) ** 6) * ((2 + tf.cos(2 * np.pi * x / eps)) ** 5) * (
7 * eps * tf.cos(2 * np.pi * x / eps) + 2 * (
7 * eps - 3 * np.pi * (2 * x - 1) * tf.sin(2 * np.pi * x / eps))) / (
64 * eps)
aeps = lambda x: 1.0 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
def get_infos_multi_scale(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
f = lambda x: (np.power(1 - 2 * x, p) * np.power(2 + tf.cos(2 * np.pi * x / eps), p)*(
eps * (p - 1) * (2+tf.cos(2 * np.pi * x / eps)) - np.pi * (p - 2) * (2 * x - 1) * tf.sin(2 * np.pi * x / eps))) / (
np.power(2, p - 2) * eps * ((1 - 2 * x) ** 2) * ((2 + tf.cos(2 * np.pi * x / eps)) ** 3))
aeps = lambda x: 1.0 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
def get_infos__multi_scale_abs(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01):
f = lambda x: (np.power(abs(1 - 2 * x), p) * np.power(2 + tf.cos(2 * np.pi * x / eps), p) * (
eps * (p - 1) * (2 + tf.cos(2 * np.pi * x / eps)) - np.pi * (p - 2) * (2 * x - 1) * tf.sin(
2 * np.pi * x / eps))) / (
np.power(2, p - 2) * eps * ((1 - 2 * x) ** 2) * ((2 + tf.cos(2 * np.pi * x / eps)) ** 3))
aeps = lambda x: 1 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
u_true = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return u_true, f, aeps, u_l, u_r
def get_infos_pLaplace(in_dim=None, out_dim=None, region_a=0, region_b=1, p=2, eps=0.01, eqs_name=None):
if eqs_name == 'multi_scale':
f = lambda x: (np.power(abs(1 - 2 * x), p) * np.power(2 + tf.cos(2 * np.pi * x / eps), p) * (
eps * (p - 1) * (2 + tf.cos(2 * np.pi * x / eps)) - np.pi * (p - 2) * (2 * x - 1) * tf.sin(
2 * np.pi * x / eps))) / (
np.power(2, p - 2) * eps * ((1 - 2 * x) ** 2) * ((2 + tf.cos(2 * np.pi * x / eps)) ** 3))
aeps = lambda x: 1 / (2 + tf.cos(2 * np.pi * x / eps))
u_l = lambda x: tf.zeros_like(x)
u_r = lambda x: tf.zeros_like(x)
utrue = lambda x: x - tf.square(x) + eps * (
1 / np.pi * tf.sin(np.pi * 2 * x / eps) * (1 / 4 - x / 2) - eps / (4 * np.pi ** 2) * tf.cos(
np.pi * 2 * x / eps) + eps / 4 / np.pi ** 2)
return utrue, f, aeps, u_l, u_r
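# Hedged sanity check (appended; not part of the original module). The tuples
# returned above appear to encode exact solutions of the multiscale p-Laplace
# problem -(a_eps(x) * |u'|^(p-2) * u')' = f on (0, 1) with u(0) = u(1) = 0.
# This sketch evaluates the p = 2 case and confirms the boundary values;
# it assumes TensorFlow 2.x eager execution.
if __name__ == "__main__":
    utrue, f, aeps, u_l, u_r = get_infos_2laplace(eps=0.01)
    x = tf.constant([[0.0], [0.5], [1.0]], dtype=tf.float32)
    print(utrue(x))  # first and last entries should be ~0
    print(aeps(x))   # oscillatory coefficient 1/(2 + cos(2*pi*x/eps)), bounded in [1/3, 1]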
| 42.570513 | 132 | 0.430357 | 1,241 | 6,641 | 2.217566 | 0.047542 | 0.125 | 0.069041 | 0.080669 | 0.934956 | 0.934956 | 0.933866 | 0.928416 | 0.921875 | 0.894985 | 0 | 0.071547 | 0.364403 | 6,641 | 155 | 133 | 42.845161 | 0.580431 | 0.068363 | 0 | 0.684783 | 0 | 0 | 0.001826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.021739 | 0 | 0.195652 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f0987b1ccdad4b0a7b023327a8e1e07e9982b27c | 28 | py | Python | foowise/__init__.py | ben-schulz/foowise | 16f437e9fc9a282db56a39efa8b84d06981ce652 | [
"MIT"
] | 1 | 2020-01-25T00:14:41.000Z | 2020-01-25T00:14:41.000Z | foowise/__init__.py | ben-schulz/foowise | 16f437e9fc9a282db56a39efa8b84d06981ce652 | [
"MIT"
] | 1 | 2018-08-19T17:41:33.000Z | 2018-08-26T02:15:02.000Z | foowise/__init__.py | ben-schulz/foowise | 16f437e9fc9a282db56a39efa8b84d06981ce652 | [
"MIT"
] | null | null | null | import foowise.channels.Cla
| 14 | 27 | 0.857143 | 4 | 28 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 28 | 1 | 28 | 28 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f0bec3e9725c59cc48518121c5ecdb626de40bf6 | 164 | py | Python | tests/unit/conftest.py | JanPillarRubrik/rubrik-sdk-for-python | 838dac8cbddcd3d38c5523fe47bea0e22a4d940e | [
"MIT"
] | 4 | 2018-09-06T23:34:32.000Z | 2018-10-08T15:04:22.000Z | tests/unit/conftest.py | JanPillarRubrik/rubrik-sdk-for-python | 838dac8cbddcd3d38c5523fe47bea0e22a4d940e | [
"MIT"
] | 8 | 2021-03-09T13:02:15.000Z | 2022-02-24T08:46:50.000Z | tests/unit/conftest.py | JanPillarRubrik/rubrik-sdk-for-python | 838dac8cbddcd3d38c5523fe47bea0e22a4d940e | [
"MIT"
] | 4 | 2021-04-16T15:49:36.000Z | 2021-11-09T17:58:21.000Z | import pytest
import rubrik_cdm
@pytest.fixture(scope='module')
def rubrik():
return rubrik_cdm.Connect("10.0.1.1", "user", "password", enable_logging=True)
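# Hedged illustration (appended; not in the original file): a test consumes the
# fixture by parameter name; cluster_version() is assumed from the rubrik_cdm SDK.
#
#   def test_connect(rubrik):
#       assert rubrik.cluster_version()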
| 18.222222 | 82 | 0.72561 | 24 | 164 | 4.833333 | 0.75 | 0.155172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.115854 | 164 | 8 | 83 | 20.5 | 0.765517 | 0 | 0 | 0 | 0 | 0 | 0.158537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0.2 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 6 |
f0c18167631ea3f89f826fa77e9ab029333e73a9 | 2,019 | py | Python | BlaBlauto/AdministracionReservas/views/reservaschofer.py | irri96/BlaBlautos | 2ca3d808ef8ba18d6fa8658edd1411f72cc71e71 | [
"MIT"
] | null | null | null | BlaBlauto/AdministracionReservas/views/reservaschofer.py | irri96/BlaBlautos | 2ca3d808ef8ba18d6fa8658edd1411f72cc71e71 | [
"MIT"
] | null | null | null | BlaBlauto/AdministracionReservas/views/reservaschofer.py | irri96/BlaBlautos | 2ca3d808ef8ba18d6fa8658edd1411f72cc71e71 | [
"MIT"
] | null | null | null | from django.shortcuts import render,redirect
from Nucleo.models import Viaje,Tramo,Reservacion,Chofer,User
from datetime import datetime,timedelta
from django.contrib.auth.decorators import user_passes_test
# Create your views here.
def VerReservas(request, id):
    try:
        chofer = Chofer.objects.get(user=User(id=request.user.id))
    except Exception:  # also covers anonymous users without a Chofer record
        return render(request, 'error.html', {"mensaje": "You have not logged into the system as a driver", "redirection": "/"})
    try:
        viaje = Viaje.objects.get(id=id)
    except Viaje.DoesNotExist:
        return render(request, 'error.html', {"mensaje": "The trip does not exist",
                                              "redirection": "/"})
    elset = set()
    for tramo in viaje.tramos.all():
        for reserva in tramo.reservas_del_tramo.all():
            elset.add(reserva)
    if viaje.conductor != chofer:
        return render(request, 'error.html', {"mensaje": "You are not authorized to view this trip",
                                              "redirection": "/"})
    return render(request, 'reservasviaje.html', {"reservas": list(elset)})
def VerReservantes(request, id):
    try:
        chofer = Chofer.objects.get(user=User(id=request.user.id))
    except Exception:  # also covers anonymous users without a Chofer record
        return render(request, 'error.html', {"mensaje": "You have not logged into the system as a driver", "redirection": "/"})
    try:
        viaje = Viaje.objects.get(id=id)
    except Viaje.DoesNotExist:
        return render(request, 'error.html', {"mensaje": "The trip does not exist",
                                              "redirection": "/"})
    elset = set()
    for tramo in viaje.tramos.all():
        for reserva in tramo.reservas_del_tramo.all():
            if reserva.estado_reserva == 2:
                elset.add(reserva.pasajero)
    if viaje.conductor != chofer:
        return render(request, 'error.html', {"mensaje": "You are not authorized to view this trip",
                                              "redirection": "/"})
return render(request,'reservantes.html',{"pasajeros":list(elset)}) | 43.891304 | 119 | 0.601783 | 227 | 2,019 | 5.321586 | 0.30837 | 0.07947 | 0.125828 | 0.119205 | 0.705298 | 0.705298 | 0.705298 | 0.705298 | 0.705298 | 0.705298 | 0 | 0.000675 | 0.266469 | 2,019 | 46 | 120 | 43.891304 | 0.81499 | 0.011392 | 0 | 0.731707 | 0 | 0 | 0.211028 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0.02439 | 0.097561 | 0 | 0.341463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9bdfcb9db844cd54941de7f7b075b6a190d23941 | 2,102 | py | Python | build/lib/auto_instr/psw_800.py | arvind0790/auto_instr | 6b4ff1c535a8124c6f8e92e5dddd71f9101f8c24 | [
"MIT"
] | 1 | 2018-07-26T09:08:18.000Z | 2018-07-26T09:08:18.000Z | build/lib/auto_instr/psw_800.py | arvind0790/auto_instr | 6b4ff1c535a8124c6f8e92e5dddd71f9101f8c24 | [
"MIT"
] | null | null | null | build/lib/auto_instr/psw_800.py | arvind0790/auto_instr | 6b4ff1c535a8124c6f8e92e5dddd71f9101f8c24 | [
"MIT"
] | 1 | 2019-07-15T13:19:01.000Z | 2019-07-15T13:19:01.000Z | class PSW800(object):
#############Source a Voltage with OCP On############
    def set_volt(instr, volt, current_lim):
instr.write('SOUR:CURR:PROT:LEV %f'%current_lim)
instr.write('SOUR:CURR:PROT:STAT ON')
instr.write('SOUR:VOLT:LEV:IMM %f'%volt)
instr.write('OUTP:STAT:IMM ON')
#################Switch off the supply###############
def off_HV(instr):
instr.write('OUTP:STAT:IMM OFF')
#############Source voltage and measure current######
    def forceVMeasI(instr, volt, current_lim):
instr.write('SOUR:CURR:PROT:LEV %f' % current_lim)
instr.write('SOUR:CURR:PROT:STAT ON')
instr.write('SOUR:VOLT:LEV:IMM %f' % volt)
instr.write('OUTP:STAT:IMM ON')
current=instr.query('MEAS:SCAL:CURR:DC?')
return current
    def set_volt_high_speed_CC(instr, volt, current_lim):  # select F-03 = 1 beforehand; reaches 120 V in ~17 ms with no load
instr.write('SOUR:CURR:PROT:LEV %f'%current_lim)
instr.write('SOUR:CURR:PROT:STAT ON')
instr.write('SOUR:VOLT:LEV:IMM %f'%volt)
instr.write('OUTP:STAT:IMM ON')
    def set_volt_high_speed_CV(instr, volt, current_lim):  # select F-03 = 0 beforehand; reaches 120 V in ~67 ms with no load
instr.write('SOUR:CURR:PROT:LEV %f'%current_lim)
instr.write('SOUR:CURR:PROT:STAT ON')
instr.write('SOUR:VOLT:LEV:IMM %f'%volt)
instr.write('OUTP:STAT:IMM ON')
    def set_volt_slew_rate_rise_CV(instr, volt, slew_rate, current_lim):  # select F-03 = 2 beforehand; rise spans 120 V in 83 ms (fast) to 120 V in 120 s (slow)
instr.write('SOUR:CURR:PROT:LEV %f'%current_lim)
instr.write('SOUR:CURR:PROT:STAT ON')
instr.write('SOUR:VOLT:LEV:IMM %f'%volt)
instr.write('SOUR:VOLT:SLEW:RIS %f '%slew_rate)
instr.write('OUTP:STAT:IMM ON')
    def set_volt_slew_rate_fall_CV(instr, slew_rate):  # select F-03 = 2 beforehand; fall spans 120 V in 86 ms (fast) to 120 V in 120 s (slow)
instr.write('SOUR:VOLT:SLEW:FALL %f '%slew_rate)
instr.write('OUTP:STAT:IMM OFF')
| 52.55 | 153 | 0.629401 | 332 | 2,102 | 3.88253 | 0.201807 | 0.186191 | 0.184639 | 0.139643 | 0.792863 | 0.755625 | 0.734678 | 0.734678 | 0.696664 | 0.696664 | 0 | 0.025749 | 0.205519 | 2,102 | 39 | 154 | 53.897436 | 0.746108 | 0.174596 | 0 | 0.647059 | 0 | 0 | 0.304644 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0 | 0 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9bea131171b3577fea029c6acecd2eb0a03f1d98 | 1,525 | py | Python | pyaz/webapp/deployment/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/webapp/deployment/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/webapp/deployment/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | '''
Manage web app deployments.
'''
from ... pyaz_utils import _call_az
from . import container, github_actions, slot, source, user
def list_publishing_profiles(name, resource_group, slot=None, xml=None):
'''
Get the details for available web app deployment profiles.
Required Parameters:
- name -- name of the web app. If left unspecified, a name will be randomly generated. You can configure the default using `az configure --defaults web=<name>`
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- slot -- the name of the slot. Default to the productions slot if not specified
- xml -- retrieves the publishing profile details in XML format
'''
return _call_az("az webapp deployment list-publishing-profiles", locals())
def list_publishing_credentials(name, resource_group, slot=None):
'''
Get the details for available web app publishing credentials
Required Parameters:
- name -- name of the web app. If left unspecified, a name will be randomly generated. You can configure the default using `az configure --defaults web=<name>`
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
Optional Parameters:
- slot -- the name of the slot. Default to the productions slot if not specified
'''
return _call_az("az webapp deployment list-publishing-credentials", locals())
| 42.361111 | 163 | 0.730492 | 209 | 1,525 | 5.253589 | 0.291866 | 0.071038 | 0.061931 | 0.065574 | 0.754098 | 0.712204 | 0.712204 | 0.712204 | 0.568306 | 0.568306 | 0 | 0 | 0.188852 | 1,525 | 35 | 164 | 43.571429 | 0.887631 | 0.676066 | 0 | 0 | 0 | 0 | 0.226277 | 0.124088 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
501b3f77c9fc6e5101579ee2f9439c7af1b3dca8 | 67 | py | Python | setup.py | alejandrogallo/show-me-your-electrons | 509a7193868a06adc25dbea2c5677e02f8de2f21 | [
"MIT"
] | null | null | null | setup.py | alejandrogallo/show-me-your-electrons | 509a7193868a06adc25dbea2c5677e02f8de2f21 | [
"MIT"
] | null | null | null | setup.py | alejandrogallo/show-me-your-electrons | 509a7193868a06adc25dbea2c5677e02f8de2f21 | [
"MIT"
] | null | null | null | from setuptools import setup
import smye
setup(**smye.SETUP_INFO)
| 13.4 | 28 | 0.80597 | 10 | 67 | 5.3 | 0.6 | 0.339623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119403 | 67 | 4 | 29 | 16.75 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
501ddc2c8b2aaee3b67f30a7842764efafa0aeae | 45,354 | py | Python | tests/test_views.py | maykinmedia/django-digid-eherkenning | 48efc88e64d3b2f9aa793758cd313b6ad8c633c4 | [
"MIT"
] | 1 | 2022-02-25T19:36:08.000Z | 2022-02-25T19:36:08.000Z | tests/test_views.py | maykinmedia/django-digid-eherkenning | 48efc88e64d3b2f9aa793758cd313b6ad8c633c4 | [
"MIT"
] | 2 | 2022-03-15T14:05:58.000Z | 2022-03-17T08:33:44.000Z | tests/test_views.py | maykinmedia/django-digid-eherkenning | 48efc88e64d3b2f9aa793758cd313b6ad8c633c4 | [
"MIT"
] | null | null | null | import urllib
from base64 import b64decode, b64encode
from hashlib import sha1
from unittest import skip
from unittest.mock import patch
from django.conf import settings
from django.contrib import auth
from django.test import TestCase
from django.urls import reverse
from django.utils import timezone
import responses
from freezegun import freeze_time
from furl import furl
from lxml import etree
from onelogin.saml2.utils import OneLogin_Saml2_Utils
from .project.models import User
from .utils import get_saml_element
def create_example_artifact(endpoint_url, endpoint_index=b"\x00\x00"):
    # SAML 2.0 artifact layout (HTTP-Artifact binding):
    # 2-byte TypeCode || 2-byte EndpointIndex || 20-byte SourceID || 20-byte MessageHandle
    type_code = b"\x00\x04"
    source_id = sha1(endpoint_url.encode("utf-8")).digest()  # SourceID = SHA-1 hash of the issuer entity ID
    message_handle = b"01234567890123456789"  # something random; must be 20 bytes
return b64encode(type_code + endpoint_index + source_id + message_handle)
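# Companion sketch (hedged addition, not in the original tests): unpack an
# artifact from create_example_artifact back into its fields, making the
# byte layout above explicit.
def parse_example_artifact(artifact):
    raw = b64decode(artifact)
    return {
        "type_code": raw[0:2],
        "endpoint_index": raw[2:4],
        "source_id": raw[4:24],
        "message_handle": raw[24:44],
    }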
class DigidLoginViewTests(TestCase):
maxDiff = None
@freeze_time("2020-04-09T08:31:46Z")
@patch("onelogin.saml2.utils.uuid4")
def test_login(self, uuid_mock):
"""
        DigiD
        Make sure DigiD - 3.3.2 Stap 2 Authenticatievraag (step 2,
        the authentication request) works as intended.
"""
uuid_mock.hex = "80dd245883b84bd98dacbf3978af3d03"
response = self.client.get(reverse("digid:login"))
saml_request = b64decode(
response.context["form"].initial["SAMLRequest"].encode("utf-8")
)
#
        # DigiD - 1.4 Voorbeeldbericht bij Stap 2 (example message for step 2): AuthnRequest Post Binding
#
# <?xml version="1.0" encoding="UTF-8"?>
# <samlp:AuthnRequest
# xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
# xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
# xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
# xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"
# Destination="https://example.com" ForceAuthn="false" ID="_1330416073" Version="2.0"
# IssueInstant="2012-02-28T09:01:13Z" AssertionConsumerServiceIndex="0"
# ProviderName="provider name">
# <saml:Issuer>https://sp.example.com</saml:Issuer>
        # <ds:Signature><!-- Zie XML Signature --></ds:Signature>
# <samlp:RequestedAuthnContext Comparison="minimum">
# <saml:AuthnContextClassRef>
# urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
# </saml:AuthnContextClassRef>
# </samlp:RequestedAuthnContext>
# </samlp:AuthnRequest>
#
# DigiD - 1.1 Xml Signature
# <ds:Signature>
# <ds:SignedInfo>
# <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
# <ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
# <ds:Reference URI="#_1330416073">
# <ds:Transforms>
# <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
# <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
# <ec:InclusiveNamespaces PrefixList="ds saml samlp xs"/>
# </ds:Transform>
# </ds:Transforms>
# <ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
# <ds:DigestValue>irsh4GNXQcsbkUmex22XsUejBTXyDdHfaUL/MFFWQHs=</ds:DigestValue>
# </ds:Reference>
# </ds:SignedInfo>
# <ds:SignatureValue>YJ0V4gCTwRYvgy <INGEKORT> LnOEvyF2ddwBFwILL4nCpw==</ds:SignatureValue>
# </ds:Signature>
tree = etree.fromstring(saml_request)
self.assertEqual(
tree.attrib,
{
"ID": "ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f",
"Version": "2.0",
"IssueInstant": "2020-04-09T08:31:46Z",
"Destination": "https://preprod1.digid.nl/saml/idp/request_authentication",
"ProtocolBinding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact",
"AssertionConsumerServiceURL": "https://sp.example.nl/digid/acs/",
},
)
auth_context_class_ref = tree.xpath(
"samlp:RequestedAuthnContext[@Comparison='minimum']/saml:AuthnContextClassRef",
namespaces={
"samlp": "urn:oasis:names:tc:SAML:2.0:protocol",
"saml": "urn:oasis:names:tc:SAML:2.0:assertion",
},
)[0]
self.assertEqual(
auth_context_class_ref.text,
"urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorContract",
)
# Make sure Signature properties are as expected.
signature = tree.xpath(
"//xmldsig:Signature",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)[0]
elements = signature.xpath(
"//xmldsig:SignatureValue",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = signature.xpath(
"//xmldsig:DigestValue",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = signature.xpath(
"//xmldsig:X509Certificate",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
expected_signature = (
'<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" '
' xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
' xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
"<ds:SignedInfo>"
'<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
'<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>'
'<ds:Reference URI="#ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f">'
"<ds:Transforms>"
'<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>'
'<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
"</ds:Transforms>"
'<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>'
"<ds:DigestValue></ds:DigestValue>"
"</ds:Reference>"
"</ds:SignedInfo>"
"<ds:SignatureValue></ds:SignatureValue>"
"<ds:KeyInfo>"
"<ds:X509Data>"
"<ds:X509Certificate></ds:X509Certificate>"
"</ds:X509Data>"
"</ds:KeyInfo>"
"</ds:Signature>"
)
self.assertXMLEqual(
etree.tostring(signature, pretty_print=True).decode("utf-8"),
etree.tostring(
etree.fromstring(expected_signature), pretty_print=True
).decode("utf-8"),
)
@freeze_time("2020-04-09T08:31:46Z")
class DigidAssertionConsumerServiceViewTests(TestCase):
maxDiff = None
def setUp(self):
super().setUp()
        # DigiD - 1.6 Voorbeeldbericht bij Stap 7 (example message for step 7): Artifact Response (SOAP)
        # Wrapped in a SOAP envelope. For readability, the SAML Assertion was taken out of the Response.
# <samlp:ArtifactResponse
# xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
# xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
# xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
# xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"
# ID="_1330416516" Version="2.0" IssueInstant="2012-12-20T18:50:27Z"
# InResponseTo="_1330416516">
# <saml:Issuer>https://idp.example.com</saml:Issuer>
# <ds:Signature><!-- Zie XML Signature --></ds:Signature>
# <samlp:Status>
# <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
# </samlp:Status>
# <samlp:Response InResponseTo="_7afa5ce49" Version="2.0" ID="_1072ee96"
# IssueInstant="2012-12-20T18:50:27Z">
# <saml:Issuer>https://idp.example.com</saml:Issuer>
# <samlp:Status>
# <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
# </samlp:Status>
        # <saml:Assertion><!-- ZIE ASSERTION HIERONDER --></saml:Assertion>
# </samlp:Response>
# </samlp:ArtifactResponse>
# <saml:Assertion Version="2.0" ID="_dc9f70e61c" IssueInstant="2012-12-20T18:50:27Z">
# <saml:Issuer>https://idp.example.com</saml:Issuer>
        # <ds:Signature><!-- Optioneel Zie XML Signature --></ds:Signature>
# <saml:Subject>
# <saml:NameID>s00000000:12345678</saml:NameID>
# <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
# <saml:SubjectConfirmationData InResponseTo="_7afa5ce49"
# Recipient="http://example.com/artifact_url" NotOnOrAfter="2012-12-20T18:52:27Z"/>
# </saml:SubjectConfirmation>
# </saml:Subject>
# <saml:Conditions NotBefore="2012-12-20T18:48:27Z" NotOnOrAfter="2012-12-20T18:52:27Z">
# <saml:AudienceRestriction>
# <saml:Audience>http://sp.example.com</saml:Audience>
# </saml:AudienceRestriction>
# </saml:Conditions>
# <saml:AuthnStatement SessionIndex="17" AuthnInstant="2012-12-20T18:50:27Z">
# <saml:SubjectLocality Address="127.0.0.1"/>
# <saml:AuthnContext Comparison="minimum">
# <saml:AuthnContextClassRef>
# urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
# </saml:AuthnContextClassRef>
# </saml:AuthnContext>
# </saml:AuthnStatement>
# </saml:Assertion>
self.bogus_signature = (
"<ds:Signature>"
"<ds:SignedInfo>"
'<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
'<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>'
'<ds:Reference URI="#{id}">'
"<ds:Transforms>"
'<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>'
'<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">'
'<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xacml-saml"/>'
"</ds:Transform>"
"</ds:Transforms>"
'<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>'
"<ds:DigestValue></ds:DigestValue>"
"</ds:Reference>"
"</ds:SignedInfo>"
"<ds:SignatureValue>"
""
"</ds:SignatureValue>"
"<ds:KeyInfo>"
"<ds:KeyName></ds:KeyName>"
"</ds:KeyInfo>"
"</ds:Signature>"
)
self.response = (
'<samlp:Response InResponseTo="_7afa5ce49" Version="2.0" ID="_1072ee96"'
' IssueInstant="2020-04-09T08:31:46Z">'
"<saml:Issuer>https://was-preprod1.digid.nl/saml/idp/metadata</saml:Issuer>"
+ self.bogus_signature.format(id="_1072ee96")
+ "<samlp:Status>"
'<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>'
"</samlp:Status>"
'<saml:Assertion Version="2.0" ID="_dc9f70e61c" IssueInstant="2020-04-09T08:31:46Z">'
"<saml:Issuer>https://was-preprod1.digid.nl/saml/idp/metadata</saml:Issuer>"
"<saml:Subject>"
"<saml:NameID>s00000000:12345678</saml:NameID>"
'<saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">'
'<saml:SubjectConfirmationData InResponseTo="_7afa5ce49"'
' Recipient="https://sp.example.nl/digid/acs/" NotOnOrAfter="2020-04-10T08:31:46Z"/>'
"</saml:SubjectConfirmation>"
"</saml:Subject>"
'<saml:Conditions NotBefore="2012-12-20T18:48:27Z" NotOnOrAfter="2020-04-10T08:31:46Z">'
"<saml:AudienceRestriction>"
"<saml:Audience>sp.example.nl/digid</saml:Audience>"
"</saml:AudienceRestriction>"
"</saml:Conditions>"
'<saml:AuthnStatement SessionIndex="17" AuthnInstant="2020-04-09T08:31:46Z">'
'<saml:SubjectLocality Address="127.0.0.1"/>'
"<saml:AuthnContext>"
"<saml:AuthnContextClassRef>"
" urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport"
"</saml:AuthnContextClassRef>"
"</saml:AuthnContext>"
"</saml:AuthnStatement>"
"</saml:Assertion>"
"</samlp:Response>"
)
self.artifact_response = (
"<samlp:ArtifactResponse"
' xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"'
' xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"'
' xmlns:ds="http://www.w3.org/2000/09/xmldsig#"'
' xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"'
' ID="_1330416516" Version="2.0" IssueInstant="2020-04-09T08:31:46Z"'
' InResponseTo="ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f">'
"<saml:Issuer>https://was-preprod1.digid.nl/saml/idp/metadata</saml:Issuer>"
"<samlp:Status>"
'<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>'
"</samlp:Status>" + self.response + "</samlp:ArtifactResponse>"
)
self.artifact_response_soap = (
b'<?xml version="1.0" encoding="UTF-8"?>'
b"<soapenv:Envelope"
b' xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"'
b' xmlns:xsd="http://www.w3.org/2001/XMLSchema"'
b' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
b"<soapenv:Body>"
+ str(self.artifact_response).encode("utf-8")
+ b"</soapenv:Body>"
b"</soapenv:Envelope>"
)
self.artifact = create_example_artifact(
"https://was-preprod1.digid.nl/saml/idp/metadata"
)
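# For reference, a SAML 2.0 type-0x0004 artifact is roughly
# base64(TypeCode || EndpointIndex || SHA1(issuer entity ID) || 20-byte MessageHandle);
# the helper above presumably builds such a value for the given issuer.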
self.uuid_patcher = patch("onelogin.saml2.utils.uuid4")
self.cache_patcher = patch("digid_eherkenning.saml2.base.cache")
self.uuid_mock = self.uuid_patcher.start()
self.uuid_mock.hex = "80dd245883b84bd98dacbf3978af3d03"
self.cache_mock = self.cache_patcher.start()
self.cache_mock.get.return_value = {
"current_time": timezone.now(),
"client_ip_address": "127.0.0.1",
}
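# The cache normally stores state for the outbound AuthnRequest (issue time,
# client IP) keyed by its ID; stub it so the InResponseTo lookup succeeds.
# Signature validation is likewise patched out, since the fixtures above
# carry bogus, unsigned signatures.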
self.validate_sign_patcher = patch.object(OneLogin_Saml2_Utils, "validate_sign")
self.validate_sign_mock = self.validate_sign_patcher.start()
self.addCleanup(patch.stopall)
@responses.activate
def test_response_status_code_authnfailed(self):
root_element = etree.fromstring(self.artifact_response_soap)
# Remove the Assertion element. It is not returned
# when the user cancels.
assertion = get_saml_element(
root_element,
"//saml:Assertion",
)
assertion.getparent().remove(assertion)
status_code = get_saml_element(
root_element, "//samlp:Response/samlp:Status/samlp:StatusCode"
)
status_code.set("Value", "urn:oasis:names:tc:SAML:2.0:status:Responder")
status_code.insert(
0,
etree.Element(
"{urn:oasis:names:tc:SAML:2.0:protocol}StatusCode",
Value="urn:oasis:names:tc:SAML:2.0:status:NoAvailableIDP",
),
)
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=etree.tostring(root_element),
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url, follow=True)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"A technical error occurred from 127.0.0.1 during DigiD login.", logs
)
self.assertEqual(response.redirect_chain, [("/admin/login/", 302)])
self.assertEqual(
list(response.context["messages"])[0].message,
"Login to DigiD did not succeed. Please try again.",
)
# Make sure no user is created.
self.assertEqual(User.objects.count(), 0)
@responses.activate
def test_artifact_response_status_code_authnfailed(self):
root_element = etree.fromstring(self.artifact_response_soap)
status_code = get_saml_element(
root_element, "//samlp:ArtifactResponse/samlp:Status/samlp:StatusCode"
)
status_code.set("Value", "urn:oasis:names:tc:SAML:2.0:status:AuthnFailed")
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=etree.tostring(root_element),
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url, follow=True)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"A technical error occurred from 127.0.0.1 during DigiD login.", logs
)
self.assertEqual(response.redirect_chain, [("/admin/login/", 302)])
self.assertEqual(
list(response.context["messages"])[0].message,
"Login to DigiD did not succeed. Please try again.",
)
# Make sure no user is created.
self.assertEqual(User.objects.count(), 0)
@skip("See issue #2. Not implemented")
@responses.activate
def test_invalid_subject_ip_address(self):
root_element = etree.fromstring(self.artifact_response_soap)
subject_locality = get_saml_element(
root_element, "//saml:AuthnStatement/saml:SubjectLocality"
)
# We do the request with 127.0.0.1
subject_locality.set("Address", "127.0.0.2")
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=etree.tostring(root_element),
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url, follow=True)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"A technical error occurred from 127.0.0.1 during DigiD login.", logs
)
self.assertEqual(response.redirect_chain, [("/admin/login/", 302)])
self.assertEqual(
list(response.context["messages"])[0].message,
"Login to DigiD did not succeed. Please try again.",
)
# Make sure no user is created.
self.assertEqual(User.objects.count(), 0)
@responses.activate
def test_get(self):
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact, "RelayState": "/home/"})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url, secure=True)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"User user-12345678 (new account) from 127.0.0.1 logged in using DigiD",
logs,
)
# Make sure we're redirected to the right place.
self.assertEqual(response.url, "/home/")
# Make sure the user is created and logged in.
user = auth.get_user(self.client)
self.assertEqual(user.username, "user-12345678")
self.assertEqual(user.bsn, "12345678")
# DigiD - Step 6
# 1.5 Example message for Step 6: Artifact Resolve (SOAP)
# <samlp:ArtifactResolve
# xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
# xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
# xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
# xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"
# ID="_1330416073" Version="2.0" IssueInstant="2012-02-28T09:01:13Z">
# <saml:Issuer>http://sp.example.com</saml:Issuer>
# <ds:Signature><!-- See XML Signature --></ds:Signature>
# <samlp:Artifact>AAQAAMh48/1oXIMRdUmllwn9jJHyEgIi8=</samlp:Artifact>
# </samlp:ArtifactResolve>
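# Blank the run-dependent signature values (SignatureValue, DigestValue,
# X509Certificate) before inspecting the resolve request body.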
tree = etree.fromstring(responses.calls[0].request.body)
elements = tree.xpath(
"//xmldsig:SignatureValue",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = tree.xpath(
"//xmldsig:DigestValue",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = tree.xpath(
"//xmldsig:X509Certificate",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = tree.xpath(
"//samlp:Artifact",
namespaces={"samlp": "urn:oasis:names:tc:SAML:2.0:protocol"},
)
# Make sure the Artifact is sent as-is.
self.assertEqual(elements[0].text, self.artifact.decode("utf-8"))
elements = tree.xpath(
"//saml:Issuer",
namespaces={"saml": "urn:oasis:names:tc:SAML:2.0:assertion"},
)
self.assertEqual(elements[0].text, "sp.example.nl/digid")
# Make sure that the cache is checked for the InResponseTo returned
# by the IDP.
self.cache_mock.get.assert_called_once_with("digid__7afa5ce49")
@responses.activate
def test_no_authn_request(self):
"""
Make sure that when the InResponseTo in the Response does not match
any id we've given out, an error occurs.
"""
self.cache_mock.get.return_value = None
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url, secure=True)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"A technical error occurred from 127.0.0.1 during DigiD login.", logs
)
self.assertEqual(response.status_code, 302)
self.assertEqual(response.url, settings.DIGID["login_url"])
# Make sure no user is created.
self.assertEqual(User.objects.count(), 0)
@responses.activate
def test_redirect_default(self):
"""
Make sure the view returns to the default URL if no RelayState is set
"""
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
response = self.client.get(url)
self.assertEqual(response.url, settings.LOGIN_REDIRECT_URL)
@responses.activate
def test_lower_session_age(self):
"""
Make sure the session age is lowered, since 'session_age' is
set to 15 * 60 seconds in the configuration.
DigiD requires a session of at most 15 minutes. See DigiDCheck 2.2 T14 -- Sessieduur (session duration).
"""
responses.add(
responses.POST,
"https://was-preprod1.digid.nl/saml/idp/resolve_artifact",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("digid:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
response = self.client.get(url)
self.assertEqual(self.client.session.get_expiry_age(), 900)
class eHerkenningLoginViewTests(TestCase):
maxDiff = None
@freeze_time("2020-04-09T08:31:46Z")
@patch("onelogin.saml2.utils.uuid4")
def test_login(self, uuid_mock):
uuid_mock.hex = "80dd245883b84bd98dacbf3978af3d03"
response = self.client.get(reverse("eherkenning:login"))
saml_request = b64decode(
response.context["form"].initial["SAMLRequest"].encode("utf-8")
)
tree = etree.fromstring(saml_request)
self.assertEqual(
tree.attrib,
{
"ID": "ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f",
"Version": "2.0",
"ForceAuthn": "true",
"IssueInstant": "2020-04-09T08:31:46Z",
"Destination": "https://eh01.staging.iwelcome.nl/broker/sso/1.13",
"ProtocolBinding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact",
"AssertionConsumerServiceURL": "https://example.com/eherkenning/acs/",
},
)
auth_context_class_ref = tree.xpath(
"samlp:RequestedAuthnContext[@Comparison='minimum']/saml:AuthnContextClassRef",
namespaces={
"samlp": "urn:oasis:names:tc:SAML:2.0:protocol",
"saml": "urn:oasis:names:tc:SAML:2.0:assertion",
},
)[0]
self.assertEqual(
auth_context_class_ref.text,
"urn:etoegang:core:assurance-class:loa3",
)
# Make sure Signature properties are as expected.
signature = tree.xpath(
"//xmldsig:Signature",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)[0]
elements = signature.xpath(
"//xmldsig:SignatureValue",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = signature.xpath(
"//xmldsig:DigestValue",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
elements = signature.xpath(
"//xmldsig:X509Certificate",
namespaces={"xmldsig": "http://www.w3.org/2000/09/xmldsig#"},
)
elements[0].text = ""
expected_signature = (
'<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" '
' xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
' xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
"<ds:SignedInfo>"
'<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
'<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>'
'<ds:Reference URI="#ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f">'
"<ds:Transforms>"
'<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>'
'<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
"</ds:Transforms>"
'<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>'
"<ds:DigestValue></ds:DigestValue>"
"</ds:Reference>"
"</ds:SignedInfo>"
"<ds:SignatureValue></ds:SignatureValue>"
"<ds:KeyInfo>"
"<ds:X509Data>"
"<ds:X509Certificate></ds:X509Certificate>"
"</ds:X509Data>"
"</ds:KeyInfo>"
"</ds:Signature>"
)
self.assertXMLEqual(
etree.tostring(signature, pretty_print=True).decode("utf-8"),
etree.tostring(
etree.fromstring(expected_signature), pretty_print=True
).decode("utf-8"),
)
@freeze_time("2020-04-09T08:31:46Z")
@patch("onelogin.saml2.utils.uuid4")
def test_login_with_attribute_consuming_service_index(self, uuid_mock):
uuid_mock.hex = "80dd245883b84bd98dacbf3978af3d03"
url = furl(reverse("eherkenning:login")).set(
{"attr_consuming_service_index": "2"}
)
response = self.client.get(url)
saml_request = b64decode(
response.context["form"].initial["SAMLRequest"].encode("utf-8")
)
tree = etree.fromstring(saml_request)
self.assertEqual(
tree.attrib,
{
"ID": "ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f",
"Version": "2.0",
"ForceAuthn": "true",
"IssueInstant": "2020-04-09T08:31:46Z",
"Destination": "https://eh01.staging.iwelcome.nl/broker/sso/1.13",
"ProtocolBinding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact",
"AssertionConsumerServiceURL": "https://example.com/eherkenning/acs/",
"AttributeConsumingServiceIndex": "2",
},
)
@freeze_time("2020-04-09T08:31:46Z")
class eHerkenningAssertionConsumerServiceViewTests(TestCase):
def setUp(self):
super().setUp()
cert_file = settings.EHERKENNING["cert_file"]
key_file = settings.EHERKENNING["key_file"]
with open(key_file, "r") as key_f, open(cert_file, "r") as cert_f:
key = key_f.read()
cert = cert_f.read()
encrypted_attribute = OneLogin_Saml2_Utils.generate_name_id(
"123456782",
sp_nq=None,
nq="urn:etoegang:1.9:EntityConcernedID:RSIN",
sp_format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
cert=cert,
)
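# With a cert supplied, generate_name_id wraps the RSIN in an encrypted
# saml:EncryptedID blob, as an eHerkenning broker would return it.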
self.bogus_signature = (
"<ds:Signature>"
"<ds:SignedInfo>"
'<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
'<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>'
'<ds:Reference URI="#{id}">'
"<ds:Transforms>"
'<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>'
'<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">'
'<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xacml-saml"/>'
"</ds:Transform>"
"</ds:Transforms>"
'<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>'
"<ds:DigestValue></ds:DigestValue>"
"</ds:Reference>"
"</ds:SignedInfo>"
"<ds:SignatureValue>"
""
"</ds:SignatureValue>"
"<ds:KeyInfo>"
"<ds:KeyName></ds:KeyName>"
"</ds:KeyInfo>"
"</ds:Signature>"
)
# self.bogus_signature = (
# '<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">'
# '<ds:SignedInfo>'
# '<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>'
# '<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>'
# '<ds:Reference URI="#_0ddd4451-264c-3823-88e2-7da7490652cd">'
# '<ds:Transforms>'
# '<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>'
# '<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">'
# '<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xacml-saml"/>'
# '</ds:Transform>'
# '</ds:Transforms>'
# '<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>'
# '<ds:DigestValue>/q9QSh5W8fF0+UxcvJ3tbxPGSD4d66BGekaHyqH+oX4=</ds:DigestValue>'
# '</ds:Reference>'
# '</ds:SignedInfo>'
# '<ds:SignatureValue>'
# 'zxgJZd8Y1w/xce3ptMqSOLlA7MSv9r7hG0x+XQYQohSJpldEp5/ZVV6TyxonMvmKSJxl7KNoMvK9'
# 'XrXDuy02L0oIchUFIUfU5O1h5IouON6WRuQjhcILlL/hhWayqwabiDJ8iqAoifgmSRM1A/Am0+6c'
# '9oTULCLjk3OtHZXcXb0VWJGM9CHvLiG2rWJtggxhJOFX0TQ5AUIkDtilN74flQSyH5bAlXSnkyo5'
# 'Z77nQ4NcdWctpOSgnwx5fFHg69IWac8DjYs2/eQ72AIDsoEgb7x/qtCchseJSbm6rCDJWi8qzMDj'
# '0uw0mnxf1OrrLq2Mmz5hopGn0y+ueGwCDsNwY2Bd1DgifqzH8ra5asI63rkPghOuM7x96Ovob2lx'
# 'bJAXVkZXinIsCxVrNTSPXIjQiLs+uHkM/rDa31a9XXGRddTekOI449ZRxgvlMcp2SViIPmBWv8Fe'
# 'rbgriNaRZ2Kr2oa1sXcc02UGwDvJ6jX+q2EXd38txiuW254LzI9P9FenW7CQsuKR9ArIW9XWyQnI'
# 'FB9X/mWKZXxVsf8yhlQ9mgDb3xtvQ326TYD9PuCVInRmsBVATVGJs64qEEaJq17XaL52JzXZicK/'
# 'rb8ciC3U/vruE5OWcsORQEivG09LcDu9cFhFLjSuPtaEbAS34rVKIsmNLJvbg3e/qaS2oMszEP4='
# '</ds:SignatureValue>'
# '<ds:KeyInfo>'
# '<ds:KeyName>e6e04e0a22bbc8a036a8a243abc9655e92907f73a4ba5a2ad28485ec3f4c82d1</ds:KeyName>'
# '</ds:KeyInfo>'
# '</ds:Signature>'
# )
# eHerkenning returns an Advice element with more elements than this. But these elements are what
# broke python3-saml (its XML signature wrapping protection rejects the extra nested Signature and
# Assertion nodes) and for which I had to introduce the "disableSignatureWrappingProtection" security setting.
self.advice = (
"<saml:Advice>"
'<saml:Assertion ID="bla" IssueInstant="2020-04-09T08:31:46Z" Version="2.0">'
"<saml:Issuer>urn:etoegang:HM:00000003520354760000:entities:9632</saml:Issuer>"
+ self.bogus_signature.format(id="bla")
+ "</saml:Assertion>"
"</saml:Advice>"
)
self.assertion = (
'<saml:Assertion ID="_ae28e39f-bf7a-32d5-9653-3ad07c0e911e" IssueInstant="2020-04-09T08:31:46Z" Version="2.0" xmlns:xacml-saml="urn:oasis:xacml:2.0:saml:assertion:schema:os">'
"<saml:Issuer>urn:etoegang:HM:00000003520354760000:entities:9632</saml:Issuer>"
+ self.bogus_signature.format(id="_ae28e39f-bf7a-32d5-9653-3ad07c0e911e")
+ "<saml:Subject>"
'<saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient" NameQualifier="urn:etoegang:EB:00000004000000149000:entities:9009">b964780b-3441-4e57-a027-a59c21c3019d</saml:NameID>'
'<saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">'
'<saml:SubjectConfirmationData InResponseTo="id-jiaDzLL9mR3C3hioH" NotOnOrAfter="2020-04-09T08:35:46Z" Recipient="https://example.com/eherkenning/acs/"/>'
"</saml:SubjectConfirmation>"
"</saml:Subject>"
'<saml:Conditions NotBefore="2020-04-09T08:31:46Z" NotOnOrAfter="2020-04-09T08:35:46Z">'
"<saml:AudienceRestriction>"
"<saml:Audience>urn:etoegang:DV:0000000000000000001:entities:0002</saml:Audience>"
"</saml:AudienceRestriction>"
"</saml:Conditions>"
+ self.advice
+ '<saml:AuthnStatement AuthnInstant="2020-05-06T10:50:14Z">'
"<saml:AuthnContext>"
"<saml:AuthnContextClassRef>urn:etoegang:core:assurance-class:loa3</saml:AuthnContextClassRef>"
"<saml:AuthenticatingAuthority>urn:etoegang:EB:00000004000000149000:entities:9009</saml:AuthenticatingAuthority>"
"</saml:AuthnContext>"
"</saml:AuthnStatement>"
"<saml:AttributeStatement>"
'<saml:Attribute Name="urn:etoegang:core:ServiceID">'
'<saml:AttributeValue xsi:type="xs:string">urn:etoegang:DV:00000002003214394001:services:5000</saml:AttributeValue>'
"</saml:Attribute>"
'<saml:Attribute Name="urn:etoegang:core:ServiceUUID">'
'<saml:AttributeValue xsi:type="xs:string">87f3035b-b0c2-482a-b693-98316f5f4ba4</saml:AttributeValue>'
"</saml:Attribute>"
'<saml:Attribute FriendlyName="ActingSubjectID" Name="urn:etoegang:core:LegalSubjectID">'
"<saml:AttributeValue>"
+ encrypted_attribute
+ "</saml:AttributeValue></saml:Attribute>"
"</saml:AttributeStatement>"
"</saml:Assertion>"
)
self.response = (
"<samlp:Response"
' Destination="https://example.com/eherkenning/acs/"'
' ID="_d4d73890-b5ca-3ca4-ab7b-d078378e3527"'
' InResponseTo="id-jiaDzLL9mR3C3hioH"'
' IssueInstant="2020-04-09T08:31:46Z"'
' Version="2.0"'
' xmlns:ds="http://www.w3.org/2000/09/xmldsig#"'
' xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"'
' xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"'
' xmlns:xs="http://www.w3.org/2001/XMLSchema"'
' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
"<saml:Issuer>urn:etoegang:HM:00000003520354760000:entities:9632</saml:Issuer>"
"<samlp:Status>"
'<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>'
"</samlp:Status>" + self.assertion + "</samlp:Response>"
)
self.artifact_response = (
"<samlp:ArtifactResponse"
' xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"'
' xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"'
' xmlns:ds="http://www.w3.org/2000/09/xmldsig#"'
' xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"'
' ID="_1330416516" Version="2.0" IssueInstant="2020-04-09T08:31:46Z"'
' InResponseTo="ONELOGIN_5ba93c9db0cff93f52b521d7420e43f6eda2784f">'
"<saml:Issuer>urn:etoegang:HM:00000003520354760000:entities:9632</saml:Issuer>"
"<samlp:Status>"
'<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>'
"</samlp:Status>" + self.response + "</samlp:ArtifactResponse>"
)
self.artifact_response_soap = (
b'<?xml version="1.0" encoding="UTF-8"?>'
b"<soapenv:Envelope"
b' xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"'
b' xmlns:xsd="http://www.w3.org/2001/XMLSchema"'
b' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">'
b"<soapenv:Body>"
+ str(self.artifact_response).encode("utf-8")
+ b"</soapenv:Body>"
b"</soapenv:Envelope>"
)
self.artifact = create_example_artifact(
"urn:etoegang:HM:00000003520354760000:entities:9632",
endpoint_index=b"\x00\x01",
)
self.uuid_patcher = patch("onelogin.saml2.utils.uuid4")
self.cache_patcher = patch("digid_eherkenning.saml2.base.cache")
self.uuid_mock = self.uuid_patcher.start()
self.uuid_mock.hex = "80dd245883b84bd98dacbf3978af3d03"
self.cache_mock = self.cache_patcher.start()
self.cache_mock.get.return_value = {
"current_time": timezone.now(),
"client_ip_address": "127.0.0.1",
}
self.validate_sign_patcher = patch.object(OneLogin_Saml2_Utils, "validate_sign")
self.validate_sign_mock = self.validate_sign_patcher.start()
self.addCleanup(patch.stopall)
@responses.activate
def test_get(self):
responses.add(
responses.POST,
"https://eh02.staging.iwelcome.nl/broker/ars/1.13",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("eherkenning:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact, "RelayState": "/home/"})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url, secure=True)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"User user-123456782 (new account) from 127.0.0.1 logged in using eHerkenning",
logs,
)
# Make sure we're redirected to the right place.
self.assertEqual(response.url, "/home/")
# Make sure that the cache is checked for the InResponseTo returned
# by the IDP.
self.cache_mock.get.assert_called_once_with("eherkenning_id-jiaDzLL9mR3C3hioH")
@responses.activate
def test_no_authn_request(self):
"""
Make sure that when the InResponseTo in the Response does not match
any id we've given out, an error occurs.
"""
self.cache_mock.get.return_value = None
responses.add(
responses.POST,
"https://eh02.staging.iwelcome.nl/broker/ars/1.13",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("eherkenning:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"A technical error occurred from 127.0.0.1 during eHerkenning login.", logs
)
self.assertEqual(response.status_code, 302)
self.assertEqual(response.url, settings.EHERKENNING["login_url"])
# Make sure no user is created.
self.assertEqual(User.objects.count(), 0)
@responses.activate
def test_redirect_default(self):
"""
Make sure the view returns to the default URL if no RelayState is set
"""
responses.add(
responses.POST,
"https://eh02.staging.iwelcome.nl/broker/ars/1.13",
body=self.artifact_response_soap,
status=200,
)
url = (
reverse("eherkenning:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
response = self.client.get(url)
self.assertEqual(response.url, settings.LOGIN_REDIRECT_URL)
# TODO: Add authnfailed tests here as well.
@responses.activate
def test_no_rsin(self):
artifact_response_soap = etree.fromstring(self.artifact_response_soap)
# Remove the RSIN. In this scenario it is not returned by eHerkenning.
encrypted_id = get_saml_element(
artifact_response_soap,
"//saml:EncryptedID",
)
encrypted_id.getparent().remove(encrypted_id)
responses.add(
responses.POST,
"https://eh02.staging.iwelcome.nl/broker/ars/1.13",
body=etree.tostring(artifact_response_soap),
status=200,
)
url = (
reverse("eherkenning:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
response = self.client.get(url, follow=True)
messages = [str(m) for m in response.context["messages"]]
self.assertIn(
"No RSIN returned by eHerkenning. Login to eHerkenning did not succeed.",
messages,
)
@responses.activate
def test_user_cancels(self):
"""
Test that when a user cancels this is logged properly.
"""
artifact_response_soap = etree.fromstring(self.artifact_response_soap)
# Remove the Assertion element. It is not returned
# when the user cancels.
assertion = get_saml_element(
artifact_response_soap,
"//samlp:Response/saml:Assertion",
)
assertion.getparent().remove(assertion)
status_code = get_saml_element(
artifact_response_soap, "//samlp:Response/samlp:Status/samlp:StatusCode"
)
status_code.set("Value", "urn:oasis:names:tc:SAML:2.0:status:Responder")
status_code.insert(
0,
etree.Element(
"{urn:oasis:names:tc:SAML:2.0:protocol}StatusCode",
Value="urn:oasis:names:tc:SAML:2.0:status:AuthnFailed",
),
)
responses.add(
responses.POST,
"https://eh02.staging.iwelcome.nl/broker/ars/1.13",
body=etree.tostring(artifact_response_soap),
status=200,
)
url = (
reverse("eherkenning:acs")
+ "?"
+ urllib.parse.urlencode({"SAMLart": self.artifact})
)
with self.assertLogs("digid_eherkenning.backends", level="INFO") as log_watcher:
response = self.client.get(url)
logs = [r.getMessage() for r in log_watcher.records]
self.assertIn(
"The eHerkenning login from 127.0.0.1 did not succeed or was cancelled.",
logs,
)
self.assertEqual(response.status_code, 302)
# Make sure no user is created.
self.assertEqual(User.objects.count(), 0)
| 40.458519 | 205 | 0.592825 | 4,832 | 45,354 | 5.494826 | 0.115273 | 0.016873 | 0.021694 | 0.028925 | 0.807804 | 0.789499 | 0.757636 | 0.745998 | 0.734021 | 0.718768 | 0 | 0.072616 | 0.262777 | 45,354 | 1,120 | 206 | 40.494643 | 0.721348 | 0.177294 | 0 | 0.695541 | 0 | 0.06879 | 0.398559 | 0.18092 | 0 | 0 | 0 | 0.000893 | 0.100637 | 1 | 0.02293 | false | 0.001274 | 0.021656 | 0 | 0.054777 | 0.005096 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
503a0b05c46be03c5f5cac7bdd197d1c32962bf9 | 8,548 | py | Python | test/unit/test_mvt_manager.py | kirkhansen/djangorestframework-mvt | c6b4626c503a689ea051800d23529883f8f02918 | [
"BSD-3-Clause"
] | null | null | null | test/unit/test_mvt_manager.py | kirkhansen/djangorestframework-mvt | c6b4626c503a689ea051800d23529883f8f02918 | [
"BSD-3-Clause"
] | null | null | null | test/unit/test_mvt_manager.py | kirkhansen/djangorestframework-mvt | c6b4626c503a689ea051800d23529883f8f02918 | [
"BSD-3-Clause"
] | null | null | null | from django.core.exceptions import FieldError
from rest_framework_mvt.managers import MVTManager
from rest_framework.serializers import ValidationError
from mock import patch, MagicMock
import pytest
@pytest.fixture
def mvt_manager():
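# A manager over a fake model whose _meta exposes a table name and three
# columns; MVTManager reads these to build its ST_AsMVT query.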
mvt_manager = MVTManager(geo_col="jazzy_geo")
meta = MagicMock(db_table="test_table")
fields = [
MagicMock(
get_attname_column=MagicMock(return_value=("other_column", "other_column"))
),
MagicMock(
get_attname_column=MagicMock(return_value=("jazzy_geo", "jazzy_geo"))
),
MagicMock(get_attname_column=MagicMock(return_value=("city", "city"))),
]
meta.get_fields.return_value = fields
mvt_manager.model = MagicMock(_meta=meta)
return mvt_manager
@pytest.fixture
def mvt_manager_no_col():
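# Like the fixture above, but using the default geometry column ("geom") and
# including a generic relation whose column is None, which must be skipped.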
mvt_manager_no_col = MVTManager()
meta = MagicMock(db_table="test_table")
fields = [
MagicMock(
get_attname_column=MagicMock(return_value=("other_column", "other_column"))
),
MagicMock(
get_attname_column=MagicMock(return_value=("jazzy_geo", "jazzy_geo"))
),
MagicMock(get_attname_column=MagicMock(return_value=("city", "city"))),
MagicMock(
get_attname_column=MagicMock(return_value=("generic_relation", None))
),
]
meta.get_fields.return_value = fields
mvt_manager_no_col.model = MagicMock(_meta=meta)
return mvt_manager_no_col
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_intersect__calls__build_query(get_conn, mvt_manager):
mvt_manager._build_query = MagicMock()
mvt_manager._build_query.return_value = ("foo", ["bar"])
mvt_manager.intersect(bbox="", limit=10, offset=7)
mvt_manager._build_query.assert_called_once_with(filters={})
@patch("rest_framework_mvt.managers.MVTManager.only")
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_build_query__all(get_conn, only, mvt_manager):
query = MagicMock()
query.sql_with_params.return_value = ("SELECT other_column, city FROM table", [])
only.return_value = MagicMock(query=query)
expected_query = """
SELECT NULL AS id, ST_AsMVT(q, 'default', 4096, 'mvt_geom')
FROM (SELECT other_column, city,
ST_AsMVTGeom(ST_Transform(test_table.jazzy_geo, 3857),
ST_Transform(ST_SetSRID(ST_GeomFromText(%s), 4326), 3857), 4096, 0, false) AS mvt_geom
FROM test_table
WHERE ST_Intersects(test_table.jazzy_geo, ST_SetSRID(ST_GeomFromText(%s), 4326))
LIMIT %s
OFFSET %s) AS q;
""".strip()
expected_parameters = []
query, parameters = mvt_manager._build_query()
assert expected_query == query
assert expected_parameters == parameters
@patch("rest_framework_mvt.managers.MVTManager.only")
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_build_query__no_geo_col(get_conn, only, mvt_manager_no_col):
query = MagicMock()
query.sql_with_params.return_value = ("SELECT other_column, city FROM table", [])
only.return_value = MagicMock(query=query)
expected_query = """
SELECT NULL AS id, ST_AsMVT(q, 'default', 4096, 'mvt_geom')
FROM (SELECT other_column, city,
ST_AsMVTGeom(ST_Transform(test_table.geom, 3857),
ST_Transform(ST_SetSRID(ST_GeomFromText(%s), 4326), 3857), 4096, 0, false) AS mvt_geom
FROM test_table
WHERE ST_Intersects(test_table.geom, ST_SetSRID(ST_GeomFromText(%s), 4326))
LIMIT %s
OFFSET %s) AS q;
""".strip()
expected_parameters = []
query, parameters = mvt_manager_no_col._build_query()
assert expected_query == query
assert expected_parameters == parameters
only.assert_called_once_with("other_column", "jazzy_geo", "city")
@patch("rest_framework_mvt.managers.MVTManager.filter")
@patch("rest_framework_mvt.managers.MVTManager.only")
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_build_query__filter(get_conn, only, orm_filter, mvt_manager):
query = MagicMock()
query.sql_with_params.return_value = (
"SELECT other_column, city FROM table WHERE (city = %s)",
["johnston"],
)
only.return_value = MagicMock(query=query)
orm_filter.return_value = MagicMock(query=query)
expected_query = """
SELECT NULL AS id, ST_AsMVT(q, 'default', 4096, 'mvt_geom')
FROM (SELECT other_column, city,
ST_AsMVTGeom(ST_Transform(test_table.jazzy_geo, 3857),
ST_Transform(ST_SetSRID(ST_GeomFromText(%s), 4326), 3857), 4096, 0, false) AS mvt_geom
FROM test_table
WHERE ST_Intersects(test_table.jazzy_geo, ST_SetSRID(ST_GeomFromText(%s), 4326)) AND (city = %s)
LIMIT %s
OFFSET %s) AS q;
""".strip()
expected_parameters = ["johnston"]
query, parameters = mvt_manager._build_query(filters={"city": "johnston"})
assert expected_query == query
assert expected_parameters == parameters
@patch("rest_framework_mvt.managers.MVTManager.filter")
@patch("rest_framework_mvt.managers.MVTManager.only")
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_build_query__multiple_filters(
get_conn, only, orm_filter, mvt_manager
):
query = MagicMock()
query.sql_with_params.return_value = (
"SELECT other_column, city FROM table WHERE (city = %s AND other_column = %s)",
["johnston", "IA"],
)
only.return_value = MagicMock(query=query)
orm_filter.return_value = MagicMock(query=query)
expected_query = """
SELECT NULL AS id, ST_AsMVT(q, 'default', 4096, 'mvt_geom')
FROM (SELECT other_column, city,
ST_AsMVTGeom(ST_Transform(test_table.jazzy_geo, 3857),
ST_Transform(ST_SetSRID(ST_GeomFromText(%s), 4326), 3857), 4096, 0, false) AS mvt_geom
FROM test_table
WHERE ST_Intersects(test_table.jazzy_geo, ST_SetSRID(ST_GeomFromText(%s), 4326)) AND (city = %s AND other_column = %s)
LIMIT %s
OFFSET %s) AS q;
""".strip()
expected_parameters = ["johnston", "IA"]
query, parameters = mvt_manager._build_query(
filters={"city": "johnston", "other_column": "IA"}
)
assert expected_query == query
assert expected_parameters == parameters
@patch("rest_framework_mvt.managers.MVTManager.filter")
@patch("rest_framework_mvt.managers.MVTManager.only")
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_build_query__validation_error(
get_conn, only, orm_filter, mvt_manager
):
query = MagicMock()
query.sql_with_params.return_value = (
"SELECT other_column, city FROM table WHERE (city = %s AND other_column = %s)",
["johnston", "IA"],
)
only.return_value = MagicMock(query=query)
orm_filter.side_effect = FieldError
with pytest.raises(ValidationError) as e:
query = mvt_manager._build_query(filters={"not_a_filter": "oops"})
@patch("rest_framework_mvt.managers.MVTManager.filter")
@patch("rest_framework_mvt.managers.MVTManager._get_connection")
def test_mvt_manager_create_where_clause_with_params(get_conn, orm_filter, mvt_manager):
query_filter = MagicMock()
query_filter.sql_with_params.return_value = (
(
'SELECT "my_schema"."my_table"."id", "my_schema"."my_table"."foreign_key_id", '
'"my_schema"."my_table"."col_1", "my_schema"."my_table"."geom"::bytea FROM '
'"my_schema"."my_table" WHERE ("my_schema"."my_table"."col_1" = %s AND '
'"my_schema"."my_table"."foreign_key_id" = %s)'
),
("filter_1", 1),
)
orm_filter.return_value = MagicMock(query=query_filter)
(
parameterized_where_clause,
where_clause_parameters,
) = mvt_manager._create_where_clause_with_params(
"my_schema.my_table", {"col_1": "filter_1", "foreign_key": 1}
)
orm_filter.assert_called_once_with(col_1="filter_1", foreign_key=1)
query_filter.sql_with_params.assert_called_once()
assert parameterized_where_clause == (
"ST_Intersects(my_schema.my_table.jazzy_geo, ST_SetSRID(ST_GeomFromText(%s), 4326)) "
'AND ("my_schema"."my_table"."col_1" = %s AND "my_schema"."my_table"."foreign_key_id" = %s)'
)
assert where_clause_parameters == ["filter_1", 1]
| 39.391705 | 130 | 0.689518 | 1,086 | 8,548 | 5.054328 | 0.101289 | 0.058298 | 0.049554 | 0.07433 | 0.833303 | 0.797231 | 0.783021 | 0.729277 | 0.714702 | 0.695026 | 0 | 0.017394 | 0.192911 | 8,548 | 216 | 131 | 39.574074 | 0.778229 | 0 | 0 | 0.606557 | 0 | 0.016393 | 0.436125 | 0.20613 | 0 | 0 | 0 | 0 | 0.076503 | 1 | 0.04918 | false | 0 | 0.027322 | 0 | 0.087432 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ac8ed8ffff265c8c4b0ff7b2a289f63400d01086 | 37 | py | Python | koapy/utils/store/__init__.py | resoliwan/koapy | b0616f252bb3588695dfb37c7d9b8580a65649a3 | [
"MIT"
] | 1 | 2021-09-25T22:33:01.000Z | 2021-09-25T22:33:01.000Z | koapy/utils/store/__init__.py | resoliwan/koapy | b0616f252bb3588695dfb37c7d9b8580a65649a3 | [
"MIT"
] | null | null | null | koapy/utils/store/__init__.py | resoliwan/koapy | b0616f252bb3588695dfb37c7d9b8580a65649a3 | [
"MIT"
] | 1 | 2021-11-12T15:33:29.000Z | 2021-11-12T15:33:29.000Z | from .SQLiteStore import SQLiteStore
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
acc33c1f743811eb185f5e6dd17321dced5a6ad6 | 3,560 | py | Python | api/routes/v0_routes.py | s2t2/tweet-analyzer-py | 0a398fc47101a2d602d8c4116c970f1076a58f27 | [
"MIT"
] | 5 | 2020-04-02T12:03:57.000Z | 2020-10-18T19:29:15.000Z | api/routes/v0_routes.py | s2t2/tweet-analyzer-py | 0a398fc47101a2d602d8c4116c970f1076a58f27 | [
"MIT"
] | 22 | 2020-03-31T02:00:34.000Z | 2021-06-30T17:59:01.000Z | api/routes/v0_routes.py | s2t2/tweet-analyzer-py | 0a398fc47101a2d602d8c4116c970f1076a58f27 | [
"MIT"
] | 3 | 2020-04-04T16:08:08.000Z | 2020-10-20T01:32:46.000Z |
from flask import Blueprint, current_app, jsonify, request
api_routes = Blueprint("v0_routes", __name__)
@api_routes.route("/api/v0/user_details/<screen_name>")
def user_details(screen_name=None):
#print(f"USER DETAILS: '{screen_name}'")
if "@" in screen_name or ";" in screen_name: # just be super safe about preventing sql injection. there are no screen names with semicolons
return jsonify({"message": f"Oh, expecting a screen name like 'politico'. Please try again."}), 400
response = list(current_app.config["BQ_SERVICE"].fetch_user_details_api_v0(screen_name))
try:
return jsonify(dict(response[0]))
except IndexError as err:
print(err)
return jsonify({"message": f"Oh, couldn't find user with screen name '{screen_name}'. Please try again."}), 404
@api_routes.route("/api/v0/user_tweets/<screen_name>")
def user_tweets(screen_name=None):
#print(f"USER TWEETS: '{screen_name}'")
if "@" in screen_name or ";" in screen_name: # just be super safe about preventing sql injection. there are no screen names with semicolons
return jsonify({"message": f"Oh, expecting a screen name like 'politico'. Please try again."}), 400
response = list(current_app.config["BQ_SERVICE"].fetch_user_tweets_api_v0(screen_name))
try:
return jsonify([dict(row) for row in response])
except IndexError as err:
print(err)
return jsonify({"message": f"Oh, couldn't find user with screen name '{screen_name}'. Please try again."}), 404
@api_routes.route("/api/v0/users_most_retweeted")
def users_most_retweeted():
query_params = {"metric": request.args.get("metric"), "limit": request.args.get("limit")}
print("QUERY PARAMS:", query_params)
response = list(current_app.config["BQ_SERVICE"].fetch_users_most_retweeted_api_v0(**query_params))
return jsonify([dict(row) for row in response])
@api_routes.route("/api/v0/statuses_most_retweeted")
def statuses_most_retweeted():
query_params = {"metric": request.args.get("metric"), "limit": request.args.get("limit")}
print("QUERY PARAMS:", query_params)
response = list(current_app.config["BQ_SERVICE"].fetch_statuses_most_retweeted_api_v0(**query_params))
return jsonify([dict(row) for row in response])
@api_routes.route("/api/v0/top_profile_tokens")
def top_profile_tokens():
query_params = {"limit": request.args.get("limit")}
print("QUERY PARAMS:", query_params)
response = list(current_app.config["BQ_SERVICE"].fetch_top_profile_tokens_api_v0(**query_params))
return jsonify([dict(row) for row in response])
@api_routes.route("/api/v0/top_profile_tags")
def top_profile_tags():
query_params = {"limit": request.args.get("limit")}
print("QUERY PARAMS:", query_params)
response = list(current_app.config["BQ_SERVICE"].fetch_top_profile_tags_api_v0(**query_params))
return jsonify([dict(row) for row in response])
@api_routes.route("/api/v0/top_status_tokens")
def top_status_tokens():
query_params = {"limit": request.args.get("limit")}
print("QUERY PARAMS:", query_params)
response = list(current_app.config["BQ_SERVICE"].fetch_top_status_tokens_api_v0(**query_params))
return jsonify([dict(row) for row in response])
@api_routes.route("/api/v0/top_status_tags")
def top_status_tags():
query_params = {"limit": request.args.get("limit")}
print("QUERY PARAMS:", query_params)
response = list(current_app.config["BQ_SERVICE"].fetch_top_status_tags_api_v0(**query_params))
return jsonify([dict(row) for row in response])
| 48.767123 | 143 | 0.724157 | 515 | 3,560 | 4.75534 | 0.159223 | 0.107799 | 0.045733 | 0.055533 | 0.871784 | 0.871784 | 0.84116 | 0.84116 | 0.804818 | 0.804818 | 0 | 0.009743 | 0.135112 | 3,560 | 72 | 144 | 49.444444 | 0.785645 | 0.073876 | 0 | 0.534483 | 0 | 0 | 0.236634 | 0.068044 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.017241 | 0 | 0.362069 | 0.172414 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
acf5adc6619414d28429862c87fab944b15fa595 | 9,934 | py | Python | data/archive/download_rh_sigma995.py | Skye777/transformer | 177834bcb55e59f8ea0fbe666734c148effbec8d | [
"Apache-2.0"
] | null | null | null | data/archive/download_rh_sigma995.py | Skye777/transformer | 177834bcb55e59f8ea0fbe666734c148effbec8d | [
"Apache-2.0"
] | null | null | null | data/archive/download_rh_sigma995.py | Skye777/transformer | 177834bcb55e59f8ea0fbe666734c148effbec8d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#################################################################
# Python Script to retrieve 164 online Data files of 'ds131.2',
# total 3.46G. This script uses 'requests' to download data.
#
# Highlight this script by Select All, Copy and Paste it into a file;
# make the file executable and run it on command line.
#
# You need pass in your password as a parameter to execute
# this script; or you can set an environment variable RDAPSWD
# if your Operating System supports it.
#
# Contact rpconroy@ucar.edu (Riley Conroy) for further assistance.
#################################################################
import sys, os
import requests
def check_file_status(filepath, filesize):
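# Overwrite the current terminal line with the percentage downloaded so far.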
sys.stdout.write('\r')
sys.stdout.flush()
size = int(os.stat(filepath).st_size)
percent_complete = (size / filesize) * 100
sys.stdout.write('%.3f %s' % (percent_complete, '% Completed'))
sys.stdout.flush()
# Try to get password
if len(sys.argv) < 2 and 'RDAPSWD' not in os.environ:
try:
import getpass
input = getpass.getpass
except:
try:
input = raw_input
except:
pass
pswd = input('Password: ')
else:
try:
pswd = sys.argv[1]
except:
pswd = os.environ['RDAPSWD']
url = 'https://rda.ucar.edu/cgi-bin/login'
values = {'email': '1811017@tongji.edu.cn', 'passwd': pswd, 'action': 'login'}
# Authenticate
ret = requests.post(url, data=values)
if ret.status_code != 200:
print('Bad Authentication')
print(ret.text)
exit(1)
dspath = 'https://rda.ucar.edu/data/ds131.2/'
filelist = [
'pgrbanl/pgrbanl_mean_1851_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1852_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1853_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1854_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1855_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1856_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1857_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1858_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1859_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1860_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1861_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1862_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1863_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1864_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1865_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1866_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1867_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1868_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1869_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1870_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1871_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1872_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1873_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1874_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1875_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1876_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1877_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1878_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1879_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1880_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1881_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1882_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1883_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1884_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1885_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1886_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1887_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1888_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1889_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1890_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1891_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1892_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1893_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1894_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1895_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1896_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1897_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1898_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1899_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1900_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1901_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1902_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1903_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1904_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1905_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1906_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1907_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1908_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1909_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1910_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1911_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1912_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1913_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1914_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1915_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1916_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1917_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1918_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1919_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1920_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1921_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1922_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1923_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1924_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1925_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1926_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1927_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1928_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1929_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1930_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1931_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1932_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1933_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1934_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1935_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1936_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1937_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1938_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1939_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1940_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1941_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1942_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1943_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1944_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1945_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1946_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1947_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1948_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1949_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1950_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1951_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1952_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1953_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1954_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1955_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1956_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1957_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1958_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1959_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1960_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1961_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1962_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1963_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1964_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1965_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1966_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1967_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1968_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1969_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1970_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1971_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1972_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1973_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1974_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1975_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1976_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1977_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1978_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1979_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1980_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1981_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1982_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1983_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1984_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1985_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1986_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1987_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1988_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1989_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1990_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1991_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1992_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1993_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1994_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1995_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1996_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1997_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1998_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_1999_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2000_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2001_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2002_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2003_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2004_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2005_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2006_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2007_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2008_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2009_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2010_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2011_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2012_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2013_RH_sigma.grib',
'pgrbanl/pgrbanl_mean_2014_RH_sigma.grib']
for file in filelist:
filename = dspath + file
file_base = '../meta-data/rh/' + os.path.basename(file)
print('Downloading', file_base)
req = requests.get(filename, cookies=ret.cookies, allow_redirects=True, stream=True)
filesize = int(req.headers['Content-length'])
with open(file_base, 'wb') as outfile:
chunk_size = 1048576
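# 1 MiB chunks: stream each file to disk rather than holding it in memory,
# updating the progress line as chunks arrive.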
for chunk in req.iter_content(chunk_size=chunk_size):
outfile.write(chunk)
if chunk_size < filesize:
check_file_status(file_base, filesize)
check_file_status(file_base, filesize)
print()
| 42.09322 | 88 | 0.750252 | 1,440 | 9,934 | 4.704861 | 0.220833 | 0.338893 | 0.43572 | 0.433063 | 0.708044 | 0.708044 | 0.010332 | 0 | 0 | 0 | 0 | 0.080194 | 0.12885 | 9,934 | 235 | 89 | 42.27234 | 0.702681 | 0.051842 | 0 | 0.047393 | 0 | 0 | 0.712884 | 0.69186 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004739 | false | 0.023697 | 0.014218 | 0 | 0.018957 | 0.018957 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
acfb6118c412a1d00d79c1f65100746a99557f3a | 40 | py | Python | singlebar/__init__.py | ericedem/singlebar | 81d5f517284b64e838706d7ac168f8a700afe57c | [
"MIT"
] | null | null | null | singlebar/__init__.py | ericedem/singlebar | 81d5f517284b64e838706d7ac168f8a700afe57c | [
"MIT"
] | 1 | 2016-05-11T17:06:39.000Z | 2016-05-11T17:06:39.000Z | singlebar/__init__.py | ericedem/singlebar | 81d5f517284b64e838706d7ac168f8a700afe57c | [
"MIT"
] | null | null | null | from .core import start, update, finish
| 20 | 39 | 0.775 | 6 | 40 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 40 | 1 | 40 | 40 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a0cd75ad827ec73c9f052e1985296f2a9b15f38 | 134 | py | Python | mabel/operators/minio/__init__.py | mabel-dev/mabel | ee1fdfcfe5fb87d2c5ce4f24b4b7113478ba1b8a | [
"Apache-2.0"
] | null | null | null | mabel/operators/minio/__init__.py | mabel-dev/mabel | ee1fdfcfe5fb87d2c5ce4f24b4b7113478ba1b8a | [
"Apache-2.0"
] | 287 | 2021-05-14T21:25:26.000Z | 2022-03-30T12:02:51.000Z | mabel/operators/minio/__init__.py | gva-jjoyce/mabel | eb99e02d0287b851e65ad9a75b5f4188805d4ec9 | [
"Apache-2.0"
] | 1 | 2021-04-29T18:18:20.000Z | 2021-04-29T18:18:20.000Z | from .minio_batch_writer_operator import MinIoBatchWriterOperator
from .minio_stream_writer_operator import MinIoStreamWriterOperator
| 44.666667 | 67 | 0.925373 | 14 | 134 | 8.428571 | 0.642857 | 0.152542 | 0.338983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059701 | 134 | 2 | 68 | 67 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4a147d11a520e873591eb9916de7ea51eea627dc | 105 | py | Python | openslides/saml/exceptions.py | swilde/OpenSlides | 23ae32a75892005632784652d108836d1ba09da9 | [
"MIT"
] | 3 | 2021-02-11T20:45:58.000Z | 2022-02-09T21:59:42.000Z | openslides/saml/exceptions.py | swilde/OpenSlides | 23ae32a75892005632784652d108836d1ba09da9 | [
"MIT"
] | 2 | 2021-11-02T15:48:16.000Z | 2022-03-02T08:38:19.000Z | openslides/saml/exceptions.py | swilde/OpenSlides | 23ae32a75892005632784652d108836d1ba09da9 | [
"MIT"
] | 3 | 2021-01-18T11:44:05.000Z | 2022-01-19T16:00:23.000Z | from openslides.utils.exceptions import OpenSlidesError
class SamlException(OpenSlidesError):
pass
| 17.5 | 55 | 0.828571 | 10 | 105 | 8.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12381 | 105 | 5 | 56 | 21 | 0.945652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
c5be50d843ecec7e74cd469542d42ea164bd75ad | 132 | py | Python | 000562HeadFirstPy/000562_01_01_p048_datetime_20200201.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 000562HeadFirstPy/000562_01_01_p048_datetime_20200201.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 000562HeadFirstPy/000562_01_01_p048_datetime_20200201.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | import time
print("12 часовой формат:")
print(time.strftime("%I:%M"))
print("\nдо/после полуночи:")
print(time.strftime("%A %p"))
| 16.5 | 29 | 0.674242 | 20 | 132 | 4.45 | 0.7 | 0.202247 | 0.382022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.090909 | 132 | 7 | 30 | 18.857143 | 0.725 | 0 | 0 | 0 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.8 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |