hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5de788406b875fb0c369bd3fdcc75197d0248074 | 21,406 | py | Python | kubernetes/test/test_io_cert_manager_v1beta1_issuer_spec.py | mariusgheorghies/python | 68ac7e168963d8b5a81dc493b1973d29e903a15b | [
"Apache-2.0"
] | null | null | null | kubernetes/test/test_io_cert_manager_v1beta1_issuer_spec.py | mariusgheorghies/python | 68ac7e168963d8b5a81dc493b1973d29e903a15b | [
"Apache-2.0"
] | null | null | null | kubernetes/test/test_io_cert_manager_v1beta1_issuer_spec.py | mariusgheorghies/python | 68ac7e168963d8b5a81dc493b1973d29e903a15b | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
Kubernetes
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: v1.20.7
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import datetime
import kubernetes.client
from kubernetes.client.models.io_cert_manager_v1beta1_issuer_spec import IoCertManagerV1beta1IssuerSpec # noqa: E501
from kubernetes.client.rest import ApiException
class TestIoCertManagerV1beta1IssuerSpec(unittest.TestCase):
"""IoCertManagerV1beta1IssuerSpec unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def make_instance(self, include_optional):
"""Test IoCertManagerV1beta1IssuerSpec
        include_optional is a boolean, when False only required
params are included, when True both required and
optional params are included """
# model = kubernetes.client.models.io_cert_manager_v1beta1_issuer_spec.IoCertManagerV1beta1IssuerSpec() # noqa: E501
if include_optional :
return IoCertManagerV1beta1IssuerSpec(
acme = kubernetes.client.models.io_cert_manager_v1beta1_cluster_issuer_spec_acme.io_cert_manager_v1beta1_ClusterIssuer_spec_acme(
disable_account_key_generation = True,
email = '0',
enable_duration_feature = True,
external_account_binding = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_acme_external_account_binding.io_cert_manager_v1_ClusterIssuer_spec_acme_externalAccountBinding(
key_algorithm = 'HS256',
key_id = '0',
key_secret_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_acme_external_account_binding_key_secret_ref.io_cert_manager_v1_ClusterIssuer_spec_acme_externalAccountBinding_keySecretRef(
key = '0',
name = '0', ), ),
preferred_chain = '0',
private_key_secret_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_acme_private_key_secret_ref.io_cert_manager_v1_ClusterIssuer_spec_acme_privateKeySecretRef(
key = '0',
name = '0', ),
server = '0',
skip_tls_verify = True,
solvers = [
kubernetes.client.models.io_cert_manager_v1beta1_cluster_issuer_spec_acme_solvers.io_cert_manager_v1beta1_ClusterIssuer_spec_acme_solvers(
dns01 = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01.io_cert_manager_acme_v1_Challenge_spec_solver_dns01(
acme_dns = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS(
account_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns_account_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS_accountSecretRef(
key = '0',
name = '0', ),
host = '0', ),
akamai = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_akamai.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_akamai(
access_token_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns_account_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS_accountSecretRef(
key = '0',
name = '0', ),
                        # NOTE: keyword below assumed; the generated file had the invalid name 'kubernetes.client_secret_secret_ref'
                        client_secret_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns_account_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS_accountSecretRef(
key = '0',
name = '0', ),
                        # NOTE: keyword below assumed; the generated file had the invalid name 'kubernetes.client_token_secret_ref'
                        client_token_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns_account_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS_accountSecretRef(
key = '0',
name = '0', ),
service_consumer_domain = '0', ),
azure_dns = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_azure_dns.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_azureDNS(
                        # NOTE: keyword below assumed; the generated file had the invalid name 'kubernetes.client_id'
                        client_id = '0',
environment = 'AzurePublicCloud',
hosted_zone_name = '0',
resource_group_name = '0',
subscription_id = '0',
tenant_id = '0', ),
cloud_dns = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_cloud_dns.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_cloudDNS(
hosted_zone_name = '0',
project = '0',
service_account_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns_account_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS_accountSecretRef(
key = '0',
name = '0', ), ),
cloudflare = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_cloudflare.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_cloudflare(
api_key_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_cloudflare_api_key_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_cloudflare_apiKeySecretRef(
key = '0',
name = '0', ),
api_token_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_cloudflare_api_token_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_cloudflare_apiTokenSecretRef(
key = '0',
name = '0', ),
email = '0', ),
cname_strategy = 'None',
digitalocean = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_digitalocean.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_digitalocean(
token_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_acme_dns_account_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_acmeDNS_accountSecretRef(
key = '0',
name = '0', ), ),
rfc2136 = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_rfc2136.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_rfc2136(
nameserver = '0',
tsig_algorithm = '0',
tsig_key_name = '0',
tsig_secret_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_rfc2136_tsig_secret_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_rfc2136_tsigSecretSecretRef(
key = '0',
name = '0', ), ),
route53 = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_route53.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_route53(
access_key_id = '0',
hosted_zone_id = '0',
region = '0',
role = '0',
secret_access_key_secret_ref = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_route53_secret_access_key_secret_ref.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_route53_secretAccessKeySecretRef(
key = '0',
name = '0', ), ),
webhook = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_dns01_webhook.io_cert_manager_acme_v1_Challenge_spec_solver_dns01_webhook(
config = kubernetes.client.models.config.config(),
group_name = '0',
solver_name = '0', ), ),
http01 = kubernetes.client.models.io_cert_manager_acme_v1beta1_challenge_spec_solver_http01.io_cert_manager_acme_v1beta1_Challenge_spec_solver_http01(
gateway_http_route = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_gateway_http_route.io_cert_manager_acme_v1_Challenge_spec_solver_http01_gatewayHTTPRoute(
labels = {
'key' : '0'
},
service_type = '0', ),
ingress = kubernetes.client.models.io_cert_manager_acme_v1beta1_challenge_spec_solver_http01_ingress.io_cert_manager_acme_v1beta1_Challenge_spec_solver_http01_ingress(
                        # NOTE: 'class' is a Python keyword and cannot be used as a keyword argument;
                        # the underscore-prefixed name below is assumed from the generator's reserved-word convention
                        _class = '0',
ingress_template = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_ingress_ingress_template.io_cert_manager_acme_v1_Challenge_spec_solver_http01_ingress_ingressTemplate(
metadata = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_ingress_ingress_template_metadata.io_cert_manager_acme_v1_Challenge_spec_solver_http01_ingress_ingressTemplate_metadata(
annotations = {
'key' : '0'
}, ), ),
name = '0',
pod_template = kubernetes.client.models.io_cert_manager_acme_v1beta1_challenge_spec_solver_http01_ingress_pod_template.io_cert_manager_acme_v1beta1_Challenge_spec_solver_http01_ingress_podTemplate(
spec = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_ingress_pod_template_spec.io_cert_manager_acme_v1_Challenge_spec_solver_http01_ingress_podTemplate_spec(
affinity = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_ingress_pod_template_spec_affinity.io_cert_manager_acme_v1_Challenge_spec_solver_http01_ingress_podTemplate_spec_affinity(
node_affinity = kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity(
preferred_during_scheduling_ignored_during_execution = [
kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity_preferred_during_scheduling_ignored_during_execution.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity_preferredDuringSchedulingIgnoredDuringExecution(
preference = kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity_preference.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity_preference(
match_expressions = [
kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity_preference_match_expressions.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity_preference_matchExpressions(
key = '0',
operator = '0',
values = [
'0'
], )
],
match_fields = [
kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity_preference_match_expressions.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity_preference_matchExpressions(
key = '0',
operator = '0', )
], ),
weight = 56, )
],
required_during_scheduling_ignored_during_execution = kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity_required_during_scheduling_ignored_during_execution.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity_requiredDuringSchedulingIgnoredDuringExecution(
node_selector_terms = [
kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_affinity_node_affinity_required_during_scheduling_ignored_during_execution_node_selector_terms.com_coreos_monitoring_v1_Alertmanager_spec_affinity_nodeAffinity_requiredDuringSchedulingIgnoredDuringExecution_nodeSelectorTerms()
], ), ),
pod_affinity = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_ingress_pod_template_spec_affinity_pod_affinity.io_cert_manager_acme_v1_Challenge_spec_solver_http01_ingress_podTemplate_spec_affinity_podAffinity(),
pod_anti_affinity = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_http01_ingress_pod_template_spec_affinity_pod_anti_affinity.io_cert_manager_acme_v1_Challenge_spec_solver_http01_ingress_podTemplate_spec_affinity_podAntiAffinity(), ),
node_selector = {
'key' : '0'
},
priority_class_name = '0',
service_account_name = '0',
tolerations = [
kubernetes.client.models.com_coreos_monitoring_v1_alertmanager_spec_tolerations.com_coreos_monitoring_v1_Alertmanager_spec_tolerations(
effect = '0',
key = '0',
operator = '0',
toleration_seconds = 56,
value = '0', )
], ), ),
service_type = '0', ), ),
selector = kubernetes.client.models.io_cert_manager_acme_v1_challenge_spec_solver_selector.io_cert_manager_acme_v1_Challenge_spec_solver_selector(
dns_names = [
'0'
],
dns_zones = [
'0'
],
match_labels = {
'key' : '0'
}, ), )
], ),
ca = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_ca.io_cert_manager_v1_ClusterIssuer_spec_ca(
crl_distribution_points = [
'0'
],
ocsp_servers = [
'0'
],
secret_name = '0', ),
self_signed = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_self_signed.io_cert_manager_v1_ClusterIssuer_spec_selfSigned(
crl_distribution_points = [
'0'
], ),
vault = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault.io_cert_manager_v1_ClusterIssuer_spec_vault(
auth = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault_auth.io_cert_manager_v1_ClusterIssuer_spec_vault_auth(
app_role = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault_auth_app_role.io_cert_manager_v1_ClusterIssuer_spec_vault_auth_appRole(
path = '0',
role_id = '0',
secret_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault_auth_app_role_secret_ref.io_cert_manager_v1_ClusterIssuer_spec_vault_auth_appRole_secretRef(
key = '0',
name = '0', ), ),
kubernetes = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault_auth_kubernetes.io_cert_manager_v1_ClusterIssuer_spec_vault_auth_kubernetes(
mount_path = '0',
role = '0',
secret_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault_auth_kubernetes_secret_ref.io_cert_manager_v1_ClusterIssuer_spec_vault_auth_kubernetes_secretRef(
key = '0',
name = '0', ), ),
token_secret_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_vault_auth_token_secret_ref.io_cert_manager_v1_ClusterIssuer_spec_vault_auth_tokenSecretRef(
key = '0',
name = '0', ), ),
ca_bundle = 'YQ==',
namespace = '0',
path = '0',
server = '0', ),
venafi = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_venafi.io_cert_manager_v1_ClusterIssuer_spec_venafi(
cloud = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_venafi_cloud.io_cert_manager_v1_ClusterIssuer_spec_venafi_cloud(
api_token_secret_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_venafi_cloud_api_token_secret_ref.io_cert_manager_v1_ClusterIssuer_spec_venafi_cloud_apiTokenSecretRef(
key = '0',
name = '0', ),
url = '0', ),
tpp = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_venafi_tpp.io_cert_manager_v1_ClusterIssuer_spec_venafi_tpp(
ca_bundle = 'YQ==',
credentials_ref = kubernetes.client.models.io_cert_manager_v1_cluster_issuer_spec_venafi_tpp_credentials_ref.io_cert_manager_v1_ClusterIssuer_spec_venafi_tpp_credentialsRef(
name = '0', ),
url = '0', ),
zone = '0', )
)
else :
return IoCertManagerV1beta1IssuerSpec(
)
def testIoCertManagerV1beta1IssuerSpec(self):
"""Test IoCertManagerV1beta1IssuerSpec"""
inst_req_only = self.make_instance(include_optional=False)
inst_req_and_optional = self.make_instance(include_optional=True)
if __name__ == '__main__':
    unittest.main()
| 87.729508 | 361 | 0.578856 | 1,968 | 21,406 | 5.645833 | 0.114837 | 0.055081 | 0.119341 | 0.094861 | 0.783368 | 0.766178 | 0.758078 | 0.739447 | 0.705607 | 0.639546 | 0 | 0.030861 | 0.373307 | 21,406 | 243 | 362 | 88.090535 | 0.797391 | 0.006494 | 0 | 0.35814 | 1 | 0 | 0.006952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009302 | 0.027907 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f8de4248d3b75881a4398c35d9f8ad38203312a5 | 114 | py | Python | MiddleKit/Design/PostgreSQLPythonGenerator.py | PeaceWorksTechnologySolutions/w4py | 74f5a03a63f1a93563502b908474aefaae2abda2 | [
"MIT"
] | 18 | 2016-08-01T20:15:59.000Z | 2019-12-24T16:00:03.000Z | MiddleKit/Design/PostgreSQLPythonGenerator.py | WebwareForPython/w4py | bba08f5974d49f5da7e88abe3eeda1037d0824a3 | [
"MIT"
] | 6 | 2016-09-13T05:48:45.000Z | 2020-01-09T18:29:12.000Z | MiddleKit/Design/PostgreSQLPythonGenerator.py | WebwareForPython/w4py | bba08f5974d49f5da7e88abe3eeda1037d0824a3 | [
"MIT"
] | 6 | 2016-09-16T14:32:29.000Z | 2020-01-03T18:52:16.000Z | from SQLPythonGenerator import SQLPythonGenerator
class PostgreSQLPythonGenerator(SQLPythonGenerator):
pass
| 19 | 52 | 0.859649 | 8 | 114 | 12.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114035 | 114 | 5 | 53 | 22.8 | 0.970297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
f8e5b9bbf81bc95953e000a3ecdccac0dcfc7074 | 19,176 | py | Python | SeMask-FAPN/SeMask-Mask2Former/mask2former/modeling/criterion.py | an99990/SeMask-Segmentation | 786f395fab4e156970628134cb49eb3547d7287b | [
"MIT"
] | 143 | 2021-12-23T15:28:07.000Z | 2022-03-31T04:41:29.000Z | SeMask-FAPN/SeMask-Mask2Former/mask2former/modeling/criterion.py | AnukritiSinghh/SeMask-Segmentation | 786f395fab4e156970628134cb49eb3547d7287b | [
"MIT"
] | 18 | 2021-12-27T08:39:04.000Z | 2022-03-30T12:22:26.000Z | SeMask-FAPN/SeMask-Mask2Former/mask2former/modeling/criterion.py | AnukritiSinghh/SeMask-Segmentation | 786f395fab4e156970628134cb49eb3547d7287b | [
"MIT"
] | 27 | 2021-12-25T09:30:03.000Z | 2022-03-30T03:25:20.000Z | # Copyright (c) Facebook, Inc. and its affiliates.
# --------------------------------------------------------
# Modified by Jitesh Jain
"""
MaskFormer criterion.
"""
import logging
import torch
import torch.nn.functional as F
from torch import nn
from detectron2.utils.comm import get_world_size
from detectron2.projects.point_rend.point_features import (
get_uncertain_point_coords_with_randomness,
point_sample,
)
from ..utils.misc import is_dist_avail_and_initialized, nested_tensor_from_tensor_list
def dice_loss(
inputs: torch.Tensor,
targets: torch.Tensor,
num_masks: float,
):
"""
Compute the DICE loss, similar to generalized IOU for masks
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
classification label for each element in inputs
(0 for the negative class and 1 for the positive class).
"""
inputs = inputs.sigmoid()
inputs = inputs.flatten(1)
numerator = 2 * (inputs * targets).sum(-1)
denominator = inputs.sum(-1) + targets.sum(-1)
loss = 1 - (numerator + 1) / (denominator + 1)
return loss.sum() / num_masks
dice_loss_jit = torch.jit.script(
dice_loss
) # type: torch.jit.ScriptModule
def sigmoid_ce_loss(
inputs: torch.Tensor,
targets: torch.Tensor,
num_masks: float,
):
"""
Args:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
classification label for each element in inputs
(0 for the negative class and 1 for the positive class).
Returns:
Loss tensor
"""
loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
return loss.mean(1).sum() / num_masks
sigmoid_ce_loss_jit = torch.jit.script(
sigmoid_ce_loss
) # type: torch.jit.ScriptModule
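# Annotation (not in the original file): a minimal sanity check of the two JIT'd losses above,
# assuming 2 predicted masks sampled at 16 points each:
#   logits = torch.randn(2, 16)
#   labels = torch.randint(0, 2, (2, 16)).float()
#   dice_loss_jit(logits, labels, 2.0)        # scalar, averaged over num_masks
#   sigmoid_ce_loss_jit(logits, labels, 2.0)  # scalar, averaged over num_masks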
def calculate_uncertainty(logits):
"""
    We estimate uncertainty as L1 distance between 0.0 and the logit prediction in 'logits' for the
foreground class in `classes`.
Args:
logits (Tensor): A tensor of shape (R, 1, ...) for class-specific or
class-agnostic, where R is the total number of predicted masks in all images and C is
the number of foreground classes. The values are logits.
Returns:
scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with
the most uncertain locations having the highest uncertainty score.
"""
assert logits.shape[1] == 1
gt_class_logits = logits.clone()
return -(torch.abs(gt_class_logits))
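# Annotation (not in the original file): with this heuristic a logit of 0.0 (most ambiguous)
# scores 0.0 while a confident logit of +/-4.0 scores -4.0, so the importance sampling used in
# loss_masks below concentrates point supervision near predicted mask boundaries.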
class SetCriterion(nn.Module):
"""This class computes the loss for DETR.
The process happens in two steps:
1) we compute hungarian assignment between ground truth boxes and the outputs of the model
2) we supervise each pair of matched ground-truth / prediction (supervise class and box)
"""
def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses,
num_points, oversample_ratio, importance_sample_ratio):
"""Create the criterion.
Parameters:
num_classes: number of object categories, omitting the special no-object category
matcher: module able to compute a matching between targets and proposals
weight_dict: dict containing as key the names of the losses and as values their relative weight.
eos_coef: relative classification weight applied to the no-object category
losses: list of all the losses to be applied. See get_loss for list of available losses.
"""
super().__init__()
self.num_classes = num_classes
self.matcher = matcher
self.weight_dict = weight_dict
self.eos_coef = eos_coef
self.losses = losses
empty_weight = torch.ones(self.num_classes + 1)
empty_weight[-1] = self.eos_coef
self.register_buffer("empty_weight", empty_weight)
# pointwise mask loss parameters
self.num_points = num_points
self.oversample_ratio = oversample_ratio
self.importance_sample_ratio = importance_sample_ratio
def loss_labels(self, outputs, targets, indices, num_masks):
"""Classification loss (NLL)
targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
"""
assert "pred_logits" in outputs
src_logits = outputs["pred_logits"].float()
idx = self._get_src_permutation_idx(indices)
target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
target_classes = torch.full(
src_logits.shape[:2], self.num_classes, dtype=torch.int64, device=src_logits.device
)
target_classes[idx] = target_classes_o
loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)
losses = {"loss_ce": loss_ce}
return losses
def loss_masks(self, outputs, targets, indices, num_masks):
"""Compute the losses related to the masks: the focal loss and the dice loss.
targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w]
"""
assert "pred_masks" in outputs
src_idx = self._get_src_permutation_idx(indices)
tgt_idx = self._get_tgt_permutation_idx(indices)
src_masks = outputs["pred_masks"]
src_masks = src_masks[src_idx]
masks = [t["masks"] for t in targets]
# TODO use valid to mask invalid areas due to padding in loss
target_masks, valid = nested_tensor_from_tensor_list(masks).decompose()
target_masks = target_masks.to(src_masks)
target_masks = target_masks[tgt_idx]
# No need to upsample predictions as we are using normalized coordinates :)
# N x 1 x H x W
src_masks = src_masks[:, None]
target_masks = target_masks[:, None]
with torch.no_grad():
# sample point_coords
point_coords = get_uncertain_point_coords_with_randomness(
src_masks.float(),
lambda logits: calculate_uncertainty(logits).float(),
self.num_points,
self.oversample_ratio,
self.importance_sample_ratio,
)
# get gt labels
point_labels = point_sample(
target_masks.float(),
point_coords.float(),
align_corners=False,
).squeeze(1)
point_logits = point_sample(
src_masks.float(),
point_coords.float(),
align_corners=False,
).squeeze(1)
losses = {
"loss_mask": sigmoid_ce_loss_jit(point_logits, point_labels, num_masks),
"loss_dice": dice_loss_jit(point_logits, point_labels, num_masks),
}
del src_masks
del target_masks
return losses
def _get_src_permutation_idx(self, indices):
# permute predictions following indices
batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])
src_idx = torch.cat([src for (src, _) in indices])
return batch_idx, src_idx
def _get_tgt_permutation_idx(self, indices):
# permute targets following indices
batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])
tgt_idx = torch.cat([tgt for (_, tgt) in indices])
return batch_idx, tgt_idx
def get_loss(self, loss, outputs, targets, indices, num_masks):
loss_map = {
'labels': self.loss_labels,
'masks': self.loss_masks,
}
assert loss in loss_map, f"do you really want to compute {loss} loss?"
return loss_map[loss](outputs, targets, indices, num_masks)
def forward(self, outputs, targets):
"""This performs the loss computation.
Parameters:
outputs: dict of tensors, see the output specification of the model for the format
targets: list of dicts, such that len(targets) == batch_size.
The expected keys in each dict depends on the losses applied, see each loss' doc
"""
outputs_without_aux = {k: v for k, v in outputs.items() if k != "aux_outputs"}
# Retrieve the matching between the outputs of the last layer and the targets
indices = self.matcher(outputs_without_aux, targets)
        # Compute the average number of target boxes across all nodes, for normalization purposes
num_masks = sum(len(t["labels"]) for t in targets)
num_masks = torch.as_tensor(
[num_masks], dtype=torch.float, device=next(iter(outputs.values())).device
)
if is_dist_avail_and_initialized():
torch.distributed.all_reduce(num_masks)
num_masks = torch.clamp(num_masks / get_world_size(), min=1).item()
# Compute all the requested losses
losses = {}
for loss in self.losses:
losses.update(self.get_loss(loss, outputs, targets, indices, num_masks))
# In case of auxiliary losses, we repeat this process with the output of each intermediate layer.
if "aux_outputs" in outputs:
for i, aux_outputs in enumerate(outputs["aux_outputs"]):
indices = self.matcher(aux_outputs, targets)
for loss in self.losses:
l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_masks)
l_dict = {k + f"_{i}": v for k, v in l_dict.items()}
losses.update(l_dict)
return losses
def __repr__(self):
head = "Criterion " + self.__class__.__name__
body = [
"matcher: {}".format(self.matcher.__repr__(_repr_indent=8)),
"losses: {}".format(self.losses),
"weight_dict: {}".format(self.weight_dict),
"num_classes: {}".format(self.num_classes),
"eos_coef: {}".format(self.eos_coef),
"num_points: {}".format(self.num_points),
"oversample_ratio: {}".format(self.oversample_ratio),
"importance_sample_ratio: {}".format(self.importance_sample_ratio),
]
_repr_indent = 4
lines = [head] + [" " * _repr_indent + line for line in body]
return "\n".join(lines)
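# Usage sketch (annotation, not from the original training code): `outputs` is expected to hold
# "pred_logits" [B, Q, num_classes+1], "pred_masks" [B, Q, H, W] and optionally "aux_outputs"
# (one dict with the same keys per intermediate decoder layer), while `targets` is a list of B
# dicts with "labels" [N_i] and "masks" [N_i, H, W]. Calling criterion(outputs, targets) returns
# {"loss_ce": ..., "loss_mask": ..., "loss_dice": ...} plus "_{i}"-suffixed copies for each
# auxiliary layer; callers typically re-weight these with criterion.weight_dict before summing.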
class SeMaskSetCriterion(nn.Module):
"""This class computes the loss for DETR.
The process happens in two steps:
1) we compute hungarian assignment between ground truth boxes and the outputs of the model
2) we supervise each pair of matched ground-truth / prediction (supervise class and box)
"""
def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses,
num_points, oversample_ratio, importance_sample_ratio):
"""Create the criterion.
Parameters:
num_classes: number of object categories, omitting the special no-object category
matcher: module able to compute a matching between targets and proposals
weight_dict: dict containing as key the names of the losses and as values their relative weight.
eos_coef: relative classification weight applied to the no-object category
losses: list of all the losses to be applied. See get_loss for list of available losses.
"""
super().__init__()
self.num_classes = num_classes
self.matcher = matcher
self.weight_dict = weight_dict
self.eos_coef = eos_coef
self.losses = losses
empty_weight = torch.ones(self.num_classes + 1)
empty_weight[-1] = self.eos_coef
self.register_buffer("empty_weight", empty_weight)
# pointwise mask loss parameters
self.num_points = num_points
self.oversample_ratio = oversample_ratio
self.importance_sample_ratio = importance_sample_ratio
def loss_labels(self, outputs, cls_outputs, targets, indices, num_masks):
"""Classification loss (NLL)
targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
"""
assert "pred_logits" in outputs
src_logits = outputs["pred_logits"].float()
idx = self._get_src_permutation_idx(indices)
target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
target_classes = torch.full(
src_logits.shape[:2], self.num_classes, dtype=torch.int64, device=src_logits.device
)
target_classes[idx] = target_classes_o
loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)
losses = {"loss_ce": loss_ce}
return losses
def loss_cate(self, outputs, cls_outputs, targets, indices, num_masks):
"""Classification loss (NLL)
        targets dicts must contain the key "seg_maps" containing a per-pixel class map of dim [h, w]
"""
src_logits = cls_outputs
if src_logits is None:
loss_cate = torch.tensor([0], dtype=outputs["pred_logits"].dtype).to(outputs["pred_logits"].device)
else:
gt_seg_targets = torch.cat([t["seg_maps"].unsqueeze(0) for t in targets], dim=0)
loss_cate = F.cross_entropy(src_logits, gt_seg_targets, ignore_index=255)
losses = {"loss_cate": loss_cate}
return losses
def loss_masks(self, outputs, cls_outputs, targets, indices, num_masks):
"""Compute the losses related to the masks: the focal loss and the dice loss.
targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w]
"""
assert "pred_masks" in outputs
src_idx = self._get_src_permutation_idx(indices)
tgt_idx = self._get_tgt_permutation_idx(indices)
src_masks = outputs["pred_masks"]
src_masks = src_masks[src_idx]
masks = [t["masks"] for t in targets]
# TODO use valid to mask invalid areas due to padding in loss
target_masks, valid = nested_tensor_from_tensor_list(masks).decompose()
target_masks = target_masks.to(src_masks)
target_masks = target_masks[tgt_idx]
# No need to upsample predictions as we are using normalized coordinates :)
# N x 1 x H x W
src_masks = src_masks[:, None]
target_masks = target_masks[:, None]
with torch.no_grad():
# sample point_coords
point_coords = get_uncertain_point_coords_with_randomness(
src_masks.float(),
lambda logits: calculate_uncertainty(logits).float(),
self.num_points,
self.oversample_ratio,
self.importance_sample_ratio,
)
# get gt labels
point_labels = point_sample(
target_masks.float(),
point_coords.float(),
align_corners=False,
).squeeze(1)
point_logits = point_sample(
src_masks.float(),
point_coords.float(),
align_corners=False,
).squeeze(1)
losses = {
"loss_mask": sigmoid_ce_loss_jit(point_logits, point_labels, num_masks),
"loss_dice": dice_loss_jit(point_logits, point_labels, num_masks),
}
del src_masks
del target_masks
return losses
def _get_src_permutation_idx(self, indices):
# permute predictions following indices
batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])
src_idx = torch.cat([src for (src, _) in indices])
return batch_idx, src_idx
def _get_tgt_permutation_idx(self, indices):
# permute targets following indices
batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])
tgt_idx = torch.cat([tgt for (_, tgt) in indices])
return batch_idx, tgt_idx
def get_loss(self, loss, outputs, cls_outputs, targets, indices, num_masks):
loss_map = {
'labels': self.loss_labels,
'labels_cate': self.loss_cate,
'masks': self.loss_masks,
}
assert loss in loss_map, f"do you really want to compute {loss} loss?"
return loss_map[loss](outputs, cls_outputs, targets, indices, num_masks)
def forward(self, outputs, cls_outputs, targets):
"""This performs the loss computation.
Parameters:
outputs: dict of tensors, see the output specification of the model for the format
targets: list of dicts, such that len(targets) == batch_size.
The expected keys in each dict depends on the losses applied, see each loss' doc
"""
outputs_without_aux = {k: v for k, v in outputs.items() if k != "aux_outputs"}
# Retrieve the matching between the outputs of the last layer and the targets
indices = self.matcher(outputs_without_aux, targets)
        # Compute the average number of target boxes across all nodes, for normalization purposes
num_masks = sum(len(t["labels"]) for t in targets)
num_masks = torch.as_tensor(
[num_masks], dtype=torch.float, device=next(iter(outputs.values())).device
)
if is_dist_avail_and_initialized():
torch.distributed.all_reduce(num_masks)
num_masks = torch.clamp(num_masks / get_world_size(), min=1).item()
# Compute all the requested losses
losses = {}
for loss in self.losses:
losses.update(self.get_loss(loss, outputs, cls_outputs['pred'], targets, indices, num_masks))
# In case of auxiliary losses, we repeat this process with the output of each intermediate layer.
if "aux_outputs" in outputs:
for i, aux_outputs in enumerate(outputs["aux_outputs"]):
indices = self.matcher(aux_outputs, targets)
for loss in self.losses:
l_dict = self.get_loss(loss, aux_outputs, None, targets, indices, num_masks)
l_dict = {k + f"_{i}": v for k, v in l_dict.items()}
losses.update(l_dict)
return losses
def __repr__(self):
head = "SeMaskCriterion " + self.__class__.__name__
body = [
"matcher: {}".format(self.matcher.__repr__(_repr_indent=8)),
"losses: {}".format(self.losses),
"weight_dict: {}".format(self.weight_dict),
"num_classes: {}".format(self.num_classes),
"eos_coef: {}".format(self.eos_coef),
"num_points: {}".format(self.num_points),
"oversample_ratio: {}".format(self.oversample_ratio),
"importance_sample_ratio: {}".format(self.importance_sample_ratio),
]
_repr_indent = 4
lines = [head] + [" " * _repr_indent + line for line in body]
return "\n".join(lines)
| 41.960613 | 111 | 0.635273 | 2,454 | 19,176 | 4.737571 | 0.129177 | 0.022708 | 0.019009 | 0.0246 | 0.881731 | 0.871581 | 0.863066 | 0.856442 | 0.854722 | 0.847325 | 0 | 0.004167 | 0.274197 | 19,176 | 456 | 112 | 42.052632 | 0.831154 | 0.278838 | 0 | 0.751799 | 0 | 0 | 0.053408 | 0.003611 | 0 | 0 | 0 | 0.004386 | 0.02518 | 1 | 0.071942 | false | 0 | 0.053957 | 0 | 0.197842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5d235405c53ad5c54957b8e6ac9bc39d58f3f49c | 8,921 | py | Python | openstackclient/tests/functional/network/v2/test_network_qos_rule.py | adgeese/python-openstackclient | 06263bd5852aad9cd03a76f50140fbbb2d0751ba | [
"Apache-2.0"
] | 1 | 2018-04-23T20:59:31.000Z | 2018-04-23T20:59:31.000Z | openstackclient/tests/functional/network/v2/test_network_qos_rule.py | adgeese/python-openstackclient | 06263bd5852aad9cd03a76f50140fbbb2d0751ba | [
"Apache-2.0"
] | null | null | null | openstackclient/tests/functional/network/v2/test_network_qos_rule.py | adgeese/python-openstackclient | 06263bd5852aad9cd03a76f50140fbbb2d0751ba | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2016, Intel Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import uuid
from openstackclient.tests.functional.network.v2 import common
class NetworkQosRuleTestsMinimumBandwidth(common.NetworkTests):
"""Functional tests for QoS minimum bandwidth rule"""
def setUp(self):
super(NetworkQosRuleTestsMinimumBandwidth, self).setUp()
# Nothing in this class works with Nova Network
if not self.haz_network:
self.skipTest("No Network service present")
self.QOS_POLICY_NAME = 'qos_policy_%s' % uuid.uuid4().hex
self.openstack(
'network qos policy create %s' % self.QOS_POLICY_NAME
)
self.addCleanup(self.openstack,
'network qos policy delete %s' % self.QOS_POLICY_NAME)
cmd_output = json.loads(self.openstack(
'network qos rule create -f json '
'--type minimum-bandwidth '
'--min-kbps 2800 '
'--egress %s' %
self.QOS_POLICY_NAME
))
self.RULE_ID = cmd_output['id']
self.addCleanup(self.openstack,
'network qos rule delete %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID))
self.assertTrue(self.RULE_ID)
def test_qos_rule_create_delete(self):
# This is to check the output of qos rule delete
policy_name = uuid.uuid4().hex
self.openstack('network qos policy create -f json %s' % policy_name)
self.addCleanup(self.openstack,
'network qos policy delete %s' % policy_name)
rule = json.loads(self.openstack(
'network qos rule create -f json '
'--type minimum-bandwidth '
'--min-kbps 2800 '
'--egress %s' % policy_name
))
raw_output = self.openstack(
'network qos rule delete %s %s' %
(policy_name, rule['id']))
self.assertEqual('', raw_output)
def test_qos_rule_list(self):
cmd_output = json.loads(self.openstack(
'network qos rule list -f json %s' % self.QOS_POLICY_NAME))
self.assertIn(self.RULE_ID, [rule['ID'] for rule in cmd_output])
def test_qos_rule_show(self):
cmd_output = json.loads(self.openstack(
'network qos rule show -f json %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID)))
self.assertEqual(self.RULE_ID, cmd_output['id'])
def test_qos_rule_set(self):
self.openstack('network qos rule set --min-kbps 7500 %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID))
cmd_output = json.loads(self.openstack(
'network qos rule show -f json %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID)))
self.assertEqual(7500, cmd_output['min_kbps'])
class NetworkQosRuleTestsDSCPMarking(common.NetworkTests):
"""Functional tests for QoS DSCP marking rule"""
def setUp(self):
super(NetworkQosRuleTestsDSCPMarking, self).setUp()
# Nothing in this class works with Nova Network
if not self.haz_network:
self.skipTest("No Network service present")
self.QOS_POLICY_NAME = 'qos_policy_%s' % uuid.uuid4().hex
self.openstack(
'network qos policy create %s' % self.QOS_POLICY_NAME
)
self.addCleanup(self.openstack,
'network qos policy delete %s' % self.QOS_POLICY_NAME)
cmd_output = json.loads(self.openstack(
'network qos rule create -f json '
'--type dscp-marking '
'--dscp-mark 8 %s' %
self.QOS_POLICY_NAME
))
self.RULE_ID = cmd_output['id']
self.addCleanup(self.openstack,
'network qos rule delete %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID))
self.assertTrue(self.RULE_ID)
def test_qos_rule_create_delete(self):
# This is to check the output of qos rule delete
policy_name = uuid.uuid4().hex
self.openstack('network qos policy create -f json %s' % policy_name)
self.addCleanup(self.openstack,
'network qos policy delete %s' % policy_name)
rule = json.loads(self.openstack(
'network qos rule create -f json '
'--type dscp-marking '
'--dscp-mark 8 %s' % policy_name
))
raw_output = self.openstack(
'network qos rule delete %s %s' %
(policy_name, rule['id']))
self.assertEqual('', raw_output)
def test_qos_rule_list(self):
cmd_output = json.loads(self.openstack(
'network qos rule list -f json %s' % self.QOS_POLICY_NAME))
self.assertIn(self.RULE_ID, [rule['ID'] for rule in cmd_output])
def test_qos_rule_show(self):
cmd_output = json.loads(self.openstack(
'network qos rule show -f json %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID)))
self.assertEqual(self.RULE_ID, cmd_output['id'])
def test_qos_rule_set(self):
self.openstack('network qos rule set --dscp-mark 32 %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID))
cmd_output = json.loads(self.openstack(
'network qos rule show -f json %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID)))
self.assertEqual(32, cmd_output['dscp_mark'])
class NetworkQosRuleTestsBandwidthLimit(common.NetworkTests):
"""Functional tests for QoS bandwidth limit rule"""
def setUp(self):
super(NetworkQosRuleTestsBandwidthLimit, self).setUp()
# Nothing in this class works with Nova Network
if not self.haz_network:
self.skipTest("No Network service present")
self.QOS_POLICY_NAME = 'qos_policy_%s' % uuid.uuid4().hex
self.openstack(
'network qos policy create %s' % self.QOS_POLICY_NAME
)
self.addCleanup(self.openstack,
'network qos policy delete %s' % self.QOS_POLICY_NAME)
cmd_output = json.loads(self.openstack(
'network qos rule create -f json '
'--type bandwidth-limit '
'--max-kbps 10000 '
'--max-burst-kbits 1400 '
'--egress %s' %
self.QOS_POLICY_NAME
))
self.RULE_ID = cmd_output['id']
self.addCleanup(self.openstack,
'network qos rule delete %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID))
self.assertTrue(self.RULE_ID)
def test_qos_rule_create_delete(self):
# This is to check the output of qos rule delete
policy_name = uuid.uuid4().hex
self.openstack('network qos policy create -f json %s' % policy_name)
self.addCleanup(self.openstack,
'network qos policy delete %s' % policy_name)
rule = json.loads(self.openstack(
'network qos rule create -f json '
'--type bandwidth-limit '
'--max-kbps 10000 '
'--max-burst-kbits 1400 '
'--egress %s' % policy_name
))
raw_output = self.openstack(
'network qos rule delete %s %s' %
(policy_name, rule['id']))
self.assertEqual('', raw_output)
def test_qos_rule_list(self):
cmd_output = json.loads(self.openstack(
'network qos rule list -f json %s' %
self.QOS_POLICY_NAME))
self.assertIn(self.RULE_ID, [rule['ID'] for rule in cmd_output])
def test_qos_rule_show(self):
cmd_output = json.loads(self.openstack(
'network qos rule show -f json %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID)))
self.assertEqual(self.RULE_ID, cmd_output['id'])
def test_qos_rule_set(self):
self.openstack('network qos rule set --max-kbps 15000 '
'--max-burst-kbits 1800 '
'--ingress %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID))
cmd_output = json.loads(self.openstack(
'network qos rule show -f json %s %s' %
(self.QOS_POLICY_NAME, self.RULE_ID)))
self.assertEqual(15000, cmd_output['max_kbps'])
self.assertEqual(1800, cmd_output['max_burst_kbps'])
self.assertEqual('ingress', cmd_output['direction'])
| 40.366516 | 78 | 0.600045 | 1,115 | 8,921 | 4.641256 | 0.128251 | 0.073043 | 0.13913 | 0.16 | 0.809275 | 0.797101 | 0.774493 | 0.774493 | 0.774493 | 0.774493 | 0 | 0.011622 | 0.295931 | 8,921 | 220 | 79 | 40.55 | 0.812291 | 0.11501 | 0 | 0.847059 | 0 | 0 | 0.217684 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.088235 | false | 0 | 0.017647 | 0 | 0.123529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5d23662f7800fc1b69fae0343ad7ccad9180ca57 | 7,429 | py | Python | sdk/python/pulumi_azure_native/insights/v20170501preview/outputs.py | pulumi-bot/pulumi-azure-native | f7b9490b5211544318e455e5cceafe47b628e12c | [
"Apache-2.0"
] | 31 | 2020-09-21T09:41:01.000Z | 2021-02-26T13:21:59.000Z | sdk/python/pulumi_azure_native/insights/v20170501preview/outputs.py | pulumi-bot/pulumi-azure-native | f7b9490b5211544318e455e5cceafe47b628e12c | [
"Apache-2.0"
] | 231 | 2020-09-21T09:38:45.000Z | 2021-03-01T11:16:03.000Z | sdk/python/pulumi_azure_native/insights/v20170501preview/outputs.py | pulumi-bot/pulumi-azure-native | f7b9490b5211544318e455e5cceafe47b628e12c | [
"Apache-2.0"
] | 4 | 2020-09-29T14:14:59.000Z | 2021-02-10T20:38:16.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
__all__ = [
'LogSettingsResponse',
'MetricSettingsResponse',
'RetentionPolicyResponse',
'SubscriptionLogSettingsResponse',
]
@pulumi.output_type
class LogSettingsResponse(dict):
"""
Part of MultiTenantDiagnosticSettings. Specifies the settings for a particular log.
"""
def __init__(__self__, *,
enabled: bool,
category: Optional[str] = None,
retention_policy: Optional['outputs.RetentionPolicyResponse'] = None):
"""
Part of MultiTenantDiagnosticSettings. Specifies the settings for a particular log.
:param bool enabled: a value indicating whether this log is enabled.
:param str category: Name of a Diagnostic Log category for a resource type this setting is applied to. To obtain the list of Diagnostic Log categories for a resource, first perform a GET diagnostic settings operation.
:param 'RetentionPolicyResponseArgs' retention_policy: the retention policy for this log.
"""
pulumi.set(__self__, "enabled", enabled)
if category is not None:
pulumi.set(__self__, "category", category)
if retention_policy is not None:
pulumi.set(__self__, "retention_policy", retention_policy)
@property
@pulumi.getter
def enabled(self) -> bool:
"""
a value indicating whether this log is enabled.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def category(self) -> Optional[str]:
"""
Name of a Diagnostic Log category for a resource type this setting is applied to. To obtain the list of Diagnostic Log categories for a resource, first perform a GET diagnostic settings operation.
"""
return pulumi.get(self, "category")
@property
@pulumi.getter(name="retentionPolicy")
def retention_policy(self) -> Optional['outputs.RetentionPolicyResponse']:
"""
the retention policy for this log.
"""
return pulumi.get(self, "retention_policy")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class MetricSettingsResponse(dict):
"""
Part of MultiTenantDiagnosticSettings. Specifies the settings for a particular metric.
"""
def __init__(__self__, *,
enabled: bool,
category: Optional[str] = None,
retention_policy: Optional['outputs.RetentionPolicyResponse'] = None,
time_grain: Optional[str] = None):
"""
Part of MultiTenantDiagnosticSettings. Specifies the settings for a particular metric.
:param bool enabled: a value indicating whether this category is enabled.
:param str category: Name of a Diagnostic Metric category for a resource type this setting is applied to. To obtain the list of Diagnostic metric categories for a resource, first perform a GET diagnostic settings operation.
:param 'RetentionPolicyResponseArgs' retention_policy: the retention policy for this category.
:param str time_grain: the timegrain of the metric in ISO8601 format.
"""
pulumi.set(__self__, "enabled", enabled)
if category is not None:
pulumi.set(__self__, "category", category)
if retention_policy is not None:
pulumi.set(__self__, "retention_policy", retention_policy)
if time_grain is not None:
pulumi.set(__self__, "time_grain", time_grain)
@property
@pulumi.getter
def enabled(self) -> bool:
"""
a value indicating whether this category is enabled.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def category(self) -> Optional[str]:
"""
Name of a Diagnostic Metric category for a resource type this setting is applied to. To obtain the list of Diagnostic metric categories for a resource, first perform a GET diagnostic settings operation.
"""
return pulumi.get(self, "category")
@property
@pulumi.getter(name="retentionPolicy")
def retention_policy(self) -> Optional['outputs.RetentionPolicyResponse']:
"""
the retention policy for this category.
"""
return pulumi.get(self, "retention_policy")
@property
@pulumi.getter(name="timeGrain")
def time_grain(self) -> Optional[str]:
"""
the timegrain of the metric in ISO8601 format.
"""
return pulumi.get(self, "time_grain")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RetentionPolicyResponse(dict):
"""
Specifies the retention policy for the log.
"""
def __init__(__self__, *,
days: int,
enabled: bool):
"""
Specifies the retention policy for the log.
:param int days: the number of days for the retention in days. A value of 0 will retain the events indefinitely.
:param bool enabled: a value indicating whether the retention policy is enabled.
"""
pulumi.set(__self__, "days", days)
pulumi.set(__self__, "enabled", enabled)
@property
@pulumi.getter
def days(self) -> int:
"""
the number of days for the retention in days. A value of 0 will retain the events indefinitely.
"""
return pulumi.get(self, "days")
@property
@pulumi.getter
def enabled(self) -> bool:
"""
a value indicating whether the retention policy is enabled.
"""
return pulumi.get(self, "enabled")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class SubscriptionLogSettingsResponse(dict):
"""
Part of Subscription diagnostic setting. Specifies the settings for a particular log.
"""
def __init__(__self__, *,
enabled: bool,
category: Optional[str] = None):
"""
Part of Subscription diagnostic setting. Specifies the settings for a particular log.
:param bool enabled: a value indicating whether this log is enabled.
:param str category: Name of a Subscription Diagnostic Log category for a resource type this setting is applied to.
"""
pulumi.set(__self__, "enabled", enabled)
if category is not None:
pulumi.set(__self__, "category", category)
@property
@pulumi.getter
def enabled(self) -> bool:
"""
a value indicating whether this log is enabled.
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def category(self) -> Optional[str]:
"""
Name of a Subscription Diagnostic Log category for a resource type this setting is applied to.
"""
return pulumi.get(self, "category")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
| 36.777228 | 231 | 0.656078 | 860 | 7,429 | 5.510465 | 0.132558 | 0.069635 | 0.030175 | 0.044102 | 0.8544 | 0.83161 | 0.807976 | 0.783921 | 0.755434 | 0.73391 | 0 | 0.002002 | 0.260466 | 7,429 | 201 | 232 | 36.960199 | 0.860575 | 0.391035 | 0 | 0.711538 | 1 | 0 | 0.112767 | 0.049677 | 0 | 0 | 0 | 0 | 0 | 1 | 0.182692 | false | 0 | 0.057692 | 0.038462 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5d27761cf89d4d014859172fcc71000626fa6e32 | 2,943 | py | Python | utility/refined_events/chrome_new_record.py | EfficientAI/efficient_cv | e308f229e4d99da86ad56f87f3a78b2c81f27ca5 | [
"MIT"
] | null | null | null | utility/refined_events/chrome_new_record.py | EfficientAI/efficient_cv | e308f229e4d99da86ad56f87f3a78b2c81f27ca5 | [
"MIT"
] | null | null | null | utility/refined_events/chrome_new_record.py | EfficientAI/efficient_cv | e308f229e4d99da86ad56f87f3a78b2c81f27ca5 | [
"MIT"
] | null | null | null | from com.android.monkeyrunner import MonkeyRunner
from com.android.monkeyrunner import MonkeyDevice
print('Connecting to device...')
device = MonkeyRunner.waitForConnection()
print('Connected to device')
# Reproduce action log from here
print('Start to reproduce action log')
device.touch(533, 1696, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(533, 1696, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(543, 1268, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(543, 1268, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(897, 140, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(897, 140, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(74, 144, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(74, 144, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(907, 144, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(907, 144, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1120, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1120, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1120, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1120, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1120, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1120, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(982, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(982, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(982, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(982, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(982, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(982, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(978, 1116, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(742, 908, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(742, 908, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.touch(742, 908, MonkeyDevice.DOWN_AND_UP)
print('Executing : device.touch(742, 908, MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
device.press("KEYCODE_HOME", MonkeyDevice.DOWN_AND_UP)
print('Executing : device.press("KEYCODE_HOME", MonkeyDevice.DOWN_AND_UP)')
MonkeyRunner.sleep(1.0)
print('Finish to reproduce action log') | 35.890244 | 75 | 0.789331 | 436 | 2,943 | 5.158257 | 0.100917 | 0.256114 | 0.304135 | 0.336149 | 0.915073 | 0.886616 | 0.886616 | 0.880391 | 0.747888 | 0.704313 | 0 | 0.095936 | 0.072035 | 2,943 | 82 | 76 | 35.890244 | 0.727572 | 0.010194 | 0 | 0.688525 | 0 | 0 | 0.415522 | 0.164148 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032787 | 0 | 0.032787 | 0.360656 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5d402ab8b16ab553658e62334c51fb0ddcc5dc48 | 6,158 | py | Python | test/python/Utils_t/PortForward_t.py | khurtado/WMCore | f74e252412e49189a92962945a94f93bec81cd1e | [
"Apache-2.0"
] | 21 | 2015-11-19T16:18:45.000Z | 2021-12-02T18:20:39.000Z | test/python/Utils_t/PortForward_t.py | khurtado/WMCore | f74e252412e49189a92962945a94f93bec81cd1e | [
"Apache-2.0"
] | 5,671 | 2015-01-06T14:38:52.000Z | 2022-03-31T22:11:14.000Z | test/python/Utils_t/PortForward_t.py | khurtado/WMCore | f74e252412e49189a92962945a94f93bec81cd1e | [
"Apache-2.0"
] | 67 | 2015-01-21T15:55:38.000Z | 2022-02-03T19:53:13.000Z | #!/usr/bin/env python
"""
Unittests for PortForward
"""
from __future__ import division, print_function
import unittest
from Utils.PortForward import portForward, PortForward
class RequestHandler(object):
def __init__(self, config=None, logger=None):
super(RequestHandler, self).__init__()
if not config:
config = {}
@portForward(8443)
def request(self, url, params=None, headers=None, verb='GET',
verbose=0, ckey=None, cert=None, doseq=True,
encode=False, decode=False, cookie=None, uri=None):
return url
class PortForwardTests(unittest.TestCase):
"""
Unittest for PortForward decorator and class
"""
def __init__(self, *args, **kwargs):
super(PortForwardTests, self).__init__(*args, **kwargs)
self.urlResultList = []
self.urlTestList = ['https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions12/8TeV/Reprocessing/Cert_190456-195530_8TeV_08Jun2012ReReco_Collisions12_JSON.txt',
'https://cmsweb.cern.ch/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bydate?descending=true&limit=1',
'https://cmsweb.cern.ch/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bystatusandtime?startkey=%5B%22announced%22%2C+0%5D&endkey=%5B%22announced%22%2C+1616016936%5D&descending=false&stale=update_after&include_docs=false',
'https://cmsweb.cern.ch:8443/reqmgr2/js/?f=utils.js&f=ajax_utils.js&f=md5.js&f=task_splitting.js',
'https://cmsweb.cern.ch:443/wmstatsserver/data/filtered_requests?mask=RequestStatus&mask=RequestType&mask=RequestPriority&mask=Campaign&mask=RequestNumEvents',
u'https://cmsweb.cern.ch/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bydate?descending=true&limit=1',
u'https://cmsweb.cern.ch/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bystatusandtime?startkey=%5B%22announced%22%2C+0%5D&endkey=%5B%22announced%22%2C+1616016936%5D&descending=false&stale=update_after&include_docs=false',
u'https://cmsweb.cern.ch/reqmgr2/js/?f=utils.js&f=ajax_utils.js&f=md5.js&f=task_splitting.js',
u'https://cmsweb.cern.ch/wmstatsserver/data/filtered_requests?mask=RequestStatus&mask=RequestType&mask=RequestPriority&mask=Campaign&mask=RequestNumEvents',
b'https://cmsweb.cern.ch/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bydate?descending=true&limit=1',
b'https://cmsweb.cern.ch/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bystatusandtime?startkey=%5B%22announced%22%2C+0%5D&endkey=%5B%22announced%22%2C+1616016936%5D&descending=false&stale=update_after&include_docs=false',
b'https://cmsweb.cern.ch/reqmgr2/js/?f=utils.js&f=ajax_utils.js&f=md5.js&f=task_splitting.js',
b'https://cmsweb.cern.ch/wmstatsserver/data/filtered_requests?mask=RequestStatus&mask=RequestType&mask=RequestPriority&mask=Campaign&mask=RequestNumEvents']
self.urlExpectedtList = ['https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions12/8TeV/Reprocessing/Cert_190456-195530_8TeV_08Jun2012ReReco_Collisions12_JSON.txt',
'https://cmsweb.cern.ch:8443/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bydate?descending=true&limit=1',
'https://cmsweb.cern.ch:8443/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bystatusandtime?startkey=%5B%22announced%22%2C+0%5D&endkey=%5B%22announced%22%2C+1616016936%5D&descending=false&stale=update_after&include_docs=false',
'https://cmsweb.cern.ch:8443/reqmgr2/js/?f=utils.js&f=ajax_utils.js&f=md5.js&f=task_splitting.js',
'https://cmsweb.cern.ch:443/wmstatsserver/data/filtered_requests?mask=RequestStatus&mask=RequestType&mask=RequestPriority&mask=Campaign&mask=RequestNumEvents',
u'https://cmsweb.cern.ch:8443/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bydate?descending=true&limit=1',
u'https://cmsweb.cern.ch:8443/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bystatusandtime?startkey=%5B%22announced%22%2C+0%5D&endkey=%5B%22announced%22%2C+1616016936%5D&descending=false&stale=update_after&include_docs=false',
u'https://cmsweb.cern.ch:8443/reqmgr2/js/?f=utils.js&f=ajax_utils.js&f=md5.js&f=task_splitting.js',
u'https://cmsweb.cern.ch:8443/wmstatsserver/data/filtered_requests?mask=RequestStatus&mask=RequestType&mask=RequestPriority&mask=Campaign&mask=RequestNumEvents',
b'https://cmsweb.cern.ch:8443/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bydate?descending=true&limit=1',
b'https://cmsweb.cern.ch:8443/couchdb/reqmgr_workload_cache/_design/ReqMgr/_view/bystatusandtime?startkey=%5B%22announced%22%2C+0%5D&endkey=%5B%22announced%22%2C+1616016936%5D&descending=false&stale=update_after&include_docs=false',
b'https://cmsweb.cern.ch:8443/reqmgr2/js/?f=utils.js&f=ajax_utils.js&f=md5.js&f=task_splitting.js',
b'https://cmsweb.cern.ch:8443/wmstatsserver/data/filtered_requests?mask=RequestStatus&mask=RequestType&mask=RequestPriority&mask=Campaign&mask=RequestNumEvents']
def testDecorator(self):
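        # URLs pointing at cmsweb.cern.ch without an explicit port should come
        # back with :8443 appended; other hosts and URLs that already carry a
        # port are expected to pass through unchanged (see the lists in setUp)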
        requestHandler = RequestHandler()
self.urlResultList = []
for url in self.urlTestList:
            self.urlResultList.append(requestHandler.request(url))
self.assertListEqual(self.urlResultList, self.urlExpectedtList)
def testCallClass(self):
portForwarder = PortForward(8443)
self.urlResultList = []
for url in self.urlTestList:
self.urlResultList.append(portForwarder(url))
self.assertListEqual(self.urlResultList, self.urlExpectedtList)
if __name__ == '__main__':
unittest.main()
| 77.949367 | 265 | 0.689022 | 753 | 6,158 | 5.479416 | 0.167331 | 0.037809 | 0.087252 | 0.098885 | 0.826466 | 0.826466 | 0.826466 | 0.797867 | 0.797867 | 0.797867 | 0 | 0.055258 | 0.180091 | 6,158 | 78 | 266 | 78.948718 | 0.761933 | 0.014778 | 0 | 0.192982 | 0 | 0.45614 | 0.631535 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 1 | 0.087719 | false | 0 | 0.052632 | 0.017544 | 0.192982 | 0.017544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5d60b1ab9afa6f85d4b211aa39598303a14ddca4 | 4,038 | py | Python | parking_lot/parking_lot/src/tests.py | deepaksood619/Projects | 93f09db22ad65e72e2a2ffc909b2bb3e1e5c5fa7 | [
"MIT"
] | null | null | null | parking_lot/parking_lot/src/tests.py | deepaksood619/Projects | 93f09db22ad65e72e2a2ffc909b2bb3e1e5c5fa7 | [
"MIT"
] | 5 | 2021-03-19T03:52:50.000Z | 2022-02-10T11:43:51.000Z | parking_lot/parking_lot/src/tests.py | deepaksood619/Projects | 93f09db22ad65e72e2a2ffc909b2bb3e1e5c5fa7 | [
"MIT"
] | null | null | null | """
Created on 2019-11-27
@author: deepaksood619
"""
import unittest
from Constants import PARKING_LOT_FULL
from ParkingLot import ParkingLot
class TestParkingLot(unittest.TestCase):
"""
Test class for testing parking lot
"""
def setUp(self):
"""
        This will be called every time a new test is run
:return:
"""
self.parkinglot = ParkingLot(6)
def test_park(self):
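        # the lot created in setUp() holds 6 cars, so the 7th and 8th park()
        # calls must report PARKING_LOT_FULL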
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 1)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 2)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 3)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 4)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 5)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 6)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), PARKING_LOT_FULL)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), PARKING_LOT_FULL)
def test_leave(self):
self.assertEqual(self.parkinglot.leave(slot_index=1), False)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 1)
self.assertEqual(self.parkinglot.leave(slot_index=1), True)
self.assertEqual(self.parkinglot.leave(slot_index=1), False)
self.assertEqual(self.parkinglot.leave(slot_index=2), False)
def test_get_registration_numbers_for_cars_with_colour(self):
self.assertEqual(self.parkinglot.get_registration_numbers_for_cars_with_colour('White'), '')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 1)
self.assertEqual(self.parkinglot.get_registration_numbers_for_cars_with_colour('White'), 'KA-01-HH-1234')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1235', colour='White'), 2)
self.assertEqual(self.parkinglot.get_registration_numbers_for_cars_with_colour('White'),
'KA-01-HH-1234, KA-01-HH-1235')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1235', colour='Black'), 3)
self.assertEqual(self.parkinglot.get_registration_numbers_for_cars_with_colour('White'),
'KA-01-HH-1234, KA-01-HH-1235')
def test_get_slot_numbers_for_cars_with_colour(self):
self.assertEqual(self.parkinglot.get_slot_numbers_for_cars_with_colour('White'), '')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 1)
self.assertEqual(self.parkinglot.get_slot_numbers_for_cars_with_colour('White'), '1')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1235', colour='White'), 2)
self.assertEqual(self.parkinglot.get_slot_numbers_for_cars_with_colour('White'), '1, 2')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1235', colour='Black'), 3)
self.assertEqual(self.parkinglot.get_slot_numbers_for_cars_with_colour('White'), '1, 2')
def test_get_slot_number_for_registration_number(self):
self.assertEqual(self.parkinglot.get_slot_number_for_registration_number('KA-01-HH-1234'), 'Not found')
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1234', colour='White'), 1)
self.assertEqual(self.parkinglot.get_slot_number_for_registration_number('KA-01-HH-1234'), 1)
self.assertEqual(self.parkinglot.park(registration_number='KA-01-HH-1235', colour='White'), 2)
self.assertEqual(self.parkinglot.get_slot_number_for_registration_number('KA-01-HH-1234'), 1)
self.assertEqual(self.parkinglot.get_slot_number_for_registration_number('KA-01-HH-1235'), 2)
if __name__ == '__main__':
    unittest.main()
| 56.083333 | 117 | 0.724864 | 546 | 4,038 | 5.14652 | 0.117216 | 0.169395 | 0.223132 | 0.340569 | 0.880427 | 0.877224 | 0.863701 | 0.849466 | 0.844128 | 0.844128 | 0 | 0.055858 | 0.135463 | 4,038 | 71 | 118 | 56.873239 | 0.749069 | 0.033928 | 0 | 0.468085 | 0 | 0 | 0.126072 | 0 | 0 | 0 | 0 | 0 | 0.702128 | 1 | 0.12766 | false | 0 | 0.06383 | 0 | 0.212766 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
54d18437175f6eaa539c9f0c4f0b66ae9a1ba32b | 50 | py | Python | lectures/code/list_compr_intr.py | naskoch/python_course | 84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3 | [
"MIT"
] | 4 | 2015-08-10T17:46:55.000Z | 2020-04-18T21:09:03.000Z | lectures/code/list_compr_intr.py | naskoch/python_course | 84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3 | [
"MIT"
] | null | null | null | lectures/code/list_compr_intr.py | naskoch/python_course | 84adfd3f8d48ca3ad5837f7acc59d2fa051e95d3 | [
"MIT"
] | 2 | 2019-04-24T03:31:02.000Z | 2019-05-13T07:36:06.000Z | print [i**2 for i in range(5)]
# [0, 1, 4, 9, 16]
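# for comparison, the equivalent explicit loop (illustrative addition, not
# part of the original snippet):
squares = []
for i in range(5):
    squares.append(i**2)
print(squares)  # [0, 1, 4, 9, 16]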
| 16.666667 | 30 | 0.5 | 13 | 50 | 1.923077 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 0.24 | 50 | 2 | 31 | 25 | 0.447368 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
54e6828fd15c9235a9e138996c108eb32d06c28f | 16,805 | py | Python | tests/api/user_test.py | felbinger/PythonFlaskLogin | 3f55c2ce358331c1f182ee1a03fe3a13a53e3f69 | [
"MIT"
] | 2 | 2020-07-13T08:26:46.000Z | 2021-05-23T00:13:34.000Z | tests/api/user_test.py | felbinger/PythonFlaskLogin | 3f55c2ce358331c1f182ee1a03fe3a13a53e3f69 | [
"MIT"
] | 1 | 2020-07-04T17:10:29.000Z | 2020-07-10T18:55:43.000Z | tests/api/user_test.py | felbinger/PythonFlaskLogin | 3f55c2ce358331c1f182ee1a03fe3a13a53e3f69 | [
"MIT"
] | null | null | null | from tests.utils import Utils
import json
import onetimepass
def test_create(app, client):
utils = Utils(app, client)
data = {
'username': 'new_user',
'password': 'password_for_new_user',
'email': 'new_user@test.test',
'role': 'user'
}
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.post('/api/users', headers=headers, json=data)
assert resp.status_code == 201
assert json.loads(resp.data.decode()).get('data').get('name') == data.get('name')
assert json.loads(resp.data.decode()).get('data').get('description') == data.get('description')
def test_create_without_permissions(app, client):
utils = Utils(app, client)
data = {
'username': 'new_user',
'password': 'password_for_new_user',
'email': 'new_user@test.test',
'role': 'user'
}
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
resp = client.post('/api/users', headers=headers, json=data)
assert resp.status_code == 403
assert json.loads(resp.data.decode()).get('message') == 'Access Denied!'
def test_create_without_data(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.post('/api/users', headers=headers)
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Payload is invalid'
def test_create_invalid_data(app, client):
data = {'invalid': 'invalid'}
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.post('/api/users', headers=headers, json=data)
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Payload is invalid'
def test_create_invalid_role(app, client):
utils = Utils(app, client)
data = {
'username': 'new_user',
'password': 'password_for_new_user',
'email': 'new_user@test.test',
'role': 'invalid'
}
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.post('/api/users', headers=headers, json=data)
assert resp.status_code == 404
assert json.loads(resp.data.decode()).get('message') == 'Role does not exist!'
def test_create_equal_usernames(app, client):
utils = Utils(app, client)
data = {
'username': 'test',
'password': 'password_for_new_user',
'email': 'new_user@test.test',
'role': 'invalid'
}
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.post('/api/users', headers=headers, json=data)
assert resp.status_code == 422
assert json.loads(resp.data.decode()).get('message') == 'Username already in use!'
def test_admin_update(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
data = {'displayName': 'My new display name!'}
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put(f'/api/users/{guid}', headers=headers, json=data)
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('data').get('displayName') == data.get('displayName')
def test_admin_update_without_data(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put(f'/api/users/{guid}', headers=headers)
assert resp.status_code == 200
def test_admin_update_invalid_data(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put(f'/api/users/{guid}', headers=headers, json={'invalid': 'invalid'})
assert resp.status_code == 400
def test_admin_update_non_existing_role(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put(f'/api/users/{guid}', headers=headers, json={'role': 'invalid'})
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Invalid Role'
def test_admin_update_invalid_user(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put(f'/api/users/invalid', headers=headers, json={'displayName': 'new'})
assert resp.status_code == 404
assert json.loads(resp.data.decode()).get('message') == 'User does not exist'
def test_admin_update_enable_2fa(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put(f'/api/users/{guid}', headers=headers, json={'totp_enabled': True})
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'You are not allowed to enable 2FA.'
def test_admin_update_disable_2fa(app, client):
utils = Utils(app, client)
utils.enable_2fa()
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
# check if 2fa is enabled
resp = client.get(f'/api/users/{guid}', headers=headers)
assert json.loads(resp.data.decode()).get('data').get('2fa')
# disable 2fa
resp = client.put(f'/api/users/{guid}', headers=headers, json={'totp_enabled': False})
assert resp.status_code == 200
assert not json.loads(resp.data.decode()).get('data').get('2fa')
def test_update(app, client):
utils = Utils(app, client)
data = {'displayName': 'My new display name!'}
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put('/api/users/me', headers=headers, json=data)
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('data').get('displayName') == data.get('displayName')
def test_update_without_data(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put('/api/users/me', headers=headers)
assert resp.status_code == 200
def test_update_invalid_data(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.put('/api/users/me', headers=headers, json={'invalid': 'invalid'})
assert resp.status_code == 400
def test_update_enable_2fa(app, client):
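    # enabling 2fa is a three-step flow: PUT /api/users/me with totp_enabled
    # returns the shared secret, GET /api/users/2fa serves the QR code, and
    # POST /api/users/2fa with a valid TOTP token finally activates it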
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
# first step to enable 2fa, get secret key
resp = client.put('/api/users/me', headers=headers, json={'totp_enabled': True})
assert resp.status_code == 200
print(resp.data.decode())
assert not json.loads(resp.data.decode()).get('data').get('2fa')
assert '2fa_secret' in json.loads(resp.data.decode()).get('data')
secret = json.loads(resp.data.decode()).get('data').get('2fa_secret')
# get the qr code
resp = client.get('/api/users/2fa', headers=headers)
assert resp.status_code == 200
# todo check if svg in resp.data.decode()
# generate a 2fa token using the secret key, and use it to activate 2fa
totp_token = str(onetimepass.get_totp(secret)).zfill(6)
resp = client.post('/api/users/2fa', headers=headers, json={'token': str(totp_token)})
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('message') == '2fa has been enabled'
def test_update_enable_2fa_invalid_data(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
# first step to enable 2fa, get secret key
resp = client.put('/api/users/me', headers=headers, json={'totp_enabled': True})
assert resp.status_code == 200
assert not json.loads(resp.data.decode()).get('data').get('2fa')
assert '2fa_secret' in json.loads(resp.data.decode()).get('data')
secret = json.loads(resp.data.decode()).get('data').get('2fa_secret')
# generate a 2fa token using the secret key, and use it to activate 2fa
totp_token = str(onetimepass.get_totp(secret)).zfill(6)
resp = client.post('/api/users/2fa', headers=headers, json={'invalid': str(totp_token)})
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Payload is invalid'
def test_update_enable_2fa_invalid_token(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
# first step to enable 2fa, get secret key
resp = client.put('/api/users/me', headers=headers, json={'totp_enabled': True})
assert resp.status_code == 200
assert not json.loads(resp.data.decode()).get('data').get('2fa')
assert '2fa_secret' in json.loads(resp.data.decode()).get('data')
secret = json.loads(resp.data.decode()).get('data').get('2fa_secret')
# generate a 2fa token using the secret key, and use it to activate 2fa
totp_token = str(onetimepass.get_totp(secret)).zfill(6)
resp = client.post('/api/users/2fa', headers=headers, json={'token': '000000'})
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'invalid token, try again'
resp = client.post('/api/users/2fa', headers=headers, json={'token': str(totp_token)})
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('message') == '2fa has been enabled'
def test_update_enable_2fa_only_stage_2(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
# generate a 2fa token using the secret key, and use it to activate 2fa
resp = client.post('/api/users/2fa', headers=headers, json={'token': '000000'})
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == '2fa is not setted up'
def test_update_show_qr_after_2fa_has_been_enabled(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
# first step to enable 2fa, get secret key
resp = client.put('/api/users/me', headers=headers, json={'totp_enabled': True})
assert resp.status_code == 200
assert not json.loads(resp.data.decode()).get('data').get('2fa')
assert '2fa_secret' in json.loads(resp.data.decode()).get('data')
# generate a 2fa token using the secret key, and use it to activate 2fa
secret = json.loads(resp.data.decode()).get('data').get('2fa_secret')
totp_token = str(onetimepass.get_totp(secret)).zfill(6)
resp = client.post('/api/users/2fa', headers=headers, json={'token': str(totp_token)})
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('message') == '2fa has been enabled'
# it should not be possible to generate the qr code after 2fa has been enabled
# this would be a potential security vulnerability
resp = client.get('/api/users/2fa', headers=headers)
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Unable to generate QR Code'
def test_update_disable_2fa(app, client):
utils = Utils(app, client)
utils.enable_2fa()
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
# check if 2fa is enabled
# resp = client.get('/api/auth', headers=headers)
resp = client.get('/api/auth', headers={'Authorization': f'Bearer {utils.generate_access_token()}'})
assert json.loads(resp.data.decode()).get('data').get('2fa')
# disable 2fa
resp = client.put(
f'/api/users/me',
headers=headers,
json={'totp_enabled': False, 'totp_token': utils.generate_2fa_token()}
)
assert resp.status_code == 200
assert not json.loads(resp.data.decode()).get('data').get('2fa')
def test_update_disable_2fa_without_token(app, client):
utils = Utils(app, client)
utils.enable_2fa()
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
# check if 2fa is enabled
# resp = client.get('/api/auth', headers=headers)
resp = client.get('/api/auth', headers={'Authorization': f'Bearer {utils.generate_access_token()}'})
assert json.loads(resp.data.decode()).get('data').get('2fa')
# disable 2fa
resp = client.put(
f'/api/users/me',
headers=headers,
json={'totp_enabled': False}
)
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Unable to deactivate 2fa, token not submitted'
def test_update_disable_2fa_invalid_token(app, client):
utils = Utils(app, client)
utils.enable_2fa()
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
# check if 2fa is enabled
# resp = client.get('/api/auth', headers=headers)
resp = client.get('/api/auth', headers={'Authorization': f'Bearer {utils.generate_access_token()}'})
assert json.loads(resp.data.decode()).get('data').get('2fa')
# disable 2fa
resp = client.put(
f'/api/users/me',
headers=headers,
json={'totp_enabled': False, 'totp_token': '000000'}
)
assert resp.status_code == 400
assert json.loads(resp.data.decode()).get('message') == 'Unable to deactivate 2fa, token is invalid'
def test_update_modify_role(app, client):
utils = Utils(app, client)
data = {'role': 'admin'}
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
resp = client.put('/api/users/me', headers=headers, json=data)
assert resp.status_code == 403
assert json.loads(resp.data.decode()).get('message') == 'You are not allowed to change your role!'
def test_delete(app, client):
utils = Utils(app, client)
# create user to delete
data = {
'username': 'new_user',
'password': 'password_for_new_user',
'email': 'new_user@test.test',
'role': 'user'
}
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.post('/api/users', headers=headers, json=data)
assert resp.status_code == 201
guid = utils.get_guid('new_user')
resp = client.delete(f'/api/users/{guid}', headers=headers)
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('data') == 'Successfully deleted user!'
def test_delete_without_data(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.delete(f'/api/users', headers=headers)
assert resp.status_code == 405
def test_delete_invalid_data(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.delete(f'/api/users/invalid', headers=headers)
assert resp.status_code == 404
assert json.loads(resp.data.decode()).get('message') == 'User does not exist'
def test_delete_without_permissions(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
resp = client.delete(f'/api/users/{guid}', headers=headers)
assert resp.status_code == 403
assert json.loads(resp.data.decode()).get('message') == 'Access Denied!'
def test_get(app, client):
utils = Utils(app, client)
guid = utils.get_guid()
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.get(f'/api/users/{guid}', headers=headers)
assert resp.status_code == 200
assert json.loads(resp.data.decode()).get('data').get('email') == 'test@example.com'
assert json.loads(resp.data.decode()).get('data').get('displayName') == 'test'
assert not json.loads(resp.data.decode()).get('data').get('2fa')
def test_get_invalid(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.get(f'/api/users/invalid', headers=headers)
assert resp.status_code == 404
def test_get_all(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_admin_access_token()}'}
resp = client.get(f'/api/users', headers=headers)
assert resp.status_code == 200
def test_get_all_without_permissions(app, client):
utils = Utils(app, client)
headers = {'Authorization': f'Bearer {utils.generate_access_token()}'}
resp = client.get(f'/api/users', headers=headers)
assert resp.status_code == 403
assert json.loads(resp.data.decode()).get('message') == 'Access Denied!'
| 38.632184 | 107 | 0.67587 | 2,289 | 16,805 | 4.821319 | 0.058104 | 0.053824 | 0.06216 | 0.072399 | 0.931588 | 0.917814 | 0.911653 | 0.906397 | 0.890721 | 0.867615 | 0 | 0.015517 | 0.163999 | 16,805 | 434 | 108 | 38.721198 | 0.770019 | 0.059804 | 0 | 0.722408 | 0 | 0 | 0.27165 | 0.087296 | 0 | 0 | 0 | 0.002304 | 0.280936 | 1 | 0.110368 | false | 0.033445 | 0.010033 | 0 | 0.120401 | 0.003344 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4ae7aba12817d372f888c8ab86f140281a9f1a5e | 170 | py | Python | sacorg/algorithms/__init__.py | abdcelikkanat/sacorg | 8501b0825ef6bbc705a2c34d3aa8799265f4ecc7 | [
"Apache-2.0"
] | null | null | null | sacorg/algorithms/__init__.py | abdcelikkanat/sacorg | 8501b0825ef6bbc705a2c34d3aa8799265f4ecc7 | [
"Apache-2.0"
] | null | null | null | sacorg/algorithms/__init__.py | abdcelikkanat/sacorg | 8501b0825ef6bbc705a2c34d3aa8799265f4ecc7 | [
"Apache-2.0"
] | null | null | null | from sacorg.algorithms.isgraphical import *
from sacorg.algorithms.mcmc import *
from sacorg.algorithms.blitzstein_diaconis import *
from sacorg.algorithms.myalg import * | 42.5 | 51 | 0.841176 | 21 | 170 | 6.761905 | 0.428571 | 0.28169 | 0.56338 | 0.549296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 170 | 4 | 52 | 42.5 | 0.916129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
ab2d2f25016d1253c8067b3fba169e5e196c3cd6 | 8,740 | py | Python | build/lib/nonsequitur/__init__.py | montoyamoraga/nonsequiturpy | d05e92bd48c103926aa102a93f8b554b7bac8b03 | [
"Apache-2.0"
] | 1 | 2017-05-27T00:40:31.000Z | 2017-05-27T00:40:31.000Z | build/lib/nonsequitur/__init__.py | montoyamoraga/nonsequiturpy | d05e92bd48c103926aa102a93f8b554b7bac8b03 | [
"Apache-2.0"
] | 1 | 2021-10-04T16:46:12.000Z | 2021-10-04T16:46:39.000Z | nonsequitur/__init__.py | montoyamoraga/nonsequiturpy | d05e92bd48c103926aa102a93f8b554b7bac8b03 | [
"Apache-2.0"
] | null | null | null | # nonsequitur.py module
# for generation of non sequitur text
# by aaron montoya-moraga
# part of my nyu itp thesis
# and also final project for class
# reading and writing electronic text
# by professor allison parrish
# april 2017
# v0.0.4
# special thanks:
# to allison parrish for pytracery, pycorpora, teaching me python and how to read and write digital text
# to kate compton for tracery
# to dariusk for corpora
#import dependencies
import tracery
import pycorpora
import random
from tracery.modifiers import base_english
import os
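# every joke function below follows the same pattern: pull word lists from
# pycorpora, build a tracery grammar around an '#origin#' rule, attach the
# base_english modifiers and print a single random expansion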
# definition of america/russia joke
def america_russia():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
# pytracery rules
rules = {
'origin': 'In America, you #verb# #subject#. In Russia, you #verb# #subject#.',
'verb': verbsPresent,
'subject': pycorpora.words.strange_words['words']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
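# Illustrative sketch only (not used by the original functions below): the
# verb-parsing block repeated in every joke function could be factored out
# into a single helper like this.
def parse_verbs():
    verbs_all = pycorpora.words.verbs['verbs']
    present = [verb['present'] for verb in verbs_all]
    past = [verb['past'] for verb in verbs_all]
    return present, past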
# definition of bar joke
def bar():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
# number parsing
numbersInt = pycorpora.mathematics.primes['primes']
numbersString = list()
for number in numbersInt:
numbersString.append(str(number))
# pytracery rules
rules = {
'origin': '#occupation.a.capitalize# walks into a bar and orders a #beer#. When the bartender gets his drink, #gender# asks, "Bartender, how much do I owe you?" The bartender replies "Just #appliance.a# and some #drug#".',
'verb': verbsPast,
'countries': pycorpora.geography.countries['countries'],
'subject': pycorpora.words.strange_words['words'],
'occupation': pycorpora.humans.occupations['occupations'],
'gender': ['he', 'she'],
'number': numbersString,
'beer': pycorpora.foods.beer_styles['beer_styles'],
'new_technology': pycorpora.technology.new_technologies['technologies'],
'appliance': pycorpora.technology.appliances['appliances'],
'drug': pycorpora.medicine.drugs['drugs']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
# definition of why did the chicken cross the road joke.
def chicken():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
# pytracery rules
rules = {
'origin': 'Why did the chicken cross the road? Because #gender# #verb# to #subject# the #subject#.',
'verb': verbsPast,
'subject': pycorpora.words.strange_words['words'],
'gender': ['he', 'she']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
def cows():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
# pytracery rules
rules = {
'origin': '#countries#: you have two cows, the first one is #subject#, the second one is #subject#.',
'verb': verbsPast,
'countries': pycorpora.geography.countries['countries'],
'subject': pycorpora.words.strange_words['words'],
'gender': ['he', 'she']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
def knock_knock():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
# pytracery rules
rules = {
'origin': '#[thingy:#subject#]story#',
'story': '- Knock knock \n- Who\'s there? \n- #thingy.capitalize#\n- #thingy.capitalize# who?\n- #thingy.capitalize# #object#',
'verb': verbsPast,
'countries': pycorpora.geography.countries['countries'],
'subject': pycorpora.words.strange_words['words'],
'gender': ['he', 'she'],
'object': pycorpora.objects.objects['objects']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
def lightbulb():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
#number parsing
numbersInt = pycorpora.mathematics.primes['primes']
numbersString = list()
for number in numbersInt:
numbersString.append(str(number))
# pytracery rules
rules = {
'origin': '- How many #occupation.s# does it take to change a lightbulb?\n- #number#, because of #new_technology.s#.',
'verb': verbsPast,
'countries': pycorpora.geography.countries['countries'],
'subject': pycorpora.words.strange_words['words'],
'occupation': pycorpora.humans.occupations['occupations'],
'gender': ['he', 'she'],
'number': numbersString,
'new_technology': pycorpora.technology.new_technologies['technologies']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
# definition of violas joke
def violas():
# retrieve verbs from pycorpora
# needs to be parsed
verbsAll = pycorpora.words.verbs['verbs']
# create empty lists in order to parse
verbsPresent = list()
verbsPast = list()
# go through every verb and retrieve just the present tense
for verb in range(len(verbsAll)):
verbsPresent.append(verbsAll[verb]['present'])
verbsPast.append(verbsAll[verb]['past'])
# number parsing
numbersInt = pycorpora.mathematics.primes['primes']
numbersString = list()
for number in numbersInt:
numbersString.append(str(number))
# pytracery rules
rules = {
'origin': 'What is the difference between violas and #object.s#?\nViolas #verb# #adverb#.',
'verb': verbsPresent,
'countries': pycorpora.geography.countries['countries'],
'subject': pycorpora.words.strange_words['words'],
'occupation': pycorpora.humans.occupations['occupations'],
'adverb': pycorpora.words.adverbs['adverbs'],
'gender': ['he', 'she'],
'number': numbersString,
'new_technology': pycorpora.technology.new_technologies['technologies'],
"object": pycorpora.objects.objects['objects']
}
# pytracery grammar
grammar = tracery.Grammar(rules)
# use english
grammar.add_modifiers(base_english)
# print result
    print(grammar.flatten("#origin#"))
# TODO
# def print_stuff():
# os.startfile()
| 30.774648 | 230 | 0.659268 | 988 | 8,740 | 5.797571 | 0.187247 | 0.036662 | 0.043994 | 0.031774 | 0.772172 | 0.772172 | 0.765538 | 0.745461 | 0.745461 | 0.745461 | 0 | 0.00103 | 0.22254 | 8,740 | 283 | 231 | 30.883392 | 0.841943 | 0.234668 | 0 | 0.765101 | 0 | 0.020134 | 0.215194 | 0.007567 | 0 | 0 | 0 | 0.003534 | 0 | 0 | null | null | 0 | 0.033557 | null | null | 0.04698 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ab2d9aa95e16630b86a25b9eefa39b892f29fc2d | 1,275 | py | Python | 001-099/08/8.py | lunixbochs/project-euler | aa974c5ae68547309f33adbb4e633fe040964855 | [
"MIT"
] | 6 | 2015-07-21T20:45:08.000Z | 2021-03-13T14:07:48.000Z | 001-099/08/8.py | lunixbochs/project-euler | aa974c5ae68547309f33adbb4e633fe040964855 | [
"MIT"
] | null | null | null | 001-099/08/8.py | lunixbochs/project-euler | aa974c5ae68547309f33adbb4e633fe040964855 | [
"MIT"
] | 2 | 2017-10-28T09:52:08.000Z | 2019-04-11T00:55:36.000Z | num = '''
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
'''.replace('\n', '')
def product(s):
total = int(s[0])
for c in s[1:]:
total *= int(c)
return total
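# slide a 13-character window across the digit string and keep the largest
# product of its digits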
highest = 0
for i in range(len(num) - 12):
cur = num[i:i+13]
highest = max(highest, product(cur))
print(highest)
| 35.416667 | 50 | 0.891765 | 61 | 1,275 | 18.639344 | 0.688525 | 0.014072 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.848357 | 0.06902 | 1,275 | 35 | 51 | 36.428571 | 0.10952 | 0 | 0 | 0 | 0 | 0 | 0.802353 | 0.784314 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.03125 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
db9d2d35fc43b4234dbf6f8698d45f764ac17c58 | 3,131 | py | Python | src/napari_simpleitk_image_processing/_function.py | haesleinhuepf/napari-simpleitk-image-processing | 4100e543bb10555c1bf793b3da3a1ade660d3722 | [
"BSD-3-Clause"
] | 4 | 2021-11-28T14:23:03.000Z | 2022-03-26T23:13:19.000Z | src/napari_simpleitk_image_processing/_function.py | haesleinhuepf/napari-simpleitk-image-processing | 4100e543bb10555c1bf793b3da3a1ade660d3722 | [
"BSD-3-Clause"
] | 2 | 2021-12-31T11:47:56.000Z | 2022-01-01T23:37:51.000Z | src/napari_simpleitk_image_processing/_function.py | haesleinhuepf/napari-simpleitk-image-processing | 4100e543bb10555c1bf793b3da3a1ade660d3722 | [
"BSD-3-Clause"
] | null | null | null | from napari_plugin_engine import napari_hook_implementation
@napari_hook_implementation
def napari_experimental_provide_function():
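    """Return the functions this plugin offers to napari.

    napari turns each callable in the returned list into a GUI widget; the
    imports are kept inside the hook so they only run when napari asks for
    the plugin's functions.
    """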
from ._simpleitk_image_processing import median_filter, gaussian_blur, threshold_otsu, threshold_intermodes, \
threshold_kittler_illingworth, threshold_li, threshold_moments, threshold_renyi_entropy, \
threshold_shanbhag, threshold_yen, threshold_isodata, threshold_triangle, threshold_huang, \
threshold_maximum_entropy, \
signed_maurer_distance_map, morphological_watershed, morphological_gradient, standard_deviation_filter, \
simple_linear_iterative_clustering, scalar_image_k_means_clustering,\
connected_component_labeling, \
touching_objects_labeling, watershed_otsu_labeling, binary_fill_holes, invert_intensity, \
bilateral_filter, laplacian_filter, laplacian_of_gaussian_filter, binominal_blur_filter, \
canny_edge_detection, gradient_magnitude, h_maxima, \
h_minima, otsu_multiple_thresholds, regional_maxima, regional_minima, \
richardson_lucy_deconvolution, wiener_deconvolution, tikhonov_deconvolution, rescale_intensity, \
sobel, black_top_hat, white_top_hat, adaptive_histogram_equalization, curvature_flow_denoise, \
relabel_component, label_contour, \
label_statistics, pixel_count_map, elongation_map, feret_diameter_map, roundness_map
return [median_filter,
gaussian_blur,
threshold_otsu,
threshold_intermodes,
threshold_kittler_illingworth,
threshold_li,
threshold_moments,
threshold_renyi_entropy,
threshold_shanbhag,
threshold_yen,
threshold_isodata,
threshold_triangle,
threshold_huang,
threshold_maximum_entropy,
signed_maurer_distance_map,
morphological_watershed,
morphological_gradient,
standard_deviation_filter,
simple_linear_iterative_clustering,
scalar_image_k_means_clustering,
connected_component_labeling,
touching_objects_labeling,
watershed_otsu_labeling,
binary_fill_holes,
invert_intensity,
bilateral_filter,
laplacian_filter,
laplacian_of_gaussian_filter,
binominal_blur_filter,
canny_edge_detection,
gradient_magnitude,
h_maxima,
h_minima,
otsu_multiple_thresholds,
regional_maxima,
regional_minima,
richardson_lucy_deconvolution,
wiener_deconvolution,
tikhonov_deconvolution,
rescale_intensity,
sobel,
black_top_hat,
white_top_hat,
adaptive_histogram_equalization,
curvature_flow_denoise,
relabel_component,
label_contour,
label_statistics,
pixel_count_map,
elongation_map,
feret_diameter_map,
roundness_map
]
| 41.746667 | 114 | 0.680933 | 282 | 3,131 | 6.964539 | 0.358156 | 0.03055 | 0.02444 | 0.02444 | 0.922607 | 0.922607 | 0.922607 | 0.922607 | 0.922607 | 0.922607 | 0 | 0 | 0.279783 | 3,131 | 74 | 115 | 42.310811 | 0.870953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | true | 0 | 0.028169 | 0 | 0.056338 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dbd9a1b4330e1fd5f09ecfef139173f62c68086f | 112 | py | Python | deep_rl/logging/__init__.py | df424/deep_rl | bfe4a5f54df38ec111fb0162fd575c668f9310d0 | [
"MIT"
] | null | null | null | deep_rl/logging/__init__.py | df424/deep_rl | bfe4a5f54df38ec111fb0162fd575c668f9310d0 | [
"MIT"
] | null | null | null | deep_rl/logging/__init__.py | df424/deep_rl | bfe4a5f54df38ec111fb0162fd575c668f9310d0 | [
"MIT"
] | null | null | null |
from deep_rl.logging.configure_logger import configure_logger
from deep_rl.logging.get_logger import get_logger | 37.333333 | 61 | 0.892857 | 18 | 112 | 5.222222 | 0.444444 | 0.170213 | 0.212766 | 0.361702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 112 | 3 | 62 | 37.333333 | 0.903846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
91de62dd8443aa650e0ac1c67897b40e3630f149 | 188 | py | Python | Python/Mixin/Shape.py | Gjacquenot/training-material | 16b29962bf5683f97a1072d961dd9f31e7468b8d | [
"CC-BY-4.0"
] | 115 | 2015-03-23T13:34:42.000Z | 2022-03-21T00:27:21.000Z | Python/Mixin/Shape.py | Gjacquenot/training-material | 16b29962bf5683f97a1072d961dd9f31e7468b8d | [
"CC-BY-4.0"
] | 56 | 2015-02-25T15:04:26.000Z | 2022-01-03T07:42:48.000Z | Python/Mixin/Shape.py | Gjacquenot/training-material | 16b29962bf5683f97a1072d961dd9f31e7468b8d | [
"CC-BY-4.0"
] | 59 | 2015-11-26T11:44:51.000Z | 2022-03-21T00:27:22.000Z | class Shape(object):
def __init__(self, name):
self._name = name
@property
def name(self):
return self._name
def __str__(self):
return self.name
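# usage sketch (added for illustration):
#   s = Shape('circle')
#   s.name       # -> 'circle'
#   str(s)       # -> 'circle'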
| 15.666667 | 29 | 0.585106 | 23 | 188 | 4.347826 | 0.434783 | 0.32 | 0.28 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.319149 | 188 | 11 | 30 | 17.090909 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
37e136f956b79bab18a373b7b64d95ee45f455a0 | 122 | py | Python | src/graph_transpiler/webdnn/backend/webgl/operators/__init__.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | 1 | 2021-04-09T15:55:35.000Z | 2021-04-09T15:55:35.000Z | src/graph_transpiler/webdnn/backend/webgl/operators/__init__.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | null | null | null | src/graph_transpiler/webdnn/backend/webgl/operators/__init__.py | steerapi/webdnn | 1df51cc094e5a528cfd3452c264905708eadb491 | [
"MIT"
] | null | null | null | from webdnn.backend.webgl.operators import convert_r_to_rgba
from webdnn.backend.webgl.operators import convert_rgba_to_r
| 40.666667 | 60 | 0.885246 | 20 | 122 | 5.1 | 0.5 | 0.196078 | 0.333333 | 0.431373 | 0.862745 | 0.862745 | 0.862745 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 122 | 2 | 61 | 61 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 10 |
5306877ca756dae679f596536954a99bb9af0811 | 9,202 | py | Python | load_balancer/test_load_balancer.py | guilhermegouw/balanceamento_de_carga | 003b75df4e144ea7467a6ea56e014c343010493c | [
"MIT"
] | null | null | null | load_balancer/test_load_balancer.py | guilhermegouw/balanceamento_de_carga | 003b75df4e144ea7467a6ea56e014c343010493c | [
"MIT"
] | null | null | null | load_balancer/test_load_balancer.py | guilhermegouw/balanceamento_de_carga | 003b75df4e144ea7467a6ea56e014c343010493c | [
"MIT"
] | null | null | null | from unittest import TestCase, main
from load_balancer import LoadBalancer, Server, Task
class TaskTests(TestCase):
def setUp(self) -> None:
self.task = Task(t_task=4)
return super().setUp()
def test_can_instantiate_a_task(self):
assert self.task
def test_t_task_attribute_return_right_value(self):
self.assertEqual(self.task._t_task, 4)
def test_task_has_attribute_tic(self):
assert self.task.tic()
def test_tic_decrement_t_task_by_1(self):
self.task.tic()
self.assertEqual(self.task._t_task, 3)
def test_task_has_attribute_is_active(self):
assert self.task.is_active()
    def test_task_is_active(self):
"""
As t_task is set to 4 is_active() must return True
"""
self.assertEqual(self.task.is_active(), True)
    def test_task_is_not_active_after_four_tics(self):
"""
As t_task is set to 4 is_active() after 4 tics must return False
"""
self.task.tic()
self.task.tic()
self.task.tic()
self.task.tic()
self.assertEqual(self.task.is_active(), False)
class ServerTests(TestCase):
def setUp(self) -> None:
self.server = Server(u_max=2, t_task=4)
return super().setUp()
def test_can_instantiate_a_server(self):
assert self.server
def test_has_tasks(self):
self.assertEqual(self.server.has_tasks(), False)
def test_add_task(self):
self.assertEqual(self.server.add_task(), True)
def test_has_task_after_add_task(self):
self.server.add_task()
self.assertEqual(self.server.has_tasks(), True)
def test_add_task_twice(self):
self.server.add_task()
self.server.add_task()
self.assertEqual(self.server.has_tasks(), True)
def test_add_task_three_times(self):
"""
As the u_max is set to 2, a third task cannot be added.
"""
self.server.add_task()
self.server.add_task()
self.server.add_task()
self.assertEqual(self.server.add_task(), False)
def test_server_string_representation(self):
self.assertEqual(str(self.server), "0")
def test_server_string_representation_one_task(self):
self.server.add_task()
self.assertEqual(str(self.server), "1")
def test_server_string_representation_after_add_task_three_times(self):
"""
As the u_max is set to 2, a third task cannot be added so str(server) must return "2".
"""
self.server.add_task()
self.server.add_task()
self.server.add_task()
self.assertEqual(str(self.server), "2")
def test_server_tic_with_one_user_one_tic(self):
"""
Each task has 4 tics duration so, after one tic the server must keep that task.
"""
self.server.add_task()
self.server.tic()
self.assertEqual(str(self.server), "1")
def test_server_tic_with_one_user_two_tics(self):
"""
Each task has 4 tics duration so, after two tics the server must keep that task.
"""
self.server.add_task()
self.server.tic()
self.server.tic()
self.assertEqual(str(self.server), "1")
def test_server_tic_with_one_user_four_tics(self):
"""
Each task has 4 tics duration so, after four tics the server will no longer have that task.
"""
self.server.add_task()
self.server.tic()
self.server.tic()
self.server.tic()
self.server.tic()
self.assertEqual(str(self.server), "0")
class LoadBalancerTests(TestCase):
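    """
    Tests for the LoadBalancer itself: each server takes at most u_max=2
    tasks, every task lives for t_task=4 tics, and the *input_txt_file*
    cases replay the sample input one tic at a time.
    """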
def setUp(self) -> None:
self.load_balancer = LoadBalancer(u_max=2, t_task=4)
return super().setUp()
def test_can_instantiate_load_balancer(self):
assert self.load_balancer
def test_load_balancer_string_representation(self):
self.assertEqual(str(self.load_balancer), "")
def test_serve_user(self):
self.load_balancer.serve_tasks(1)
self.assertEqual(str(self.load_balancer), "1")
def test_has_active_servers_True(self):
self.load_balancer.serve_tasks(2)
self.assertEqual(self.load_balancer.has_active_server(), True)
def test_has_active_servers_False(self):
self.load_balancer.serve_tasks(2)
self.load_balancer.tic()
self.load_balancer.tic()
self.load_balancer.tic()
self.load_balancer.tic()
self.assertEqual(self.load_balancer.has_active_server(), False)
def test_testcases_for_input_txt_file_first_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.assertEqual(str(self.load_balancer), "1")
def test_testcases_for_input_txt_file_second_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.assertEqual(str(self.load_balancer), "2,2")
def test_testcases_for_input_txt_file_third_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.assertEqual(str(self.load_balancer), "2,2")
def test_testcases_for_input_txt_file_fourth_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.assertEqual(str(self.load_balancer), "2,2,1")
def test_testcases_for_input_txt_file_fifth_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.assertEqual(str(self.load_balancer), "1,2,1")
def test_testcases_for_input_txt_file_sixth_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.assertEqual(str(self.load_balancer), "2")
def test_testcases_for_input_txt_file_seventh_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.assertEqual(str(self.load_balancer), "2")
def test_testcases_for_input_txt_file_eighth_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.tic()
self.assertEqual(str(self.load_balancer), "1")
def test_testcases_for_input_txt_file_nineth_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.tic()
self.load_balancer.tic()
self.assertEqual(str(self.load_balancer), "1")
def test_testcases_for_input_txt_file_tenth_tic(self):
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(3)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.serve_tasks(0)
self.load_balancer.tic()
self.load_balancer.serve_tasks(1)
self.load_balancer.tic()
self.load_balancer.tic()
self.load_balancer.tic()
self.load_balancer.tic()
self.assertEqual(self.load_balancer.has_active_server(), False)
self.assertEqual(str(self.load_balancer), "")
if __name__ == "__main__":
main()
| 33.70696 | 99 | 0.663334 | 1,269 | 9,202 | 4.502758 | 0.076438 | 0.247812 | 0.322016 | 0.166258 | 0.880294 | 0.847217 | 0.797165 | 0.761463 | 0.739062 | 0.707385 | 0 | 0.012259 | 0.228755 | 9,202 | 272 | 100 | 33.830882 | 0.79287 | 0.055531 | 0 | 0.721154 | 0 | 0 | 0.004221 | 0 | 0 | 0 | 0 | 0 | 0.168269 | 1 | 0.177885 | false | 0 | 0.009615 | 0 | 0.216346 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
533821529c63f7457dbd921eb2d2f947c7dce5e4 | 1,799 | py | Python | example/test/core/light/point/unit.py | dmilos/IceRay | 4e01f141363c0d126d3c700c1f5f892967e3d520 | [
"MIT-0"
] | 2 | 2020-09-04T12:27:15.000Z | 2022-01-17T14:49:40.000Z | example/test/core/light/point/unit.py | dmilos/IceRay | 4e01f141363c0d126d3c700c1f5f892967e3d520 | [
"MIT-0"
] | null | null | null | example/test/core/light/point/unit.py | dmilos/IceRay | 4e01f141363c0d126d3c700c1f5f892967e3d520 | [
"MIT-0"
] | 1 | 2020-09-04T12:27:52.000Z | 2020-09-04T12:27:52.000Z | import math
import IceRayCpp
def name( ):
return "line"
def make( P_height = 1 + 1 + (math.sqrt(5)-1)/2 ):
point = IceRayCpp.LightPoint()
spot = IceRayCpp.LightTypeSpot()
#spot._0( IceRayCpp.GraphTypeColorRGB().fill( 0.4 ) )
#spot._1( IceRayCpp.GraphTypeColorRGB().fill( 0.3 ) )
#spot._2( IceRayCpp.GraphTypeColorRGB().load(1,2,3) )
point.spot( spot )
point.center( IceRayCpp.MathTypeCoord3D().load(0,0,P_height) )
return { 'this': point }
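# Note on the default offset (added for clarity, interpretation is an assumption):
# 1 + 1 + (math.sqrt(5) - 1) / 2 evaluates to roughly 2.618, i.e. one plus the
# golden ratio, presumably placing the light a little away from the unit test scene.
# The makeX/makeY/makeZ factories below repeat the same construction, offsetting
# along the X, Y and Z axes respectively instead of the height.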
def makeX( P_x = 1 + 1 + (math.sqrt(5)-1)/2 ):
point = IceRayCpp.LightPoint()
spot = IceRayCpp.LightTypeSpot()
#spot._0( IceRayCpp.GraphTypeColorRGB().fill( 0.4 ) )
#spot._1( IceRayCpp.GraphTypeColorRGB().fill( 0.3 ) )
#spot._2( IceRayCpp.GraphTypeColorRGB().load(1,2,3) )
point.spot( spot )
point.center( IceRayCpp.MathTypeCoord3D().load( P_x, 0, 0 ) )
return { 'this': point }
def makeY( P_y = 1 + 1 + (math.sqrt(5)-1)/2 ):
point = IceRayCpp.LightPoint()
spot = IceRayCpp.LightTypeSpot()
#spot._0( IceRayCpp.GraphTypeColorRGB().fill( 0.4 ) )
#spot._1( IceRayCpp.GraphTypeColorRGB().fill( 0.3 ) )
#spot._2( IceRayCpp.GraphTypeColorRGB().load(1,2,3) )
point.spot( spot )
point.center( IceRayCpp.MathTypeCoord3D().load( 0, P_y, 0 ) )
return { 'this': point }
def makeZ( P_z = 1 + 1 + (math.sqrt(5)-1)/2 ):
point = IceRayCpp.LightPoint()
spot = IceRayCpp.LightTypeSpot()
#spot._0( IceRayCpp.GraphTypeColorRGB().fill( 0.4 ) )
#spot._1( IceRayCpp.GraphTypeColorRGB().fill( 0.3 ) )
#spot._2( IceRayCpp.GraphTypeColorRGB().load(1,2,3) )
point.spot( spot )
point.center( IceRayCpp.MathTypeCoord3D().load( 0, 0, P_z ) )
return { 'this': point }
| 27.257576 | 67 | 0.608671 | 224 | 1,799 | 4.799107 | 0.147321 | 0.290233 | 0.223256 | 0.230698 | 0.886512 | 0.851163 | 0.851163 | 0.851163 | 0.851163 | 0.851163 | 0 | 0.051576 | 0.224013 | 1,799 | 65 | 68 | 27.676923 | 0.718481 | 0.346859 | 0 | 0.571429 | 0 | 0 | 0.018215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.071429 | 0.035714 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
536140b545574cf90b24c8aa3aae4f6ada1ebe39 | 7,989 | py | Python | scipy/stats/tests/test_binned_statistic.py | isjoung/scipy | 876a966a2b2016df9f7343f562ec70efa04a37f1 | [
"BSD-3-Clause"
] | null | null | null | scipy/stats/tests/test_binned_statistic.py | isjoung/scipy | 876a966a2b2016df9f7343f562ec70efa04a37f1 | [
"BSD-3-Clause"
] | null | null | null | scipy/stats/tests/test_binned_statistic.py | isjoung/scipy | 876a966a2b2016df9f7343f562ec70efa04a37f1 | [
"BSD-3-Clause"
] | 1 | 2017-03-02T23:53:50.000Z | 2017-03-02T23:53:50.000Z | from __future__ import division, print_function, absolute_import
import numpy as np
from numpy.testing import assert_array_almost_equal, run_module_suite
from scipy.stats import (binned_statistic, binned_statistic_2d,
binned_statistic_dd)
from scipy._lib.six import u
class TestBinnedStatistic(object):
@classmethod
def setup_class(cls):
np.random.seed(9865)
cls.x = np.random.random(100)
cls.y = np.random.random(100)
cls.v = np.random.random(100)
cls.X = np.random.random((100, 3))
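# Shared fixtures (summarised here for readability): x, y and v are 100 uniform
# samples each, and X is a (100, 3) array used by the *_dd tests; the fixed seed
# keeps the hard-coded bin codes in the *_bincode tests reproducible.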
def test_1d_count(self):
x = self.x
v = self.v
count1, edges1, bc = binned_statistic(x, v, 'count', bins=10)
count2, edges2 = np.histogram(x, bins=10)
assert_array_almost_equal(count1, count2)
assert_array_almost_equal(edges1, edges2)
def test_1d_sum(self):
x = self.x
v = self.v
sum1, edges1, bc = binned_statistic(x, v, 'sum', bins=10)
sum2, edges2 = np.histogram(x, bins=10, weights=v)
assert_array_almost_equal(sum1, sum2)
assert_array_almost_equal(edges1, edges2)
def test_1d_mean(self):
x = self.x
v = self.v
stat1, edges1, bc = binned_statistic(x, v, 'mean', bins=10)
stat2, edges2, bc = binned_statistic(x, v, np.mean, bins=10)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(edges1, edges2)
def test_1d_std(self):
x = self.x
v = self.v
stat1, edges1, bc = binned_statistic(x, v, 'std', bins=10)
stat2, edges2, bc = binned_statistic(x, v, np.std, bins=10)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(edges1, edges2)
def test_1d_median(self):
x = self.x
v = self.v
stat1, edges1, bc = binned_statistic(x, v, 'median', bins=10)
stat2, edges2, bc = binned_statistic(x, v, np.median, bins=10)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(edges1, edges2)
def test_1d_bincode(self):
x = self.x[:20]
v = self.v[:20]
count1, edges1, bc = binned_statistic(x, v, 'count', bins=3)
bc2 = np.array([3, 2, 1, 3, 2, 3, 3, 3, 3, 1, 1, 3, 3, 1, 2, 3, 1,
1, 2, 1])
bcount = [(bc == i).sum() for i in np.unique(bc)]
assert_array_almost_equal(bc, bc2)
assert_array_almost_equal(bcount, count1)
def test_1d_range_keyword(self):
# Regression test for gh-3063, range can be (min, max) or [(min, max)]
np.random.seed(9865)
x = np.arange(30)
data = np.random.random(30)
mean, bins, _ = binned_statistic(x[:15], data[:15])
mean_range, bins_range, _ = binned_statistic(x, data, range=[(0, 14)])
mean_range2, bins_range2, _ = binned_statistic(x, data, range=(0, 14))
assert_array_almost_equal(mean, mean_range)
assert_array_almost_equal(bins, bins_range)
assert_array_almost_equal(mean, mean_range2)
assert_array_almost_equal(bins, bins_range2)
def test_2d_count(self):
x = self.x
y = self.y
v = self.v
count1, binx1, biny1, bc = binned_statistic_2d(x, y, v, 'count', bins=5)
count2, binx2, biny2 = np.histogram2d(x, y, bins=5)
assert_array_almost_equal(count1, count2)
assert_array_almost_equal(binx1, binx2)
assert_array_almost_equal(biny1, biny2)
def test_2d_sum(self):
x = self.x
y = self.y
v = self.v
sum1, binx1, biny1, bc = binned_statistic_2d(x, y, v, 'sum', bins=5)
sum2, binx2, biny2 = np.histogram2d(x, y, bins=5, weights=v)
assert_array_almost_equal(sum1, sum2)
assert_array_almost_equal(binx1, binx2)
assert_array_almost_equal(biny1, biny2)
def test_2d_mean(self):
x = self.x
y = self.y
v = self.v
stat1, binx1, biny1, bc = binned_statistic_2d(x, y, v, 'mean', bins=5)
stat2, binx2, biny2, bc = binned_statistic_2d(x, y, v, np.mean, bins=5)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(binx1, binx2)
assert_array_almost_equal(biny1, biny2)
def test_2d_mean_unicode(self):
x = self.x
y = self.y
v = self.v
stat1, binx1, biny1, bc = binned_statistic_2d(x, y, v, u('mean'), bins=5)
stat2, binx2, biny2, bc = binned_statistic_2d(x, y, v, np.mean, bins=5)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(binx1, binx2)
assert_array_almost_equal(biny1, biny2)
def test_2d_std(self):
x = self.x
y = self.y
v = self.v
stat1, binx1, biny1, bc = binned_statistic_2d(x, y, v, 'std', bins=5)
stat2, binx2, biny2, bc = binned_statistic_2d(x, y, v, np.std, bins=5)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(binx1, binx2)
assert_array_almost_equal(biny1, biny2)
def test_2d_median(self):
x = self.x
y = self.y
v = self.v
stat1, binx1, biny1, bc = binned_statistic_2d(x, y, v, 'median', bins=5)
stat2, binx2, biny2, bc = binned_statistic_2d(x, y, v, np.median, bins=5)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(binx1, binx2)
assert_array_almost_equal(biny1, biny2)
def test_2d_bincode(self):
x = self.x[:20]
y = self.y[:20]
v = self.v[:20]
count1, binx1, biny1, bc = binned_statistic_2d(x, y, v, 'count', bins=3)
bc2 = np.array([17, 11, 6, 16, 11, 17, 18, 17, 17, 7, 6, 18, 16,
6, 11, 16, 6, 6, 11, 8])
bcount = [(bc == i).sum() for i in np.unique(bc)]
assert_array_almost_equal(bc, bc2)
count1adj = count1[count1.nonzero()]
assert_array_almost_equal(bcount, count1adj)
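# The bin codes returned for the 2-D (and N-D) cases appear to be flattened linear
# indices, so bins that received no samples are dropped via count1.nonzero() above
# to line the per-bin tallies up with np.unique(bc).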
def test_dd_count(self):
X = self.X
v = self.v
count1, edges1, bc = binned_statistic_dd(X, v, 'count', bins=3)
count2, edges2 = np.histogramdd(X, bins=3)
assert_array_almost_equal(count1, count2)
assert_array_almost_equal(edges1, edges2)
def test_dd_sum(self):
X = self.X
v = self.v
sum1, edges1, bc = binned_statistic_dd(X, v, 'sum', bins=3)
sum2, edges2 = np.histogramdd(X, bins=3, weights=v)
assert_array_almost_equal(sum1, sum2)
assert_array_almost_equal(edges1, edges2)
def test_dd_mean(self):
X = self.X
v = self.v
stat1, edges1, bc = binned_statistic_dd(X, v, 'mean', bins=3)
stat2, edges2, bc = binned_statistic_dd(X, v, np.mean, bins=3)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(edges1, edges2)
def test_dd_std(self):
X = self.X
v = self.v
stat1, edges1, bc = binned_statistic_dd(X, v, 'std', bins=3)
stat2, edges2, bc = binned_statistic_dd(X, v, np.std, bins=3)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(edges1, edges2)
def test_dd_median(self):
X = self.X
v = self.v
stat1, edges1, bc = binned_statistic_dd(X, v, 'median', bins=3)
stat2, edges2, bc = binned_statistic_dd(X, v, np.median, bins=3)
assert_array_almost_equal(stat1, stat2)
assert_array_almost_equal(edges1, edges2)
def test_dd_bincode(self):
X = self.X[:20]
v = self.v[:20]
count1, edges1, bc = binned_statistic_dd(X, v, 'count', bins=3)
bc2 = np.array([63, 33, 86, 83, 88, 67, 57, 33, 42, 41, 82, 83, 92,
32, 36, 91, 43, 87, 81, 81])
bcount = [(bc == i).sum() for i in np.unique(bc)]
assert_array_almost_equal(bc, bc2)
count1adj = count1[count1.nonzero()]
assert_array_almost_equal(bcount, count1adj)
if __name__ == "__main__":
run_module_suite()
| 31.828685 | 81 | 0.607585 | 1,178 | 7,989 | 3.885399 | 0.10781 | 0.117763 | 0.181997 | 0.235525 | 0.833515 | 0.807079 | 0.7435 | 0.727988 | 0.710291 | 0.696745 | 0 | 0.068496 | 0.274502 | 7,989 | 250 | 82 | 31.956 | 0.721187 | 0.008512 | 0 | 0.535912 | 0 | 0 | 0.011365 | 0 | 0 | 0 | 0 | 0 | 0.270718 | 1 | 0.116022 | false | 0 | 0.027624 | 0 | 0.149171 | 0.005525 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5368b04082449026be5f507e50fd20208b2b35d8 | 113 | py | Python | hwxml/__init__.py | kittenswolf/hwxml | 80a09e04018e96a9ab54103aa760095ab201fbaf | [
"MIT"
] | null | null | null | hwxml/__init__.py | kittenswolf/hwxml | 80a09e04018e96a9ab54103aa760095ab201fbaf | [
"MIT"
] | null | null | null | hwxml/__init__.py | kittenswolf/hwxml | 80a09e04018e96a9ab54103aa760095ab201fbaf | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from . import parser
def parse(str_input):
return parser.parser(str_input).parse()
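# Usage sketch (hypothetical input; parse() simply delegates to parser.parser(...).parse()):
#   result = hwxml.parse(xml_string)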
| 16.142857 | 43 | 0.663717 | 16 | 113 | 4.5625 | 0.6875 | 0.219178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010638 | 0.168142 | 113 | 6 | 44 | 18.833333 | 0.765957 | 0.185841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
f42ef942459a25081cceb1f03fedc67476ed7ed5 | 3,096 | py | Python | pirates/piratesgui/MessageGlobals.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 81 | 2018-04-08T18:14:24.000Z | 2022-01-11T07:22:15.000Z | pirates/piratesgui/MessageGlobals.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 4 | 2018-09-13T20:41:22.000Z | 2022-01-08T06:57:00.000Z | pirates/piratesgui/MessageGlobals.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 26 | 2018-05-26T12:49:27.000Z | 2021-09-11T09:11:59.000Z | from pirates.piratesbase import PiratesGlobals
from pirates.piratesbase import PLocalizer
from pirates.piratesgui import PiratesGuiGlobals
MSG_CAT_DEFAULT = 0
MSG_CAT_THREAT_LEVEL = 1
MSG_CAT_NO_PORT = 2
MSG_CAT_TELL_PORT = 3
MSG_CAT_ANNOUNCE_ATTACK = 4
MSG_CAT_SUNK_SHIP = 5
MSG_CAT_SHORE_CLOSE = 6
MSG_CAT_PURCHASE = 7
MSG_CAT_PURCHASE_FAILED = 8
MSG_CAT_LOOT_WARNING = 9
MMH_QUEUE = 0
MMH_FIRST = 1
MMH_LAST = 2
MMH_COMBINE = 3
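# MessageOptions (defined on the long line below) maps each MSG_CAT_* constant to its
# display settings: text colour/shadow/font/scale, whether a border is shown, how long
# the message stays on screen, the MMH_* multi-message handling mode, a message prefix
# and a priority.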
MessageOptions = {MSG_CAT_DEFAULT: {'text_fg': PiratesGuiGlobals.TextFG1,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 7.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_THREAT_LEVEL: {'text_fg': PiratesGuiGlobals.TextFG1,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 7.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_NO_PORT: {'text_fg': PiratesGuiGlobals.TextFG6,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 1.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_TELL_PORT: {'text_fg': PiratesGuiGlobals.TextFG1,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 7.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_ANNOUNCE_ATTACK: {'text_fg': PiratesGuiGlobals.TextFG1,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': False,'messageTime': 7.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_SUNK_SHIP: {'text_fg': PiratesGuiGlobals.TextFG1,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': False,'messageTime': 7.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_SHORE_CLOSE: {'text_fg': PiratesGuiGlobals.TextFG6,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 1.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': -1},MSG_CAT_PURCHASE: {'text_fg': PiratesGuiGlobals.TextFG1,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 3.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_PURCHASE_FAILED: {'text_fg': PiratesGuiGlobals.TextFG6,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 3.0,'multiMessageHandling': MMH_FIRST,'messagePrefix': '','priority': 0},MSG_CAT_LOOT_WARNING: {'text_fg': PiratesGuiGlobals.TextFG6,'text_shadow': (0, 0, 0, 1),'text_font': PiratesGlobals.getPirateOutlineFont(),'text_scale': 0.05,'showBorder?': True,'messageTime': 6.0,'multiMessageHandling': MMH_LAST,'messagePrefix': '','priority': 0}} | 172 | 2,665 | 0.762274 | 404 | 3,096 | 5.569307 | 0.143564 | 0.053333 | 0.102222 | 0.053333 | 0.789333 | 0.776444 | 0.776444 | 0.776444 | 0.776444 | 0.773333 | 0 | 0.042612 | 0.060078 | 3,096 | 18 | 2,665 | 172 | 0.730584 | 0 | 0 | 0 | 0 | 0 | 0.322893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f4598d4cc7410b8bbc05a7adc8da7ac8e2a1e683 | 259,248 | py | Python | tests/unit/gapic/kms_v1/test_key_management_service.py | mblackbourne/python-kms | c10c73df969cf3fcfc7c49146d6ee0398ab5b30a | [
"Apache-2.0"
] | null | null | null | tests/unit/gapic/kms_v1/test_key_management_service.py | mblackbourne/python-kms | c10c73df969cf3fcfc7c49146d6ee0398ab5b30a | [
"Apache-2.0"
] | null | null | null | tests/unit/gapic/kms_v1/test_key_management_service.py | mblackbourne/python-kms | c10c73df969cf3fcfc7c49146d6ee0398ab5b30a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google import auth
from google.api_core import client_options
from google.api_core import exceptions
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.auth import credentials
from google.auth.exceptions import MutualTLSChannelError
from google.cloud.kms_v1.services.key_management_service import (
KeyManagementServiceAsyncClient,
)
from google.cloud.kms_v1.services.key_management_service import (
KeyManagementServiceClient,
)
from google.cloud.kms_v1.services.key_management_service import pagers
from google.cloud.kms_v1.services.key_management_service import transports
from google.cloud.kms_v1.types import resources
from google.cloud.kms_v1.types import service
from google.iam.v1 import iam_policy_pb2 as iam_policy # type: ignore
from google.iam.v1 import options_pb2 as options # type: ignore
from google.iam.v1 import policy_pb2 as policy # type: ignore
from google.oauth2 import service_account
from google.protobuf import duration_pb2 as duration # type: ignore
from google.protobuf import field_mask_pb2 as field_mask # type: ignore
from google.protobuf import timestamp_pb2 as timestamp # type: ignore
from google.protobuf import wrappers_pb2 as wrappers # type: ignore
def client_cert_source_callback():
return b"cert bytes", b"key bytes"
# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
return (
"foo.googleapis.com"
if ("localhost" in client.DEFAULT_ENDPOINT)
else client.DEFAULT_ENDPOINT
)
def test__get_default_mtls_endpoint():
api_endpoint = "example.googleapis.com"
api_mtls_endpoint = "example.mtls.googleapis.com"
sandbox_endpoint = "example.sandbox.googleapis.com"
sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
non_googleapi = "api.example.com"
assert KeyManagementServiceClient._get_default_mtls_endpoint(None) is None
assert (
KeyManagementServiceClient._get_default_mtls_endpoint(api_endpoint)
== api_mtls_endpoint
)
assert (
KeyManagementServiceClient._get_default_mtls_endpoint(api_mtls_endpoint)
== api_mtls_endpoint
)
assert (
KeyManagementServiceClient._get_default_mtls_endpoint(sandbox_endpoint)
== sandbox_mtls_endpoint
)
assert (
KeyManagementServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
== sandbox_mtls_endpoint
)
assert (
KeyManagementServiceClient._get_default_mtls_endpoint(non_googleapi)
== non_googleapi
)
@pytest.mark.parametrize(
"client_class", [KeyManagementServiceClient, KeyManagementServiceAsyncClient]
)
def test_key_management_service_client_from_service_account_file(client_class):
creds = credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_file"
) as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client._transport._credentials == creds
client = client_class.from_service_account_json("dummy/file/path.json")
assert client._transport._credentials == creds
assert client._transport._host == "cloudkms.googleapis.com:443"
def test_key_management_service_client_get_transport_class():
transport = KeyManagementServiceClient.get_transport_class()
assert transport == transports.KeyManagementServiceGrpcTransport
transport = KeyManagementServiceClient.get_transport_class("grpc")
assert transport == transports.KeyManagementServiceGrpcTransport
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(
KeyManagementServiceClient,
transports.KeyManagementServiceGrpcTransport,
"grpc",
),
(
KeyManagementServiceAsyncClient,
transports.KeyManagementServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
@mock.patch.object(
KeyManagementServiceClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(KeyManagementServiceClient),
)
@mock.patch.object(
KeyManagementServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(KeyManagementServiceAsyncClient),
)
def test_key_management_service_client_client_options(
client_class, transport_class, transport_name
):
# Check that if channel is provided we won't create a new one.
with mock.patch.object(KeyManagementServiceClient, "get_transport_class") as gtc:
transport = transport_class(credentials=credentials.AnonymousCredentials())
client = client_class(transport=transport)
gtc.assert_not_called()
# Check that if channel is provided via str we will create a new one.
with mock.patch.object(KeyManagementServiceClient, "get_transport_class") as gtc:
client = client_class(transport=transport_name)
gtc.assert_called()
# Check the case api_endpoint is provided.
options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "never".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "always".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_MTLS_ENDPOINT,
scopes=None,
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
# unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
with pytest.raises(MutualTLSChannelError):
client = client_class()
# Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
):
with pytest.raises(ValueError):
client = client_class()
# Check the case quota_project_id is provided
options = client_options.ClientOptions(quota_project_id="octopus")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
ssl_channel_credentials=None,
quota_project_id="octopus",
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name,use_client_cert_env",
[
(
KeyManagementServiceClient,
transports.KeyManagementServiceGrpcTransport,
"grpc",
"true",
),
(
KeyManagementServiceAsyncClient,
transports.KeyManagementServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"true",
),
(
KeyManagementServiceClient,
transports.KeyManagementServiceGrpcTransport,
"grpc",
"false",
),
(
KeyManagementServiceAsyncClient,
transports.KeyManagementServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"false",
),
],
)
@mock.patch.object(
KeyManagementServiceClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(KeyManagementServiceClient),
)
@mock.patch.object(
KeyManagementServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(KeyManagementServiceAsyncClient),
)
@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
def test_key_management_service_client_mtls_env_auto(
client_class, transport_class, transport_name, use_client_cert_env
):
# This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
# mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
# Check the case client_cert_source is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
options = client_options.ClientOptions(
client_cert_source=client_cert_source_callback
)
with mock.patch.object(transport_class, "__init__") as patched:
ssl_channel_creds = mock.Mock()
with mock.patch(
"grpc.ssl_channel_credentials", return_value=ssl_channel_creds
):
patched.return_value = None
client = client_class(client_options=options)
if use_client_cert_env == "false":
expected_ssl_channel_creds = None
expected_host = client.DEFAULT_ENDPOINT
else:
expected_ssl_channel_creds = ssl_channel_creds
expected_host = client.DEFAULT_MTLS_ENDPOINT
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
ssl_channel_credentials=expected_ssl_channel_creds,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case ADC client cert is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.grpc.SslCredentials.__init__", return_value=None
):
with mock.patch(
"google.auth.transport.grpc.SslCredentials.is_mtls",
new_callable=mock.PropertyMock,
) as is_mtls_mock:
with mock.patch(
"google.auth.transport.grpc.SslCredentials.ssl_credentials",
new_callable=mock.PropertyMock,
) as ssl_credentials_mock:
if use_client_cert_env == "false":
is_mtls_mock.return_value = False
ssl_credentials_mock.return_value = None
expected_host = client.DEFAULT_ENDPOINT
expected_ssl_channel_creds = None
else:
is_mtls_mock.return_value = True
ssl_credentials_mock.return_value = mock.Mock()
expected_host = client.DEFAULT_MTLS_ENDPOINT
expected_ssl_channel_creds = (
ssl_credentials_mock.return_value
)
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
ssl_channel_credentials=expected_ssl_channel_creds,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
# Check the case client_cert_source and ADC client cert are not provided.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.grpc.SslCredentials.__init__", return_value=None
):
with mock.patch(
"google.auth.transport.grpc.SslCredentials.is_mtls",
new_callable=mock.PropertyMock,
) as is_mtls_mock:
is_mtls_mock.return_value = False
patched.return_value = None
client = client_class()
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(
KeyManagementServiceClient,
transports.KeyManagementServiceGrpcTransport,
"grpc",
),
(
KeyManagementServiceAsyncClient,
transports.KeyManagementServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_key_management_service_client_client_options_scopes(
client_class, transport_class, transport_name
):
# Check the case scopes are provided.
options = client_options.ClientOptions(scopes=["1", "2"],)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=["1", "2"],
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(
KeyManagementServiceClient,
transports.KeyManagementServiceGrpcTransport,
"grpc",
),
(
KeyManagementServiceAsyncClient,
transports.KeyManagementServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_key_management_service_client_client_options_credentials_file(
client_class, transport_class, transport_name
):
# Check the case credentials file is provided.
options = client_options.ClientOptions(credentials_file="credentials.json")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file="credentials.json",
host=client.DEFAULT_ENDPOINT,
scopes=None,
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
def test_key_management_service_client_client_options_from_dict():
with mock.patch(
"google.cloud.kms_v1.services.key_management_service.transports.KeyManagementServiceGrpcTransport.__init__"
) as grpc_transport:
grpc_transport.return_value = None
client = KeyManagementServiceClient(
client_options={"api_endpoint": "squid.clam.whelk"}
)
grpc_transport.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
ssl_channel_credentials=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
)
def test_list_key_rings(
transport: str = "grpc", request_type=service.ListKeyRingsRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.list_key_rings), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListKeyRingsResponse(
next_page_token="next_page_token_value", total_size=1086,
)
response = client.list_key_rings(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListKeyRingsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListKeyRingsPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_key_rings_from_dict():
test_list_key_rings(request_type=dict)
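# The list_crypto_keys and list_crypto_key_versions tests below repeat the pattern
# established above for list_key_rings: a sync and an async call test, field-header
# propagation tests, flattened-argument and flattened-error tests, and pager tests.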
@pytest.mark.asyncio
async def test_list_key_rings_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.ListKeyRingsRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_key_rings), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListKeyRingsResponse(
next_page_token="next_page_token_value", total_size=1086,
)
)
response = await client.list_key_rings(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListKeyRingsAsyncPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_key_rings_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListKeyRingsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.list_key_rings), "__call__") as call:
call.return_value = service.ListKeyRingsResponse()
client.list_key_rings(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_key_rings_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListKeyRingsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_key_rings), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListKeyRingsResponse()
)
await client.list_key_rings(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_key_rings_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.list_key_rings), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListKeyRingsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_key_rings(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_key_rings_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_key_rings(
service.ListKeyRingsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_key_rings_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_key_rings), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListKeyRingsResponse()
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListKeyRingsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_key_rings(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_key_rings_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_key_rings(
service.ListKeyRingsRequest(), parent="parent_value",
)
def test_list_key_rings_pager():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials,)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.list_key_rings), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListKeyRingsResponse(
key_rings=[
resources.KeyRing(),
resources.KeyRing(),
resources.KeyRing(),
],
next_page_token="abc",
),
service.ListKeyRingsResponse(key_rings=[], next_page_token="def",),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(),], next_page_token="ghi",
),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(), resources.KeyRing(),],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_key_rings(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(isinstance(i, resources.KeyRing) for i in results)
def test_list_key_rings_pages():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials,)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.list_key_rings), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListKeyRingsResponse(
key_rings=[
resources.KeyRing(),
resources.KeyRing(),
resources.KeyRing(),
],
next_page_token="abc",
),
service.ListKeyRingsResponse(key_rings=[], next_page_token="def",),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(),], next_page_token="ghi",
),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(), resources.KeyRing(),],
),
RuntimeError,
)
pages = list(client.list_key_rings(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_key_rings_async_pager():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_key_rings),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListKeyRingsResponse(
key_rings=[
resources.KeyRing(),
resources.KeyRing(),
resources.KeyRing(),
],
next_page_token="abc",
),
service.ListKeyRingsResponse(key_rings=[], next_page_token="def",),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(),], next_page_token="ghi",
),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(), resources.KeyRing(),],
),
RuntimeError,
)
async_pager = await client.list_key_rings(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, resources.KeyRing) for i in responses)
@pytest.mark.asyncio
async def test_list_key_rings_async_pages():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_key_rings),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListKeyRingsResponse(
key_rings=[
resources.KeyRing(),
resources.KeyRing(),
resources.KeyRing(),
],
next_page_token="abc",
),
service.ListKeyRingsResponse(key_rings=[], next_page_token="def",),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(),], next_page_token="ghi",
),
service.ListKeyRingsResponse(
key_rings=[resources.KeyRing(), resources.KeyRing(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_key_rings(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_list_crypto_keys(
transport: str = "grpc", request_type=service.ListCryptoKeysRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_keys), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListCryptoKeysResponse(
next_page_token="next_page_token_value", total_size=1086,
)
response = client.list_crypto_keys(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListCryptoKeysRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListCryptoKeysPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_crypto_keys_from_dict():
test_list_crypto_keys(request_type=dict)
@pytest.mark.asyncio
async def test_list_crypto_keys_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.ListCryptoKeysRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_keys), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListCryptoKeysResponse(
next_page_token="next_page_token_value", total_size=1086,
)
)
response = await client.list_crypto_keys(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListCryptoKeysAsyncPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_crypto_keys_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListCryptoKeysRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_keys), "__call__"
) as call:
call.return_value = service.ListCryptoKeysResponse()
client.list_crypto_keys(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_crypto_keys_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListCryptoKeysRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_keys), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListCryptoKeysResponse()
)
await client.list_crypto_keys(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_crypto_keys_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_keys), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListCryptoKeysResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_crypto_keys(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_crypto_keys_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_crypto_keys(
service.ListCryptoKeysRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_crypto_keys_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_keys), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListCryptoKeysResponse()
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListCryptoKeysResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_crypto_keys(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_crypto_keys_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_crypto_keys(
service.ListCryptoKeysRequest(), parent="parent_value",
)
def test_list_crypto_keys_pager():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials,)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_keys), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeysResponse(
crypto_keys=[
resources.CryptoKey(),
resources.CryptoKey(),
resources.CryptoKey(),
],
next_page_token="abc",
),
service.ListCryptoKeysResponse(crypto_keys=[], next_page_token="def",),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(),], next_page_token="ghi",
),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(), resources.CryptoKey(),],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_crypto_keys(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(isinstance(i, resources.CryptoKey) for i in results)
def test_list_crypto_keys_pages():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials,)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_keys), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeysResponse(
crypto_keys=[
resources.CryptoKey(),
resources.CryptoKey(),
resources.CryptoKey(),
],
next_page_token="abc",
),
service.ListCryptoKeysResponse(crypto_keys=[], next_page_token="def",),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(),], next_page_token="ghi",
),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(), resources.CryptoKey(),],
),
RuntimeError,
)
pages = list(client.list_crypto_keys(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_crypto_keys_async_pager():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_keys),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeysResponse(
crypto_keys=[
resources.CryptoKey(),
resources.CryptoKey(),
resources.CryptoKey(),
],
next_page_token="abc",
),
service.ListCryptoKeysResponse(crypto_keys=[], next_page_token="def",),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(),], next_page_token="ghi",
),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(), resources.CryptoKey(),],
),
RuntimeError,
)
async_pager = await client.list_crypto_keys(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, resources.CryptoKey) for i in responses)
@pytest.mark.asyncio
async def test_list_crypto_keys_async_pages():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials,
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_keys),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeysResponse(
crypto_keys=[
resources.CryptoKey(),
resources.CryptoKey(),
resources.CryptoKey(),
],
next_page_token="abc",
),
service.ListCryptoKeysResponse(crypto_keys=[], next_page_token="def",),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(),], next_page_token="ghi",
),
service.ListCryptoKeysResponse(
crypto_keys=[resources.CryptoKey(), resources.CryptoKey(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_crypto_keys(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_list_crypto_key_versions(
transport: str = "grpc", request_type=service.ListCryptoKeyVersionsRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_key_versions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListCryptoKeyVersionsResponse(
next_page_token="next_page_token_value", total_size=1086,
)
response = client.list_crypto_key_versions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListCryptoKeyVersionsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListCryptoKeyVersionsPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_crypto_key_versions_from_dict():
test_list_crypto_key_versions(request_type=dict)
@pytest.mark.asyncio
async def test_list_crypto_key_versions_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.ListCryptoKeyVersionsRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_key_versions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListCryptoKeyVersionsResponse(
next_page_token="next_page_token_value", total_size=1086,
)
)
response = await client.list_crypto_key_versions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListCryptoKeyVersionsAsyncPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_crypto_key_versions_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListCryptoKeyVersionsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_key_versions), "__call__"
) as call:
call.return_value = service.ListCryptoKeyVersionsResponse()
client.list_crypto_key_versions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_crypto_key_versions_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListCryptoKeyVersionsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_key_versions), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListCryptoKeyVersionsResponse()
)
await client.list_crypto_key_versions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_crypto_key_versions_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_key_versions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListCryptoKeyVersionsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_crypto_key_versions(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_crypto_key_versions_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_crypto_key_versions(
service.ListCryptoKeyVersionsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_crypto_key_versions_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_key_versions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListCryptoKeyVersionsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_crypto_key_versions(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_crypto_key_versions_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_crypto_key_versions(
service.ListCryptoKeyVersionsRequest(), parent="parent_value",
)
def test_list_crypto_key_versions_pager():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_key_versions), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
next_page_token="abc",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[], next_page_token="def",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[resources.CryptoKeyVersion(),],
next_page_token="ghi",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_crypto_key_versions(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(isinstance(i, resources.CryptoKeyVersion) for i in results)
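# The pager above flattens the mocked pages transparently: 3 + 0 + 1 + 2
# CryptoKeyVersion items across the four pages yield the 6 results asserted.
# The trailing RuntimeError in side_effect is never reached, because the final
# page carries no next_page_token and iteration stops there.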
def test_list_crypto_key_versions_pages():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_crypto_key_versions), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
next_page_token="abc",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[], next_page_token="def",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[resources.CryptoKeyVersion(),],
next_page_token="ghi",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
),
RuntimeError,
)
pages = list(client.list_crypto_key_versions(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_crypto_key_versions_async_pager():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_key_versions),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
next_page_token="abc",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[], next_page_token="def",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[resources.CryptoKeyVersion(),],
next_page_token="ghi",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
),
RuntimeError,
)
async_pager = await client.list_crypto_key_versions(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, resources.CryptoKeyVersion) for i in responses)
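# Consumption differs between the two client surfaces: the synchronous pager is
# iterated directly (for item in pager, or list(pager)), while awaiting the
# async client returns an AsyncPager that is consumed with `async for`, as
# above. Both expose a `.pages` view for page-level access, exercised by the
# *_pages tests.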
@pytest.mark.asyncio
async def test_list_crypto_key_versions_async_pages():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_crypto_key_versions),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
next_page_token="abc",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[], next_page_token="def",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[resources.CryptoKeyVersion(),],
next_page_token="ghi",
),
service.ListCryptoKeyVersionsResponse(
crypto_key_versions=[
resources.CryptoKeyVersion(),
resources.CryptoKeyVersion(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_crypto_key_versions(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_list_import_jobs(
transport: str = "grpc", request_type=service.ListImportJobsRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_import_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListImportJobsResponse(
next_page_token="next_page_token_value", total_size=1086,
)
response = client.list_import_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListImportJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListImportJobsPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_import_jobs_from_dict():
test_list_import_jobs(request_type=dict)
@pytest.mark.asyncio
async def test_list_import_jobs_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.ListImportJobsRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_import_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListImportJobsResponse(
next_page_token="next_page_token_value", total_size=1086,
)
)
response = await client.list_import_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListImportJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"
assert response.total_size == 1086
def test_list_import_jobs_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListImportJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_import_jobs), "__call__"
) as call:
call.return_value = service.ListImportJobsResponse()
client.list_import_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_import_jobs_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListImportJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_import_jobs), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListImportJobsResponse()
)
await client.list_import_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_import_jobs_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_import_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListImportJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_import_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
def test_list_import_jobs_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_import_jobs(
service.ListImportJobsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_import_jobs_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_import_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListImportJobsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_import_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
@pytest.mark.asyncio
async def test_list_import_jobs_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_import_jobs(
service.ListImportJobsRequest(), parent="parent_value",
)
def test_list_import_jobs_pager():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_import_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListImportJobsResponse(
import_jobs=[
resources.ImportJob(),
resources.ImportJob(),
resources.ImportJob(),
],
next_page_token="abc",
),
service.ListImportJobsResponse(import_jobs=[], next_page_token="def",),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(),], next_page_token="ghi",
),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(), resources.ImportJob(),],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_import_jobs(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(isinstance(i, resources.ImportJob) for i in results)
def test_list_import_jobs_pages():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.list_import_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListImportJobsResponse(
import_jobs=[
resources.ImportJob(),
resources.ImportJob(),
resources.ImportJob(),
],
next_page_token="abc",
),
service.ListImportJobsResponse(import_jobs=[], next_page_token="def",),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(),], next_page_token="ghi",
),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(), resources.ImportJob(),],
),
RuntimeError,
)
pages = list(client.list_import_jobs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_import_jobs_async_pager():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_import_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListImportJobsResponse(
import_jobs=[
resources.ImportJob(),
resources.ImportJob(),
resources.ImportJob(),
],
next_page_token="abc",
),
service.ListImportJobsResponse(import_jobs=[], next_page_token="def",),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(),], next_page_token="ghi",
),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(), resources.ImportJob(),],
),
RuntimeError,
)
async_pager = await client.list_import_jobs(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, resources.ImportJob) for i in responses)
@pytest.mark.asyncio
async def test_list_import_jobs_async_pages():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.list_import_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListImportJobsResponse(
import_jobs=[
resources.ImportJob(),
resources.ImportJob(),
resources.ImportJob(),
],
next_page_token="abc",
),
service.ListImportJobsResponse(import_jobs=[], next_page_token="def",),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(),], next_page_token="ghi",
),
service.ListImportJobsResponse(
import_jobs=[resources.ImportJob(), resources.ImportJob(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_import_jobs(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_get_key_ring(transport: str = "grpc", request_type=service.GetKeyRingRequest):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_key_ring), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.KeyRing(name="name_value",)
response = client.get_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetKeyRingRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.KeyRing)
assert response.name == "name_value"
def test_get_key_ring_from_dict():
test_get_key_ring(request_type=dict)
@pytest.mark.asyncio
async def test_get_key_ring_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.GetKeyRingRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_key_ring), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.KeyRing(name="name_value",)
)
response = await client.get_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.KeyRing)
assert response.name == "name_value"
def test_get_key_ring_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetKeyRingRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_key_ring), "__call__") as call:
call.return_value = resources.KeyRing()
client.get_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_key_ring_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetKeyRingRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_key_ring), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.KeyRing())
await client.get_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_key_ring_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_key_ring), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.KeyRing()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_key_ring(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_key_ring_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_key_ring(
service.GetKeyRingRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_key_ring_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_key_ring), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.KeyRing())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_key_ring(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_key_ring_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_key_ring(
service.GetKeyRingRequest(), name="name_value",
)
def test_get_crypto_key(
transport: str = "grpc", request_type=service.GetCryptoKeyRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_crypto_key), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
rotation_period=duration.Duration(seconds=751),
)
response = client.get_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetCryptoKeyRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
def test_get_crypto_key_from_dict():
test_get_crypto_key(request_type=dict)
@pytest.mark.asyncio
async def test_get_crypto_key_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.GetCryptoKeyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
)
)
response = await client.get_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
def test_get_crypto_key_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetCryptoKeyRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_crypto_key), "__call__") as call:
call.return_value = resources.CryptoKey()
client.get_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_crypto_key_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetCryptoKeyRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_crypto_key), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
await client.get_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_crypto_key_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_crypto_key), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_crypto_key(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_crypto_key_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_crypto_key(
service.GetCryptoKeyRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_crypto_key_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_crypto_key(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_crypto_key_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_crypto_key(
service.GetCryptoKeyRequest(), name="name_value",
)
def test_get_crypto_key_version(
transport: str = "grpc", request_type=service.GetCryptoKeyVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.get_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
response = client.get_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetCryptoKeyVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_get_crypto_key_version_from_dict():
test_get_crypto_key_version(request_type=dict)
@pytest.mark.asyncio
async def test_get_crypto_key_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.GetCryptoKeyVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
)
response = await client.get_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_get_crypto_key_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetCryptoKeyVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.get_crypto_key_version), "__call__"
) as call:
call.return_value = resources.CryptoKeyVersion()
client.get_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_crypto_key_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetCryptoKeyVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_crypto_key_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
await client.get_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_crypto_key_version_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.get_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_crypto_key_version(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_crypto_key_version_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_crypto_key_version(
service.GetCryptoKeyVersionRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_crypto_key_version_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_crypto_key_version(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_crypto_key_version_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_crypto_key_version(
service.GetCryptoKeyVersionRequest(), name="name_value",
)
def test_get_public_key(
transport: str = "grpc", request_type=service.GetPublicKeyRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_public_key), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.PublicKey(
pem="pem_value",
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
name="name_value",
)
response = client.get_public_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetPublicKeyRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.PublicKey)
assert response.pem == "pem_value"
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.name == "name_value"
def test_get_public_key_from_dict():
test_get_public_key(request_type=dict)
@pytest.mark.asyncio
async def test_get_public_key_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.GetPublicKeyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_public_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.PublicKey(
pem="pem_value",
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
name="name_value",
)
)
response = await client.get_public_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.PublicKey)
assert response.pem == "pem_value"
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.name == "name_value"
def test_get_public_key_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetPublicKeyRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_public_key), "__call__") as call:
call.return_value = resources.PublicKey()
client.get_public_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_public_key_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetPublicKeyRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_public_key), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.PublicKey())
await client.get_public_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_public_key_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_public_key), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.PublicKey()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_public_key(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_public_key_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_public_key(
service.GetPublicKeyRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_public_key_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_public_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.PublicKey())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_public_key(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_public_key_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_public_key(
service.GetPublicKeyRequest(), name="name_value",
)
def test_get_import_job(
transport: str = "grpc", request_type=service.GetImportJobRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_import_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.ImportJob(
name="name_value",
import_method=resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256,
protection_level=resources.ProtectionLevel.SOFTWARE,
state=resources.ImportJob.ImportJobState.PENDING_GENERATION,
)
response = client.get_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetImportJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.ImportJob)
assert response.name == "name_value"
assert (
response.import_method
== resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert response.state == resources.ImportJob.ImportJobState.PENDING_GENERATION
def test_get_import_job_from_dict():
test_get_import_job(request_type=dict)
@pytest.mark.asyncio
async def test_get_import_job_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.GetImportJobRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_import_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.ImportJob(
name="name_value",
import_method=resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256,
protection_level=resources.ProtectionLevel.SOFTWARE,
state=resources.ImportJob.ImportJobState.PENDING_GENERATION,
)
)
response = await client.get_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.ImportJob)
assert response.name == "name_value"
assert (
response.import_method
== resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert response.state == resources.ImportJob.ImportJobState.PENDING_GENERATION
def test_get_import_job_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetImportJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_import_job), "__call__") as call:
call.return_value = resources.ImportJob()
client.get_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_import_job_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetImportJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_import_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.ImportJob())
await client.get_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_import_job_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_import_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.ImportJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_import_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_get_import_job_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_import_job(
service.GetImportJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_import_job_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_import_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.ImportJob())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_import_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_get_import_job_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_import_job(
service.GetImportJobRequest(), name="name_value",
)
def test_create_key_ring(
transport: str = "grpc", request_type=service.CreateKeyRingRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.create_key_ring), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.KeyRing(name="name_value",)
response = client.create_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateKeyRingRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.KeyRing)
assert response.name == "name_value"
def test_create_key_ring_from_dict():
test_create_key_ring(request_type=dict)
@pytest.mark.asyncio
async def test_create_key_ring_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.CreateKeyRingRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_key_ring), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.KeyRing(name="name_value",)
)
response = await client.create_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.KeyRing)
assert response.name == "name_value"
def test_create_key_ring_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateKeyRingRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.create_key_ring), "__call__") as call:
call.return_value = resources.KeyRing()
client.create_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_key_ring_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateKeyRingRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_key_ring), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.KeyRing())
await client.create_key_ring(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_key_ring_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.create_key_ring), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = resources.KeyRing()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_key_ring(
parent="parent_value",
key_ring_id="key_ring_id_value",
key_ring=resources.KeyRing(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].key_ring_id == "key_ring_id_value"
assert args[0].key_ring == resources.KeyRing(name="name_value")
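# For comparison, the same call expressed with a request object instead of
# flattened fields (illustrative sketch; it mirrors the fields asserted above):
#
#     client.create_key_ring(
#         request=service.CreateKeyRingRequest(
#             parent="parent_value",
#             key_ring_id="key_ring_id_value",
#             key_ring=resources.KeyRing(name="name_value"),
#         )
#     )
#
# Passing both forms at once is rejected, as test_create_key_ring_flattened_error
# below demonstrates.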
def test_create_key_ring_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_key_ring(
service.CreateKeyRingRequest(),
parent="parent_value",
key_ring_id="key_ring_id_value",
key_ring=resources.KeyRing(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_key_ring_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_key_ring), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.KeyRing())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_key_ring(
parent="parent_value",
key_ring_id="key_ring_id_value",
key_ring=resources.KeyRing(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].key_ring_id == "key_ring_id_value"
assert args[0].key_ring == resources.KeyRing(name="name_value")
@pytest.mark.asyncio
async def test_create_key_ring_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_key_ring(
service.CreateKeyRingRequest(),
parent="parent_value",
key_ring_id="key_ring_id_value",
key_ring=resources.KeyRing(name="name_value"),
)
def test_create_crypto_key(
transport: str = "grpc", request_type=service.CreateCryptoKeyRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
rotation_period=duration.Duration(seconds=751),
)
response = client.create_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateCryptoKeyRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
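# Note: rotation_period is set on the mocked CryptoKey above but not re-asserted
# here, presumably because it belongs to the rotation_schedule oneof, which these
# generated assertions skip.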
def test_create_crypto_key_from_dict():
test_create_crypto_key(request_type=dict)
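# The *_from_dict variants rerun the test above with request_type=dict, checking
# that a plain dict is accepted and coerced into the proto request message.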
@pytest.mark.asyncio
async def test_create_crypto_key_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.CreateCryptoKeyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
)
)
response = await client.create_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
def test_create_crypto_key_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateCryptoKeyRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_crypto_key), "__call__"
) as call:
call.return_value = resources.CryptoKey()
client.create_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_crypto_key_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateCryptoKeyRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_crypto_key), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
await client.create_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_crypto_key_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_crypto_key(
parent="parent_value",
crypto_key_id="crypto_key_id_value",
crypto_key=resources.CryptoKey(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].crypto_key_id == "crypto_key_id_value"
assert args[0].crypto_key == resources.CryptoKey(name="name_value")
def test_create_crypto_key_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_crypto_key(
service.CreateCryptoKeyRequest(),
parent="parent_value",
crypto_key_id="crypto_key_id_value",
crypto_key=resources.CryptoKey(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_crypto_key_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_crypto_key(
parent="parent_value",
crypto_key_id="crypto_key_id_value",
crypto_key=resources.CryptoKey(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].crypto_key_id == "crypto_key_id_value"
assert args[0].crypto_key == resources.CryptoKey(name="name_value")
@pytest.mark.asyncio
async def test_create_crypto_key_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_crypto_key(
service.CreateCryptoKeyRequest(),
parent="parent_value",
crypto_key_id="crypto_key_id_value",
crypto_key=resources.CryptoKey(name="name_value"),
)
def test_create_crypto_key_version(
transport: str = "grpc", request_type=service.CreateCryptoKeyVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
response = client.create_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateCryptoKeyVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_create_crypto_key_version_from_dict():
test_create_crypto_key_version(request_type=dict)
@pytest.mark.asyncio
async def test_create_crypto_key_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.CreateCryptoKeyVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
)
response = await client.create_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_create_crypto_key_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateCryptoKeyVersionRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_crypto_key_version), "__call__"
) as call:
call.return_value = resources.CryptoKeyVersion()
client.create_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_crypto_key_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateCryptoKeyVersionRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_crypto_key_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
await client.create_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_crypto_key_version_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_crypto_key_version(
parent="parent_value",
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].crypto_key_version == resources.CryptoKeyVersion(
name="name_value"
)
def test_create_crypto_key_version_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_crypto_key_version(
service.CreateCryptoKeyVersionRequest(),
parent="parent_value",
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_crypto_key_version_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_crypto_key_version(
parent="parent_value",
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].crypto_key_version == resources.CryptoKeyVersion(
name="name_value"
)
@pytest.mark.asyncio
async def test_create_crypto_key_version_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_crypto_key_version(
service.CreateCryptoKeyVersionRequest(),
parent="parent_value",
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
)
def test_import_crypto_key_version(
transport: str = "grpc", request_type=service.ImportCryptoKeyVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.import_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
response = client.import_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ImportCryptoKeyVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_import_crypto_key_version_from_dict():
test_import_crypto_key_version(request_type=dict)
@pytest.mark.asyncio
async def test_import_crypto_key_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.ImportCryptoKeyVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.import_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
)
response = await client.import_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_import_crypto_key_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ImportCryptoKeyVersionRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.import_crypto_key_version), "__call__"
) as call:
call.return_value = resources.CryptoKeyVersion()
client.import_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_import_crypto_key_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ImportCryptoKeyVersionRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.import_crypto_key_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
await client.import_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_import_job(
transport: str = "grpc", request_type=service.CreateImportJobRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_import_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.ImportJob(
name="name_value",
import_method=resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256,
protection_level=resources.ProtectionLevel.SOFTWARE,
state=resources.ImportJob.ImportJobState.PENDING_GENERATION,
)
response = client.create_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateImportJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.ImportJob)
assert response.name == "name_value"
assert (
response.import_method
== resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert response.state == resources.ImportJob.ImportJobState.PENDING_GENERATION
def test_create_import_job_from_dict():
test_create_import_job(request_type=dict)
@pytest.mark.asyncio
async def test_create_import_job_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.CreateImportJobRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_import_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.ImportJob(
name="name_value",
import_method=resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256,
protection_level=resources.ProtectionLevel.SOFTWARE,
state=resources.ImportJob.ImportJobState.PENDING_GENERATION,
)
)
response = await client.create_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.ImportJob)
assert response.name == "name_value"
assert (
response.import_method
== resources.ImportJob.ImportMethod.RSA_OAEP_3072_SHA1_AES_256
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert response.state == resources.ImportJob.ImportJobState.PENDING_GENERATION
def test_create_import_job_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateImportJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_import_job), "__call__"
) as call:
call.return_value = resources.ImportJob()
client.create_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_import_job_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateImportJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_import_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.ImportJob())
await client.create_import_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_import_job_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.create_import_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.ImportJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_import_job(
parent="parent_value",
import_job_id="import_job_id_value",
import_job=resources.ImportJob(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].import_job_id == "import_job_id_value"
assert args[0].import_job == resources.ImportJob(name="name_value")
def test_create_import_job_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_import_job(
service.CreateImportJobRequest(),
parent="parent_value",
import_job_id="import_job_id_value",
import_job=resources.ImportJob(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_import_job_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.create_import_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.ImportJob())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_import_job(
parent="parent_value",
import_job_id="import_job_id_value",
import_job=resources.ImportJob(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].parent == "parent_value"
assert args[0].import_job_id == "import_job_id_value"
assert args[0].import_job == resources.ImportJob(name="name_value")
@pytest.mark.asyncio
async def test_create_import_job_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_import_job(
service.CreateImportJobRequest(),
parent="parent_value",
import_job_id="import_job_id_value",
import_job=resources.ImportJob(name="name_value"),
)
def test_update_crypto_key(
transport: str = "grpc", request_type=service.UpdateCryptoKeyRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
rotation_period=duration.Duration(seconds=751),
)
response = client.update_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateCryptoKeyRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
def test_update_crypto_key_from_dict():
test_update_crypto_key(request_type=dict)
@pytest.mark.asyncio
async def test_update_crypto_key_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.UpdateCryptoKeyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
)
)
response = await client.update_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
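# For Update requests the routing header key is the nested field path
# ("crypto_key.name" rather than "parent"), as the assertions below show.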
def test_update_crypto_key_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateCryptoKeyRequest()
request.crypto_key.name = "crypto_key.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key), "__call__"
) as call:
call.return_value = resources.CryptoKey()
client.update_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "crypto_key.name=crypto_key.name/value",) in kw[
"metadata"
]
@pytest.mark.asyncio
async def test_update_crypto_key_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateCryptoKeyRequest()
request.crypto_key.name = "crypto_key.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
await client.update_crypto_key(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "crypto_key.name=crypto_key.name/value",) in kw[
"metadata"
]
def test_update_crypto_key_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_crypto_key(
crypto_key=resources.CryptoKey(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].crypto_key == resources.CryptoKey(name="name_value")
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
def test_update_crypto_key_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_crypto_key(
service.UpdateCryptoKeyRequest(),
crypto_key=resources.CryptoKey(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
@pytest.mark.asyncio
async def test_update_crypto_key_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_crypto_key(
crypto_key=resources.CryptoKey(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].crypto_key == resources.CryptoKey(name="name_value")
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
@pytest.mark.asyncio
async def test_update_crypto_key_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_crypto_key(
service.UpdateCryptoKeyRequest(),
crypto_key=resources.CryptoKey(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
def test_update_crypto_key_version(
transport: str = "grpc", request_type=service.UpdateCryptoKeyVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
response = client.update_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateCryptoKeyVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_update_crypto_key_version_from_dict():
test_update_crypto_key_version(request_type=dict)
@pytest.mark.asyncio
async def test_update_crypto_key_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.UpdateCryptoKeyVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
)
response = await client.update_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_update_crypto_key_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateCryptoKeyVersionRequest()
request.crypto_key_version.name = "crypto_key_version.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key_version), "__call__"
) as call:
call.return_value = resources.CryptoKeyVersion()
client.update_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"crypto_key_version.name=crypto_key_version.name/value",
) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_crypto_key_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateCryptoKeyVersionRequest()
request.crypto_key_version.name = "crypto_key_version.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
await client.update_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"crypto_key_version.name=crypto_key_version.name/value",
) in kw["metadata"]
def test_update_crypto_key_version_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_crypto_key_version(
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].crypto_key_version == resources.CryptoKeyVersion(
name="name_value"
)
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
def test_update_crypto_key_version_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_crypto_key_version(
service.UpdateCryptoKeyVersionRequest(),
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
@pytest.mark.asyncio
async def test_update_crypto_key_version_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_crypto_key_version(
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].crypto_key_version == resources.CryptoKeyVersion(
name="name_value"
)
assert args[0].update_mask == field_mask.FieldMask(paths=["paths_value"])
@pytest.mark.asyncio
async def test_update_crypto_key_version_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_crypto_key_version(
service.UpdateCryptoKeyVersionRequest(),
crypto_key_version=resources.CryptoKeyVersion(name="name_value"),
update_mask=field_mask.FieldMask(paths=["paths_value"]),
)
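# The following tests exercise data-plane RPCs such as Encrypt, Decrypt, and
# AsymmetricSign; their responses carry byte payloads and CRC32C verification
# flags that are asserted directly.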
def test_encrypt(transport: str = "grpc", request_type=service.EncryptRequest):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.encrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.EncryptResponse(
name="name_value",
ciphertext=b"ciphertext_blob",
verified_plaintext_crc32c=True,
verified_additional_authenticated_data_crc32c=True,
)
response = client.encrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.EncryptRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, service.EncryptResponse)
assert response.name == "name_value"
assert response.ciphertext == b"ciphertext_blob"
assert response.verified_plaintext_crc32c is True
assert response.verified_additional_authenticated_data_crc32c is True
def test_encrypt_from_dict():
test_encrypt(request_type=dict)
@pytest.mark.asyncio
async def test_encrypt_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.EncryptRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._client._transport.encrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.EncryptResponse(
name="name_value",
ciphertext=b"ciphertext_blob",
verified_plaintext_crc32c=True,
verified_additional_authenticated_data_crc32c=True,
)
)
response = await client.encrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, service.EncryptResponse)
assert response.name == "name_value"
assert response.ciphertext == b"ciphertext_blob"
assert response.verified_plaintext_crc32c is True
assert response.verified_additional_authenticated_data_crc32c is True
def test_encrypt_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.EncryptRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.encrypt), "__call__") as call:
call.return_value = service.EncryptResponse()
client.encrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_encrypt_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.EncryptRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._client._transport.encrypt), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.EncryptResponse()
)
await client.encrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_encrypt_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.encrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.EncryptResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.encrypt(
name="name_value", plaintext=b"plaintext_blob",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].plaintext == b"plaintext_blob"
def test_encrypt_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.encrypt(
service.EncryptRequest(), name="name_value", plaintext=b"plaintext_blob",
)
@pytest.mark.asyncio
async def test_encrypt_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._client._transport.encrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.EncryptResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.encrypt(name="name_value", plaintext=b"plaintext_blob",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].plaintext == b"plaintext_blob"
@pytest.mark.asyncio
async def test_encrypt_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.encrypt(
service.EncryptRequest(), name="name_value", plaintext=b"plaintext_blob",
)
def test_decrypt(transport: str = "grpc", request_type=service.DecryptRequest):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.decrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.DecryptResponse(plaintext=b"plaintext_blob",)
response = client.decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DecryptRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, service.DecryptResponse)
assert response.plaintext == b"plaintext_blob"
def test_decrypt_from_dict():
test_decrypt(request_type=dict)
@pytest.mark.asyncio
async def test_decrypt_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.DecryptRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._client._transport.decrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.DecryptResponse(plaintext=b"plaintext_blob",)
)
response = await client.decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, service.DecryptResponse)
assert response.plaintext == b"plaintext_blob"
def test_decrypt_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DecryptRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.decrypt), "__call__") as call:
call.return_value = service.DecryptResponse()
client.decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_decrypt_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DecryptRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._client._transport.decrypt), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.DecryptResponse()
)
await client.decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_decrypt_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.decrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.DecryptResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.decrypt(
name="name_value", ciphertext=b"ciphertext_blob",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].ciphertext == b"ciphertext_blob"
def test_decrypt_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.decrypt(
service.DecryptRequest(), name="name_value", ciphertext=b"ciphertext_blob",
)
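# The "flattened" tests exercise the convenience signature that assembles the
# request object from keyword arguments. As a rough sketch (placeholder
# values, not real resource names), the following two calls are expected to
# build the same request, while mixing both styles raises ValueError:
#
#     client.decrypt(name="name_value", ciphertext=b"ciphertext_blob")
#     client.decrypt(
#         service.DecryptRequest(name="name_value", ciphertext=b"ciphertext_blob")
#     )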
@pytest.mark.asyncio
async def test_decrypt_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._client._transport.decrypt), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.DecryptResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.decrypt(
name="name_value", ciphertext=b"ciphertext_blob",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].ciphertext == b"ciphertext_blob"
@pytest.mark.asyncio
async def test_decrypt_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.decrypt(
service.DecryptRequest(), name="name_value", ciphertext=b"ciphertext_blob",
)
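# The AsymmetricSign tests below follow the same pattern as the Decrypt tests
# above (sync and async unary calls, routing-header propagation, flattened
# arguments), applied to the AsymmetricSign RPC and its signature response.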
def test_asymmetric_sign(
transport: str = "grpc", request_type=service.AsymmetricSignRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.asymmetric_sign), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.AsymmetricSignResponse(
signature=b"signature_blob", verified_digest_crc32c=True, name="name_value",
)
response = client.asymmetric_sign(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.AsymmetricSignRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, service.AsymmetricSignResponse)
assert response.signature == b"signature_blob"
assert response.verified_digest_crc32c is True
assert response.name == "name_value"
def test_asymmetric_sign_from_dict():
test_asymmetric_sign(request_type=dict)
@pytest.mark.asyncio
async def test_asymmetric_sign_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.AsymmetricSignRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.asymmetric_sign), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.AsymmetricSignResponse(
signature=b"signature_blob",
verified_digest_crc32c=True,
name="name_value",
)
)
response = await client.asymmetric_sign(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, service.AsymmetricSignResponse)
assert response.signature == b"signature_blob"
assert response.verified_digest_crc32c is True
assert response.name == "name_value"
def test_asymmetric_sign_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.AsymmetricSignRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.asymmetric_sign), "__call__") as call:
call.return_value = service.AsymmetricSignResponse()
client.asymmetric_sign(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_asymmetric_sign_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.AsymmetricSignRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.asymmetric_sign), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.AsymmetricSignResponse()
)
await client.asymmetric_sign(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_asymmetric_sign_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.asymmetric_sign), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.AsymmetricSignResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.asymmetric_sign(
name="name_value", digest=service.Digest(sha256=b"sha256_blob"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].digest == service.Digest(sha256=b"sha256_blob")
def test_asymmetric_sign_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.asymmetric_sign(
service.AsymmetricSignRequest(),
name="name_value",
digest=service.Digest(sha256=b"sha256_blob"),
)
@pytest.mark.asyncio
async def test_asymmetric_sign_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.asymmetric_sign), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.AsymmetricSignResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.asymmetric_sign(
name="name_value", digest=service.Digest(sha256=b"sha256_blob"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].digest == service.Digest(sha256=b"sha256_blob")
@pytest.mark.asyncio
async def test_asymmetric_sign_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.asymmetric_sign(
service.AsymmetricSignRequest(),
name="name_value",
digest=service.Digest(sha256=b"sha256_blob"),
)
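# The AsymmetricDecrypt tests mirror the AsymmetricSign tests above, checking
# the returned plaintext and the verified_ciphertext_crc32c flag for the
# AsymmetricDecrypt RPC.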
def test_asymmetric_decrypt(
transport: str = "grpc", request_type=service.AsymmetricDecryptRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.asymmetric_decrypt), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.AsymmetricDecryptResponse(
plaintext=b"plaintext_blob", verified_ciphertext_crc32c=True,
)
response = client.asymmetric_decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.AsymmetricDecryptRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, service.AsymmetricDecryptResponse)
assert response.plaintext == b"plaintext_blob"
assert response.verified_ciphertext_crc32c is True
def test_asymmetric_decrypt_from_dict():
test_asymmetric_decrypt(request_type=dict)
@pytest.mark.asyncio
async def test_asymmetric_decrypt_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.AsymmetricDecryptRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.asymmetric_decrypt), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.AsymmetricDecryptResponse(
plaintext=b"plaintext_blob", verified_ciphertext_crc32c=True,
)
)
response = await client.asymmetric_decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, service.AsymmetricDecryptResponse)
assert response.plaintext == b"plaintext_blob"
assert response.verified_ciphertext_crc32c is True
def test_asymmetric_decrypt_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.AsymmetricDecryptRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.asymmetric_decrypt), "__call__"
) as call:
call.return_value = service.AsymmetricDecryptResponse()
client.asymmetric_decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_asymmetric_decrypt_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.AsymmetricDecryptRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.asymmetric_decrypt), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.AsymmetricDecryptResponse()
)
await client.asymmetric_decrypt(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_asymmetric_decrypt_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.asymmetric_decrypt), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.AsymmetricDecryptResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.asymmetric_decrypt(
name="name_value", ciphertext=b"ciphertext_blob",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].ciphertext == b"ciphertext_blob"
def test_asymmetric_decrypt_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.asymmetric_decrypt(
service.AsymmetricDecryptRequest(),
name="name_value",
ciphertext=b"ciphertext_blob",
)
@pytest.mark.asyncio
async def test_asymmetric_decrypt_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.asymmetric_decrypt), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.AsymmetricDecryptResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.asymmetric_decrypt(
name="name_value", ciphertext=b"ciphertext_blob",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].ciphertext == b"ciphertext_blob"
@pytest.mark.asyncio
async def test_asymmetric_decrypt_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.asymmetric_decrypt(
service.AsymmetricDecryptRequest(),
name="name_value",
ciphertext=b"ciphertext_blob",
)
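# UpdateCryptoKeyPrimaryVersion returns the parent CryptoKey rather than a
# CryptoKeyVersion, so the tests below assert on CryptoKey fields (name and
# purpose) and on the flattened crypto_key_version_id argument.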
def test_update_crypto_key_primary_version(
transport: str = "grpc", request_type=service.UpdateCryptoKeyPrimaryVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key_primary_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
rotation_period=duration.Duration(seconds=751),
)
response = client.update_crypto_key_primary_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateCryptoKeyPrimaryVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
def test_update_crypto_key_primary_version_from_dict():
test_update_crypto_key_primary_version(request_type=dict)
@pytest.mark.asyncio
async def test_update_crypto_key_primary_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.UpdateCryptoKeyPrimaryVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key_primary_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKey(
name="name_value",
purpose=resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
)
)
response = await client.update_crypto_key_primary_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKey)
assert response.name == "name_value"
assert response.purpose == resources.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
def test_update_crypto_key_primary_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateCryptoKeyPrimaryVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key_primary_version), "__call__"
) as call:
call.return_value = resources.CryptoKey()
client.update_crypto_key_primary_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_crypto_key_primary_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateCryptoKeyPrimaryVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key_primary_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
await client.update_crypto_key_primary_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_update_crypto_key_primary_version_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.update_crypto_key_primary_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKey()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_crypto_key_primary_version(
name="name_value", crypto_key_version_id="crypto_key_version_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].crypto_key_version_id == "crypto_key_version_id_value"
def test_update_crypto_key_primary_version_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_crypto_key_primary_version(
service.UpdateCryptoKeyPrimaryVersionRequest(),
name="name_value",
crypto_key_version_id="crypto_key_version_id_value",
)
@pytest.mark.asyncio
async def test_update_crypto_key_primary_version_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.update_crypto_key_primary_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(resources.CryptoKey())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_crypto_key_primary_version(
name="name_value", crypto_key_version_id="crypto_key_version_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
assert args[0].crypto_key_version_id == "crypto_key_version_id_value"
@pytest.mark.asyncio
async def test_update_crypto_key_primary_version_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_crypto_key_primary_version(
service.UpdateCryptoKeyPrimaryVersionRequest(),
name="name_value",
crypto_key_version_id="crypto_key_version_id_value",
)
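# DestroyCryptoKeyVersion (and RestoreCryptoKeyVersion further below) returns
# a CryptoKeyVersion, so these tests assert the full set of version fields:
# state, protection level, algorithm, and import metadata.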
def test_destroy_crypto_key_version(
transport: str = "grpc", request_type=service.DestroyCryptoKeyVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.destroy_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
response = client.destroy_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DestroyCryptoKeyVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_destroy_crypto_key_version_from_dict():
test_destroy_crypto_key_version(request_type=dict)
@pytest.mark.asyncio
async def test_destroy_crypto_key_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.DestroyCryptoKeyVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.destroy_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
)
response = await client.destroy_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_destroy_crypto_key_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DestroyCryptoKeyVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.destroy_crypto_key_version), "__call__"
) as call:
call.return_value = resources.CryptoKeyVersion()
client.destroy_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_destroy_crypto_key_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DestroyCryptoKeyVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.destroy_crypto_key_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
await client.destroy_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_destroy_crypto_key_version_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.destroy_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.destroy_crypto_key_version(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_destroy_crypto_key_version_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.destroy_crypto_key_version(
service.DestroyCryptoKeyVersionRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_destroy_crypto_key_version_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.destroy_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.destroy_crypto_key_version(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_destroy_crypto_key_version_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.destroy_crypto_key_version(
service.DestroyCryptoKeyVersionRequest(), name="name_value",
)
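# The RestoreCryptoKeyVersion tests are structurally identical to the
# DestroyCryptoKeyVersion tests above, exercising the restore RPC instead.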
def test_restore_crypto_key_version(
transport: str = "grpc", request_type=service.RestoreCryptoKeyVersionRequest
):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.restore_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
response = client.restore_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.RestoreCryptoKeyVersionRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_restore_crypto_key_version_from_dict():
test_restore_crypto_key_version(request_type=dict)
@pytest.mark.asyncio
async def test_restore_crypto_key_version_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = service.RestoreCryptoKeyVersionRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.restore_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion(
name="name_value",
state=resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION,
protection_level=resources.ProtectionLevel.SOFTWARE,
algorithm=resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
import_job="import_job_value",
import_failure_reason="import_failure_reason_value",
)
)
response = await client.restore_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, resources.CryptoKeyVersion)
assert response.name == "name_value"
assert (
response.state
== resources.CryptoKeyVersion.CryptoKeyVersionState.PENDING_GENERATION
)
assert response.protection_level == resources.ProtectionLevel.SOFTWARE
assert (
response.algorithm
== resources.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
assert response.import_job == "import_job_value"
assert response.import_failure_reason == "import_failure_reason_value"
def test_restore_crypto_key_version_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.RestoreCryptoKeyVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.restore_crypto_key_version), "__call__"
) as call:
call.return_value = resources.CryptoKeyVersion()
client.restore_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_restore_crypto_key_version_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.RestoreCryptoKeyVersionRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.restore_crypto_key_version), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
await client.restore_crypto_key_version(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_restore_crypto_key_version_flattened():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.restore_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = resources.CryptoKeyVersion()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.restore_crypto_key_version(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
def test_restore_crypto_key_version_flattened_error():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.restore_crypto_key_version(
service.RestoreCryptoKeyVersionRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_restore_crypto_key_version_flattened_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.restore_crypto_key_version), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
resources.CryptoKeyVersion()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.restore_crypto_key_version(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0].name == "name_value"
@pytest.mark.asyncio
async def test_restore_crypto_key_version_flattened_error_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.restore_crypto_key_version(
service.RestoreCryptoKeyVersionRequest(), name="name_value",
)
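# Transport-level tests: clients constructed with explicit transport
# instances, default transport selection, and application default credentials
# (ADC) handling.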
def test_credentials_transport_error():
# It is an error to provide credentials and a transport instance.
transport = transports.KeyManagementServiceGrpcTransport(
credentials=credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# It is an error to provide a credentials file and a transport instance.
transport = transports.KeyManagementServiceGrpcTransport(
credentials=credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = KeyManagementServiceClient(
client_options={"credentials_file": "credentials.json"},
transport=transport,
)
# It is an error to provide scopes and a transport instance.
transport = transports.KeyManagementServiceGrpcTransport(
credentials=credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = KeyManagementServiceClient(
client_options={"scopes": ["1", "2"]}, transport=transport,
)
def test_transport_instance():
# A client may be instantiated with a custom transport instance.
transport = transports.KeyManagementServiceGrpcTransport(
credentials=credentials.AnonymousCredentials(),
)
client = KeyManagementServiceClient(transport=transport)
assert client._transport is transport
def test_transport_get_channel():
# A client may be instantiated with a custom transport instance.
transport = transports.KeyManagementServiceGrpcTransport(
credentials=credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
transport = transports.KeyManagementServiceGrpcAsyncIOTransport(
credentials=credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
@pytest.mark.parametrize(
"transport_class",
[
transports.KeyManagementServiceGrpcTransport,
transports.KeyManagementServiceGrpcAsyncIOTransport,
],
)
def test_transport_adc(transport_class):
# Test default credentials are used if not provided.
with mock.patch.object(auth, "default") as adc:
adc.return_value = (credentials.AnonymousCredentials(), None)
transport_class()
adc.assert_called_once()
def test_transport_grpc_default():
# A client should use the gRPC transport by default.
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
assert isinstance(client._transport, transports.KeyManagementServiceGrpcTransport,)
def test_key_management_service_base_transport_error():
# Passing both a credentials object and credentials_file should raise an error
with pytest.raises(exceptions.DuplicateCredentialArgs):
transport = transports.KeyManagementServiceTransport(
credentials=credentials.AnonymousCredentials(),
credentials_file="credentials.json",
)
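# The base transport is abstract: every RPC method on it should raise
# NotImplementedError until a concrete transport (gRPC or gRPC asyncio)
# overrides it. The test below walks the full method list to confirm that.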
def test_key_management_service_base_transport():
# Instantiate the base transport.
with mock.patch(
"google.cloud.kms_v1.services.key_management_service.transports.KeyManagementServiceTransport.__init__"
) as Transport:
Transport.return_value = None
transport = transports.KeyManagementServiceTransport(
credentials=credentials.AnonymousCredentials(),
)
# Every method on the transport should just blindly
# raise NotImplementedError.
methods = (
"list_key_rings",
"list_crypto_keys",
"list_crypto_key_versions",
"list_import_jobs",
"get_key_ring",
"get_crypto_key",
"get_crypto_key_version",
"get_public_key",
"get_import_job",
"create_key_ring",
"create_crypto_key",
"create_crypto_key_version",
"import_crypto_key_version",
"create_import_job",
"update_crypto_key",
"update_crypto_key_version",
"encrypt",
"decrypt",
"asymmetric_sign",
"asymmetric_decrypt",
"update_crypto_key_primary_version",
"destroy_crypto_key_version",
"restore_crypto_key_version",
"set_iam_policy",
"get_iam_policy",
"test_iam_permissions",
)
for method in methods:
with pytest.raises(NotImplementedError):
getattr(transport, method)(request=object())
def test_key_management_service_base_transport_with_credentials_file():
# Instantiate the base transport with a credentials file
with mock.patch.object(
auth, "load_credentials_from_file"
) as load_creds, mock.patch(
"google.cloud.kms_v1.services.key_management_service.transports.KeyManagementServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
load_creds.return_value = (credentials.AnonymousCredentials(), None)
transport = transports.KeyManagementServiceTransport(
credentials_file="credentials.json", quota_project_id="octopus",
)
load_creds.assert_called_once_with(
"credentials.json",
scopes=(
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloudkms",
),
quota_project_id="octopus",
)
def test_key_management_service_base_transport_with_adc():
# Test the default credentials are used if credentials and credentials_file are None.
with mock.patch.object(auth, "default") as adc, mock.patch(
"google.cloud.kms_v1.services.key_management_service.transports.KeyManagementServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
adc.return_value = (credentials.AnonymousCredentials(), None)
transport = transports.KeyManagementServiceTransport()
adc.assert_called_once()
def test_key_management_service_auth_adc():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(auth, "default") as adc:
adc.return_value = (credentials.AnonymousCredentials(), None)
KeyManagementServiceClient()
adc.assert_called_once_with(
scopes=(
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloudkms",
),
quota_project_id=None,
)
def test_key_management_service_transport_auth_adc():
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(auth, "default") as adc:
adc.return_value = (credentials.AnonymousCredentials(), None)
transports.KeyManagementServiceGrpcTransport(
host="squid.clam.whelk", quota_project_id="octopus"
)
adc.assert_called_once_with(
scopes=(
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloudkms",
),
quota_project_id="octopus",
)
def test_key_management_service_host_no_port():
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="cloudkms.googleapis.com"
),
)
assert client._transport._host == "cloudkms.googleapis.com:443"
def test_key_management_service_host_with_port():
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="cloudkms.googleapis.com:8000"
),
)
assert client._transport._host == "cloudkms.googleapis.com:8000"
def test_key_management_service_grpc_transport_channel():
channel = grpc.insecure_channel("http://localhost/")
# Check that channel is used if provided.
transport = transports.KeyManagementServiceGrpcTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
def test_key_management_service_grpc_asyncio_transport_channel():
channel = aio.insecure_channel("http://localhost/")
# Check that channel is used if provided.
transport = transports.KeyManagementServiceGrpcAsyncIOTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
@pytest.mark.parametrize(
"transport_class",
[
transports.KeyManagementServiceGrpcTransport,
transports.KeyManagementServiceGrpcAsyncIOTransport,
],
)
def test_key_management_service_transport_channel_mtls_with_client_cert_source(
transport_class,
):
with mock.patch(
"grpc.ssl_channel_credentials", autospec=True
) as grpc_ssl_channel_cred:
with mock.patch.object(
transport_class, "create_channel", autospec=True
) as grpc_create_channel:
mock_ssl_cred = mock.Mock()
grpc_ssl_channel_cred.return_value = mock_ssl_cred
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
cred = credentials.AnonymousCredentials()
with pytest.warns(DeprecationWarning):
with mock.patch.object(auth, "default") as adc:
adc.return_value = (cred, None)
transport = transport_class(
host="squid.clam.whelk",
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=client_cert_source_callback,
)
adc.assert_called_once()
grpc_ssl_channel_cred.assert_called_once_with(
certificate_chain=b"cert bytes", private_key=b"key bytes"
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=(
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloudkms",
),
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
)
assert transport.grpc_channel == mock_grpc_channel
@pytest.mark.parametrize(
"transport_class",
[
transports.KeyManagementServiceGrpcTransport,
transports.KeyManagementServiceGrpcAsyncIOTransport,
],
)
def test_key_management_service_transport_channel_mtls_with_adc(transport_class):
mock_ssl_cred = mock.Mock()
with mock.patch.multiple(
"google.auth.transport.grpc.SslCredentials",
__init__=mock.Mock(return_value=None),
ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
):
with mock.patch.object(
transport_class, "create_channel", autospec=True
) as grpc_create_channel:
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
mock_cred = mock.Mock()
with pytest.warns(DeprecationWarning):
transport = transport_class(
host="squid.clam.whelk",
credentials=mock_cred,
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=None,
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=mock_cred,
credentials_file=None,
scopes=(
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloudkms",
),
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
)
assert transport.grpc_channel == mock_grpc_channel
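# Resource path helper tests: each *_path() classmethod formats a fully
# qualified resource name and the matching parse_*_path() classmethod inverts
# it. As a sketch (placeholder values, not a real resource), a crypto key
# path looks like:
#
#     projects/squid/locations/clam/keyRings/whelk/cryptoKeys/octopus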
def test_crypto_key_path():
project = "squid"
location = "clam"
key_ring = "whelk"
crypto_key = "octopus"
expected = "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}".format(
project=project, location=location, key_ring=key_ring, crypto_key=crypto_key,
)
actual = KeyManagementServiceClient.crypto_key_path(
project, location, key_ring, crypto_key
)
assert expected == actual
def test_parse_crypto_key_path():
expected = {
"project": "oyster",
"location": "nudibranch",
"key_ring": "cuttlefish",
"crypto_key": "mussel",
}
path = KeyManagementServiceClient.crypto_key_path(**expected)
# Check that the path construction is reversible.
actual = KeyManagementServiceClient.parse_crypto_key_path(path)
assert expected == actual
def test_crypto_key_version_path():
project = "squid"
location = "clam"
key_ring = "whelk"
crypto_key = "octopus"
crypto_key_version = "oyster"
expected = "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}".format(
project=project,
location=location,
key_ring=key_ring,
crypto_key=crypto_key,
crypto_key_version=crypto_key_version,
)
actual = KeyManagementServiceClient.crypto_key_version_path(
project, location, key_ring, crypto_key, crypto_key_version
)
assert expected == actual
def test_parse_crypto_key_version_path():
expected = {
"project": "nudibranch",
"location": "cuttlefish",
"key_ring": "mussel",
"crypto_key": "winkle",
"crypto_key_version": "nautilus",
}
path = KeyManagementServiceClient.crypto_key_version_path(**expected)
# Check that the path construction is reversible.
actual = KeyManagementServiceClient.parse_crypto_key_version_path(path)
assert expected == actual
def test_import_job_path():
project = "squid"
location = "clam"
key_ring = "whelk"
import_job = "octopus"
expected = "projects/{project}/locations/{location}/keyRings/{key_ring}/importJobs/{import_job}".format(
project=project, location=location, key_ring=key_ring, import_job=import_job,
)
actual = KeyManagementServiceClient.import_job_path(
project, location, key_ring, import_job
)
assert expected == actual
def test_parse_import_job_path():
expected = {
"project": "oyster",
"location": "nudibranch",
"key_ring": "cuttlefish",
"import_job": "mussel",
}
path = KeyManagementServiceClient.import_job_path(**expected)
# Check that the path construction is reversible.
actual = KeyManagementServiceClient.parse_import_job_path(path)
assert expected == actual
def test_key_ring_path():
project = "squid"
location = "clam"
key_ring = "whelk"
expected = "projects/{project}/locations/{location}/keyRings/{key_ring}".format(
project=project, location=location, key_ring=key_ring,
)
actual = KeyManagementServiceClient.key_ring_path(project, location, key_ring)
assert expected == actual
def test_parse_key_ring_path():
expected = {
"project": "octopus",
"location": "oyster",
"key_ring": "nudibranch",
}
path = KeyManagementServiceClient.key_ring_path(**expected)
# Check that the path construction is reversible.
actual = KeyManagementServiceClient.parse_key_ring_path(path)
assert expected == actual
def test_client_with_default_client_info():
client_info = gapic_v1.client_info.ClientInfo()
with mock.patch.object(
transports.KeyManagementServiceTransport, "_prep_wrapped_messages"
) as prep:
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
with mock.patch.object(
transports.KeyManagementServiceTransport, "_prep_wrapped_messages"
) as prep:
transport_class = KeyManagementServiceClient.get_transport_class()
transport = transport_class(
credentials=credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
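# The remaining tests cover the IAM policy methods mixed into the KMS surface
# (set_iam_policy, get_iam_policy, and related calls), using the same
# unary-call, routing-header, and dict-request patterns as the KMS-specific
# RPCs above.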
def test_set_iam_policy(transport: str = "grpc"):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = iam_policy.SetIamPolicyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.set_iam_policy), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = policy.Policy(version=774, etag=b"etag_blob",)
response = client.set_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, policy.Policy)
assert response.version == 774
assert response.etag == b"etag_blob"
@pytest.mark.asyncio
async def test_set_iam_policy_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = iam_policy.SetIamPolicyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.set_iam_policy), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
policy.Policy(version=774, etag=b"etag_blob",)
)
response = await client.set_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, policy.Policy)
assert response.version == 774
assert response.etag == b"etag_blob"
def test_set_iam_policy_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = iam_policy.SetIamPolicyRequest()
request.resource = "resource/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.set_iam_policy), "__call__") as call:
call.return_value = policy.Policy()
client.set_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "resource=resource/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_set_iam_policy_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = iam_policy.SetIamPolicyRequest()
request.resource = "resource/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.set_iam_policy), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy.Policy())
await client.set_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "resource=resource/value",) in kw["metadata"]
def test_set_iam_policy_from_dict():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.set_iam_policy), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = policy.Policy()
response = client.set_iam_policy(
request={
"resource": "resource_value",
"policy": policy.Policy(version=774),
}
)
call.assert_called()
def test_get_iam_policy(transport: str = "grpc"):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = iam_policy.GetIamPolicyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_iam_policy), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = policy.Policy(version=774, etag=b"etag_blob",)
response = client.get_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, policy.Policy)
assert response.version == 774
assert response.etag == b"etag_blob"
@pytest.mark.asyncio
async def test_get_iam_policy_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = iam_policy.GetIamPolicyRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_iam_policy), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
policy.Policy(version=774, etag=b"etag_blob",)
)
response = await client.get_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, policy.Policy)
assert response.version == 774
assert response.etag == b"etag_blob"
def test_get_iam_policy_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = iam_policy.GetIamPolicyRequest()
request.resource = "resource/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_iam_policy), "__call__") as call:
call.return_value = policy.Policy()
client.get_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "resource=resource/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_iam_policy_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = iam_policy.GetIamPolicyRequest()
request.resource = "resource/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.get_iam_policy), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy.Policy())
await client.get_iam_policy(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "resource=resource/value",) in kw["metadata"]
def test_get_iam_policy_from_dict():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client._transport.get_iam_policy), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = policy.Policy()
response = client.get_iam_policy(
request={
"resource": "resource_value",
"options": options.GetPolicyOptions(requested_policy_version=2598),
}
)
call.assert_called()
def test_test_iam_permissions(transport: str = "grpc"):
client = KeyManagementServiceClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = iam_policy.TestIamPermissionsRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.test_iam_permissions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = iam_policy.TestIamPermissionsResponse(
permissions=["permissions_value"],
)
response = client.test_iam_permissions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, iam_policy.TestIamPermissionsResponse)
assert response.permissions == ["permissions_value"]
@pytest.mark.asyncio
async def test_test_iam_permissions_async(transport: str = "grpc_asyncio"):
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = iam_policy.TestIamPermissionsRequest()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.test_iam_permissions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
iam_policy.TestIamPermissionsResponse(permissions=["permissions_value"],)
)
response = await client.test_iam_permissions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the response is the type that we expect.
assert isinstance(response, iam_policy.TestIamPermissionsResponse)
assert response.permissions == ["permissions_value"]
def test_test_iam_permissions_field_headers():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = iam_policy.TestIamPermissionsRequest()
request.resource = "resource/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.test_iam_permissions), "__call__"
) as call:
call.return_value = iam_policy.TestIamPermissionsResponse()
client.test_iam_permissions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "resource=resource/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_test_iam_permissions_field_headers_async():
client = KeyManagementServiceAsyncClient(
credentials=credentials.AnonymousCredentials(),
)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = iam_policy.TestIamPermissionsRequest()
request.resource = "resource/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._client._transport.test_iam_permissions), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
iam_policy.TestIamPermissionsResponse()
)
await client.test_iam_permissions(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "resource=resource/value",) in kw["metadata"]
def test_test_iam_permissions_from_dict():
client = KeyManagementServiceClient(credentials=credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client._transport.test_iam_permissions), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = iam_policy.TestIamPermissionsResponse()
response = client.test_iam_permissions(
request={
"resource": "resource_value",
"permissions": ["permissions_value"],
}
)
call.assert_called()
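# The tests above all follow one pattern: patch the transport stub's
# "__call__", invoke the client method, then assert on the recorded call,
# including the "x-goog-request-params" routing header derived from the
# request's resource field. A minimal, self-contained sketch of that pattern
# follows; "FakeTransport", "FakeClient" and "call_rpc" are hypothetical
# stand-ins, not part of the generated test suite.
from unittest import mock


class FakeTransport:
    def rpc(self, request, metadata=()):  # stand-in for a gRPC stub method
        return "response"


class FakeClient:
    def __init__(self):
        self._transport = FakeTransport()

    def call_rpc(self, request):
        # Mirror the generated clients: send a routing header built from the
        # request's "resource" field alongside the request itself.
        metadata = (("x-goog-request-params", "resource=%s" % request["resource"]),)
        return self._transport.rpc(request, metadata=metadata)


def test_pattern_sketch():
    client = FakeClient()
    request = {"resource": "resource/value"}
    with mock.patch.object(type(client._transport), "rpc") as call:
        call.return_value = "mocked"
        client.call_rpc(request)
    # One underlying call, carrying the original request and the field header.
    assert len(call.mock_calls) == 1
    _, args, kw = call.mock_calls[0]
    assert args[0] == request
    assert ("x-goog-request-params", "resource=resource/value") in kw["metadata"]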
| 36.731085 | 147 | 0.686034 | 29,559 | 259,248 | 5.7959 | 0.016137 | 0.020225 | 0.026407 | 0.016344 | 0.955826 | 0.944969 | 0.931351 | 0.920553 | 0.91335 | 0.900065 | 0 | 0.004633 | 0.23312 | 259,248 | 7,057 | 148 | 36.73629 | 0.857091 | 0.216808 | 0 | 0.727805 | 0 | 0 | 0.072036 | 0.024093 | 0 | 0 | 0 | 0 | 0.152546 | 1 | 0.037404 | false | 0 | 0.071203 | 0.000451 | 0.109058 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f47f67d826364b427130aae8fd62c0f6b81fb52e | 7,694 | py | Python | Code/odooerp/odoo-8.0/openerp/addons/l10n_de/__openerp__.py | zhupangithub/WEBERP | 714512082ec5c6db07cbf6af0238ceefe2d2c1a5 | [
"MIT"
] | 1 | 2019-12-29T11:53:56.000Z | 2019-12-29T11:53:56.000Z | odoo/addons/l10n_de/__openerp__.py | tuanquanghpvn/odoo8-tutorial | 52d25f1ca5f233c431cb9d3b24b79c3b4fb5127e | [
"MIT"
] | null | null | null | odoo/addons/l10n_de/__openerp__.py | tuanquanghpvn/odoo8-tutorial | 52d25f1ca5f233c431cb9d3b24b79c3b4fb5127e | [
"MIT"
] | 3 | 2020-10-08T14:42:10.000Z | 2022-01-28T14:12:29.000Z | # -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# With the current settings the company is treated as not liable for VAT.
# This default is very easy to change and as a rule only requires an initial
# assignment of tax accounts to products and/or general ledger accounts, or
# to partners.
# The sales taxes (full rate, reduced rate and tax exempt) should be stored
# on the product master data, depending on the applicable tax rules. The
# assignment is made on the Accounting tab (category: Umsatzsteuer/sales tax).
# The input taxes (full rate, reduced rate and tax exempt) should likewise be
# stored on the product master data, depending on the applicable tax rules.
# The assignment is made on the Accounting tab (category: Vorsteuer/input tax).
# The tax assignment for imports from and exports to EU countries, as well as
# for purchases from and sales to third countries, should be stored on the
# partner (supplier/customer), depending on the supplier's or customer's
# country of origin. The assignment on the customer ranks higher than the
# assignment on products and overrides it in the individual case.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'steuerfreie Einfuhren aus der EU', i.e. tax-exempt
# imports from the EU), which is then assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g. input tax base amount, full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multi-dimensional hierarchies allow several positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the sales tax base amount (e.g. sales tax base amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (e.g. Umsatzsteuer
# 19%). Multi-dimensional hierarchies allow several positions to be
# aggregated.
# The assigned tax codes can be reviewed on each individual invoice (incoming
# and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (counter position) of the tax posting in
# the form of a mirror-image entry.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# With the current settings the company is treated as not liable for VAT,
# i.e. by default there is no assignment of products and general ledger
# accounts to tax codes.
# This default is very easy to change and as a rule only requires an initial
# assignment of tax codes to products and/or general ledger accounts, or
# to partners.
# The sales taxes (full rate, reduced rate and tax exempt) should be stored
# on the product master data, depending on the applicable tax rules. The
# assignment is made on the Accounting tab (category: Umsatzsteuer/sales tax).
# The input taxes (full rate, reduced rate and tax exempt) should likewise be
# stored on the product master data, depending on the applicable tax rules.
# The assignment is made on the Accounting tab (category: Vorsteuer/input tax).
# The tax assignment for imports from and exports to EU countries, as well as
# for purchases from and sales to third countries, should be stored on the
# partner (supplier/customer), depending on the supplier's or customer's
# country of origin. The assignment on the customer ranks higher than the
# assignment on products and overrides it in the individual case.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'steuerfreie Einfuhren aus der EU', i.e. tax-exempt
# imports from the EU), which is then assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g. input tax base amount, full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multi-dimensional hierarchies allow several positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the sales tax base amount (e.g. sales tax base amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (e.g. Umsatzsteuer
# 19%). Multi-dimensional hierarchies allow several positions to be
# aggregated.
# The assigned tax codes can be reviewed on each individual invoice (incoming
# and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (counter position) of the tax posting in
# the form of a mirror-image entry.
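# The comment block above describes the mechanism in prose: products carry
# default sales/purchase taxes, and a fiscal position assigned to the partner
# re-maps those taxes (e.g. 'Umsatzsteuer 19%' to a tax-exempt intra-EU code),
# with the partner assignment overriding the product default. The short,
# self-contained sketch below only illustrates that precedence rule; the tax
# names and the helper function are illustrative and not part of this module.
def applicable_tax(product_default_tax, fiscal_position_map):
    # A mapping defined by the partner's fiscal position wins over the
    # product default; otherwise the product default applies unchanged.
    return fiscal_position_map.get(product_default_tax, product_default_tax)

# Example: an EU partner whose fiscal position maps the domestic full-rate
# sales tax to 'steuerfreie Einfuhren aus der EU'.
eu_partner_map = {'Umsatzsteuer 19%': 'steuerfreie Einfuhren aus der EU'}
assert applicable_tax('Umsatzsteuer 19%', eu_partner_map) == 'steuerfreie Einfuhren aus der EU'
assert applicable_tax('Umsatzsteuer 7%', eu_partner_map) == 'Umsatzsteuer 7%'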
{
'name': 'Deutschland - Accounting',
'version': '1.0',
'author': 'openbig.org',
'website': 'http://www.openbig.org',
'category': 'Localization/Account Charts',
'description': """
This module contains a German chart of accounts based on the SKR03.
====================================================================
German accounting chart and localization.
""",
'depends': ['base', 'account', 'base_iban', 'base_vat', 'account_chart'],
'demo': [ ],
'data': [
'account_tax_skr03.xml',
'account_types_skr03.xml',
'account_chart_skr03.xml',
'account_chart_template_skr03.xml',
'account_tax_fiscal_position_skr03.xml',
'account_tax_skr04.xml',
'account_types_skr04.xml',
'account_chart_skr04.xml',
'account_chart_template_skr04.xml',
'account_tax_fiscal_position_skr04.xml',
'l10n_de_wizard.xml',
],
'installable': True,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| 49.961039 | 84 | 0.758773 | 889 | 7,694 | 6.532058 | 0.311586 | 0.020665 | 0.018598 | 0.022042 | 0.801447 | 0.792147 | 0.786637 | 0.774927 | 0.774927 | 0.751507 | 0 | 0.010195 | 0.145828 | 7,694 | 153 | 85 | 50.287582 | 0.873402 | 0.821809 | 0 | 0 | 0 | 0 | 0.628286 | 0.317316 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
be9df240b8c2c68914ae3ebfdb595b1e66cb7191 | 36,673 | py | Python | sdk/python/pulumi_azure/domainservices/service.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/domainservices/service.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/domainservices/service.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ServiceArgs', 'Service']
@pulumi.input_type
class ServiceArgs:
def __init__(__self__, *,
domain_name: pulumi.Input[str],
initial_replica_set: pulumi.Input['ServiceInitialReplicaSetArgs'],
resource_group_name: pulumi.Input[str],
sku: pulumi.Input[str],
filtered_sync_enabled: Optional[pulumi.Input[bool]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
notifications: Optional[pulumi.Input['ServiceNotificationsArgs']] = None,
secure_ldap: Optional[pulumi.Input['ServiceSecureLdapArgs']] = None,
security: Optional[pulumi.Input['ServiceSecurityArgs']] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a Service resource.
:param pulumi.Input[str] domain_name: The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
:param pulumi.Input['ServiceInitialReplicaSetArgs'] initial_replica_set: An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] sku: The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
:param pulumi.Input[bool] filtered_sync_enabled: Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
:param pulumi.Input[str] location: The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
:param pulumi.Input['ServiceNotificationsArgs'] notifications: A `notifications` block as defined below.
:param pulumi.Input['ServiceSecureLdapArgs'] secure_ldap: A `secure_ldap` block as defined below.
:param pulumi.Input['ServiceSecurityArgs'] security: A `security` block as defined below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
"""
pulumi.set(__self__, "domain_name", domain_name)
pulumi.set(__self__, "initial_replica_set", initial_replica_set)
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "sku", sku)
if filtered_sync_enabled is not None:
pulumi.set(__self__, "filtered_sync_enabled", filtered_sync_enabled)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if notifications is not None:
pulumi.set(__self__, "notifications", notifications)
if secure_ldap is not None:
pulumi.set(__self__, "secure_ldap", secure_ldap)
if security is not None:
pulumi.set(__self__, "security", security)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> pulumi.Input[str]:
"""
The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
"""
return pulumi.get(self, "domain_name")
@domain_name.setter
def domain_name(self, value: pulumi.Input[str]):
pulumi.set(self, "domain_name", value)
@property
@pulumi.getter(name="initialReplicaSet")
def initial_replica_set(self) -> pulumi.Input['ServiceInitialReplicaSetArgs']:
"""
An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
"""
return pulumi.get(self, "initial_replica_set")
@initial_replica_set.setter
def initial_replica_set(self, value: pulumi.Input['ServiceInitialReplicaSetArgs']):
pulumi.set(self, "initial_replica_set", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter
def sku(self) -> pulumi.Input[str]:
"""
The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
"""
return pulumi.get(self, "sku")
@sku.setter
def sku(self, value: pulumi.Input[str]):
pulumi.set(self, "sku", value)
@property
@pulumi.getter(name="filteredSyncEnabled")
def filtered_sync_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
"""
return pulumi.get(self, "filtered_sync_enabled")
@filtered_sync_enabled.setter
def filtered_sync_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "filtered_sync_enabled", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def notifications(self) -> Optional[pulumi.Input['ServiceNotificationsArgs']]:
"""
A `notifications` block as defined below.
"""
return pulumi.get(self, "notifications")
@notifications.setter
def notifications(self, value: Optional[pulumi.Input['ServiceNotificationsArgs']]):
pulumi.set(self, "notifications", value)
@property
@pulumi.getter(name="secureLdap")
def secure_ldap(self) -> Optional[pulumi.Input['ServiceSecureLdapArgs']]:
"""
A `secure_ldap` block as defined below.
"""
return pulumi.get(self, "secure_ldap")
@secure_ldap.setter
def secure_ldap(self, value: Optional[pulumi.Input['ServiceSecureLdapArgs']]):
pulumi.set(self, "secure_ldap", value)
@property
@pulumi.getter
def security(self) -> Optional[pulumi.Input['ServiceSecurityArgs']]:
"""
A `security` block as defined below.
"""
return pulumi.get(self, "security")
@security.setter
def security(self, value: Optional[pulumi.Input['ServiceSecurityArgs']]):
pulumi.set(self, "security", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
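# `ServiceArgs` above captures the four required inputs for this resource
# (domain_name, initial_replica_set, resource_group_name, sku) plus the
# optional blocks. A minimal sketch of declaring the resource in a Pulumi
# program is kept as a comment below so this module's import behaviour is
# unchanged; the resource group, domain name and subnet ID are placeholders,
# and the nested block fields shown (e.g. subnet_id, notify_dc_admins) should
# be checked against the provider documentation rather than taken as given.
#
#   import pulumi_azure as azure
#
#   example = azure.domainservices.Service(
#       "example",
#       resource_group_name="example-resources",      # placeholder
#       domain_name="widgetslogin.net",                # placeholder
#       sku="Enterprise",
#       initial_replica_set=azure.domainservices.ServiceInitialReplicaSetArgs(
#           subnet_id="<deploy-subnet-id>",            # placeholder
#       ),
#       notifications=azure.domainservices.ServiceNotificationsArgs(
#           notify_dc_admins=True,
#           notify_global_admins=True,
#       ),
#   )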
@pulumi.input_type
class _ServiceState:
def __init__(__self__, *,
deployment_id: Optional[pulumi.Input[str]] = None,
domain_name: Optional[pulumi.Input[str]] = None,
filtered_sync_enabled: Optional[pulumi.Input[bool]] = None,
initial_replica_set: Optional[pulumi.Input['ServiceInitialReplicaSetArgs']] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
notifications: Optional[pulumi.Input['ServiceNotificationsArgs']] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
resource_id: Optional[pulumi.Input[str]] = None,
secure_ldap: Optional[pulumi.Input['ServiceSecureLdapArgs']] = None,
security: Optional[pulumi.Input['ServiceSecurityArgs']] = None,
sku: Optional[pulumi.Input[str]] = None,
sync_owner: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tenant_id: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[int]] = None):
"""
Input properties used for looking up and filtering Service resources.
:param pulumi.Input[str] deployment_id: A unique ID for the managed domain deployment.
:param pulumi.Input[str] domain_name: The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
:param pulumi.Input[bool] filtered_sync_enabled: Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
:param pulumi.Input['ServiceInitialReplicaSetArgs'] initial_replica_set: An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
:param pulumi.Input[str] location: The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
:param pulumi.Input['ServiceNotificationsArgs'] notifications: A `notifications` block as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_id: The Azure resource ID for the domain service.
:param pulumi.Input['ServiceSecureLdapArgs'] secure_ldap: A `secure_ldap` block as defined below.
:param pulumi.Input['ServiceSecurityArgs'] security: A `security` block as defined below.
:param pulumi.Input[str] sku: The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
"""
if deployment_id is not None:
pulumi.set(__self__, "deployment_id", deployment_id)
if domain_name is not None:
pulumi.set(__self__, "domain_name", domain_name)
if filtered_sync_enabled is not None:
pulumi.set(__self__, "filtered_sync_enabled", filtered_sync_enabled)
if initial_replica_set is not None:
pulumi.set(__self__, "initial_replica_set", initial_replica_set)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if notifications is not None:
pulumi.set(__self__, "notifications", notifications)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if resource_id is not None:
pulumi.set(__self__, "resource_id", resource_id)
if secure_ldap is not None:
pulumi.set(__self__, "secure_ldap", secure_ldap)
if security is not None:
pulumi.set(__self__, "security", security)
if sku is not None:
pulumi.set(__self__, "sku", sku)
if sync_owner is not None:
pulumi.set(__self__, "sync_owner", sync_owner)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if tenant_id is not None:
pulumi.set(__self__, "tenant_id", tenant_id)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="deploymentId")
def deployment_id(self) -> Optional[pulumi.Input[str]]:
"""
A unique ID for the managed domain deployment.
"""
return pulumi.get(self, "deployment_id")
@deployment_id.setter
def deployment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "deployment_id", value)
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> Optional[pulumi.Input[str]]:
"""
The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
"""
return pulumi.get(self, "domain_name")
@domain_name.setter
def domain_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "domain_name", value)
@property
@pulumi.getter(name="filteredSyncEnabled")
def filtered_sync_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
"""
return pulumi.get(self, "filtered_sync_enabled")
@filtered_sync_enabled.setter
def filtered_sync_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "filtered_sync_enabled", value)
@property
@pulumi.getter(name="initialReplicaSet")
def initial_replica_set(self) -> Optional[pulumi.Input['ServiceInitialReplicaSetArgs']]:
"""
An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
"""
return pulumi.get(self, "initial_replica_set")
@initial_replica_set.setter
def initial_replica_set(self, value: Optional[pulumi.Input['ServiceInitialReplicaSetArgs']]):
pulumi.set(self, "initial_replica_set", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def notifications(self) -> Optional[pulumi.Input['ServiceNotificationsArgs']]:
"""
A `notifications` block as defined below.
"""
return pulumi.get(self, "notifications")
@notifications.setter
def notifications(self, value: Optional[pulumi.Input['ServiceNotificationsArgs']]):
pulumi.set(self, "notifications", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> Optional[pulumi.Input[str]]:
"""
The Azure resource ID for the domain service.
"""
return pulumi.get(self, "resource_id")
@resource_id.setter
def resource_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_id", value)
@property
@pulumi.getter(name="secureLdap")
def secure_ldap(self) -> Optional[pulumi.Input['ServiceSecureLdapArgs']]:
"""
A `secure_ldap` block as defined below.
"""
return pulumi.get(self, "secure_ldap")
@secure_ldap.setter
def secure_ldap(self, value: Optional[pulumi.Input['ServiceSecureLdapArgs']]):
pulumi.set(self, "secure_ldap", value)
@property
@pulumi.getter
def security(self) -> Optional[pulumi.Input['ServiceSecurityArgs']]:
"""
A `security` block as defined below.
"""
return pulumi.get(self, "security")
@security.setter
def security(self, value: Optional[pulumi.Input['ServiceSecurityArgs']]):
pulumi.set(self, "security", value)
@property
@pulumi.getter
def sku(self) -> Optional[pulumi.Input[str]]:
"""
The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
"""
return pulumi.get(self, "sku")
@sku.setter
def sku(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sku", value)
@property
@pulumi.getter(name="syncOwner")
def sync_owner(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "sync_owner")
@sync_owner.setter
def sync_owner(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "sync_owner", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="tenantId")
def tenant_id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "tenant_id")
@tenant_id.setter
def tenant_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tenant_id", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "version", value)
class Service(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
domain_name: Optional[pulumi.Input[str]] = None,
filtered_sync_enabled: Optional[pulumi.Input[bool]] = None,
initial_replica_set: Optional[pulumi.Input[pulumi.InputType['ServiceInitialReplicaSetArgs']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
notifications: Optional[pulumi.Input[pulumi.InputType['ServiceNotificationsArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
secure_ldap: Optional[pulumi.Input[pulumi.InputType['ServiceSecureLdapArgs']]] = None,
security: Optional[pulumi.Input[pulumi.InputType['ServiceSecurityArgs']]] = None,
sku: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
"""
## Import
Domain Services can be imported using the resource ID, together with the Replica Set ID that you wish to designate as the initial replica set, e.g.
```sh
$ pulumi import azure:domainservices/service:Service example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.AAD/domainServices/instance1/initialReplicaSetId/00000000-0000-0000-0000-000000000000
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] domain_name: The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
:param pulumi.Input[bool] filtered_sync_enabled: Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
:param pulumi.Input[pulumi.InputType['ServiceInitialReplicaSetArgs']] initial_replica_set: An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
:param pulumi.Input[str] location: The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['ServiceNotificationsArgs']] notifications: A `notifications` block as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['ServiceSecureLdapArgs']] secure_ldap: A `secure_ldap` block as defined below.
:param pulumi.Input[pulumi.InputType['ServiceSecurityArgs']] security: A `security` block as defined below.
:param pulumi.Input[str] sku: The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ServiceArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
## Import
Domain Services can be imported using the resource ID, together with the Replica Set ID that you wish to designate as the initial replica set, e.g.
```sh
$ pulumi import azure:domainservices/service:Service example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.AAD/domainServices/instance1/initialReplicaSetId/00000000-0000-0000-0000-000000000000
```
:param str resource_name: The name of the resource.
:param ServiceArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ServiceArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
domain_name: Optional[pulumi.Input[str]] = None,
filtered_sync_enabled: Optional[pulumi.Input[bool]] = None,
initial_replica_set: Optional[pulumi.Input[pulumi.InputType['ServiceInitialReplicaSetArgs']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
notifications: Optional[pulumi.Input[pulumi.InputType['ServiceNotificationsArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
secure_ldap: Optional[pulumi.Input[pulumi.InputType['ServiceSecureLdapArgs']]] = None,
security: Optional[pulumi.Input[pulumi.InputType['ServiceSecurityArgs']]] = None,
sku: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ServiceArgs.__new__(ServiceArgs)
if domain_name is None and not opts.urn:
raise TypeError("Missing required property 'domain_name'")
__props__.__dict__["domain_name"] = domain_name
__props__.__dict__["filtered_sync_enabled"] = filtered_sync_enabled
if initial_replica_set is None and not opts.urn:
raise TypeError("Missing required property 'initial_replica_set'")
__props__.__dict__["initial_replica_set"] = initial_replica_set
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
__props__.__dict__["notifications"] = notifications
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["secure_ldap"] = secure_ldap
__props__.__dict__["security"] = security
if sku is None and not opts.urn:
raise TypeError("Missing required property 'sku'")
__props__.__dict__["sku"] = sku
__props__.__dict__["tags"] = tags
__props__.__dict__["deployment_id"] = None
__props__.__dict__["resource_id"] = None
__props__.__dict__["sync_owner"] = None
__props__.__dict__["tenant_id"] = None
__props__.__dict__["version"] = None
super(Service, __self__).__init__(
'azure:domainservices/service:Service',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
deployment_id: Optional[pulumi.Input[str]] = None,
domain_name: Optional[pulumi.Input[str]] = None,
filtered_sync_enabled: Optional[pulumi.Input[bool]] = None,
initial_replica_set: Optional[pulumi.Input[pulumi.InputType['ServiceInitialReplicaSetArgs']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
notifications: Optional[pulumi.Input[pulumi.InputType['ServiceNotificationsArgs']]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
resource_id: Optional[pulumi.Input[str]] = None,
secure_ldap: Optional[pulumi.Input[pulumi.InputType['ServiceSecureLdapArgs']]] = None,
security: Optional[pulumi.Input[pulumi.InputType['ServiceSecurityArgs']]] = None,
sku: Optional[pulumi.Input[str]] = None,
sync_owner: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tenant_id: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[int]] = None) -> 'Service':
"""
Get an existing Service resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] deployment_id: A unique ID for the managed domain deployment.
:param pulumi.Input[str] domain_name: The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
:param pulumi.Input[bool] filtered_sync_enabled: Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
:param pulumi.Input[pulumi.InputType['ServiceInitialReplicaSetArgs']] initial_replica_set: An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
:param pulumi.Input[str] location: The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
:param pulumi.Input[pulumi.InputType['ServiceNotificationsArgs']] notifications: A `notifications` block as defined below.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
:param pulumi.Input[str] resource_id: The Azure resource ID for the domain service.
:param pulumi.Input[pulumi.InputType['ServiceSecureLdapArgs']] secure_ldap: A `secure_ldap` block as defined below.
:param pulumi.Input[pulumi.InputType['ServiceSecurityArgs']] security: A `security` block as defined below.
:param pulumi.Input[str] sku: The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ServiceState.__new__(_ServiceState)
__props__.__dict__["deployment_id"] = deployment_id
__props__.__dict__["domain_name"] = domain_name
__props__.__dict__["filtered_sync_enabled"] = filtered_sync_enabled
__props__.__dict__["initial_replica_set"] = initial_replica_set
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
__props__.__dict__["notifications"] = notifications
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["resource_id"] = resource_id
__props__.__dict__["secure_ldap"] = secure_ldap
__props__.__dict__["security"] = security
__props__.__dict__["sku"] = sku
__props__.__dict__["sync_owner"] = sync_owner
__props__.__dict__["tags"] = tags
__props__.__dict__["tenant_id"] = tenant_id
__props__.__dict__["version"] = version
return Service(resource_name, opts=opts, __props__=__props__)
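    # The static `get` above adopts an existing Domain Service into a Pulumi
    # program by its Azure resource ID instead of creating a new one. A
    # minimal usage sketch is kept as a comment here; the subscription,
    # resource group and instance names in the ID are placeholders.
    #
    #   existing = Service.get(
    #       "imported",
    #       id="/subscriptions/00000000-0000-0000-0000-000000000000"
    #          "/resourceGroups/mygroup1/providers/Microsoft.AAD"
    #          "/domainServices/instance1",
    #   )
    #   pulumi.export("deploymentId", existing.deployment_id)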
@property
@pulumi.getter(name="deploymentId")
def deployment_id(self) -> pulumi.Output[str]:
"""
A unique ID for the managed domain deployment.
"""
return pulumi.get(self, "deployment_id")
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> pulumi.Output[str]:
"""
The Active Directory domain to use. See [official documentation](https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance#create-a-managed-domain) for constraints and recommendations.
"""
return pulumi.get(self, "domain_name")
@property
@pulumi.getter(name="filteredSyncEnabled")
def filtered_sync_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
Whether to enable group-based filtered sync (also called scoped synchronisation). Defaults to `false`.
"""
return pulumi.get(self, "filtered_sync_enabled")
@property
@pulumi.getter(name="initialReplicaSet")
def initial_replica_set(self) -> pulumi.Output['outputs.ServiceInitialReplicaSet']:
"""
An `initial_replica_set` block as defined below. The initial replica set inherits the same location as the Domain Service resource.
"""
return pulumi.get(self, "initial_replica_set")
@property
@pulumi.getter
def location(self) -> pulumi.Output[str]:
"""
The Azure location where the Domain Service exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The display name for your managed Active Directory Domain Service resource. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def notifications(self) -> pulumi.Output['outputs.ServiceNotifications']:
"""
A `notifications` block as defined below.
"""
return pulumi.get(self, "notifications")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the Resource Group in which the Domain Service should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> pulumi.Output[str]:
"""
The Azure resource ID for the domain service.
"""
return pulumi.get(self, "resource_id")
@property
@pulumi.getter(name="secureLdap")
def secure_ldap(self) -> pulumi.Output['outputs.ServiceSecureLdap']:
"""
A `secure_ldap` block as defined below.
"""
return pulumi.get(self, "secure_ldap")
@property
@pulumi.getter
def security(self) -> pulumi.Output['outputs.ServiceSecurity']:
"""
A `security` block as defined below.
"""
return pulumi.get(self, "security")
@property
@pulumi.getter
def sku(self) -> pulumi.Output[str]:
"""
The SKU to use when provisioning the Domain Service resource. One of `Standard`, `Enterprise` or `Premium`.
"""
return pulumi.get(self, "sku")
@property
@pulumi.getter(name="syncOwner")
def sync_owner(self) -> pulumi.Output[str]:
return pulumi.get(self, "sync_owner")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="tenantId")
def tenant_id(self) -> pulumi.Output[str]:
return pulumi.get(self, "tenant_id")
@property
@pulumi.getter
def version(self) -> pulumi.Output[int]:
return pulumi.get(self, "version")
| 48.767287 | 269 | 0.669266 | 4,295 | 36,673 | 5.517113 | 0.054249 | 0.085415 | 0.085795 | 0.048278 | 0.903233 | 0.888082 | 0.860863 | 0.84394 | 0.837146 | 0.80406 | 0 | 0.00468 | 0.225152 | 36,673 | 751 | 270 | 48.832224 | 0.829216 | 0.332261 | 0 | 0.715203 | 1 | 0 | 0.13062 | 0.044873 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164882 | false | 0.002141 | 0.014989 | 0.012848 | 0.280514 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
beae5f9c69a20801556f667d14c9d2033efcc729 | 119 | py | Python | crawel_utils/__init__.py | VIMerhan/CrawelUtils | aad711285387209be259c35ae8aece5918ac132d | [
"Apache-2.0"
] | 1 | 2018-09-18T05:02:17.000Z | 2018-09-18T05:02:17.000Z | crawel_utils/__init__.py | VIMerhan/CrawelUtils | aad711285387209be259c35ae8aece5918ac132d | [
"Apache-2.0"
] | null | null | null | crawel_utils/__init__.py | VIMerhan/CrawelUtils | aad711285387209be259c35ae8aece5918ac132d | [
"Apache-2.0"
] | null | null | null | from crawel_utils.download import Maoyan
import crawel_utils.agency as agency
from crawel_utils._version import version | 39.666667 | 41 | 0.882353 | 18 | 119 | 5.611111 | 0.5 | 0.326733 | 0.29703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092437 | 119 | 3 | 41 | 39.666667 | 0.935185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
beb2b9a392a07dfed6da66d28ffe66b1d21a296b | 3,142 | py | Python | src/Mesh/computeGeneratorsInst.cc.py | markguozhiming/spheral | bbb982102e61edb8a1d00cf780bfa571835e1b61 | [
"BSD-Source-Code",
"BSD-3-Clause-LBNL",
"FSFAP"
] | 1 | 2020-10-21T01:56:55.000Z | 2020-10-21T01:56:55.000Z | src/Mesh/computeGeneratorsInst.cc.py | markguozhiming/spheral | bbb982102e61edb8a1d00cf780bfa571835e1b61 | [
"BSD-Source-Code",
"BSD-3-Clause-LBNL",
"FSFAP"
] | null | null | null | src/Mesh/computeGeneratorsInst.cc.py | markguozhiming/spheral | bbb982102e61edb8a1d00cf780bfa571835e1b61 | [
"BSD-Source-Code",
"BSD-3-Clause-LBNL",
"FSFAP"
] | null | null | null | text = """
//------------------------------------------------------------------------------
// Explicit instantiation.
//------------------------------------------------------------------------------
#include "computeGenerators.cc"
namespace Spheral {
template void computeGenerators<Dim< %(ndim)s >,
vector<NodeList<Dim< %(ndim)s > >*>::iterator,
vector<Boundary<Dim< %(ndim)s > >*>::iterator>
(vector<NodeList<Dim< %(ndim)s > >*>::iterator nodeListBegin,
vector<NodeList<Dim< %(ndim)s > >*>::iterator nodeListEnd,
vector<Boundary<Dim< %(ndim)s > >*>::iterator boundaryBegin,
vector<Boundary<Dim< %(ndim)s > >*>::iterator boundaryEnd,
const bool meshGhostNodes,
const Dim< %(ndim)s >::Vector& xmin,
const Dim< %(ndim)s >::Vector& xmax,
vector<Dim< %(ndim)s >::Vector>& positions,
vector<Dim< %(ndim)s >::SymTensor>& Hs,
vector<unsigned>& offsets);
template void computeGenerators<Dim< %(ndim)s >,
vector<const NodeList<Dim< %(ndim)s > >*>::iterator,
vector<Boundary<Dim< %(ndim)s > >*>::iterator>
(vector<const NodeList<Dim< %(ndim)s > >*>::iterator nodeListBegin,
vector<const NodeList<Dim< %(ndim)s > >*>::iterator nodeListEnd,
vector<Boundary<Dim< %(ndim)s > >*>::iterator boundaryBegin,
vector<Boundary<Dim< %(ndim)s > >*>::iterator boundaryEnd,
const bool meshGhostNodes,
const Dim< %(ndim)s >::Vector& xmin,
const Dim< %(ndim)s >::Vector& xmax,
vector<Dim< %(ndim)s >::Vector>& positions,
vector<Dim< %(ndim)s >::SymTensor>& Hs,
vector<unsigned>& offsets);
template void computeGenerators<Dim< %(ndim)s >,
vector<const NodeList<Dim< %(ndim)s > >*>::iterator,
vector<Boundary<Dim< %(ndim)s > >*>::const_iterator>
(vector<const NodeList<Dim< %(ndim)s > >*>::iterator nodeListBegin,
vector<const NodeList<Dim< %(ndim)s > >*>::iterator nodeListEnd,
vector<Boundary<Dim< %(ndim)s > >*>::const_iterator boundaryBegin,
vector<Boundary<Dim< %(ndim)s > >*>::const_iterator boundaryEnd,
const bool meshGhostNodes,
const Dim< %(ndim)s >::Vector& xmin,
const Dim< %(ndim)s >::Vector& xmax,
vector<Dim< %(ndim)s >::Vector>& positions,
vector<Dim< %(ndim)s >::SymTensor>& Hs,
vector<unsigned>& offsets);
template void computeGenerators<Dim< %(ndim)s >,
vector<NodeList<Dim< %(ndim)s > >*>::const_iterator,
vector<Boundary<Dim< %(ndim)s > >*>::const_iterator>
(vector<NodeList<Dim< %(ndim)s > >*>::const_iterator nodeListBegin,
vector<NodeList<Dim< %(ndim)s > >*>::const_iterator nodeListEnd,
vector<Boundary<Dim< %(ndim)s > >*>::const_iterator boundaryBegin,
vector<Boundary<Dim< %(ndim)s > >*>::const_iterator boundaryEnd,
const bool meshGhostNodes,
const Dim< %(ndim)s >::Vector& xmin,
const Dim< %(ndim)s >::Vector& xmax,
vector<Dim< %(ndim)s >::Vector>& positions,
vector<Dim< %(ndim)s >::SymTensor>& Hs,
vector<unsigned>& offsets);
}
"""
| 46.895522 | 86 | 0.568746 | 327 | 3,142 | 5.437309 | 0.094801 | 0.173228 | 0.197975 | 0.125984 | 0.962317 | 0.962317 | 0.962317 | 0.896513 | 0.896513 | 0.866142 | 0 | 0 | 0.207193 | 3,142 | 66 | 87 | 47.606061 | 0.71377 | 0 | 0 | 0.8 | 0 | 0 | 0.995544 | 0.0993 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
fe78bdc492b31f6390280d57e1c04c474a884eda | 39,693 | py | Python | sdk/ml/azure-ai-ml/azure/ai/ml/_restclient/v2021_10_01/aio/operations/_online_endpoints_operations.py | dubiety/azure-sdk-for-python | 62ffa839f5d753594cf0fe63668f454a9d87a346 | [
"MIT"
] | 1 | 2022-02-01T18:50:12.000Z | 2022-02-01T18:50:12.000Z | sdk/ml/azure-ai-ml/azure/ai/ml/_restclient/v2021_10_01/aio/operations/_online_endpoints_operations.py | ellhe-blaster/azure-sdk-for-python | 82193ba5e81cc5e5e5a5239bba58abe62e86f469 | [
"MIT"
] | null | null | null | sdk/ml/azure-ai-ml/azure/ai/ml/_restclient/v2021_10_01/aio/operations/_online_endpoints_operations.py | ellhe-blaster/azure-sdk-for-python | 82193ba5e81cc5e5e5a5239bba58abe62e86f469 | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import Any, AsyncIterable, Callable, Dict, Generic, Optional, TypeVar, Union
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse
from azure.core.polling import AsyncLROPoller, AsyncNoPolling, AsyncPollingMethod
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from azure.core.tracing.decorator_async import distributed_trace_async
from azure.mgmt.core.exceptions import ARMErrorFormat
from azure.mgmt.core.polling.async_arm_polling import AsyncARMPolling
from ... import models as _models
from ..._vendor import _convert_request
from ...operations._online_endpoints_operations import build_create_or_update_request_initial, build_delete_request_initial, build_get_request, build_get_token_request, build_list_keys_request, build_list_request, build_regenerate_keys_request_initial, build_update_request_initial
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class OnlineEndpointsOperations:
"""OnlineEndpointsOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.mgmt.machinelearningservices.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace
def list(
self,
resource_group_name: str,
workspace_name: str,
name: Optional[str] = None,
count: Optional[int] = None,
compute_type: Optional[Union[str, "_models.EndpointComputeType"]] = None,
skip: Optional[str] = None,
tags: Optional[str] = None,
properties: Optional[str] = None,
order_by: Optional[Union[str, "_models.OrderString"]] = None,
**kwargs: Any
) -> AsyncIterable["_models.OnlineEndpointTrackedResourceArmPaginatedResult"]:
"""List Online Endpoints.
List Online Endpoints.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:param name: Name of the endpoint.
:type name: str
:param count: Number of endpoints to be retrieved in a page of results.
:type count: int
:param compute_type: EndpointComputeType to be filtered by.
:type compute_type: str or ~azure.mgmt.machinelearningservices.models.EndpointComputeType
:param skip: Continuation token for pagination.
:type skip: str
:param tags: A set of tags with which to filter the returned models. It is a comma separated
string of tags key or tags key=value. Example: tagKey1,tagKey2,tagKey3=value3 .
:type tags: str
:param properties: A set of properties with which to filter the returned models. It is a comma
separated string of properties key and/or properties key=value Example:
propKey1,propKey2,propKey3=value3 .
:type properties: str
:param order_by: The option to order the response.
:type order_by: str or ~azure.mgmt.machinelearningservices.models.OrderString
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either OnlineEndpointTrackedResourceArmPaginatedResult or
the result of cls(response)
:rtype:
~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.machinelearningservices.models.OnlineEndpointTrackedResourceArmPaginatedResult]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.OnlineEndpointTrackedResourceArmPaginatedResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
def prepare_request(next_link=None):
if not next_link:
request = build_list_request(
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
name=name,
count=count,
compute_type=compute_type,
skip=skip,
tags=tags,
properties=properties,
order_by=order_by,
template_url=self.list.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
else:
request = build_list_request(
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
name=name,
count=count,
compute_type=compute_type,
skip=skip,
tags=tags,
properties=properties,
order_by=order_by,
template_url=next_link,
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
request.method = "GET"
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize("OnlineEndpointTrackedResourceArmPaginatedResult", pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints'} # type: ignore
async def _delete_initial(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> None:
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_request_initial(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self._delete_initial.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
response_headers = {}
if response.status_code == 202:
response_headers['x-ms-async-operation-timeout']=self._deserialize('duration', response.headers.get('x-ms-async-operation-timeout'))
response_headers['Location']=self._deserialize('str', response.headers.get('Location'))
response_headers['Retry-After']=self._deserialize('int', response.headers.get('Retry-After'))
if cls:
return cls(pipeline_response, None, response_headers)
_delete_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
@distributed_trace_async
async def begin_delete(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> AsyncLROPoller[None]:
"""Delete Online Endpoint (asynchronous).
Delete Online Endpoint (asynchronous).
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be AsyncARMPolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[None]
:raises: ~azure.core.exceptions.HttpResponseError
"""
polling = kwargs.pop('polling', True) # type: Union[bool, azure.core.polling.AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType[None]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._delete_initial(
endpoint_name=endpoint_name,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
def get_long_running_output(pipeline_response):
if cls:
return cls(pipeline_response, None, {})
if polling is True: polling_method = AsyncARMPolling(lro_delay, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_delete.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
@distributed_trace_async
async def get(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> "_models.OnlineEndpointData":
"""Get Online Endpoint.
Get Online Endpoint.
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: OnlineEndpointData, or the result of cls(response)
:rtype: ~azure.mgmt.machinelearningservices.models.OnlineEndpointData
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.OnlineEndpointData"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_request(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('OnlineEndpointData', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
async def _update_initial(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
body: "_models.PartialOnlineEndpointPartialTrackedResource",
**kwargs: Any
) -> Optional["_models.OnlineEndpointData"]:
cls = kwargs.pop('cls', None) # type: ClsType[Optional["_models.OnlineEndpointData"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
_json = self._serialize.body(body, 'PartialOnlineEndpointPartialTrackedResource')
request = build_update_request_initial(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self._update_initial.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = None
response_headers = {}
if response.status_code == 200:
deserialized = self._deserialize('OnlineEndpointData', pipeline_response)
if response.status_code == 202:
response_headers['x-ms-async-operation-timeout']=self._deserialize('duration', response.headers.get('x-ms-async-operation-timeout'))
response_headers['Location']=self._deserialize('str', response.headers.get('Location'))
response_headers['Retry-After']=self._deserialize('int', response.headers.get('Retry-After'))
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
_update_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
@distributed_trace_async
async def begin_update(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
body: "_models.PartialOnlineEndpointPartialTrackedResource",
**kwargs: Any
) -> AsyncLROPoller["_models.OnlineEndpointData"]:
"""Update Online Endpoint (asynchronous).
Update Online Endpoint (asynchronous).
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:param body: Online Endpoint entity to apply during operation.
:type body:
~azure.mgmt.machinelearningservices.models.PartialOnlineEndpointPartialTrackedResource
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be AsyncARMPolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either OnlineEndpointData or the result of
cls(response)
:rtype:
~azure.core.polling.AsyncLROPoller[~azure.mgmt.machinelearningservices.models.OnlineEndpointData]
:raises: ~azure.core.exceptions.HttpResponseError
"""
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
polling = kwargs.pop('polling', True) # type: Union[bool, azure.core.polling.AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType["_models.OnlineEndpointData"]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._update_initial(
endpoint_name=endpoint_name,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
body=body,
content_type=content_type,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
def get_long_running_output(pipeline_response):
response = pipeline_response.http_response
deserialized = self._deserialize('OnlineEndpointData', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
if polling is True: polling_method = AsyncARMPolling(lro_delay, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
async def _create_or_update_initial(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
body: "_models.OnlineEndpointData",
**kwargs: Any
) -> "_models.OnlineEndpointData":
cls = kwargs.pop('cls', None) # type: ClsType["_models.OnlineEndpointData"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
_json = self._serialize.body(body, 'OnlineEndpointData')
request = build_create_or_update_request_initial(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self._create_or_update_initial.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
response_headers = {}
if response.status_code == 200:
deserialized = self._deserialize('OnlineEndpointData', pipeline_response)
if response.status_code == 201:
response_headers['x-ms-async-operation-timeout']=self._deserialize('duration', response.headers.get('x-ms-async-operation-timeout'))
response_headers['Azure-AsyncOperation']=self._deserialize('str', response.headers.get('Azure-AsyncOperation'))
deserialized = self._deserialize('OnlineEndpointData', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
_create_or_update_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
@distributed_trace_async
async def begin_create_or_update(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
body: "_models.OnlineEndpointData",
**kwargs: Any
) -> AsyncLROPoller["_models.OnlineEndpointData"]:
"""Create or update Online Endpoint (asynchronous).
Create or update Online Endpoint (asynchronous).
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:param body: Online Endpoint entity to apply during operation.
:type body: ~azure.mgmt.machinelearningservices.models.OnlineEndpointData
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be AsyncARMPolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either OnlineEndpointData or the result of
cls(response)
:rtype:
~azure.core.polling.AsyncLROPoller[~azure.mgmt.machinelearningservices.models.OnlineEndpointData]
:raises: ~azure.core.exceptions.HttpResponseError
"""
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
polling = kwargs.pop('polling', True) # type: Union[bool, azure.core.polling.AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType["_models.OnlineEndpointData"]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._create_or_update_initial(
endpoint_name=endpoint_name,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
body=body,
content_type=content_type,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
def get_long_running_output(pipeline_response):
response = pipeline_response.http_response
deserialized = self._deserialize('OnlineEndpointData', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
if polling is True: polling_method = AsyncARMPolling(lro_delay, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_create_or_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}'} # type: ignore
@distributed_trace_async
async def list_keys(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> "_models.EndpointAuthKeys":
"""List EndpointAuthKeys for an Endpoint using Key-based authentication.
List EndpointAuthKeys for an Endpoint using Key-based authentication.
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: EndpointAuthKeys, or the result of cls(response)
:rtype: ~azure.mgmt.machinelearningservices.models.EndpointAuthKeys
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.EndpointAuthKeys"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_keys_request(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.list_keys.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('EndpointAuthKeys', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_keys.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}/listKeys'} # type: ignore
async def _regenerate_keys_initial(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
body: "_models.RegenerateEndpointKeysRequest",
**kwargs: Any
) -> None:
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
_json = self._serialize.body(body, 'RegenerateEndpointKeysRequest')
request = build_regenerate_keys_request_initial(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self._regenerate_keys_initial.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
response_headers = {}
if response.status_code == 202:
response_headers['Location']=self._deserialize('str', response.headers.get('Location'))
response_headers['Retry-After']=self._deserialize('int', response.headers.get('Retry-After'))
if cls:
return cls(pipeline_response, None, response_headers)
_regenerate_keys_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}/regenerateKeys'} # type: ignore
@distributed_trace_async
async def begin_regenerate_keys(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
body: "_models.RegenerateEndpointKeysRequest",
**kwargs: Any
) -> AsyncLROPoller[None]:
"""Regenerate EndpointAuthKeys for an Endpoint using Key-based authentication (asynchronous).
Regenerate EndpointAuthKeys for an Endpoint using Key-based authentication (asynchronous).
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:param body: RegenerateKeys request .
:type body: ~azure.mgmt.machinelearningservices.models.RegenerateEndpointKeysRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be AsyncARMPolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[None]
:raises: ~azure.core.exceptions.HttpResponseError
"""
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
polling = kwargs.pop('polling', True) # type: Union[bool, azure.core.polling.AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType[None]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._regenerate_keys_initial(
endpoint_name=endpoint_name,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
body=body,
content_type=content_type,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
def get_long_running_output(pipeline_response):
if cls:
return cls(pipeline_response, None, {})
if polling is True: polling_method = AsyncARMPolling(lro_delay, lro_options={'final-state-via': 'location'}, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_regenerate_keys.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}/regenerateKeys'} # type: ignore
@distributed_trace_async
async def get_token(
self,
endpoint_name: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> "_models.EndpointAuthToken":
"""Retrieve a valid AAD token for an Endpoint using AMLToken-based authentication.
Retrieve a valid AAD token for an Endpoint using AMLToken-based authentication.
:param endpoint_name: Online Endpoint name.
:type endpoint_name: str
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param workspace_name: Name of Azure Machine Learning workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: EndpointAuthToken, or the result of cls(response)
:rtype: ~azure.mgmt.machinelearningservices.models.EndpointAuthToken
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.EndpointAuthToken"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_token_request(
endpoint_name=endpoint_name,
subscription_id=self._config.subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_token.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('EndpointAuthToken', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_token.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/onlineEndpoints/{endpointName}/token'} # type: ignore
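# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the generated client above). It shows
# how an operation group like this is typically reached from an async client;
# the `client` object and its `online_endpoints` attribute are assumptions for
# illustration, while the method names and parameters come from the operations
# defined in this file.
async def _example_usage(client) -> None:
    # Long-running operations return an AsyncLROPoller; await .result() to
    # block until the service reports completion.
    poller = await client.online_endpoints.begin_delete(
        endpoint_name="my-endpoint",
        resource_group_name="my-rg",
        workspace_name="my-ws",
    )
    await poller.result()
    # list() returns an AsyncItemPaged that is consumed with `async for`.
    async for endpoint in client.online_endpoints.list("my-rg", "my-ws"):
        print(endpoint)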
| 47.197384 | 281 | 0.676391 | 4,166 | 39,693 | 6.232837 | 0.077532 | 0.031041 | 0.035354 | 0.015405 | 0.842525 | 0.832974 | 0.819726 | 0.818339 | 0.817916 | 0.808865 | 0 | 0.004574 | 0.239967 | 39,693 | 840 | 282 | 47.253571 | 0.856139 | 0.090318 | 0 | 0.744361 | 0 | 0.020677 | 0.136646 | 0.102503 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013158 | false | 0 | 0.030075 | 0 | 0.103383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fe80ad46dad826ebf418eee76af388bc71316f7e | 6,103 | py | Python | scripts/plot_optical_flow.py | tjvandal/geostationary-superslomo | 2fcacd6ab8cc01b02709b098f83c92b5b754a919 | [
"Apache-2.0"
] | 10 | 2021-08-02T15:48:51.000Z | 2022-02-05T23:52:19.000Z | scripts/plot_optical_flow.py | tjvandal/geostationary-superslomo | 2fcacd6ab8cc01b02709b098f83c92b5b754a919 | [
"Apache-2.0"
] | null | null | null | scripts/plot_optical_flow.py | tjvandal/geostationary-superslomo | 2fcacd6ab8cc01b02709b098f83c92b5b754a919 | [
"Apache-2.0"
] | 1 | 2021-08-19T19:00:09.000Z | 2021-08-19T19:00:09.000Z | import os, sys
sys.path.append(os.path.dirname(os.path.abspath(os.getcwd())))
from data import goes16s3
from tools import utils, inference_tools, plotting
from slomo import unet
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import time
import os
import seaborn as sns
import metpy
sns.set_context("paper", font_scale=1.6)
#dayofyear = 281
#year = 2017
#dayofyear = dt.datetime(year, 9, 6).timetuple().tm_yday
year = 2017
month = 9
day = 8
n_channels = 8
t = 1.0
product = 'ABI-L1b-RadC'
data_directory = '/nex/datapoolne/goes16'
#product = 'ABI-L1b-RadM'
#data_directory = '/nobackupp10/tvandal/data/goes16'
hour = 18
minute = 2
minute_delta = 15
nn_model = unet.UNetMedium
discard = 64
dayofyear = dt.datetime(year, month, day).timetuple().tm_yday
multivariate = True
if multivariate:
checkpoint = '../saved-models/1.4-unet-medium/9Min-%iChannels-MV/' % n_channels
else:
checkpoint = '../saved-models/1.4-unet-medium/9Min-%iChannels-SV/' % n_channels
noaadata = goes16s3.NOAAGOESS3(product=product, channels=range(1,n_channels+1),
save_directory=data_directory, skip_connection=True)
I0, I1 = noaadata.load_snapshots(year, dayofyear, hour, minute, minute_delta=minute_delta)
from data import goes16s3
from tools import utils, inference_tools, plotting
from slomo import unet
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import time
import os
import seaborn as sns
import metpy
sns.set_context("paper", font_scale=1.6)
#dayofyear = 281
#year = 2017
#dayofyear = dt.datetime(year, 9, 6).timetuple().tm_yday
year = 2017
month = 9
day = 8
n_channels = 8
t = 1.0
product = 'ABI-L1b-RadC'
data_directory = '/nex/datapoolne/goes16'
#product = 'ABI-L1b-RadM'
#data_directory = '/nobackupp10/tvandal/data/goes16'
hour = 18
minute = 2
minute_delta = 15
nn_model = unet.UNetMedium
discard = 64
dayofyear = dt.datetime(year, month, day).timetuple().tm_yday
multivariate = True
if multivariate:
checkpoint = '../saved-models/1.4-unet-medium/9Min-%iChannels-MV/' % n_channels
else:
checkpoint = '../saved-models/1.4-unet-medium/9Min-%iChannels-SV/' % n_channels
noaadata = goes16s3.NOAAGOESS3(product=product, channels=range(1,n_channels+1),
save_directory=data_directory, skip_connection=True)
I0, I1 = noaadata.load_snapshots(year, dayofyear, hour, minute, minute_delta=minute_delta)
if not os.path.exists('figures/network'):
os.makedirs('figures/network')
#plotting.plot_3channel_image(I1.values[:,discard:-discard, discard:-discard])
#plt.savefig("figures/falsergb_image1-{}.png".format(product), dpi=300, pad_inches=0)
plotting.plot_3channel_image_projection(I1)
plt.show()
sys.exit()
flownet, interpnet, warper = inference_tools.load_models(n_channels, checkpoint,
multivariate=multivariate,
nn_model=nn_model)
vector_data = inference_tools.single_inference_split(I0.values, I1.values, t,
flownet, interpnet, multivariate,
overlap=128, block_size=256+128,
discard=discard)
print("vector data keys: {}".format(vector_data.keys()))
plotting.plot_3channel_image((I1-I0).values[:,discard:-discard, discard:-discard]*2)
plt.savefig("figures/diff_images-{}.png".format(product), dpi=300, pad_inches=0)
f_01 = vector_data['f_01']
total_flow = f_01 + vector_data['delta_f_t1']
if product == 'ABI-L1b-RadC':
down = 20
else:
down = 10
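# 'down' controls how sparsely the flow field is sub-sampled for the quiver
# plot; the larger CONUS (RadC) grid is thinned more aggressively than the
# other products (a descriptive note, not in the original script).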
for c in [7,]:
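    # Flow components for band c are stored as interleaved channels: 2*c holds
    # u and 2*c+1 holds v; u is negated below, presumably to match the display
    # orientation of the quiver plot (descriptive note only).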
u = total_flow[2*c] * -1
v = total_flow[2*c+1]
ax = plotting.flow_quiver_plot(u, v, down=down)
plt.savefig("figures/quiver_plot_band{}-{}.png".format(c+1, product), dpi=300, pad_inches=0)
visible = vector_data['V_t0'][c]
ratio = 1.*visible.shape[0] / visible.shape[1]
hi = int(ratio * 10.)
wi = int(10.)
fig = plt.figure(figsize=(wi,hi), frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.imshow(visible, cmap='Greys')
ax.axis('off')
plt.savefig("figures/visible_{}-{}.png".format(c, product), dpi=300, pad_inches=0)
plt.show()
if not os.path.exists('figures/network'):
os.makedirs('figures/network')
#plotting.plot_3channel_image(I1.values[:,discard:-discard, discard:-discard])
#plt.savefig("figures/falsergb_image1-{}.png".format(product), dpi=300, pad_inches=0)
plotting.plot_3channel_image_projection(I1)
plt.show()
sys.exit()
flownet, interpnet, warper = inference_tools.load_models(n_channels, checkpoint,
multivariate=multivariate,
nn_model=nn_model)
vector_data = inference_tools.single_inference_split(I0.values, I1.values, t,
flownet, interpnet, multivariate,
overlap=128, block_size=256+128,
discard=discard)
print("vector data keys: {}".format(vector_data.keys()))
plotting.plot_3channel_image((I1-I0).values[:,discard:-discard, discard:-discard]*2)
plt.savefig("figures/diff_images-{}.png".format(product), dpi=300, pad_inches=0)
f_01 = vector_data['f_01']
total_flow = f_01 + vector_data['delta_f_t1']
if product == 'ABI-L1b-RadC':
down = 20
else:
down = 10
for c in [7,]:
u = total_flow[2*c] * -1
v = total_flow[2*c+1]
ax = plotting.flow_quiver_plot(u, v, down=down)
plt.savefig("figures/quiver_plot_band{}-{}.png".format(c+1, product), dpi=300, pad_inches=0)
visible = vector_data['V_t0'][c]
ratio = 1.*visible.shape[0] / visible.shape[1]
hi = int(ratio * 10.)
wi = int(10.)
fig = plt.figure(figsize=(wi,hi), frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.imshow(visible, cmap='Greys')
ax.axis('off')
plt.savefig("figures/visible_{}-{}.png".format(c, product), dpi=300, pad_inches=0)
plt.show()
| 29.626214 | 96 | 0.652466 | 842 | 6,103 | 4.584323 | 0.199525 | 0.050777 | 0.043523 | 0.033161 | 0.984974 | 0.984974 | 0.984974 | 0.984974 | 0.984974 | 0.984974 | 0 | 0.048983 | 0.210552 | 6,103 | 205 | 97 | 29.770732 | 0.752179 | 0.103883 | 0 | 0.985507 | 0 | 0 | 0.114778 | 0.076274 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.152174 | 0 | 0.152174 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fe85e797766fbdcdd5d668dec57d4f877f84da11 | 1,694 | py | Python | insator_plans.py | Mike-msoh/insator-service-broker-insatorLab | 8d21b31d1d687149b4a84c6531f85bf24e958603 | [
"Apache-2.0"
] | null | null | null | insator_plans.py | Mike-msoh/insator-service-broker-insatorLab | 8d21b31d1d687149b4a84c6531f85bf24e958603 | [
"Apache-2.0"
] | null | null | null | insator_plans.py | Mike-msoh/insator-service-broker-insatorLab | 8d21b31d1d687149b4a84c6531f85bf24e958603 | [
"Apache-2.0"
] | null | null | null | import uuid
def plan_a():
plan = {"name" : "insatorplan-a",
"description" : "Describe the characteristics of this plan. For example, Dedicated schema and tablespace per service instance on a shared server. 1GB and 10GB of compressed database storage can hold up to 5GB and 50GB of uncompressed data respectively based on typical compression ratios.",
"free" : True,
"id" : uuid.uuid4(), # SHOULD BE UNIQUE
"metadata" : {"bullets" :["A description of the resources that can be used with the plan.","1 Auth Module per instance. Can host 100 concurrent auth operation.","1 GB Min per instance. 10 GB Max per instance."],"costs":[{"unitId" : "INSTANCES_PER_MONTH","unit" : "MONTHLY","partNumber" : ""}],"displayName":"insatorPlanA"}}
return plan
def plan_b():
plan = {"name" : "insatorplan-b",
"description" : "Describe the characteristics of this plan. For example, Dedicated schema and tablespace per service instance on a shared server. 1GB and 10GB of compressed database storage can hold up to 5GB and 50GB of uncompressed data respectively based on typical compression ratios.",
"free" : True,
"id" : uuid.uuid4(), # SHOULD BE UNIQUE
"metadata" : {
"bullets" :[
"A description of the resources that can be used with the plan.",
"10 Auth Module per instance. Can host 1000 concurrent auth operation.",
"10 GB Min per instance. 100 GB Max per instance.",
],
"costs" :[
{
"unitId" : "INSTANCES_PER_MONTH",
"unit" : "MONTHLY",
"partNumber" : ""
}
],
"displayName":"insatorPlanB"
}}
return plan
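# ---------------------------------------------------------------------------
# Illustrative sketch only (not part of the original broker): assembling the
# plans above into a JSON catalog fragment. The service name and surrounding
# catalog structure are assumptions made for illustration.
import json

def build_catalog_json():
    catalog = {"services": [{"name": "insator-service",
                             "plans": [plan_a(), plan_b()]}]}
    # default=str keeps the dump robust even if a plan field is not natively
    # JSON-serializable.
    return json.dumps(catalog, default=str)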
| 49.823529 | 328 | 0.643447 | 212 | 1,694 | 5.113208 | 0.377358 | 0.060886 | 0.035055 | 0.068266 | 0.809963 | 0.809963 | 0.758303 | 0.758303 | 0.758303 | 0.758303 | 0 | 0.025217 | 0.250885 | 1,694 | 33 | 329 | 51.333333 | 0.828999 | 0.019481 | 0 | 0.344828 | 0 | 0.068966 | 0.689614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.034483 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
229d1b18de2d7a2d5252ed57f0b9377a18ccdd7e | 5,213 | py | Python | Project 02.py | neilvarunnaidu/SSW555 | 2c8af75442823253040ac91883972478a11102cc | [
"MIT"
] | null | null | null | Project 02.py | neilvarunnaidu/SSW555 | 2c8af75442823253040ac91883972478a11102cc | [
"MIT"
] | null | null | null | Project 02.py | neilvarunnaidu/SSW555 | 2c8af75442823253040ac91883972478a11102cc | [
"MIT"
] | null | null | null | #NEIL VARUN NAIDU
#CWID 10468310
#CS 555
#Project 02
#opening GEDCOM file
text_file = open(r'C:\Users\Neil Naidu\Desktop\SampleInput.ged', 'r')
with open('output.txt', 'w') as f:
#BEGIN
print("\f", file = f)
#Traverse File
for line in text_file:
print("\n-->", line, file = f)
#handles empty lines in GEDCOM files
if line == "\n":
print("<-- <whitespace>", file = f)
line_words = line.split()
#handles no tag and whitespaces in individual lines of the GEDCOM file
if len(line_words) > 2:
level_number = int(line[:1])
if line_words[2] == "INDI" or line_words[2] == "FAM":
#extract argument
line_arg = line.split(' ',2)[1]
#extract the tag of the line
line_tag = line_words[2].strip()
else:
#extract the tag of the line
line_tag = line_words[1].strip()
#extract argument
line_arg = line.split(' ',2)[2]
#print the attributes of input line
print("<--",level_number,"|", line_tag,"|", file = f)
#check if the tag is valid according to the overview document
if level_number == 0 and line_tag == "INDI":
print("Y", file = f)
elif level_number == 1 and line_tag == "NAME":
print("Y", file = f)
elif level_number == 1 and line_tag == "SEX":
print("Y", file = f)
elif level_number == 1 and line_tag == "BIRT":
print("Y", file = f)
elif level_number == 1 and line_tag == "DEAT":
print("Y", file = f)
elif level_number == 0 and line_tag == "FAM":
print("Y", file = f)
elif level_number == 1 and line_tag == "FAMS":
print("Y", file = f)
elif level_number == 1 and line_tag == "MARR":
print("Y", file = f)
elif level_number == 1 and line_tag == "FAMC":
print("Y", file = f)
elif level_number == 0 and line_tag == "HEAD":
print("Y", file = f)
elif level_number == 1 and line_tag == "HUSB":
print("Y", file = f)
elif level_number == 1 and line_tag == "WIFE":
print("Y", file = f)
elif level_number == 0 and line_tag == "TRLR":
print("Y", file = f)
elif level_number == 0 and line_tag == "NOTE":
print("Y", file = f)
elif level_number == 2 and line_tag == "DATE":
print("Y", file = f)
elif level_number == 1 and line_tag == "CHIL":
print("Y", file = f)
elif level_number == 1 and line_tag == "DIV":
print("Y", file = f)
else:
print("N", file = f)
#print line argument
print("|",line_arg, file = f)
#for lines without arguments
elif len(line_words) == 2:
#extract the tag of the line
line_tag = line_words[1].strip()
level_number = int(line[:1])
#print the attributes of input line
print("<--",level_number,"|", line_tag,"|", file = f)
#check if the tag is valid according to the overview document
if level_number == 0 and line_tag == "INDI":
print("Y|", file = f)
elif level_number == 1 and line_tag == "NAME":
print("Y|", file = f)
elif level_number == 1 and line_tag == "SEX":
print("Y|", file = f)
elif level_number == 1 and line_tag == "BIRT":
print("Y|", file = f)
elif level_number == 1 and line_tag == "DEAT":
print("Y|", file = f)
elif level_number == 0 and line_tag == "FAM":
print("Y|", file = f)
elif level_number == 1 and line_tag == "FAMS":
print("Y|", file = f)
elif level_number == 1 and line_tag == "MARR":
print("Y|", file = f)
elif level_number == 1 and line_tag == "FAMC":
print("Y|", file = f)
elif level_number == 0 and line_tag == "HEAD":
print("Y|", file = f)
elif level_number == 1 and line_tag == "HUSB":
print("Y|", file = f)
elif level_number == 1 and line_tag == "WIFE":
print("Y|", file = f)
elif level_number == 0 and line_tag == "TRLR":
print("Y|", file = f)
elif level_number == 0 and line_tag == "NOTE":
print("Y|", file = f)
elif level_number == 2 and line_tag == "DATE":
print("Y|", file = f)
elif level_number == 1 and line_tag == "CHIL":
print("Y|", file = f)
elif level_number == 1 and line_tag == "DIV":
print("Y|", file = f)
else:
print("N|", file = f)
#end of file
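# ---------------------------------------------------------------------------
# Alternative sketch (illustration only, not used by the script above): the
# same tag/level validity check can be table-driven instead of a long if/elif
# chain. The (tag, level) pairs below mirror the checks made above.
VALID_TAG_LEVELS = {
    ("INDI", 0), ("FAM", 0), ("HEAD", 0), ("TRLR", 0), ("NOTE", 0),
    ("NAME", 1), ("SEX", 1), ("BIRT", 1), ("DEAT", 1), ("FAMS", 1),
    ("FAMC", 1), ("MARR", 1), ("HUSB", 1), ("WIFE", 1), ("CHIL", 1),
    ("DIV", 1), ("DATE", 2),
}

def tag_validity(level_number, line_tag):
    # Returns "Y" when the tag is valid at the given level, otherwise "N".
    return "Y" if (line_tag, level_number) in VALID_TAG_LEVELS else "N"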
| 44.555556 | 79 | 0.462114 | 641 | 5,213 | 3.617785 | 0.143526 | 0.090556 | 0.146615 | 0.161276 | 0.800345 | 0.783959 | 0.783959 | 0.756361 | 0.756361 | 0.756361 | 0 | 0.019455 | 0.408402 | 5,213 | 116 | 80 | 44.939655 | 0.73249 | 0.104355 | 0 | 0.819149 | 0 | 0 | 0.062665 | 0.006399 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.446809 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 9 |
22cf61f7542d875f277a6f0a9220f7139f176527 | 175 | py | Python | test/test/test_docs.py | BCG-Gamma/pytools | d7be703e0665917cd75b671564d5c0163f13b77b | [
"Apache-2.0"
] | 17 | 2021-01-12T08:07:11.000Z | 2022-03-03T22:59:04.000Z | test/test/test_docs.py | BCG-Gamma/pytools | d7be703e0665917cd75b671564d5c0163f13b77b | [
"Apache-2.0"
] | 10 | 2021-01-08T17:04:39.000Z | 2022-01-18T13:21:52.000Z | test/test/test_docs.py | BCG-Gamma/pytools | d7be703e0665917cd75b671564d5c0163f13b77b | [
"Apache-2.0"
] | 1 | 2021-11-06T00:16:43.000Z | 2021-11-06T00:16:43.000Z | """
Test docstrings.
"""
from pytools.api import DocValidator
def test_docstrings() -> None:
assert DocValidator(root_dir="src").validate_doc(), "docstrings are valid"
| 17.5 | 78 | 0.725714 | 21 | 175 | 5.904762 | 0.809524 | 0.225806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137143 | 175 | 9 | 79 | 19.444444 | 0.821192 | 0.091429 | 0 | 0 | 0 | 0 | 0.152318 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
22e33c5e82f04cb1483bdfd1d7a97425ae953056 | 10,406 | py | Python | src/python/tests/core/base/retry_test.py | nopsledder/clusterfuzz | 529963438d956e46ddddfb62debc6ed808be0083 | [
"Apache-2.0"
] | 3 | 2020-12-30T07:00:55.000Z | 2021-03-16T10:55:05.000Z | src/python/tests/core/base/retry_test.py | nopsledder/clusterfuzz | 529963438d956e46ddddfb62debc6ed808be0083 | [
"Apache-2.0"
] | 34 | 2020-08-18T18:47:00.000Z | 2021-07-14T07:47:35.000Z | src/python/tests/core/base/retry_test.py | nopsledder/clusterfuzz | 529963438d956e46ddddfb62debc6ed808be0083 | [
"Apache-2.0"
] | 1 | 2020-04-25T16:37:10.000Z | 2020-04-25T16:37:10.000Z | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Retry tests."""
# pylint: disable=protected-access
from builtins import range
import mock
import unittest
from base import retry
from metrics import monitor
from metrics import monitoring_metrics
from tests.test_libs import helpers
class WrapTest(unittest.TestCase):
"""Test retry decorator."""
def setUp(self):
helpers.patch(self, [
'base.retry.sleep',
])
self.func_body = mock.MagicMock()
monitor.metrics_store().reset_for_testing()
@retry.wrap(retries=4, delay=10, backoff=2, function='_func')
def _func(self, a):
return self.func_body(a)
@retry.wrap(retries=4, delay=10, backoff=2, function='_func')
def _yield_func(self, a):
for _ in range(3):
yield self.func_body(a)
@retry.wrap(
retries=4, delay=10, backoff=2, function='_func', retry_on_false=True)
def _func2(self, a):
return self.func_body(a)
class _FakeException(Exception):
pass
@retry.wrap(
retries=4,
delay=10,
backoff=2,
function='_func',
exception_type=_FakeException)
def _func_exception_type(self, a):
return self.func_body(a)
@retry.wrap(
retries=4,
delay=10,
backoff=2,
function='_func',
exception_type=_FakeException)
def _yield_func_exception_type(self, a):
for _ in range(3):
yield self.func_body(a)
def test_retry_and_succeed(self):
"""Test when retry once and succeed for regular function.."""
self.func_body.side_effect = [self._FakeException(), 456]
self.assertEqual(456, self._func(123))
self.assertEqual(2, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123), mock.call(123)])
self.assertEqual(1, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls([mock.call(10)])
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_and_succeed_yield(self):
"""Test when retry once and succeed for generator function.."""
self.func_body.side_effect = [self._FakeException(), 1, 2, 3]
results = [i for i in self._yield_func(123)]
self.assertEqual([1, 2, 3], results)
self.assertEqual(4, self.func_body.call_count)
self.func_body.assert_has_calls(
[mock.call(123),
mock.call(123),
mock.call(123),
mock.call(123)])
self.assertEqual(1, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls([mock.call(10)])
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_and_succeed_yield_with_exceptions_in_middle(self):
"""Test when retry once and succeed for generator function with exceptions
happening after the first element."""
self.func_body.side_effect = [
1, self._FakeException(), 1, 2,
self._FakeException(), 1, 2, 3
]
results = [i for i in self._yield_func(123)]
self.assertEqual([1, 2, 3], results)
self.assertEqual(8, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123)] * 8)
self.assertEqual(2, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls([mock.call(10), mock.call(20)])
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_exceed_try_limit(self):
"""Test when exceeding limit for regular function."""
self.func_body.side_effect = self._FakeException()
with self.assertRaises(self._FakeException):
self._func(123)
self.assertEqual(5, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123)] * 5)
self.assertEqual(4, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls(
[mock.call(10),
mock.call(20),
mock.call(40),
mock.call(80)])
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_exceed_try_limit_yield(self):
"""Test when exceeding limit for generator function."""
self.func_body.side_effect = self._FakeException()
with self.assertRaises(self._FakeException):
for _ in self._yield_func(123):
pass
self.assertEqual(5, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123)] * 5)
self.assertEqual(4, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls(
[mock.call(10),
mock.call(20),
mock.call(40),
mock.call(80)])
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_exception_type_mismatch(self):
"""Test retry with exception mismatching type for regular function."""
self.func_body.side_effect = [Exception]
with self.assertRaises(Exception):
self._func_exception_type(123)
self.assertEqual(1, self.func_body.call_count)
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_exception_type_mismatch_yield(self):
"""Test retry with exception mismatching type for generator function."""
self.func_body.side_effect = [Exception]
with self.assertRaises(Exception):
for _ in self._yield_func_exception_type(123):
pass
self.assertEqual(1, self.func_body.call_count)
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_exception_type_match(self):
"""Test retry with exception matching type for regular function."""
self.func_body.side_effect = [self._FakeException(), 456]
self.assertEqual(456, self._func_exception_type(123))
self.assertEqual(2, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123), mock.call(123)])
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_exception_type_match_yield(self):
"""Test retry with exception matching type for generator function."""
self.func_body.side_effect = [self._FakeException(), 1, 2, 3]
results = [i for i in self._yield_func_exception_type(123)]
self.assertEqual([1, 2, 3], results)
self.assertEqual(4, self.func_body.call_count)
self.func_body.assert_has_calls(
[mock.call(123),
mock.call(123),
mock.call(123),
mock.call(123)])
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_succeed_on_false(self):
"""Test retry on returning false and succeeding later."""
self.func_body.side_effect = [False, True]
self.assertTrue(self._func2(123))
self.assertEqual(2, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123), mock.call(123)])
self.assertEqual(1, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls([mock.call(10)])
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
def test_retry_fail_on_false(self):
"""Test retry on returning false."""
self.func_body.return_value = False
self.assertFalse(self._func2(123))
self.assertEqual(5, self.func_body.call_count)
self.func_body.assert_has_calls([mock.call(123)] * 5)
self.assertEqual(4, self.mock.sleep.call_count)
self.mock.sleep.assert_has_calls(
[mock.call(10),
mock.call(20),
mock.call(40),
mock.call(80)])
self.assertEqual(
0,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': True,
}))
self.assertEqual(
1,
monitoring_metrics.TRY_COUNT.get({
'function': '_func',
'is_succeeded': False,
}))
| 29.230337 | 78 | 0.628003 | 1,289 | 10,406 | 4.837083 | 0.124903 | 0.10826 | 0.071211 | 0.088212 | 0.829671 | 0.805613 | 0.79567 | 0.778829 | 0.739535 | 0.73344 | 0 | 0.029532 | 0.248318 | 10,406 | 355 | 79 | 29.312676 | 0.767579 | 0.122718 | 0 | 0.814545 | 0 | 0 | 0.065333 | 0 | 0 | 0 | 0 | 0 | 0.243636 | 1 | 0.061818 | false | 0.010909 | 0.025455 | 0.010909 | 0.105455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
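The retry tests above pin down a specific wrapper behaviour: up to five attempts, exponential back-off sleeps of 10, 20, 40 and 80 seconds, and a TRY_COUNT metric labelled with the wrapped function's name and an is_succeeded flag. A minimal sketch of such a decorator is shown below for orientation only; the parameter names and the on_done metric hook are illustrative assumptions and do not reproduce the module actually under test.

import functools
import time


def retry(tries=5, delay=10, backoff=2, exceptions=(Exception,), on_done=None):
    # Retry the wrapped call up to `tries` times, sleeping delay * backoff**n
    # between attempts and reporting the final outcome through `on_done`.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(tries):
                try:
                    result = func(*args, **kwargs)
                except exceptions:
                    if attempt == tries - 1:
                        if on_done:
                            on_done(func.__name__, False)  # last attempt failed
                        raise
                    time.sleep(wait)
                    wait *= backoff
                else:
                    if on_done:
                        on_done(func.__name__, True)  # attempt succeeded
                    return result
        return wrapper
    return decorator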
a3aa9251b8daf0d93dca9d8b645fa5e1d0a548e8 | 15,268 | py | Python | aggregators.py | rakshit-agrawal/LEAP | 53b5a79525b25855feca22ee0035e644a2ef09c2 | [
"BSD-2-Clause"
] | 15 | 2019-03-13T06:17:02.000Z | 2021-11-13T04:09:45.000Z | aggregators.py | rakshit-agrawal/LEAP | 53b5a79525b25855feca22ee0035e644a2ef09c2 | [
"BSD-2-Clause"
] | 1 | 2020-11-20T19:04:33.000Z | 2020-11-20T19:04:33.000Z | aggregators.py | rakshit-agrawal/LEAP | 53b5a79525b25855feca22ee0035e644a2ef09c2 | [
"BSD-2-Clause"
] | 2 | 2021-07-28T05:47:34.000Z | 2021-09-10T09:40:00.000Z | """
aggregators
This file maintains methods for path aggregation
- Rakshit Agrawal, 2018
"""
import numpy as np
import tensorflow as tf
kl = tf.keras.layers
class Aggregators(object):
def __init__(self,
node_count,
path_lengths,
additional_embeddings=None,
ordered_args=None):
self.node_count = node_count
self.path_lengths = path_lengths
self.additional_embeddings = additional_embeddings
self.ordered_args = ordered_args if ordered_args is not None else {}
# Set the problem for output layers
self.problem = self.ordered_args.get('problem', 'link')
def get_build_method(self, model_name):
model_ref = {
'avg_pool': self.build_mean_model,
'dense_max': self.build_dense_max_model,
'seq_of_seq': self.build_seq_of_seq_model,
'edge_conv': self.build_edge_conv_model
}
return model_ref.get(model_name, None)
def get_final_output_layer(self, n_classes=None):
""" Create final layer of model based on problem type. """
if n_classes is not None and isinstance(n_classes, int):
return kl.Dense(n_classes, activation='softmax', name='final_val')
if self.problem == 'link':
return kl.Dense(1, activation='sigmoid', name='final_val')
if self.problem == 'wsn':
if self.ordered_args.get('regression_only',False):
return kl.Dense(1, name='final_val')
return kl.Dense(1, activation='tanh', name='final_val')
def build_mean_model(self,
emb_dims=32,
dense_dims=32,
classifier_dims=32,
dropout=0.5,
known_embeddings=None,
show_summary=True):
""" Build a mean model """
if isinstance(dense_dims, dict):
assert set(dense_dims.keys()) == set(self.path_lengths.keys())
elif isinstance(dense_dims, int):
dense_dims = {i: dense_dims for i in self.path_lengths}
node_inp = kl.Input((2,), name='node_pair_input')
node_feature_values = []
if known_embeddings is None:
emb = kl.Embedding(input_dim=self.node_count,output_dim=emb_dims,
name='embedding_layer')
else:
assert isinstance(known_embeddings, kl.Embedding)
emb = known_embeddings
node_emb = emb(node_inp)
processed_node_pair = kl.Flatten()(node_emb)
node_feature_values.append(processed_node_pair)
if self.additional_embeddings is not None:
# Add node features through more embedding layers
if isinstance(self.additional_embeddings, list):
assert all([isinstance(i, np.ndarray) for i in self.additional_embeddings])
elif isinstance(self.additional_embeddings, np.ndarray):
self.additional_embeddings = [self.additional_embeddings]
else:
raise ValueError("Unknown embedding type provided.")
for i, emb_weights in enumerate(self.additional_embeddings):
emb_layer = kl.Embedding(input_dim=emb_weights.shape[0],
output_dim=emb_weights.shape[1],
weights=[emb_weights],
trainable=False,
name='node_features_{}'.format(i+1))
node_features = emb_layer(node_inp)
processed_features = kl.Flatten()(node_features)
node_feature_values.append(processed_features)
path_inps = {}
path_embs = {}
processed_paths = {}
for path_len in self.path_lengths:
path_inps[path_len] = kl.Input((None, path_len), name='path_%d_input' % path_len)
path_embs[path_len] = emb(path_inps[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.Flatten(name='flatten_for_%d_paths' % path_len)
)(path_embs[path_len])
processed_paths[path_len] = kl.GlobalAveragePooling1D(name='final_mean_pool_for_%d_paths' % path_len)(
processed_paths[path_len])
combined = kl.Concatenate()(node_feature_values + list(processed_paths.values()))
d2_out = kl.Dense(classifier_dims, name='dense_on_combined')(combined)
d2_out = kl.Dropout(dropout)(d2_out)
out = self.get_final_output_layer()(d2_out)
model = tf.keras.Model(inputs=[node_inp] + list(path_inps.values()), outputs=out)
if show_summary:
model.summary()
return model
def build_dense_max_model(self,
emb_dims=32,
dense_dims=32,
classifier_dims=32,
dropout=0.5,
known_embeddings=None,
show_summary=True):
""" Build a dense max model """
if isinstance(dense_dims, dict):
assert set(dense_dims.keys()) == set(self.path_lengths.keys())
elif isinstance(dense_dims, int):
dense_dims = {i: dense_dims for i in self.path_lengths}
node_inp = kl.Input((2,), name='node_pair_input')
node_feature_values = []
if known_embeddings is None:
emb = kl.Embedding(input_dim=self.node_count, output_dim=emb_dims,
name='embedding_layer')
else:
assert isinstance(known_embeddings, kl.Embedding)
emb = known_embeddings
node_emb = emb(node_inp)
processed_node_pair = kl.Flatten()(node_emb)
node_feature_values.append(processed_node_pair)
if self.additional_embeddings is not None:
# Add node features through more embedding layers
if isinstance(self.additional_embeddings, list):
assert all([isinstance(i, kl.Embedding) for i in self.additional_embeddings])
elif isinstance(self.additional_embeddings, kl.Embedding):
self.additional_embeddings = [self.additional_embeddings]
else:
raise ValueError("Unkonwn embedding type provided.")
for emb_layer in self.additional_embeddings:
node_features = emb_layer(node_inp)
processed_features = kl.Flatten()(node_features)
node_feature_values.append(processed_features)
path_inps = {}
path_embs = {}
processed_paths = {}
for path_len in self.path_lengths:
path_inps[path_len] = kl.Input((None, path_len), name='path_%d_input' % path_len)
path_embs[path_len] = emb(path_inps[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.Flatten(name='flatten_for_%d_paths' % path_len)
)(path_embs[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.Dense(dense_dims[path_len], name='dense_for_%d_paths' % path_len),
name='td_dense_for_%d_paths' % path_len)(processed_paths[path_len])
processed_paths[path_len] = kl.Dropout(dropout)(processed_paths[path_len])
processed_paths[path_len] = kl.GlobalMaxPooling1D(name='final_max_pool_for_%d_paths' % path_len)(
processed_paths[path_len])
combined = kl.Concatenate()(node_feature_values + list(processed_paths.values()))
d2_out = kl.Dense(classifier_dims, name='dense_on_combined')(combined)
d2_out = kl.Dropout(dropout)(d2_out)
out = self.get_final_output_layer()(d2_out)
model = tf.keras.Model(inputs=[node_inp] + list(path_inps.values()), outputs=out)
if show_summary:
model.summary()
return model
def build_seq_of_seq_model(self,
emb_dims=32,
dense_dims=32,
classifier_dims=32,
dropout=0.5,
known_embeddings=None,
show_summary=True
):
""" Build a sequence of sequence model """
if isinstance(dense_dims, dict):
assert set(dense_dims.keys()) == set(self.path_lengths.keys())
elif isinstance(dense_dims, int):
dense_dims = {i: dense_dims for i in self.path_lengths}
node_inp = kl.Input((2,), name='node_pair_input')
node_feature_values = []
if known_embeddings is None:
emb = kl.Embedding(input_dim=self.node_count, output_dim=emb_dims,
name='embedding_layer')
else:
assert isinstance(known_embeddings, kl.Embedding)
emb = known_embeddings
node_emb = emb(node_inp)
processed_node_pair = kl.Flatten()(node_emb)
node_feature_values.append(processed_node_pair)
if self.additional_embeddings is not None:
# Add node features through more embedding layers
if isinstance(self.additional_embeddings, list):
assert all([isinstance(i, kl.Embedding) for i in self.additional_embeddings])
elif isinstance(self.additional_embeddings, kl.Embedding):
self.additional_embeddings = [self.additional_embeddings]
else:
raise ValueError("Unkonwn embedding type provided.")
for emb_layer in self.additional_embeddings:
node_features = emb_layer(node_inp)
processed_features = kl.Flatten()(node_features)
node_feature_values.append(processed_features)
path_inps = {}
path_embs = {}
processed_paths = {}
for path_len in self.path_lengths:
path_inps[path_len] = kl.Input((None, path_len), name='path_%d_input' % path_len)
path_embs[path_len] = emb(path_inps[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.LSTM(dense_dims[path_len], return_sequences=True, name='lstm_for_%d_paths' % path_len),
name='td_lstm_for_%d_paths' % path_len)(path_embs[path_len])
processed_paths[path_len] = kl.Dropout(dropout)(processed_paths[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.GlobalMaxPool1D(name='global_max_pool_for_%d_paths' % path_len),
name='td_for_global_max_pool_for_%d_paths' % path_len)(processed_paths[path_len])
processed_paths[path_len] = kl.LSTM(dense_dims[path_len] * 2, return_sequences=True,
name='lstm_for_%d_paths' % path_len)(processed_paths[path_len])
processed_paths[path_len] = kl.GlobalMaxPooling1D(name='final_max_pool_for_%d_paths' % path_len)(
processed_paths[path_len])
combined = kl.Concatenate()(node_feature_values + list(processed_paths.values()))
d2_out = kl.Dense(classifier_dims, name='dense_on_combined')(combined)
d2_out = kl.Dropout(dropout)(d2_out)
out = self.get_final_output_layer()(d2_out)
model = tf.keras.Model(inputs=[node_inp] + list(path_inps.values()), outputs=out)
if show_summary:
model.summary()
return model
def build_edge_conv_model(self,
emb_dims=32,
dense_dims=32,
classifier_dims=32,
dropout=0.5,
known_embeddings=None,
show_summary=True
):
""" Build an edge conv model """
if isinstance(dense_dims, dict):
assert set(dense_dims.keys()) == set(self.path_lengths.keys())
elif isinstance(dense_dims, int):
dense_dims = {i: dense_dims for i in self.path_lengths}
node_inp = kl.Input((2,), name='node_pair_input')
node_feature_values = []
if known_embeddings is None:
emb = kl.Embedding(input_dim=self.node_count, output_dim=emb_dims,
name='embedding_layer')
else:
assert isinstance(known_embeddings, kl.Embedding)
emb = known_embeddings
node_emb = emb(node_inp)
processed_node_pair = kl.Flatten()(node_emb)
node_feature_values.append(processed_node_pair)
if self.additional_embeddings is not None:
# Add node features through more embedding layers
if isinstance(self.additional_embeddings, list):
assert all([isinstance(i, kl.Embedding) for i in self.additional_embeddings])
elif isinstance(self.additional_embeddings, kl.Embedding):
self.additional_embeddings = [self.additional_embeddings]
else:
raise ValueError("Unkonwn embedding type provided.")
for emb_layer in self.additional_embeddings:
node_features = emb_layer(node_inp)
processed_features = kl.Flatten()(node_features)
node_feature_values.append(processed_features)
path_inps = {}
path_embs = {}
processed_paths = {}
for path_len in self.path_lengths:
path_inps[path_len] = kl.Input((None, path_len), name='path_%d_input' % path_len)
path_embs[path_len] = emb(path_inps[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.Conv1D(filters=emb_dims, kernel_size=2, strides=1,
name='conv_for_%d_paths' % path_len),
name='td_conv_for_%d_paths' % path_len)(path_embs[path_len])
processed_paths[path_len] = kl.Dropout(dropout)(processed_paths[path_len])
processed_paths[path_len] = kl.TimeDistributed(
kl.GlobalMaxPool1D(name='global_max_pool_for_%d_paths' % path_len),
name='td_for_global_max_pool_for_%d_paths' % path_len)(processed_paths[path_len])
processed_paths[path_len] = kl.LSTM(dense_dims[path_len] * 2, return_sequences=True,
name='lstm_for_%d_paths' % path_len)(processed_paths[path_len])
processed_paths[path_len] = kl.GlobalMaxPooling1D(name='final_max_pool_for_%d_paths' % path_len)(
processed_paths[path_len])
combined = kl.Concatenate()(node_feature_values + list(processed_paths.values()))
d2_out = kl.Dense(classifier_dims, name='dense_on_combined')(combined)
d2_out = kl.Dropout(dropout)(d2_out)
out = self.get_final_output_layer()(d2_out)
model = tf.keras.Model(inputs=[node_inp] + list(path_inps.values()), outputs=out)
if show_summary:
model.summary()
return model
if __name__ == "__main__":
# Test
ag = Aggregators(node_count=200, path_lengths=[3, 4])
model = ag.build_mean_model()
model = ag.build_dense_max_model()
model = ag.build_seq_of_seq_model()
model = ag.build_edge_conv_model()
| 40.714667 | 114 | 0.599817 | 1,785 | 15,268 | 4.79944 | 0.087955 | 0.063733 | 0.064433 | 0.068635 | 0.842302 | 0.831563 | 0.820824 | 0.814988 | 0.814988 | 0.805183 | 0 | 0.007379 | 0.307702 | 15,268 | 374 | 115 | 40.823529 | 0.803122 | 0.031242 | 0 | 0.784387 | 0 | 0 | 0.063534 | 0.017358 | 0 | 0 | 0 | 0 | 0.04461 | 1 | 0.026022 | false | 0 | 0.007435 | 0 | 0.070632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
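The Aggregators class above is driven entirely through its build_* methods, whose Input layers imply the expected feed format: one (batch, 2) array of node-pair indices plus one (batch, num_paths, L) array of path node indices per configured path length L. A short usage sketch under those assumptions (random dummy data, a Python 3 / tf.keras environment, the default 'link' problem with a binary label) is:

import numpy as np

ag = Aggregators(node_count=200, path_lengths=[3, 4], ordered_args={'problem': 'link'})
model = ag.build_dense_max_model(show_summary=False)
model.compile(optimizer='adam', loss='binary_crossentropy')

batch = 8
node_pairs = np.random.randint(0, 200, size=(batch, 2))   # (batch, 2) node ids
paths_3 = np.random.randint(0, 200, size=(batch, 5, 3))   # 5 sampled paths of length 3
paths_4 = np.random.randint(0, 200, size=(batch, 5, 4))   # 5 sampled paths of length 4
labels = np.random.randint(0, 2, size=(batch, 1))

model.fit([node_pairs, paths_3, paths_4], labels, epochs=1, verbose=0)

The input order (node pair first, then one tensor per path length in the order of path_lengths) mirrors how the builders assemble tf.keras.Model(inputs=...).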
a3aea179c0cbc8f4ab9d7f9a66d9c9444ee5c613 | 213 | py | Python | simple_youtube_video_commenter/__init__.py | Koldar/simple-youtube-video-commenter | 331664a0af003f3fe14f8cfb264f91318f2f03b6 | [
"MIT"
] | null | null | null | simple_youtube_video_commenter/__init__.py | Koldar/simple-youtube-video-commenter | 331664a0af003f3fe14f8cfb264f91318f2f03b6 | [
"MIT"
] | null | null | null | simple_youtube_video_commenter/__init__.py | Koldar/simple-youtube-video-commenter | 331664a0af003f3fe14f8cfb264f91318f2f03b6 | [
"MIT"
] | null | null | null | from simple_youtube_video_commenter.SimpleYoutubeVideoCommenter import SimpleYoutubeVideoCommenter
from simple_youtube_video_commenter.FlowFlags import FlowFlag
from simple_youtube_video_commenter import version | 71 | 99 | 0.929577 | 23 | 213 | 8.217391 | 0.434783 | 0.15873 | 0.269841 | 0.349206 | 0.492063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061033 | 213 | 3 | 100 | 71 | 0.945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a3f29d264d77d0b9e09e7fcda3c066fae2ec270c | 7,647 | py | Python | ports/esp32/font2.py | kekemuyu/linewatch | 2cbba739a3773dafc8ebbe46cb1f1ce3b467c4bb | [
"MIT"
] | 22 | 2020-11-12T11:30:44.000Z | 2022-03-04T08:41:49.000Z | ports/esp32/font2.py | kekemuyu/linewatch | 2cbba739a3773dafc8ebbe46cb1f1ce3b467c4bb | [
"MIT"
] | 1 | 2020-11-23T10:02:42.000Z | 2020-11-30T12:33:27.000Z | ports/esp32/font2.py | kekemuyu/linewatch | 2cbba739a3773dafc8ebbe46cb1f1ce3b467c4bb | [
"MIT"
] | 9 | 2020-11-12T10:23:27.000Z | 2021-04-18T14:46:24.000Z |
hanzi_16x16={
'晴':
[0x00,0x20,0x00,0x20,0x7B,0xFE,0x48,0x20,0x49,0xFC,0x48,0x20,0x4B,0xFE,0x78,0x00,
0x49,0xFC,0x49,0x04,0x49,0xFC,0x49,0x04,0x79,0xFC,0x49,0x04,0x01,0x14,0x01,0x08],
'多':
[0x02,0x00,0x02,0x00,0x07,0xF0,0x08,0x20,0x38,0x40,0x04,0x80,0x03,0x40,0x0C,0x80,
0x71,0xF8,0x02,0x08,0x0C,0x10,0x32,0x20,0x01,0x40,0x01,0x80,0x0E,0x00,0x70,0x00],
'云':
[0x00,0x00,0x3F,0xF8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFE,0x02,0x00,
0x04,0x00,0x04,0x00,0x08,0x40,0x10,0x20,0x20,0x10,0x7F,0xF8,0x20,0x08,0x00,0x08],
'雷':
[0x00,0x00,0x3F,0xF8,0x01,0x00,0x7F,0xFE,0x41,0x02,0x9D,0x74,0x01,0x00,0x1D,0x70,
0x00,0x00,0x3F,0xF8,0x21,0x08,0x21,0x08,0x3F,0xF8,0x21,0x08,0x21,0x08,0x3F,0xF8],
'阵':
[0x00,0x40,0x7C,0x40,0x44,0x40,0x4B,0xFE,0x48,0x80,0x50,0xA0,0x49,0x20,0x49,0xFC,
0x44,0x20,0x44,0x20,0x44,0x20,0x6B,0xFE,0x50,0x20,0x40,0x20,0x40,0x20,0x40,0x20],
'雨':
[0x00,0x00,0xFF,0xFE,0x01,0x00,0x01,0x00,0x01,0x00,0x7F,0xFC,0x41,0x04,0x41,0x04,
0x49,0x44,0x45,0x24,0x41,0x04,0x49,0x44,0x45,0x24,0x41,0x04,0x41,0x14,0x40,0x08],
'大':
[0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xFF,0xFE,0x01,0x00,0x01,0x00,
0x02,0x80,0x02,0x80,0x04,0x40,0x04,0x40,0x08,0x20,0x10,0x10,0x20,0x08,0xC0,0x06],
'小':
[0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x11,0x10,0x11,0x08,0x11,0x04,
0x21,0x04,0x21,0x02,0x41,0x02,0x81,0x02,0x01,0x00,0x01,0x00,0x05,0x00,0x02,0x00],
'中':
[0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x3F,0xF8,0x21,0x08,0x21,0x08,0x21,0x08,
0x21,0x08,0x21,0x08,0x3F,0xF8,0x21,0x08,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00],
'转':
[0x20,0x20,0x20,0x20,0x20,0x20,0xFD,0xFC,0x40,0x20,0x50,0x40,0x93,0xFE,0xFC,0x40,
0x10,0x80,0x11,0xFC,0x1C,0x04,0xF0,0x88,0x50,0x50,0x10,0x20,0x10,0x10,0x10,0x10],
'雾':
[0x3F,0xF8,0x01,0x00,0x7F,0xFE,0x41,0x02,0x9D,0x74,0x01,0x00,0x1D,0x70,0x04,0x00,
0x0F,0xE0,0x14,0x40,0x03,0x80,0x1C,0x70,0xE2,0x0E,0x0F,0xE0,0x04,0x20,0x18,0x60],
'霾':
[0x3F,0xF8,0x01,0x00,0x7F,0xFE,0x41,0x02,0x9D,0x74,0x30,0x00,0xCB,0xFC,0x2D,0x24,
0x31,0xFC,0xC9,0x24,0x15,0xFC,0x64,0x20,0x0D,0xFC,0x34,0x20,0xC5,0xFE,0x18,0x00],
'冰':
[0x00,0x40,0x40,0x40,0x20,0x40,0x20,0x44,0x00,0x68,0x07,0x70,0x11,0x60,0x11,0x50,
0x21,0x50,0xE2,0x48,0x22,0x48,0x24,0x44,0x28,0x42,0x20,0x40,0x21,0x40,0x00,0x80],
'雹':
[0x3F,0xF8,0x01,0x00,0x7F,0xFE,0x41,0x02,0x9D,0x74,0x01,0x00,0x1D,0x70,0x08,0x00,
0x1F,0xF0,0x20,0x10,0x5F,0x90,0x10,0x90,0x1F,0xD0,0x10,0x20,0x10,0x04,0x0F,0xFC],
'阴':
[0x00,0x00,0x7D,0xFC,0x45,0x04,0x49,0x04,0x49,0x04,0x51,0xFC,0x49,0x04,0x49,0x04,
0x45,0x04,0x45,0xFC,0x45,0x04,0x69,0x04,0x52,0x04,0x42,0x04,0x44,0x14,0x48,0x08],
}
weather_icon={
'晴':
[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x03,0xC0,0x00,
0x00,0x03,0xC0,0x00,0x00,0x03,0xC0,0x00,0x03,0x80,0x01,0xC0,0x03,0xCF,0xF3,0xC0,
0x03,0xFF,0xFF,0xC0,0x01,0xFF,0xFF,0x80,0x00,0xFF,0xFF,0x00,0x00,0xFF,0xFF,0x00,
0x01,0xFF,0xFF,0x80,0x01,0xFF,0xFF,0x80,0x1D,0xFF,0xFF,0xB8,0x1D,0xFF,0xFF,0xB8,
0x1D,0xFF,0xFF,0xB8,0x1D,0xFF,0xFF,0xB8,0x01,0xFF,0xFF,0x80,0x01,0xFF,0xFF,0x80,
0x00,0xFF,0xFF,0x00,0x00,0xFF,0xFF,0x00,0x01,0xFF,0xFF,0x80,0x03,0xFF,0xFF,0xC0,
0x03,0xCF,0xF3,0xC0,0x03,0x80,0x01,0xC0,0x00,0x03,0xC0,0x00,0x00,0x03,0xC0,0x00,
0x00,0x03,0xC0,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00],
'云':
[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0F,0xF0,0x00,
0x00,0x3F,0xFC,0x00,0x00,0x7F,0xFE,0x00,0x00,0x7F,0xFE,0x00,0x00,0xFF,0xFF,0x00,
0x00,0xFF,0xFF,0x00,0x0F,0xFF,0xFF,0xF0,0x1F,0xFF,0xFF,0xF8,0x3F,0xFF,0xFF,0xFC,
0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,
0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x1F,0xFF,0xFF,0xF8,
0x0F,0xFF,0xFF,0xF0,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00],
'阴':
[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFE,0x00,0x00,0x01,0xFF,0x00,
0x00,0x03,0xFF,0x80,0x00,0x07,0xFF,0xC0,0x00,0x3F,0xFF,0xF0,0x00,0x7F,0xFF,0xF8,
0x00,0xFF,0xFF,0xFC,0x01,0xFF,0xFF,0xFC,0x01,0xFF,0xFF,0xFC,0x0F,0xFF,0xFF,0xFC,
0x1F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xF8,
0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,
0x1F,0xFF,0xFF,0x80,0x0F,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00],
'雨':
[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFE,0x00,
0x00,0x01,0xFF,0x00,0x00,0x03,0xFF,0x80,0x00,0x07,0xFF,0xC0,0x00,0x3F,0xFF,0xF0,
0x00,0x7F,0xFF,0xF8,0x00,0xFF,0xFF,0xFC,0x01,0xFF,0xFF,0xFC,0x01,0xFF,0xFF,0xFC,
0x0F,0xFF,0xFF,0xFC,0x1F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,
0x3F,0xFF,0xFF,0xF8,0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,
0x3F,0xFF,0xFF,0xC0,0x1F,0xFF,0xFF,0x80,0x0F,0xFF,0xFF,0x00,0x00,0xEE,0xE0,0x00,
0x01,0xCE,0xE0,0x00,0x01,0xDC,0xE0,0x00,0x00,0x1C,0xC0,0x00,0x00,0x38,0x00,0x00,
0x00,0x38,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00],
'雪':
[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFE,0x00,
0x00,0x01,0xFF,0x00,0x00,0x03,0xFF,0x80,0x00,0x07,0xFF,0xC0,0x00,0x3F,0xFF,0xF0,
0x00,0x7F,0xFF,0xF8,0x00,0xFF,0xFF,0xFC,0x01,0xFF,0xFF,0xFC,0x01,0xFF,0xFF,0xFC,
0x0F,0xFF,0xFF,0xFC,0x1F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,0x3F,0xFF,0xFF,0xFC,
0x3F,0xFF,0xFF,0xF8,0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,0x3F,0xFF,0xFF,0xC0,
0x3F,0xFF,0xFF,0xC0,0x1F,0xFF,0xFF,0x80,0x0F,0xFF,0xFF,0x00,0x03,0xBF,0xDC,0x00,
0x00,0x3F,0xC0,0x00,0x07,0xBF,0xDE,0x00,0x07,0x8F,0x1E,0x00,0x07,0x80,0x1E,0x00,
0x07,0x80,0x1E,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00],
'雾':
[0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0F,0xF0,0x00,0x00,0x1F,0xF8,0x00,
0x00,0x3F,0xFC,0x00,0x00,0x7F,0xFE,0x00,0x00,0x7F,0xFE,0x00,0x03,0xFF,0xFF,0xC0,
0x07,0xFF,0xFF,0xE0,0x0F,0xFF,0xFF,0xF0,0x0F,0xFF,0xFF,0xF0,0x0F,0xFF,0xFF,0xF0,
0x0F,0xFF,0xFF,0xF0,0x0F,0xFF,0xFF,0xF0,0x3F,0xFF,0xFF,0xF0,0x3F,0xFF,0xFF,0xF0,
0x07,0xFF,0xFF,0xE0,0x03,0xFF,0xFF,0xFC,0x00,0x0F,0xFF,0xFC,0x00,0x00,0x00,0x00,
0x0F,0xFF,0xFC,0x00,0x0F,0xFF,0xFC,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00],
}
#16x20
bluetooth_icon=[0x00,0x00,0x02,0x00,0x03,0x00,0x03,0xC0,0x03,0xF0,0x03,0x38,0xC3,0x1E,0x63,0x3C,
0x3B,0xF0,0x0F,0xC0,0x07,0x80,0x0F,0xC0,0x1B,0xE0,0x33,0x38,0x63,0x1E,0xC3,0x3C,
0x03,0xF0,0x03,0xC0,0x03,0x80,0x02,0x00]
#16x16
wifi_icon=[0x00,0x00,0x00,0x00,0x0F,0xE0,0x3C,0x78,0x60,0x0C,0xCF,0xE6,0x1F,0xF0,0x30,0x18,
0x07,0xC0,0x0F,0xE0,0x04,0x40,0x00,0x00,0x03,0x80,0x03,0x80,0x03,0x80,0x00,0x00]
bluetooth_8x8=[0x00,0x06,0x25,0x16,0x0C,0x16,0x25,0x06]
wifi_8x8=[0x00,0x00,0x1C,0x3E,0x7F,0x3E,0x1C,0x08]
alarm_8x8=[0xDB,0xE7,0x42,0x99,0x99,0x42,0x66,0x99]
degree_8x8=[0xCC,0xD2,0x20,0x20,0x20,0x20,0x12,0x0C]
| 57.067164 | 96 | 0.722506 | 1,391 | 7,647 | 3.966211 | 0.094896 | 0.400218 | 0.491572 | 0.609027 | 0.635128 | 0.621352 | 0.595251 | 0.57785 | 0.518398 | 0.518398 | 0 | 0.468723 | 0.084347 | 7,647 | 133 | 97 | 57.496241 | 0.319195 | 0.001308 | 0 | 0.232143 | 0 | 0 | 0.002756 | 0 | 0 | 1 | 0.709618 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
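Each entry in the tables above is a monochrome bitmap: 32 bytes per 16x16 glyph (two bytes per row) and 128 bytes per 32x32 weather icon (four bytes per row). A small sketch that prints a 16x16 glyph as text, assuming a row-major, MSB-first layout in which bit 15 of each row is the leftmost pixel (adjust if the target display driver packs bits differently):

def draw_glyph_16x16(data, on='#', off='.'):
    # data: 32 bytes, rows top to bottom, two bytes per row, MSB = leftmost pixel
    for row in range(16):
        bits = (data[2 * row] << 8) | data[2 * row + 1]
        print(''.join(on if bits & (1 << (15 - col)) else off for col in range(16)))

# Example: draw_glyph_16x16(hanzi_16x16['晴'])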
4302e2e4c620a2a7f778d03a55a596dbdc06cec5 | 26,510 | py | Python | test.py | veraivan/othello-game | 018ae6654bdba70c117ce84302f1d16d062ccbe1 | [
"MIT"
] | null | null | null | test.py | veraivan/othello-game | 018ae6654bdba70c117ce84302f1d16d062ccbe1 | [
"MIT"
] | null | null | null | test.py | veraivan/othello-game | 018ae6654bdba70c117ce84302f1d16d062ccbe1 | [
"MIT"
] | null | null | null | import time
from minimax import entrenar_nuevo_agente, minimax, minimax_alfa_beta, validacion_agente, AgenteRL
from gui.figuras import Tablero
def tests():
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(2) vs Alfabeta(2)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(2, 2)
print("Duracion media turno Minimax(2): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(2): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(2): ", victorias_minimax)
print("Victorias de Alfabeta(2)", victorias_alfabeta)
print("Empates", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(2) vs Alfabeta(3)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(2, 3)
print("Duracion media turno Minimax(2): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(3): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(2): ", victorias_minimax)
print("Victorias de Alfabeta(3)", victorias_alfabeta)
print("Empates", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(2) vs Alfabeta(4)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(2, 4)
print("Duracion media turno Minimax(2): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(4): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(2): ", victorias_minimax)
print("Victorias de Alfabeta(6)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(3) vs Alfabeta(2)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(3, 2)
print("Duracion media turno Minimax(3): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(2): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(3): ", victorias_minimax)
print("Victorias de Alfabeta(2)", victorias_alfabeta)
print("Empates", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(3) vs Alfabeta(3)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(3, 3)
print("Duracion media turno Minimax(3): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(3): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(3): ", victorias_minimax)
print("Victorias de Alfabeta(3)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(3) vs Alfabeta(4)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(3, 4)
print("Duracion media turno Minimax(3): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(4): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(3): ", victorias_minimax)
print("Victorias de Alfabeta(4)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(4) vs Alfabeta(3)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(4, 3)
print("Duracion media turno Minimax(4): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(3): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(4): ", victorias_minimax)
print("Victorias de Alfabeta(3)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(4) vs Alfabeta(4)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(4, 4)
print("Duracion media turno Minimax(4): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(4): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(4): ", victorias_minimax)
print("Victorias de Alfabeta(4)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(4) vs Alfabeta(5)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(4, 5)
print("Duracion media turno Minimax(4): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(5): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(4): ", victorias_minimax)
print("Victorias de Alfabeta(5)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(5) vs Alfabeta(3)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(5, 3)
print("Duracion media turno Minimax(5): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(3): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(5): ", victorias_minimax)
print("Victorias de Alfabeta(3)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(5) vs Alfabeta(4)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(5, 4)
print("Duracion media turno Minimax(5): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(4): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(5): ", victorias_minimax)
print("Victorias de Alfabeta(4)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(5) vs Alfabeta(5)")
duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates = minimax_vs_alfabeta(5, 5)
print("Duracion media turno Minimax(5): ", duracion_media_turno_minimax)
print("Duracion media turno Alfabeta(5): ", duracion_media_turno_alfabeta)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(5): ", victorias_minimax)
print("Victorias de Alfabeta(5)", victorias_alfabeta)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
agente = entrenar_nuevo_agente()
print("Minimax(2) vs Agente RL")
duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates = minimax_vs_agenteRL(False, 2, agente)
print("Duracion media turno Minimax(2): ", duracion_media_turno_minimax)
print("Duracion media turno Agente RL: ", duracion_media_turno_agenterl)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(2): ", victorias_minimax)
print("Victorias de Agente RL", victorias_agenterl)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(3) vs Agente RL")
duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates = minimax_vs_agenteRL(False, 3, agente)
print("Duracion media turno Minimax(3): ", duracion_media_turno_minimax)
print("Duracion media turno Agente RL: ", duracion_media_turno_agenterl)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(3): ", victorias_minimax)
print("Victorias de Agente RL", victorias_agenterl)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Minimax(4) vs Agente RL")
duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates = minimax_vs_agenteRL(False, 4, agente)
print("Duracion media turno Minimax(4): ", duracion_media_turno_minimax)
print("Duracion media turno Agente RL: ", duracion_media_turno_agenterl)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Minimax(4): ", victorias_minimax)
print("Victorias de Agente RL", victorias_agenterl)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Alfabeta(2) vs Agente RL")
duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates = minimax_vs_agenteRL(True, 2, agente)
print("Duracion media turno Alfabeta(2): ", duracion_media_turno_minimax)
print("Duracion media turno Agente RL: ", duracion_media_turno_agenterl)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Alfabeta(2): ", victorias_minimax)
print("Victorias de Agente RL", victorias_agenterl)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Alfabeta(3) vs Agente RL")
duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates = minimax_vs_agenteRL(True, 3, agente)
print("Duracion media turno Alfabeta(3): ", duracion_media_turno_minimax)
print("Duracion media turno Agente RL: ", duracion_media_turno_agenterl)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Alfabeta(3): ", victorias_minimax)
print("Victorias de Agente RL", victorias_agenterl)
print("Empates: ", empates)
print()
print("--------------------------------------------------------------------------------------------------------------------------------------------")
print("Alfabeta(4) vs Agente RL")
duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates = minimax_vs_agenteRL(True, 4, agente)
print("Duracion media turno Alfabeta(4): ", duracion_media_turno_minimax)
print("Duracion media turno Agente RL: ", duracion_media_turno_agenterl)
print("Duracion media partida: ", duracion_media_partida)
print("Victorias de Alfabeta(4): ", victorias_minimax)
print("Victorias de Agente RL", victorias_agenterl)
print("Empates: ", empates)
print()
print()
print(
"--------------------------------------------------------------------------------------------------------------------------------------------")
print(
"--------------------------------------------------------------------------------------------------------------------------------------------")
print('Entrenamiento Agente RL')
print(
"--------------------------------------------------------------------------------------------------------------------------------------------")
print('Con 50 ciclos y contrincante aleatorio:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(50, 1)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 100 ciclos y contrincante aleatorio:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(100, 1)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 150 ciclos y contrincante aleatorio:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(150, 1)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 200 ciclos y contrincante aleatorio:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(200, 1)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 250 ciclos y contrincante aleatorio:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(250, 1)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 300 ciclos y contrincante aleatorio:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(300, 1)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
# print('Con 350 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(350, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 400 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(400, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 450 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(450, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 500 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(500, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 550 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(550, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 600 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(600, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 650 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(650, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
# print()
# print('Con 700 ciclos y contrincante aleatorio:')
# tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(700, 1)
# print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
# print('Tasa de victorias:', tasa_victorias)
# print('Tasa de derrotas:', tasa_derrotas)
# print('Tasa de empates:', tasa_empates)
print()
print(
"--------------------------------------------------------------------------------------------------------------------------------------------")
print('Con 50 ciclos y contrincante Minimax:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(50, 2)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 100 ciclos y contrincante Minimax:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(100, 2)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 150 ciclos y contrincante Minimax:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(150, 2)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 200 ciclos y contrincante Minimax:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(200, 2)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 250 ciclos y contrincante Minimax:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(250, 2)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
print('Con 300 ciclos y contrincante Minimax:')
tiempo_entrenamiento, tasa_victorias, tasa_derrotas, tasa_empates = evaluacion_agente(300, 2)
print('Tiempo de entrenamiento: ', tiempo_entrenamiento)
print('Tasa de victorias:', tasa_victorias)
print('Tasa de derrotas:', tasa_derrotas)
print('Tasa de empates:', tasa_empates)
print()
def minimax_vs_alfabeta(n_minimax, n_alfabeta):
tiempo_individual_minimax = 0
tiempo_individual_alfabeta = 0
duracion_partida = 0
victorias_minimax = 0
victorias_alfabeta = 0
empates = 0
for i in range(100):
tablero = Tablero()
turno = -1
jugador_minimax = 1
jugador_alfabeta = -1
inicio_juego = time.time()
print(i)
while not tablero.finDeJuego():
if turno == 1:
inicio_turno = time.time()
minimax.N_LIMIT = n_minimax
movimiento = minimax(tablero, n_minimax, jugador_minimax)
if movimiento:
x, y = movimiento
tablero.colocar_ficha_nueva(x, y, jugador_minimax)
fin_turno = time.time()
tiempo_individual_minimax += (fin_turno - inicio_turno)
turno = -1
elif turno == -1:
inicio_turno = time.time()
minimax.N_LIMIT = n_alfabeta
movimiento = minimax_alfa_beta(tablero, n_alfabeta, jugador_alfabeta)
if movimiento:
x, y = movimiento
tablero.colocar_ficha_nueva(x, y, jugador_alfabeta)
fin_turno = time.time()
tiempo_individual_alfabeta += (fin_turno - inicio_turno)
turno = 1
fin_juego = time.time()
duracion_partida += (fin_juego - inicio_juego)
result = tablero.calcular_resultado()
if result == jugador_minimax:
victorias_minimax += 1
elif result == jugador_alfabeta:
victorias_alfabeta += 1
else:
empates += 1
duracion_media_turno_minimax = tiempo_individual_minimax/100
duracion_media_turno_alfabeta = tiempo_individual_alfabeta/100
duracion_media_partida = duracion_partida/100
return duracion_media_turno_minimax, duracion_media_turno_alfabeta, duracion_media_partida, victorias_minimax, victorias_alfabeta, empates
def minimax_vs_agenteRL(poda, n, agente):
tiempo_individual_minimax = 0
tiempo_individual_agenterl = 0
duracion_partida = 0
victorias_minimax = 0
victorias_agenterl = 0
empates = 0
for i in range(100):
tablero = Tablero()
turno = 1
jugador_minimax = 1
jugador_agenterl = -1
inicio_juego = time.time()
while not tablero.finDeJuego():
if turno == 1:
inicio_turno = time.time()
if poda == False:
movimiento = minimax(tablero, n, jugador_minimax)
if movimiento:
x, y = movimiento
tablero.colocar_ficha_nueva(x, y, jugador_minimax)
else:
movimiento = minimax_alfa_beta(tablero, n, jugador_minimax)
if movimiento:
x, y = movimiento
tablero.colocar_ficha_nueva(x, y, jugador_minimax)
fin_turno = time.time()
tiempo_individual_minimax += (fin_turno - inicio_turno)
turno = -1
elif turno == -1:
inicio_turno = time.time()
agente.tablero = tablero
agente.jugar(jugador_agenterl)
fin_turno = time.time()
tiempo_individual_agenterl += (fin_turno - inicio_turno)
turno = 1
fin_juego = time.time()
duracion_partida += (fin_juego - inicio_juego)
result = tablero.calcular_resultado()
if result == jugador_minimax:
victorias_minimax += 1
elif result == jugador_agenterl:
victorias_agenterl += 1
else:
empates += 1
duracion_media_turno_minimax = tiempo_individual_minimax/100
duracion_media_turno_agenterl = tiempo_individual_agenterl/100
duracion_media_partida = duracion_partida/100
return duracion_media_turno_minimax, duracion_media_turno_agenterl, duracion_media_partida, victorias_minimax, victorias_agenterl, empates
def evaluacion_agente(ciclos_entrenamiento, jugador):
n = 5
total_tasa_victorias = 0
total_tasa_derrotas = 0
total_tasa_empates = 0
total_tiempo_entrenamiento = 0
for i in range(n):
print(i)
inicio_entrenamiento = time.time()
agente = entrenar_nuevo_agente(ciclos_entrenamiento, jugador)
fin_entrenamiento = time.time()
tiempo_entrenamiento = fin_entrenamiento - inicio_entrenamiento
tasa_victorias, tasa_derrotas, tasa_empates = validacion_agente(agente, 50)
total_tasa_victorias += tasa_victorias
total_tasa_derrotas += tasa_derrotas
total_tasa_empates += tasa_empates
total_tiempo_entrenamiento += tiempo_entrenamiento
return total_tiempo_entrenamiento/n, total_tasa_victorias/n, total_tasa_derrotas/n, total_tasa_empates/n
if __name__ == "__main__":
tests()
| 56.52452 | 175 | 0.628593 | 2,762 | 26,510 | 5.753077 | 0.032947 | 0.142354 | 0.131403 | 0.086532 | 0.925677 | 0.922404 | 0.90516 | 0.900503 | 0.895091 | 0.889239 | 0 | 0.014451 | 0.167333 | 26,510 | 468 | 176 | 56.645299 | 0.7054 | 0.100754 | 0 | 0.697733 | 0 | 0 | 0.319051 | 0.129504 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010076 | false | 0 | 0.007557 | 0 | 0.025189 | 0.612091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
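The benchmark above compares plain minimax against its alpha-beta variant at several depths and times each move and game. For orientation, a textbook alpha-beta skeleton is sketched below; it is generic over caller-supplied callbacks and deliberately does not reuse the project's Tablero/minimax API, whose exact signatures are not shown here.

import math


def alpha_beta(state, depth, alpha, beta, maximizing,
               moves_fn, apply_fn, eval_fn, terminal_fn):
    # Returns (score, best_move). The four *_fn callbacks are placeholders the
    # caller supplies: legal-move generation, move application (returning a new
    # state), a heuristic evaluation and a terminal test.
    if depth == 0 or terminal_fn(state):
        return eval_fn(state), None
    best_move = None
    if maximizing:
        value = -math.inf
        for move in moves_fn(state):
            score, _ = alpha_beta(apply_fn(state, move), depth - 1, alpha, beta,
                                  False, moves_fn, apply_fn, eval_fn, terminal_fn)
            if score > value:
                value, best_move = score, move
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent already has a better option elsewhere
                break
        return value, best_move
    value = math.inf
    for move in moves_fn(state):
        score, _ = alpha_beta(apply_fn(state, move), depth - 1, alpha, beta,
                              True, moves_fn, apply_fn, eval_fn, terminal_fn)
        if score < value:
            value, best_move = score, move
        beta = min(beta, value)
        if beta <= alpha:       # maximizer already has a better option elsewhere
            break
    return value, best_move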
4306b63c16e2728b36dd08976955c01e1acebdd2 | 233 | py | Python | godity/core/__init__.py | samueldev45/GodityEngine | ccc3e04ec711e9a2c30f1ee59e9e32d646a3d557 | [
"MIT"
] | 8 | 2021-01-25T19:23:27.000Z | 2021-07-19T14:31:54.000Z | godity/core/__init__.py | samueldev45/GodityEngine | ccc3e04ec711e9a2c30f1ee59e9e32d646a3d557 | [
"MIT"
] | null | null | null | godity/core/__init__.py | samueldev45/GodityEngine | ccc3e04ec711e9a2c30f1ee59e9e32d646a3d557 | [
"MIT"
] | 2 | 2021-01-25T19:26:15.000Z | 2021-08-07T00:10:02.000Z | from godity.core.App import *
from godity.core.Window import *
from godity.core.Entity import *
from godity.core.Component import *
from godity.core.Scene import *
from godity.core.Timer import *
from godity.core.Layer import * | 33.285714 | 36 | 0.76824 | 35 | 233 | 5.114286 | 0.314286 | 0.391061 | 0.547486 | 0.670391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141631 | 233 | 7 | 37 | 33.285714 | 0.895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4324ddd743654eba8ce9a7caed9ee5a3c1d43c50 | 175 | py | Python | pyperp/providers/__init__.py | DeveloperInProgress/PyPerp | 1a307e564a45c85f2348af721f896677c99232fe | [
"MIT"
] | 9 | 2021-09-30T11:03:19.000Z | 2022-01-27T16:21:41.000Z | pyperp/providers/__init__.py | DeveloperInProgress/PyPerp | 1a307e564a45c85f2348af721f896677c99232fe | [
"MIT"
] | 3 | 2021-10-10T10:25:54.000Z | 2021-12-13T21:28:05.000Z | pyperp/providers/__init__.py | DeveloperInProgress/PyPerp | 1a307e564a45c85f2348af721f896677c99232fe | [
"MIT"
] | 1 | 2021-11-08T03:04:20.000Z | 2021-11-08T03:04:20.000Z | from pyperp.providers.apiProvider import *
from pyperp.providers.arbitrumRinkeby import *
from pyperp.providers.optimismKovan import *
from pyperp.providers.optimism import *
| 35 | 46 | 0.84 | 20 | 175 | 7.35 | 0.4 | 0.272109 | 0.517007 | 0.510204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091429 | 175 | 4 | 47 | 43.75 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
433fda0186ad93c1a3305d21f6923500defd950f | 131 | py | Python | kcve/datasets/__init__.py | lemmersj/ground-truth-or-daer | e4e7ba43123bb97ab1fa0242093b56a15c7ba54b | [
"MIT"
] | null | null | null | kcve/datasets/__init__.py | lemmersj/ground-truth-or-daer | e4e7ba43123bb97ab1fa0242093b56a15c7ba54b | [
"MIT"
] | null | null | null | kcve/datasets/__init__.py | lemmersj/ground-truth-or-daer | e4e7ba43123bb97ab1fa0242093b56a15c7ba54b | [
"MIT"
] | null | null | null | """__init__.py for the datasets folder."""
from .both_dataset import *
from .mturk_dataset import *
from .pascal_dataset import *
| 26.2 | 42 | 0.755725 | 18 | 131 | 5.111111 | 0.666667 | 0.423913 | 0.369565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137405 | 131 | 4 | 43 | 32.75 | 0.814159 | 0.274809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4a43127ce54366837fb7f9915945f1f86f917211 | 37,132 | py | Python | tests/test_libpurecoollink.py | zezuladp/libpurecoollink | a91362c57a0bc4126279c8c51c407dd713b08e10 | [
"Apache-2.0"
] | 215 | 2017-05-07T06:12:07.000Z | 2022-01-19T06:44:38.000Z | tests/test_libpurecoollink.py | zezuladp/libpurecoollink | a91362c57a0bc4126279c8c51c407dd713b08e10 | [
"Apache-2.0"
] | 36 | 2017-06-17T22:04:37.000Z | 2022-03-13T11:19:51.000Z | tests/test_libpurecoollink.py | zezuladp/libpurecoollink | a91362c57a0bc4126279c8c51c407dd713b08e10 | [
"Apache-2.0"
] | 60 | 2017-05-07T15:07:40.000Z | 2022-01-18T14:55:57.000Z | import unittest
from unittest import mock
from unittest.mock import Mock
import json
from libpurecoollink.dyson_device import NetworkDevice
from libpurecoollink.dyson_pure_cool_link import DysonPureCoolState, \
DysonEnvironmentalSensorState, DysonPureCoolLink
from libpurecoollink.dyson_pure_hotcool_link import DysonPureHotCoolLink
from libpurecoollink.dyson_pure_state import DysonPureHotCoolState
from libpurecoollink.const import FanMode, NightMode, FanSpeed, Oscillation, \
FanState, QualityTarget, StandbyMonitoring as SM, \
DYSON_PURE_COOL_LINK_DESK as Desk, DYSON_PURE_HOT_COOL_LINK_TOUR as Hot, \
HeatMode, HeatState, HeatTarget, FocusMode, TiltState, ResetFilter
from libpurecoollink.exceptions import DysonInvalidTargetTemperatureException
def _mocked_request_state(*args, **kwargs):
assert args[0] == '475/device-id-1/command'
msg = json.loads(args[1])
assert msg['msg'] in ['REQUEST-CURRENT-STATE',
'REQUEST-PRODUCT-ENVIRONMENT-CURRENT-SENSOR-DATA']
assert msg['time']
def _mocked_send_command(*args, **kwargs):
assert args[0] == '{0}/device-id-1/command'.format(Desk)
payload = json.loads(args[1])
if payload['msg'] == "STATE-SET":
assert payload['time']
assert payload['data']['fmod'] == "FAN"
assert payload['data']['nmod'] == "OFF"
assert payload['data']['oson'] == "ON"
assert payload['data']['rstf'] == "STET"
assert payload['data']['qtar'] == "0004"
assert payload['data']['fnsp'] == "0003"
assert payload['data']['sltm'] == "STET"
assert payload['data']['rhtm'] == "ON"
assert payload['mode-reason'] == "LAPP"
assert payload['msg'] == "STATE-SET"
assert args[2] == 1
def _mocked_send_command_hot(*args, **kwargs):
assert args[0] == '{0}/device-id-1/command'.format(Hot)
payload = json.loads(args[1])
if payload['msg'] == "STATE-SET":
assert payload['time']
assert payload['data']['fmod'] == "FAN"
assert payload['data']['nmod'] == "OFF"
assert payload['data']['oson'] == "ON"
assert payload['data']['rstf'] == "STET"
assert payload['data']['qtar'] == "0004"
assert payload['data']['fnsp'] == "0003"
assert payload['data']['sltm'] == "STET"
assert payload['data']['rhtm'] == "ON"
assert payload['data']['hmod'] == "HEAT"
assert payload['data']['hmax'] == "2980"
assert payload['data']['ffoc'] == "ON"
assert payload['mode-reason'] == "LAPP"
assert payload['msg'] == "STATE-SET"
assert args[2] == 1
def _mocked_send_command_rst_filter(*args, **kwargs):
assert args[0] == '475/device-id-1/command'
payload = json.loads(args[1])
if payload['msg'] == "STATE-SET":
assert payload['time']
assert payload['data']['fmod'] == "FAN"
assert payload['data']['nmod'] == "OFF"
assert payload['data']['oson'] == "ON"
assert payload['data']['rstf'] == "RSTF"
assert payload['data']['qtar'] == "0004"
assert payload['data']['fnsp'] == "0003"
assert payload['data']['sltm'] == "STET"
assert payload['data']['rhtm'] == "ON"
assert payload['mode-reason'] == "LAPP"
assert payload['msg'] == "STATE-SET"
assert args[2] == 1
def _mocked_send_command_timer(*args, **kwargs):
assert args[0] == '475/device-id-1/command'
payload = json.loads(args[1])
if payload['msg'] == "STATE-SET":
assert payload['time']
assert payload['data']['sltm'] == 10
assert args[2] == 1
def _mocked_send_command_timer_off(*args, **kwargs):
assert args[0] == '475/device-id-1/command'
payload = json.loads(args[1])
if payload['msg'] == "STATE-SET":
assert payload['time']
assert payload['data']['sltm'] == 0
assert args[2] == 1
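# The mocked callbacks above all validate the same wire format: a JSON "STATE-SET"
# message published to "<product_type>/<serial>/command" with a timestamp, a
# "mode-reason" of "LAPP" and a "data" dict of fan settings. As an illustrative
# sketch only (this helper is not part of libpurecoollink's API, and the exact
# timestamp format is an assumption), such a payload could be assembled like this:
def _example_state_set_payload(**data):
    import datetime
    return json.dumps({
        "msg": "STATE-SET",
        "time": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
        "mode-reason": "LAPP",
        "data": data,
    })
# e.g. _example_state_set_payload(fmod="FAN", nmod="OFF", oson="ON", fnsp="0003")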
def on_add_device(network_device):
pass
class TestLibPureCoolLink(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
@mock.patch('paho.mqtt.client.Client.loop_start')
@mock.patch('paho.mqtt.client.Client.connect')
def test_connect_device(self, mocked_connect, mocked_loop):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device.state_data_available()
device.sensor_data_available()
device.connection_callback(True)
device._add_network_device(network_device)
connected = device.auto_connect()
self.assertTrue(connected)
self.assertIsNone(device.state)
self.assertEqual(device.network_device, network_device)
self.assertEqual(mocked_connect.call_count, 1)
self.assertEqual(mocked_loop.call_count, 1)
device.disconnect()
@mock.patch('paho.mqtt.client.Client.loop_start')
@mock.patch('paho.mqtt.client.Client.connect')
def test_connect_device_with_config(self, mocked_connect, mocked_loop):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
device.connection_callback(True)
device.state_data_available()
device.sensor_data_available()
connected = device.connect("192.168.0.2")
self.assertTrue(connected)
self.assertIsNone(device.state)
self.assertEqual(device.network_device.name, "device-1")
self.assertEqual(device.network_device.address, "192.168.0.2")
self.assertEqual(device.network_device.port, 1883)
self.assertEqual(mocked_connect.call_count, 1)
self.assertEqual(mocked_loop.call_count, 1)
device.disconnect()
@mock.patch('paho.mqtt.client.Client.loop_stop')
@mock.patch('paho.mqtt.client.Client.loop_start')
@mock.patch('paho.mqtt.client.Client.connect')
def test_connect_device_with_config_failed(self,
mocked_connect,
mocked_loop_start,
mocked_loop_stop):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
device.connection_callback(False)
connected = device.connect("192.168.0.2")
self.assertFalse(connected)
self.assertIsNone(device.state)
self.assertEqual(device.network_device.name, "device-1")
self.assertEqual(device.network_device.address, "192.168.0.2")
self.assertEqual(device.network_device.port, 1883)
self.assertEqual(mocked_connect.call_count, 1)
self.assertEqual(mocked_loop_start.call_count, 1)
self.assertEqual(mocked_loop_stop.call_count, 1)
@mock.patch('libpurecoollink.zeroconf.Zeroconf.close')
def test_connect_device_fail(self, mocked_close_zeroconf):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
connected = device.auto_connect(retry=1, timeout=1)
self.assertFalse(connected)
self.assertEqual(mocked_close_zeroconf.call_count, 1)
def test_status_topic(self):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
self.assertEqual(device.status_topic, "475/device-id-1/status/current")
@mock.patch('socket.inet_ntoa')
def test_device_dyson_listener(self, mocked_ntoa):
listener = DysonPureCoolLink.DysonDeviceListener('serial-1',
on_add_device)
zeroconf = Mock()
listener.remove_service(zeroconf, "ptype", "serial-1")
info = Mock()
info.address = "192.168.0.1"
zeroconf.get_service_info = Mock()
zeroconf.get_service_info.return_value = info
listener.add_service(zeroconf, '_dyson_mqtt._tcp.local.',
'ptype_serial-1._dyson_mqtt._tcp.local.')
def test_on_connect(self):
client = Mock()
client.subscribe = Mock()
userdata = Mock()
userdata.status_topic = "ptype/serial/status/current"
DysonPureCoolLink.on_connect(client, userdata, None, 0)
userdata.connection_callback.assert_called_with(True)
self.assertEqual(userdata.connection_callback.call_count, 1)
client.subscribe.assert_called_with("ptype/serial/status/current")
def test_on_connect_failed(self):
userdata = Mock()
userdata.product_type = 'ptype'
userdata.serial = 'serial'
DysonPureCoolLink.on_connect(None, userdata, None, 1)
userdata.connection_callback.assert_called_with(False)
self.assertEqual(userdata.connection_callback.call_count, 1)
def test_add_message_listener(self):
def on_message():
pass
def on_message_2():
pass
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
device.add_message_listener(on_message)
assert len(device.callback_message) == 1
device.remove_message_listener(on_message)
assert len(device.callback_message) == 0
device.add_message_listener(on_message_2)
device.add_message_listener(on_message)
assert len(device.callback_message) == 2
device.clear_message_listener()
assert len(device.callback_message) == 0
def test_on_message(self):
def on_message(msg):
assert isinstance(msg, DysonPureCoolState)
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
device.add_message_listener(on_message)
msg = Mock()
payload = open("tests/data/state.json", "r").read()
msg.payload = Mock()
msg.payload.decode.return_value = payload
DysonPureCoolLink.on_message(None, device, msg)
def test_on_message_hot(self):
def on_message(msg):
assert isinstance(msg, DysonPureHotCoolState)
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/"
"70ZGysII1Ke1i0ZHakFH84DZuxsSQ4KTT2v"
"bCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "455"
})
device.add_message_listener(on_message)
msg = Mock()
payload = open("tests/data/state_hot.json", "r").read()
msg.payload = Mock()
msg.payload.decode.return_value = payload
DysonPureCoolLink.on_message(None, device, msg)
def test_on_message_sensor(self):
def on_message(msg):
assert isinstance(msg, DysonEnvironmentalSensorState)
userdata = Mock()
userdata.callback_message = [on_message]
msg = Mock()
payload = b'{"msg": "ENVIRONMENTAL-CURRENT-SENSOR-DATA","time":' \
b'"2017-06-17T23:05:49.001Z","data": '\
b'{"tact": "2967","hact": "0054","pact": "0004",' \
b'"vact": "0005","sltm": "0028"}}'
msg.payload = payload
DysonPureCoolLink.on_message(None, userdata, msg)
def test_on_message_with_unknown_message(self):
def on_message(msg):
# Should not be called
assert msg == 0
userdata = Mock()
userdata.callback_message = [on_message]
msg = Mock()
payload = b'{"msg": "ENVIRONMENTAL-CURRENT-SENSOR-DATAS","time":' \
b'"2017-06-17T23:05:49.001Z","data": ' \
b'{"tact": "2967","hact": "0054","pact": "0004",' \
b'"vact": "0005","sltm": "0028"}}'
msg.payload = payload
DysonPureCoolLink.on_message(None, userdata, msg)
def test_on_message_without_callback(self):
userdata = Mock()
userdata.callback_message = []
msg = Mock()
payload = b'{"msg":"CURRENT-STATE","time":' \
b'"2017-02-19T15:00:18.000Z","mode-reason":"LAPP",' \
b'"state-reason":"MODE","dial":"OFF","rssi":"-58",' \
b'"product-state":{"fmod":"AUTO","fnst":"FAN",' \
b'"fnsp":"AUTO","qtar":"0004","oson":"OFF","rhtm":"ON",' \
b'"filf":"2159","ercd":"02C0","nmod":"ON","wacd":"NONE"},' \
b'"scheduler":{"srsc":"cbd0","dstv":"0001","tzid":"0001"}}'
msg.payload = payload
DysonPureCoolLink.on_message(None, userdata, msg)
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_request_state)
@mock.patch('paho.mqtt.client.Client.connect')
def test_request_state(self, mocked_connect, mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device.connection_callback(True)
device._add_network_device(network_device)
device.state_data_available()
device.sensor_data_available()
connected = device.connect(None)
self.assertTrue(connected)
self.assertEqual(mocked_connect.call_count, 1)
self.assertEqual(mocked_publish.call_count, 2)
device.request_current_state()
self.assertEqual(mocked_publish.call_count, 3)
device.request_environmental_state()
self.assertEqual(mocked_publish.call_count, 4)
device.disconnect()
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_request_state)
@mock.patch('paho.mqtt.client.Client.connect')
def test_dont_request_state_if_not_connected(self, mocked_connect,
mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device.connection_callback(False)
device._add_network_device(network_device)
connected = device.connect(None, "192.168.0.2")
self.assertFalse(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.request_current_state()
self.assertEqual(mocked_publish.call_count, 0)
device.request_environmental_state()
self.assertEqual(mocked_publish.call_count, 0)
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_send_command)
@mock.patch('paho.mqtt.client.Client.connect')
def test_set_configuration(self, mocked_connect, mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": Desk
})
network_device = NetworkDevice('device-1', 'host', 1111)
device._add_network_device(network_device)
device._current_state = DysonPureCoolState(
open("tests/data/state.json", "r").read())
device.connection_callback(True)
device.state_data_available()
device.sensor_data_available()
connected = device.auto_connect()
self.assertTrue(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.set_configuration(fan_mode=FanMode.FAN,
oscillation=Oscillation.OSCILLATION_ON,
fan_speed=FanSpeed.FAN_SPEED_3,
night_mode=NightMode.NIGHT_MODE_OFF,
quality_target=QualityTarget.QUALITY_NORMAL,
standby_monitoring=SM.STANDBY_MONITORING_ON
)
self.assertEqual(mocked_publish.call_count, 3)
self.assertEqual(device.__repr__(),
"DysonPureCoolLink(serial=device-id-1,active=True,"
"name=device-1,version=21.03.08,auto_update=True,"
"new_version_available=False,product_type=469,"
"network_device=NetworkDevice(name=device-1,"
"address=host,port=1111))")
device.disconnect()
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_send_command_hot)
@mock.patch('paho.mqtt.client.Client.connect')
def test_set_configuration_hot(self, mocked_connect, mocked_publish):
device = DysonPureHotCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": Hot
})
network_device = NetworkDevice('device-1', 'host', 1111)
device._add_network_device(network_device)
device._current_state = DysonPureCoolState(
open("tests/data/state_hot.json", "r").read())
device.connection_callback(True)
device.state_data_available()
device.sensor_data_available()
connected = device.auto_connect()
self.assertTrue(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.set_configuration(fan_mode=FanMode.FAN,
oscillation=Oscillation.OSCILLATION_ON,
fan_speed=FanSpeed.FAN_SPEED_3,
night_mode=NightMode.NIGHT_MODE_OFF,
quality_target=QualityTarget.QUALITY_NORMAL,
standby_monitoring=SM.STANDBY_MONITORING_ON,
heat_mode=HeatMode.HEAT_ON,
focus_mode=FocusMode.FOCUS_ON,
heat_target=HeatTarget.celsius(25)
)
self.assertEqual(mocked_publish.call_count, 3)
self.assertEqual(device.__repr__(),
"DysonPureHotCoolLink(serial=device-id-1,active=True,"
"name=device-1,version=21.03.08,auto_update=True,"
"new_version_available=False,product_type=455,"
"network_device=NetworkDevice(name=device-1,"
"address=host,port=1111))")
device.disconnect()
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_send_command_rst_filter)
@mock.patch('paho.mqtt.client.Client.connect')
def test_set_configuration_rst_filter(self, mocked_connect,
mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device._add_network_device(network_device)
device._current_state = DysonPureCoolState(
open("tests/data/state.json", "r").read())
device.connection_callback(True)
device.state_data_available()
device.sensor_data_available()
connected = device.auto_connect()
self.assertTrue(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.set_configuration(fan_mode=FanMode.FAN,
oscillation=Oscillation.OSCILLATION_ON,
fan_speed=FanSpeed.FAN_SPEED_3,
night_mode=NightMode.NIGHT_MODE_OFF,
quality_target=QualityTarget.QUALITY_NORMAL,
standby_monitoring=SM.STANDBY_MONITORING_ON,
reset_filter=ResetFilter.RESET_FILTER
)
self.assertEqual(mocked_publish.call_count, 3)
self.assertEqual(device.__repr__(),
"DysonPureCoolLink(serial=device-id-1,active=True,"
"name=device-1,version=21.03.08,auto_update=True,"
"new_version_available=False,product_type=475,"
"network_device=NetworkDevice(name=device-1,"
"address=host,port=1111))")
device.disconnect()
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_send_command_timer)
@mock.patch('paho.mqtt.client.Client.connect')
def test_set_configuration_timer(self, mocked_connect, mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device._add_network_device(network_device)
device._current_state = DysonPureCoolState(
open("tests/data/state.json", "r").read())
device.connection_callback(True)
device.state_data_available()
device.sensor_data_available()
connected = device.auto_connect()
self.assertTrue(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.set_configuration(sleep_timer=10)
self.assertEqual(mocked_publish.call_count, 3)
self.assertEqual(device.__repr__(),
"DysonPureCoolLink(serial=device-id-1,active=True,"
"name=device-1,version=21.03.08,auto_update=True,"
"new_version_available=False,product_type=475,"
"network_device=NetworkDevice(name=device-1,"
"address=host,port=1111))")
device.disconnect()
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_send_command_timer_off)
@mock.patch('paho.mqtt.client.Client.connect')
def test_set_configuration_timer_off(self, mocked_connect, mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device._add_network_device(network_device)
device._current_state = DysonPureCoolState(
open("tests/data/state.json", "r").read())
device.connection_callback(True)
device.state_data_available()
device.sensor_data_available()
connected = device.auto_connect()
self.assertTrue(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.set_configuration(sleep_timer=0)
self.assertEqual(mocked_publish.call_count, 3)
self.assertEqual(device.__repr__(),
"DysonPureCoolLink(serial=device-id-1,active=True,"
"name=device-1,version=21.03.08,auto_update=True,"
"new_version_available=False,product_type=475,"
"network_device=NetworkDevice(name=device-1,"
"address=host,port=1111))")
device.disconnect()
@mock.patch('paho.mqtt.client.Client.publish',
side_effect=_mocked_send_command)
@mock.patch('paho.mqtt.client.Client.connect')
def test_dont_set_configuration_if_not_connected(self, mocked_connect,
mocked_publish):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
network_device = NetworkDevice('device-1', 'host', 1111)
device._add_network_device(network_device)
device._current_state = DysonPureCoolState(
open("tests/data/state.json", "r").read())
device.connection_callback(False)
connected = device.auto_connect()
self.assertFalse(connected)
self.assertEqual(mocked_connect.call_count, 1)
device.set_configuration(fan_mode=FanMode.FAN,
oscillation=Oscillation.OSCILLATION_ON,
fan_speed=FanSpeed.FAN_SPEED_3,
night_mode=NightMode.NIGHT_MODE_OFF)
self.assertEqual(mocked_publish.call_count, 0)
self.assertEqual(device.__repr__(),
"DysonPureCoolLink(serial=device-id-1,active=True,"
"name=device-1,version=21.03.08,auto_update=True,"
"new_version_available=False,product_type=475,"
"network_device=NetworkDevice(name=device-1,"
"address=host,port=1111))")
def test_network_device(self):
device = NetworkDevice("device", "192.168.1.1", "8090")
self.assertEqual(device.name, "device")
self.assertEqual(device.address, "192.168.1.1")
self.assertEqual(device.port, "8090")
self.assertEqual(device.__repr__(),
"NetworkDevice(name=device,address=192.168.1.1,"
"port=8090)")
def test_dyson_state(self):
dyson_state = DysonPureCoolState(
open("tests/data/state.json", "r").read())
self.assertEqual(dyson_state.fan_mode, FanMode.AUTO.value)
self.assertEqual(dyson_state.fan_state, FanState.FAN_ON.value)
self.assertEqual(dyson_state.night_mode, NightMode.NIGHT_MODE_ON.value)
self.assertEqual(dyson_state.speed, FanSpeed.FAN_SPEED_AUTO.value)
self.assertEqual(dyson_state.oscillation,
Oscillation.OSCILLATION_OFF.value)
self.assertEqual(dyson_state.filter_life, '2087')
self.assertEqual(dyson_state.__repr__(),
"DysonPureCoolState(fan_mode=AUTO,fan_state=FAN,"
"night_mode=ON,speed=AUTO,oscillation=OFF,"
"filter_life=2087,quality_target=0004,"
"standby_monitoring=ON)")
self.assertEqual(dyson_state.quality_target,
QualityTarget.QUALITY_NORMAL.value)
self.assertEqual(dyson_state.standby_monitoring,
SM.STANDBY_MONITORING_ON.value)
def test_dyson_state_hot(self):
dyson_state = DysonPureHotCoolState(
open("tests/data/state_hot.json", "r").read())
self.assertEqual(dyson_state.fan_mode, FanMode.AUTO.value)
self.assertEqual(dyson_state.fan_state, FanState.FAN_ON.value)
self.assertEqual(dyson_state.night_mode, NightMode.NIGHT_MODE_ON.value)
self.assertEqual(dyson_state.speed, FanSpeed.FAN_SPEED_AUTO.value)
self.assertEqual(dyson_state.oscillation,
Oscillation.OSCILLATION_OFF.value)
self.assertEqual(dyson_state.filter_life, '2087')
self.assertEqual(dyson_state.heat_mode, HeatMode.HEAT_ON.value)
self.assertEqual(dyson_state.heat_state, HeatState.HEAT_STATE_ON.value)
self.assertEqual(dyson_state.tilt, TiltState.TILT_FALSE.value)
self.assertEqual(dyson_state.focus_mode, FocusMode.FOCUS_ON.value)
self.assertEqual(dyson_state.heat_target, '2950')
self.assertEqual(dyson_state.__repr__(),
"DysonHotCoolState(fan_mode=AUTO,fan_state=FAN,"
"night_mode=ON,speed=AUTO,oscillation=OFF,"
"filter_life=2087,quality_target=0004,"
"standby_monitoring=ON,tilt=OK,focus_mode=ON,"
"heat_mode=HEAT,heat_target=2950,heat_state=HEAT)")
self.assertEqual(dyson_state.quality_target,
QualityTarget.QUALITY_NORMAL.value)
self.assertEqual(dyson_state.standby_monitoring,
SM.STANDBY_MONITORING_ON.value)
def test_sensor_state(self):
sensor_state = DysonEnvironmentalSensorState(
open("tests/data/sensor.json", "r").read())
self.assertEqual(sensor_state.sleep_timer, 28)
self.assertEqual(sensor_state.dust, 4)
self.assertEqual(sensor_state.humidity, 54)
self.assertEqual(sensor_state.temperature, 296.7)
self.assertEqual(sensor_state.volatil_organic_compounds, 5)
self.assertEqual(sensor_state.__repr__(),
"DysonEnvironmentalSensorState(humidity=54,"
"air quality=5,temperature=296.7,"
"dust=4,sleep_timer=28)")
def test_sensor_state_sleep_timer_off(self):
sensor_state = DysonEnvironmentalSensorState(
open("tests/data/sensor_sltm_off.json", "r").read())
self.assertEqual(sensor_state.sleep_timer, 0)
self.assertEqual(sensor_state.dust, 4)
self.assertEqual(sensor_state.humidity, 54)
self.assertEqual(sensor_state.temperature, 296.7)
self.assertEqual(sensor_state.volatil_organic_compounds, 5)
def test_heat_target_celsius(self):
self.assertEqual(HeatTarget.celsius(25), "2980")
with self.assertRaises(DysonInvalidTargetTemperatureException) as ex:
HeatTarget.celsius(38)
invalid_target_exception = ex.exception
self.assertEqual(invalid_target_exception.temperature_unit,
DysonInvalidTargetTemperatureException.CELSIUS)
self.assertEqual(invalid_target_exception.current_value, 38)
self.assertEqual(invalid_target_exception.__repr__(),
"38 is not a valid temperature target. "
"It must be between 1 to 37 inclusive.")
def test_heat_target_fahrenheit(self):
self.assertEqual(HeatTarget.fahrenheit(77), "2980")
with self.assertRaises(DysonInvalidTargetTemperatureException) as ex:
HeatTarget.fahrenheit(99)
invalid_target_exception = ex.exception
self.assertEqual(invalid_target_exception.temperature_unit,
DysonInvalidTargetTemperatureException.FAHRENHEIT)
self.assertEqual(invalid_target_exception.current_value, 99)
self.assertEqual(invalid_target_exception.__repr__(),
"99 is not a valid temperature target. "
"It must be between 34 to 98 inclusive.")
def test_device_connected(self):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
device.connected = True
self.assertTrue(device.connected)
device.connected = False
self.assertFalse(device.connected)
def test_environment_state(self):
device = DysonPureCoolLink({
"Active": True,
"Serial": "device-id-1",
"Name": "device-1",
"ScaleUnit": "SU01",
"Version": "21.03.08",
"LocalCredentials": "1/aJ5t52WvAfn+z+fjDuef86kQDQPefbQ6/70ZGysII1K"
"e1i0ZHakFH84DZuxsSQ4KTT2vbCm7uYeTORULKLKQ==",
"AutoUpdate": True,
"NewVersionAvailable": False,
"ProductType": "475"
})
sensor_state = DysonEnvironmentalSensorState(
open("tests/data/sensor.json", "r").read())
device.environmental_state = sensor_state
self.assertEqual(device.environmental_state.dust, 4)
| 45.117861 | 79 | 0.591323 | 3,608 | 37,132 | 5.880266 | 0.081486 | 0.063631 | 0.016591 | 0.016968 | 0.832202 | 0.818816 | 0.795956 | 0.784455 | 0.754101 | 0.741092 | 0 | 0.041834 | 0.289292 | 37,132 | 822 | 80 | 45.172749 | 0.762107 | 0.000539 | 0 | 0.750327 | 0 | 0 | 0.230935 | 0.13786 | 0 | 0 | 0 | 0 | 0.226144 | 1 | 0.060131 | false | 0.006536 | 0.013072 | 0 | 0.07451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4a6c0c14e038e79c273aaf52c130fa92855951a9 | 17,075 | py | Python | intern/service/boss/v1/tests/test_volume.py | heliy/intern | 80d00f5c142f0ac5b76f6fa87419a9ffc50c243c | [
"Apache-2.0"
] | 15 | 2017-01-13T23:06:38.000Z | 2021-09-22T11:33:02.000Z | intern/service/boss/v1/tests/test_volume.py | heliy/intern | 80d00f5c142f0ac5b76f6fa87419a9ffc50c243c | [
"Apache-2.0"
] | 49 | 2017-04-26T13:21:26.000Z | 2021-11-16T14:03:58.000Z | intern/service/boss/v1/tests/test_volume.py | heliy/intern | 80d00f5c142f0ac5b76f6fa87419a9ffc50c243c | [
"Apache-2.0"
] | 18 | 2017-02-17T23:12:37.000Z | 2021-09-27T08:53:32.000Z | # Copyright 2016 The Johns Hopkins University Applied Physics Laboratory
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from intern.service.boss.v1.volume import VolumeService_1
from intern.service.boss import BaseVersion
from intern.service.boss.v1.volume import CacheMode
from intern.resource.boss.resource import ChannelResource
import blosc
import numpy
from requests import HTTPError, PreparedRequest, Response, Session
import unittest
from mock import patch, ANY
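# Tests for the Boss v1 volume service: cutout writes/reads, access modes, bounding boxes and id-in-region queries.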
class TestVolume_v1(unittest.TestCase):
def setUp(self):
self.vol = VolumeService_1()
self.chan = ChannelResource('chan', 'foo', 'bar', 'image', datatype='uint16')
self.anno_chan = ChannelResource('anno_chan', 'foo', 'bar', 'annotation', datatype='uint64', sources=['chan'])
@patch('requests.Session', autospec=True)
def test_create_cutout_success(self, mock_session):
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
mock_session.prepare_request.return_value = PreparedRequest()
fake_response = Response()
fake_response.status_code = 201
mock_session.send.return_value = fake_response
send_opts = {}
self.vol.create_cutout(
self.chan, resolution, x_range, y_range, z_range, time_range, data,
url_prefix, auth, mock_session, send_opts)
@patch('requests.Session', autospec=True)
def test_create_large_cutout_success(self, mock_session):
resolution = 0
x_range = [3000, 6000]
y_range = [3000, 6000]
z_range = [30, 63]
time_range = [10, 25]
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
mock_session.prepare_request.return_value = PreparedRequest()
fake_response = Response()
fake_response.status_code = 201
mock_session.send.return_value = fake_response
send_opts = {}
self.vol.create_cutout(
self.chan, resolution, x_range, y_range, z_range, time_range, data,
url_prefix, auth, mock_session, send_opts)
@patch('requests.Session', autospec=True)
def test_create_cutout_failure(self, mock_session):
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
mock_session.prepare_request.return_value = PreparedRequest()
fake_response = Response()
fake_response.status_code = 403
mock_session.send.return_value = fake_response
send_opts = {}
with self.assertRaises(HTTPError):
self.vol.create_cutout(
self.chan, resolution, x_range, y_range, z_range, time_range, data,
url_prefix, auth, mock_session, send_opts)
@patch('requests.Session', autospec=True)
def test_get_cutout_success(self, mock_session):
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
id_list = []
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
fake_prepped_req = PreparedRequest()
fake_prepped_req.headers = {}
mock_session.prepare_request.return_value = fake_prepped_req
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response = Response()
fake_response.status_code = 200
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
actual = self.vol.get_cutout(
self.chan, resolution, x_range, y_range, z_range, time_range, id_list,
url_prefix, auth, mock_session, send_opts)
numpy.testing.assert_array_equal(data, actual)
@patch('requests.Session', autospec=True)
def test_get_cutout_failure(self, mock_session):
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
id_list = []
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
fake_prepped_req = PreparedRequest()
fake_prepped_req.headers = {}
mock_session.prepare_request.return_value = fake_prepped_req
fake_response = Response()
fake_response.status_code = 403
mock_session.send.return_value = fake_response
send_opts = {}
with self.assertRaises(HTTPError):
actual = self.vol.get_cutout(
self.chan, resolution, x_range, y_range, z_range, time_range, id_list,
url_prefix, auth, mock_session, send_opts)
@patch('requests.Session', autospec=True)
def test_get_cutout_access_mode_defaults_no_cache_small_cutout(self, mock_session):
"""Ensure no-cache defaults to True."""
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
id_list = []
mock_session.prepare_request.return_value = PreparedRequest()
mock_session.prepare_request.return_value.headers = {}
fake_response = Response()
fake_response.status_code = 200
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
with patch.object(
BaseVersion, 'get_cutout_request', autospec=True, wraps=BaseVersion.get_cutout_request) as req_spy:
self.vol.get_cutout(self.chan, resolution, x_range, y_range, z_range,
time_range, id_list, url_prefix, auth, mock_session, send_opts)
req_spy.assert_called_with(ANY, ANY, 'GET', ANY, url_prefix, auth, resolution,
x_range, y_range, z_range, time_range, id_list=[], access_mode=CacheMode.no_cache)
self.assertEqual(1, req_spy.call_count)
@patch('requests.Session', autospec=True)
def test_get_cutout_access_mode_defaults_no_cache_large_cutout(self, mock_session):
"""Ensure no-cache defaults to True for all recursive calls generated
by get_cutout."""
resolution = 0
x_range = [20, 1045]
y_range = [50, 1075]
z_range = [30, 33]
time_range = [10, 11]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
id_list = []
mock_session.prepare_request.return_value = PreparedRequest()
mock_session.prepare_request.return_value.headers = {}
fake_response = Response()
fake_response.status_code = 200
data = numpy.random.randint(0, 3000, (1, 3, 1025, 1025), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
with patch.object(
BaseVersion, 'get_cutout_request', autospec=True, wraps=BaseVersion.get_cutout_request) as req_spy:
self.vol.get_cutout(self.chan, resolution, x_range, y_range, z_range,
time_range, id_list, url_prefix, auth, mock_session, send_opts)
req_spy.assert_called_with(ANY, ANY, 'GET', ANY, url_prefix, auth, resolution,
ANY, ANY, ANY, ANY, id_list=[], access_mode=CacheMode.no_cache)
# Verify that chunking occurred.
self.assertTrue(req_spy.call_count > 0)
@patch('requests.Session', autospec=True)
def test_get_cutout_access_mode_raw_small_cutout(self, mock_session):
"""Ensure the raw access mode is passed through for a small cutout."""
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
id_list = []
mock_session.prepare_request.return_value = PreparedRequest()
mock_session.prepare_request.return_value.headers = {}
fake_response = Response()
fake_response.status_code = 200
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
with patch.object(
BaseVersion, 'get_cutout_request', autospec=True, wraps=BaseVersion.get_cutout_request) as req_spy:
self.vol.get_cutout(self.chan, resolution, x_range, y_range, z_range,
time_range, id_list, url_prefix, auth, mock_session, send_opts, access_mode=CacheMode.raw)
req_spy.assert_called_with(ANY, ANY, 'GET', ANY, url_prefix, auth, resolution,
x_range, y_range, z_range, time_range, id_list=[], access_mode=CacheMode.raw)
self.assertEqual(1, req_spy.call_count)
@patch('requests.Session', autospec=True)
def test_get_cutout_access_mode_raw_large_cutout(self, mock_session):
"""Ensure the raw access mode is passed through for a large cutout."""
resolution = 0
x_range = [20, 1045]
y_range = [50, 1075]
z_range = [30, 33]
time_range = [10, 11]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
id_list = []
mock_session.prepare_request.return_value = PreparedRequest()
mock_session.prepare_request.return_value.headers = {}
fake_response = Response()
fake_response.status_code = 200
data = numpy.random.randint(0, 3000, (1, 3, 1025, 1025), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
with patch.object(
BaseVersion, 'get_cutout_request', autospec=True, wraps=BaseVersion.get_cutout_request) as req_spy:
self.vol.get_cutout(self.chan, resolution, x_range, y_range, z_range,
time_range, id_list, url_prefix, auth, mock_session, send_opts, access_mode=CacheMode.raw)
req_spy.assert_called_with(ANY, ANY, 'GET', ANY, url_prefix, auth, resolution,
x_range, y_range, z_range, time_range, id_list=[], access_mode=CacheMode.raw)
self.assertEqual(1, req_spy.call_count)
@patch('requests.Session', autospec=True)
def test_get_cutout_access_mode_cache_small_cutout(self, mock_session):
"""Ensure the cache access mode is passed through for a small cutout."""
resolution = 0
x_range = [20, 40]
y_range = [50, 70]
z_range = [30, 50]
time_range = [10, 25]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
id_list = []
mock_session.prepare_request.return_value = PreparedRequest()
mock_session.prepare_request.return_value.headers = {}
fake_response = Response()
fake_response.status_code = 200
data = numpy.random.randint(0, 3000, (15, 20, 20, 20), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
with patch.object(
BaseVersion, 'get_cutout_request', autospec=True, wraps=BaseVersion.get_cutout_request) as req_spy:
self.vol.get_cutout(self.chan, resolution, x_range, y_range, z_range,
time_range, id_list, url_prefix, auth, mock_session, send_opts, access_mode=CacheMode.cache)
req_spy.assert_called_with(ANY, ANY, 'GET', ANY, url_prefix, auth, resolution,
x_range, y_range, z_range, time_range, id_list=[], access_mode=CacheMode.cache)
self.assertEqual(1, req_spy.call_count)
@patch('requests.Session', autospec=True)
def test_get_cutout_access_mode_cache_large_cutout(self, mock_session):
"""Ensure the cache access mode is passed through for a large cutout."""
resolution = 0
x_range = [20, 1045]
y_range = [50, 1075]
z_range = [30, 33]
time_range = [10, 11]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
id_list = []
mock_session.prepare_request.return_value = PreparedRequest()
mock_session.prepare_request.return_value.headers = {}
fake_response = Response()
fake_response.status_code = 200
data = numpy.random.randint(0, 3000, (1, 3, 1025, 1025), numpy.uint16)
compressed_data = blosc.compress(data, typesize=16)
fake_response._content = compressed_data
mock_session.send.return_value = fake_response
send_opts = {}
with patch.object(
BaseVersion, 'get_cutout_request', autospec=True, wraps=BaseVersion.get_cutout_request) as req_spy:
self.vol.get_cutout(self.chan, resolution, x_range, y_range, z_range,
time_range, id_list, url_prefix, auth, mock_session, send_opts, access_mode=CacheMode.cache)
req_spy.assert_called_with(ANY, ANY, 'GET', ANY, url_prefix, auth, resolution,
x_range, y_range, z_range, time_range, id_list=[], access_mode=CacheMode.cache)
self.assertEqual(1, req_spy.call_count)
@patch('requests.Response', autospec=True)
@patch('requests.Session', autospec=True)
def test_get_bounding_box_success(self, mock_session, mock_resp):
resolution = 0
id = 44444
bb_type = 'loose'
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
send_opts = {}
fake_prepped_req = PreparedRequest()
fake_prepped_req.headers = {}
mock_session.prepare_request.return_value = fake_prepped_req
mock_session.send.return_value = mock_resp
mock_resp.status_code = 200
mock_resp.json.return_value = expected = {
'x_range': [0, 10],
'y_range': [0, 10],
'z_range': [0, 10],
't_range': [0, 10]
}
actual = self.vol.get_bounding_box(
self.anno_chan, resolution, id, bb_type,
url_prefix, auth, mock_session, send_opts)
self.assertEqual(expected, actual)
@patch('requests.Response', autospec=True)
@patch('requests.Session', autospec=True)
def test_get_ids_in_region_success(self, mock_session, mock_resp):
resolution = 0
x_range = [0, 100]
y_range = [10, 50]
z_range = [20, 42]
t_range = [0, 1]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
send_opts = {}
fake_prepped_req = PreparedRequest()
fake_prepped_req.headers = {}
mock_session.prepare_request.return_value = fake_prepped_req
mock_session.send.return_value = mock_resp
mock_resp.status_code = 200
mock_resp.json.return_value = { 'ids': ['1', '10'] }
actual = self.vol.get_ids_in_region(
self.anno_chan, resolution, x_range, y_range, z_range, t_range,
url_prefix, auth, mock_session, send_opts)
expected = [1, 10]
self.assertEqual(expected, actual)
@patch('requests.Session', autospec=True)
def test_get_ids_in_region_failure(self, mock_session):
resolution = 0
x_range = [0, 100]
y_range = [10, 50]
z_range = [20, 42]
t_range = [0, 1]
url_prefix = 'https://api.theboss.io'
auth = 'mytoken'
send_opts = {}
fake_prepped_req = PreparedRequest()
fake_prepped_req.headers = {}
mock_session.prepare_request.return_value = fake_prepped_req
fake_response = Response()
fake_response.status_code = 403
mock_session.send.return_value = fake_response
send_opts = {}
with self.assertRaises(HTTPError):
actual = self.vol.get_ids_in_region(
self.anno_chan, resolution, x_range, y_range, z_range, t_range,
url_prefix, auth, mock_session, send_opts)
| 40.654762 | 118 | 0.642987 | 2,167 | 17,075 | 4.777111 | 0.099677 | 0.065881 | 0.040572 | 0.0483 | 0.883887 | 0.881955 | 0.876932 | 0.863794 | 0.851623 | 0.837713 | 0 | 0.038936 | 0.253939 | 17,075 | 419 | 119 | 40.75179 | 0.773687 | 0.05101 | 0 | 0.866469 | 0 | 0 | 0.054792 | 0 | 0 | 0 | 0 | 0 | 0.053412 | 1 | 0.04451 | false | 0 | 0.026706 | 0 | 0.074184 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4a713330ff7a6c3620e3c32439c3134da231ee7b | 7,487 | py | Python | shared.py | ororopickpocket/Sovryn-smart-contracts | b3e0fda3bc3809697e647060f40af99c322866b2 | [
"Apache-2.0"
] | null | null | null | shared.py | ororopickpocket/Sovryn-smart-contracts | b3e0fda3bc3809697e647060f40af99c322866b2 | [
"Apache-2.0"
] | null | null | null | shared.py | ororopickpocket/Sovryn-smart-contracts | b3e0fda3bc3809697e647060f40af99c322866b2 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3
from munch import Munch
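# Deployment constants and per-network contract address maps, returned as Munch objects so callers can use attribute-style access.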
def Constants():
return Munch({
"ZERO_ADDRESS": "0x0000000000000000000000000000000000000000",
"ONE_ADDRESS": "0x0000000000000000000000000000000000000001",
"MAX_UINT": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
"ZERO_32": "0x0000000000000000000000000000000000000000000000000000000000000000",
"TINY_AMOUNT": 25 * 10**13
})
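# Hypothetical usage sketch (the key names are the real ones defined here; the
# variable name is illustrative only):
#   c = Constants()
#   c.ZERO_ADDRESS                       # "0x0000000000000000000000000000000000000000"
#   Addresses().kovan.WETHTokenAddress   # Kovan WETH address from the map below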
def Addresses():
return Munch.fromDict({
"development": {
"KyberContractAddress": "0x0000000000000000000000000000000000000000",
"WETHTokenAddress": "0x0b1ba0af832d7c05fd64161e0db78e85978e8082",
"BZRXTokenAddress": "0x0000000000000000000000000000000000000000",
"SAITokenAddress": "0x0000000000000000000000000000000000000000",
},
"ropsten": {
"ENSRegistry": "0x112234455c3a32fd11230c42e7bccd4a84e02010",
"ENSResolver": "0x9C4c3B509e47a298544d0fD0591B47550845e903",
"OracleNotifier": "0xe09011af509f72c46312ebabceabc7c5ea7e6991",
"KyberContractAddress": "0x818E6FECD516Ecc3849DAf6845e3EC868087B755",
"BZRXTokenAddress": "0xf8b0b6ee32a617beca665b6c5b241ac15b1acdd5",
"BZRXTokenAddressSale": "0x450e617b88366fde63c18880acbdeb35a5812eee",
"SovrynEtherAddress": "0xa3eBDf66e0292F1d5FD82Ae3fcd92551Ac9dB081",
"MultiSig": "0x35b94649Bd03D13eF08e999127351Cc52286473C",
"TokenizedRegistry": "0xd03eea21041a19672e451bcbb413ce8be72d0381",
"LoanTokenSettings": "0x633a8328ae5947FA5E173Cd5e2c8a838637939c3",
"LoanTokenSettingsLowerAdmin": "0xfC92Cf77FC3ef447F631a37E341c6803AdCEe622",
"WETHTokenAddress": "0xc778417e063141139fce010982780140aa0cd5ab",
"SAITokenAddress": "0xad6d458402f60fd3bd25163575031acdce07538d", # Kyber SAI
"WBTCTokenAddress": "0x95cc8d8f29d0f7fcc425e8708893e759d1599c97" # Kyber ENG
},
"kovan": {
"sovrynProtocol": "0xAbd9372723C735D426D0a760D047206Fe115ee6d", #"0x10fA193fB1d00e3C1033B0BB003AbB5f7a5595bB", #"0xD59bd0Cd1461605C31E1C88543E4DbA1Bf6fcaEC", #"0x14Ce6475946ee20e709042556Eda9B95673f47c0", #"0xCc3d7DF311Ba18DCD3dF09401f3C3E1ED1D52405", #"0x115338E77339d64b3d58181Aa9c0518df9D18022", #"0xa62236aB5825325d7a1F762c389608e84D38f17F",
"ENSRegistry": "0x9590A50Ee1043F8915FF72C0aCC2Dbc600080d36",
"ENSResolver": "0x44b92B8F27abAC2ebc9d0C4fa6fF0EEd4E98ba79",
"WethHelper": "0x3b5bDCCDFA2a0a1911984F203C19628EeB6036e0",
"SovrynProxy": "0x9009e85a687b55b5d6c314363c228803fad32d01",
"SovrynVault": "0xce069b35ae99762bee444c81dec1728aa99afd4b",
"OracleNotifier": "0xc406f51A23F28D6559e311010d3EcD8A07696a45",
"KyberContractAddress": "0x692f391bCc85cefCe8C237C01e1f636BbD70EA4D",
"BZRXTokenAddress": "0xe3e682A8Fc7EFec410E4099cc09EfCC0743C634a",
"SovrynEtherAddress": "0xd0a1e359811322d97991e03f863a0c30c2cf029c",
"MultiSig": "0x0000000000000000000000000000000000000000",
"TokenizedRegistry": "0xF1C87dD61BF8a4e21978487e2705D52AA687F97E",
"LoanTokenSettings": "0xa11A720bdAC34139EF17bD76dC30230777001bDc",
"LoanTokenSettingsLowerAdmin": "0xa1FB8F53678885D952dcdAeDf63E7fbf1F3e909f",
"PositionTokenSettingsV2": "0x9039aa76ec9d3a7c9dcec1ee008c7b9b1163f709",
"PositionTokenLogicV2_Initialize": "0x1665364b226e8aa9e545b613ccded1c4b0834fcf",
"WETHTokenAddress": "0xd0A1E359811322d97991E03f863a0C30C2cF029C",
"SAITokenAddress": "0xC4375B7De8af5a38a93548eb8453a498222C4fF2",
"DAITokenAddress": "0x4f96fe3b7a6cf9725f59d353f723c1bdb64ca6aa",
"CHAITokenAddress": "0x71DD45d9579A499B58aa85F50E5E3B241Ca2d10d",
"KNCTokenAddress": "0xad67cB4d63C9da94AcA37fDF2761AaDF780ff4a2",
"LINKTokenAddress": "0xd40390b1ce132ad0bc3765ad0ee42e04d4c52dd6",
},
"rinkeby": {
"OracleNotifier": "0xDF65BD1Bb78E93B533fd95e9Ce30775Dac023F35",
"KyberContractAddress": "0xF77eC7Ed5f5B9a5aee4cfa6FFCaC6A4C315BaC76",
"LoanTokenSettings": "0xebec45f9f4011faf1605a77bae0b4e5188068a1f",
"LoanTokenSettingsLowerAdmin": "0x47b2150f92e272db622ad3ce9a023c9e076354bc",
"BZRXTokenAddress": "0xb70ce29af9de22e28509cdcf3e0368b5a550548a",
"SovrynEtherAddress": "0xc778417e063141139fce010982780140aa0cd5ab",
"MultiSig": "0x0000000000000000000000000000000000000000",
"WETHTokenAddress": "0xc778417e063141139fce010982780140aa0cd5ab",
"DAITokenAddress": "0x5592ec0cfb4dbc12d3ab100b257153436a1f0fea", # Compound DAI
"REPTokenAddress": "0x6e894660985207feb7cf89faf048998c71e8ee89", # Compound REP
},
"mainnet": {
"ENSRegistry": "0x314159265dd8dbb310642f98f50c066173c1259b",
"ENSResolver": "0xD3ddcCDD3b25A8a7423B5bEe360a42146eb4Baf3",
"WethHelper": "0x3b5bDCCDFA2a0a1911984F203C19628EeB6036e0",
"SovrynProxy": "0x1cf226e9413addaf22412a2e182f9c0de44af002",
"SovrynVault": "0x8b3d70d628ebd30d4a2ea82db95ba2e906c71633",
"OracleNotifier": "0x6d20ea6fe6d67363684e22f1485712cfdccf177a",
"KyberContractAddress": "0x818e6fecd516ecc3849daf6845e3ec868087b755", # Mainnet (https://kyber.network/swap)
"KyberRegisterWallet": "0xECa04bB23612857650D727B8ed008f80952654ee",
"BZRXTokenAddress": "0x1c74cff0376fb4031cd7492cd6db2d66c3f2c6b9",
"BZRXTokenAddressSale": "0x0b12cf7964731f7190b74600fcdad9ba4cac870c",
"SovrynEtherAddress": "0x96CCe310096755f69594212d5D5fB5485577E7d1",
"MultiSig": "0x758dae5e06e11322c8be3463578150401cd31165",
"Timelock": "0xbb536eb24fb89b544d4bd9e9f1f34d9fd902bb96",
"TokenizedRegistry": "0xd8dc30d298ccf40042991cb4b96a540d8affe73a",
"LoanTokenSettings": "0x776fbb4dbfb4af02e9a72d64ea81453cb383874b",
"LoanTokenSettingsLowerAdmin": "0x95e92dce515e64ba90da7000b3554919784064bd",
"PositionTokenSettingsV2": "0xeD1e4EdF6C020efe4fc520cfEb4084aeBE969111",
"SovrynOracleHelper": "0xee14de2e67e1ec23c8561a6fad2635ff1b618db6",
"WETHTokenAddress": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
"SAITokenAddress": "0x89d24a6b4ccb1b6faa2625fe562bdd9a23260359",
"DAITokenAddress": "0x6b175474e89094c44da98b954eedeac495271d0f",
"CHAITokenAddress": "0x06AF07097C9Eeb7fD685c692751D5C66dB49c215",
"USDCTokenAddress": "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",
"WBTCTokenAddress": "0x2260fac5e5542a773aa44fbcfedf7c193bc2c599",
"BATTokenAddress": "0x0d8775f648430679a709e98d2b0cb6250d2887ef",
"KNCTokenAddress": "0xdd974d5c2e2928dea5f71b9825b8b646686bd200",
"MKRTokenAddress": "0x9f8f72aa9304c8b593d555f12ef6589cc3a579a2",
"REPTokenAddress": "0x1985365e9f78359a9b6ad760e32412f4a445e862",
"ZRXTokenAddress": "0xe41d2489571d322189246dafa5ebde1f4699f498",
"LINKTokenAddress": "0x514910771af9ca656af840dff83e8264ecf986ca",
"SUSDTokenAddress": "0x57ab1ec28d129707052df4df418d58a2d46d5f51", # <- proxy, actual -> "0x57Ab1E02fEE23774580C119740129eAC7081e9D3"
"USDTTokenAddress": "0xdac17f958d2ee523a2206206994597c13d831ec7",
}
})
| 68.688073 | 357 | 0.736877 | 225 | 7,487 | 24.493333 | 0.693333 | 0.003992 | 0.022863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.415756 | 0.181114 | 7,487 | 108 | 358 | 69.324074 | 0.483119 | 0.058769 | 0 | 0.076923 | 0 | 0 | 0.718368 | 0.546631 | 0 | 0 | 0.52033 | 0 | 0 | 1 | 0.019231 | true | 0 | 0.009615 | 0.019231 | 0.048077 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
438cfba7bcdc40ebf84a91f50beca35b0d0c1963 | 107 | py | Python | flambe/utils/__init__.py | ethan-asapp/flambe | 70257167058c7b82ee39f74167a6161bd264ad18 | [
"MIT"
] | 148 | 2019-08-29T21:19:03.000Z | 2022-03-18T06:13:53.000Z | flambe/utils/__init__.py | cle-ros/flambe | 0dc2f5b2b286694defe8abf450fe5be9ae12c097 | [
"MIT"
] | 108 | 2019-09-03T14:36:10.000Z | 2020-05-13T15:53:14.000Z | flambe/utils/__init__.py | cle-ros/flambe | 0dc2f5b2b286694defe8abf450fe5be9ae12c097 | [
"MIT"
] | 21 | 2019-09-08T14:09:45.000Z | 2020-12-27T04:12:33.000Z | from flambe.utils.config import generate_config_from_template
__all__ = ['generate_config_from_template']
| 26.75 | 61 | 0.859813 | 14 | 107 | 5.857143 | 0.571429 | 0.341463 | 0.439024 | 0.634146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074766 | 107 | 3 | 62 | 35.666667 | 0.828283 | 0 | 0 | 0 | 1 | 0 | 0.271028 | 0.271028 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
439cda4f35894dfce2b687b86aa7cad1543fc177 | 920 | py | Python | model/serialize/dump.py | toastisme/dials | 6bc8ababc33bfe334513677f8adb65c0e90003f3 | [
"BSD-3-Clause"
] | 58 | 2015-10-15T09:28:20.000Z | 2022-03-28T20:09:38.000Z | model/serialize/dump.py | toastisme/dials | 6bc8ababc33bfe334513677f8adb65c0e90003f3 | [
"BSD-3-Clause"
] | 1,741 | 2015-11-24T08:17:02.000Z | 2022-03-31T15:46:42.000Z | model/serialize/dump.py | toastisme/dials | 6bc8ababc33bfe334513677f8adb65c0e90003f3 | [
"BSD-3-Clause"
] | 45 | 2015-10-14T13:44:16.000Z | 2022-03-22T14:45:56.000Z | import pickle
def reflections(obj, outfile):
"""
Dump the given object to file
:param obj: The reflection list to dump
:param outfile: The output file name or file object
"""
if isinstance(outfile, str):
with open(outfile, "wb") as outfile:
pickle.dump(obj, outfile, pickle.HIGHEST_PROTOCOL)
# Otherwise assume the input is a file and write to it
else:
pickle.dump(obj, outfile, pickle.HIGHEST_PROTOCOL)
def reference(obj, outfile):
"""
Dump the given object to file
:param obj: The reference list to dump
:param outfile: The output file name or file object
"""
if isinstance(outfile, str):
with open(outfile, "wb") as outfile:
pickle.dump(obj, outfile, pickle.HIGHEST_PROTOCOL)
# Otherwise assume the input is a file and write to it
else:
pickle.dump(obj, outfile, pickle.HIGHEST_PROTOCOL)
| 26.285714 | 62 | 0.657609 | 126 | 920 | 4.769841 | 0.293651 | 0.099834 | 0.086522 | 0.133111 | 0.905158 | 0.905158 | 0.905158 | 0.905158 | 0.905158 | 0.905158 | 0 | 0 | 0.259783 | 920 | 34 | 63 | 27.058824 | 0.882526 | 0.381522 | 0 | 0.769231 | 0 | 0 | 0.007678 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
43afc2be0916d5ac01eb7a04fbbe235af317a756 | 127,287 | py | Python | 002-pyopengl/PyOpenGL-Demo-3.0.1b1/PyOpenGL-Demo/GLUT/tom/logo.py | lhl/vrdev | fc1a9af2b51d159c99c8779349ef3392a70ed9ed | [
"Apache-2.0"
] | 12 | 2015-12-02T02:36:36.000Z | 2020-09-20T17:14:24.000Z | 002-pyopengl/PyOpenGL-Demo-3.0.1b1/PyOpenGL-Demo/GLUT/tom/logo.py | lhl/vrdev | fc1a9af2b51d159c99c8779349ef3392a70ed9ed | [
"Apache-2.0"
] | null | null | null | 002-pyopengl/PyOpenGL-Demo-3.0.1b1/PyOpenGL-Demo/GLUT/tom/logo.py | lhl/vrdev | fc1a9af2b51d159c99c8779349ef3392a70ed9ed | [
"Apache-2.0"
] | 8 | 2016-11-02T11:17:04.000Z | 2021-10-21T07:42:19.000Z | # This is statement is required by the build system to query build info
if __name__ == '__build__':
raise Exception
import string
__version__ = string.split('$Revision: 1.1.1.1 $')[1]
__date__ = string.join(string.split('$Date: 2007/02/15 19:25:13 $')[1:3], ' ')
__author__ = 'John Popplewell <john@johnnypops.demon.co.uk>'
from OpenGL.GL import *
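# Emits the logo geometry as one large immediate-mode GL_TRIANGLES batch.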
def define_logo():
n = glNormal3f
v = glVertex3f
glBegin(GL_TRIANGLES)
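# Each triangle is listed as three (normal, vertex) pairs passed straight to the GL.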
n(0,0,-1)
v(64.375,5.5,-1)
n(0,0,-1)
v(59.375,5.25,-1)
n(0,0,-1)
v(59.375,5.5,-1)
n(0,0,-1)
v(59.375,5.25,-1)
n(0,0,-1)
v(64.375,5.5,-1)
n(0,0,-1)
v(60.125,5,-1)
n(0,0,-1)
v(60.125,5,-1)
n(0,0,-1)
v(64.375,5.5,-1)
n(0,0,-1)
v(60.375,4.5,-1)
n(0,0,-1)
v(60.375,4.5,-1)
n(0,0,-1)
v(64.375,5.5,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(64.375,5.5,-1)
n(0,0,-1)
v(63.375,5,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(63.375,5,-1)
n(0,0,-1)
v(62.875,4.25,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(62.875,4.25,-1)
n(0,0,-1)
v(62.625,3.5,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(62.625,3.5,-1)
n(0,0,-1)
v(60.625,-3.5,-1)
n(0,0,-1)
v(60.625,-3.5,-1)
n(0,0,-1)
v(62.625,3.5,-1)
n(0,0,-1)
v(60.875,-2.75,-1)
n(0,0,-1)
v(63.375,5,-1)
n(0,0,-1)
v(64.375,5.5,-1)
n(0,0,-1)
v(64.375,5.25,-1)
n(0,0,-1)
v(57.875,-4.75,-1)
n(0,0,-1)
v(56.625,-5.25,-1)
n(0,0,-1)
v(56.625,-5,-1)
n(0,0,-1)
v(56.625,-5.25,-1)
n(0,0,-1)
v(57.875,-4.75,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(57.875,-4.75,-1)
n(0,0,-1)
v(58.125,-4.25,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(58.125,-4.25,-1)
n(0,0,-1)
v(61.375,-4.5,-1)
n(0,0,-1)
v(61.375,-4.5,-1)
n(0,0,-1)
v(58.125,-4.25,-1)
n(0,0,-1)
v(60.625,-4.25,-1)
n(0,0,-1)
v(60.625,-4.25,-1)
n(0,0,-1)
v(58.125,-4.25,-1)
n(0,0,-1)
v(58.375,-3.25,-1)
n(0,0,-1)
v(60.625,-4.25,-1)
n(0,0,-1)
v(58.375,-3.25,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(60.625,-4.25,-1)
n(0,0,-1)
v(60.375,3.5,-1)
n(0,0,-1)
v(60.625,-3.5,-1)
n(0,0,-1)
v(60.625,-4.25,-1)
n(0,0,-1)
v(60.625,-3.5,-1)
n(0,0,-1)
v(60.625,-4,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(61.375,-4.5,-1)
n(0,0,-1)
v(62.625,-4.25,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(62.625,-4.25,-1)
n(0,0,-1)
v(63.875,-4,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(63.875,-4,-1)
n(0,0,-1)
v(64.875,-3.25,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(64.875,-3.25,-1)
n(0,0,-1)
v(65.875,-2.25,-1)
n(0,0,-1)
v(65.125,-5.25,-1)
n(0,0,-1)
v(65.875,-2.25,-1)
n(0,0,-1)
v(66.125,-2.25,-1)
n(0,0,-1)
v(49.375,4.25,-1)
n(0,0,-1)
v(48.875,-1.5,-1)
n(0,0,-1)
v(48.125,3.25,-1)
n(0,0,-1)
v(48.875,-1.5,-1)
n(0,0,-1)
v(49.375,4.25,-1)
n(0,0,-1)
v(49.125,0.25,-1)
n(0,0,-1)
v(49.125,0.25,-1)
n(0,0,-1)
v(49.375,4.25,-1)
n(0,0,-1)
v(49.625,1.75,-1)
n(0,0,-1)
v(49.625,1.75,-1)
n(0,0,-1)
v(49.375,4.25,-1)
n(0,0,-1)
v(51.625,5.25,-1)
n(0,0,-1)
v(49.625,1.75,-1)
n(0,0,-1)
v(51.625,5.25,-1)
n(0,0,-1)
v(50.625,3.25,-1)
n(0,0,-1)
v(50.625,3.25,-1)
n(0,0,-1)
v(51.625,5.25,-1)
n(0,0,-1)
v(51.625,4.25,-1)
n(0,0,-1)
v(51.625,4.25,-1)
n(0,0,-1)
v(51.625,5.25,-1)
n(0,0,-1)
v(53.875,5.75,-1)
n(0,0,-1)
v(51.625,4.25,-1)
n(0,0,-1)
v(53.875,5.75,-1)
n(0,0,-1)
v(52.875,4.75,-1)
n(0,0,-1)
v(52.875,4.75,-1)
n(0,0,-1)
v(53.875,5.75,-1)
n(0,0,-1)
v(53.875,5,-1)
n(0,0,-1)
v(53.875,5,-1)
n(0,0,-1)
v(53.875,5.75,-1)
n(0,0,-1)
v(54.875,5.75,-1)
n(0,0,-1)
v(53.875,5,-1)
n(0,0,-1)
v(54.875,5.75,-1)
n(0,0,-1)
v(54.875,4.75,-1)
n(0,0,-1)
v(54.875,4.75,-1)
n(0,0,-1)
v(54.875,5.75,-1)
n(0,0,-1)
v(55.875,5.5,-1)
n(0,0,-1)
v(54.875,4.75,-1)
n(0,0,-1)
v(55.875,5.5,-1)
n(0,0,-1)
v(55.625,4.25,-1)
n(0,0,-1)
v(55.625,4.25,-1)
n(0,0,-1)
v(55.875,5.5,-1)
n(0,0,-1)
v(56.125,3.5,-1)
n(0,0,-1)
v(56.125,3.5,-1)
n(0,0,-1)
v(55.875,5.5,-1)
n(0,0,-1)
v(56.375,5.25,-1)
n(0,0,-1)
v(56.125,3.5,-1)
n(0,0,-1)
v(56.375,5.25,-1)
n(0,0,-1)
v(56.375,2.25,-1)
n(0,0,-1)
v(56.375,2.25,-1)
n(0,0,-1)
v(56.375,5.25,-1)
n(0,0,-1)
v(57.125,5.75,-1)
n(0,0,-1)
v(56.375,2.25,-1)
n(0,0,-1)
v(57.125,5.75,-1)
n(0,0,-1)
v(56.625,2.25,-1)
n(0,0,-1)
v(56.625,2.25,-1)
n(0,0,-1)
v(57.125,5.75,-1)
n(0,0,-1)
v(57.375,5.75,-1)
n(0,0,-1)
v(46.875,0.75,-1)
n(0,0,-1)
v(46.875,-2.25,-1)
n(0,0,-1)
v(46.625,-0.75,-1)
n(0,0,-1)
v(46.875,-2.25,-1)
n(0,0,-1)
v(46.875,0.75,-1)
n(0,0,-1)
v(47.375,2,-1)
n(0,0,-1)
v(46.875,-2.25,-1)
n(0,0,-1)
v(47.375,2,-1)
n(0,0,-1)
v(47.375,-3.25,-1)
n(0,0,-1)
v(47.375,-3.25,-1)
n(0,0,-1)
v(47.375,2,-1)
n(0,0,-1)
v(48.125,3.25,-1)
n(0,0,-1)
v(47.375,-3.25,-1)
n(0,0,-1)
v(48.125,3.25,-1)
n(0,0,-1)
v(47.875,-4.25,-1)
n(0,0,-1)
v(47.875,-4.25,-1)
n(0,0,-1)
v(48.125,3.25,-1)
n(0,0,-1)
v(48.875,-5,-1)
n(0,0,-1)
v(48.875,-5,-1)
n(0,0,-1)
v(48.125,3.25,-1)
n(0,0,-1)
v(48.875,-1.5,-1)
n(0,0,-1)
v(48.875,-5,-1)
n(0,0,-1)
v(48.875,-1.5,-1)
n(0,0,-1)
v(49.125,-3,-1)
n(0,0,-1)
v(48.875,-5,-1)
n(0,0,-1)
v(49.125,-3,-1)
n(0,0,-1)
v(49.875,-5.5,-1)
n(0,0,-1)
v(49.875,-5.5,-1)
n(0,0,-1)
v(49.125,-3,-1)
n(0,0,-1)
v(49.625,-4,-1)
n(0,0,-1)
v(49.875,-5.5,-1)
n(0,0,-1)
v(49.625,-4,-1)
n(0,0,-1)
v(50.375,-4.5,-1)
n(0,0,-1)
v(49.875,-5.5,-1)
n(0,0,-1)
v(50.375,-4.5,-1)
n(0,0,-1)
v(51.125,-5.5,-1)
n(0,0,-1)
v(51.125,-5.5,-1)
n(0,0,-1)
v(50.375,-4.5,-1)
n(0,0,-1)
v(51.375,-4.75,-1)
n(0,0,-1)
v(51.125,-5.5,-1)
n(0,0,-1)
v(51.375,-4.75,-1)
n(0,0,-1)
v(52.875,-5.25,-1)
n(0,0,-1)
v(52.875,-5.25,-1)
n(0,0,-1)
v(51.375,-4.75,-1)
n(0,0,-1)
v(52.375,-4.5,-1)
n(0,0,-1)
v(52.875,-5.25,-1)
n(0,0,-1)
v(52.375,-4.5,-1)
n(0,0,-1)
v(53.125,-1.75,-1)
n(0,0,-1)
v(52.875,-5.25,-1)
n(0,0,-1)
v(53.125,-1.75,-1)
n(0,0,-1)
v(54.625,-4.75,-1)
n(0,0,-1)
v(54.625,-4.75,-1)
n(0,0,-1)
v(53.125,-1.75,-1)
n(0,0,-1)
v(53.375,-1,-1)
n(0,0,-1)
v(52.375,0,-1)
n(0,0,-1)
v(53.125,-0.5,-1)
n(0,0,-1)
v(52.125,-0.25,-1)
n(0,0,-1)
v(53.125,-0.5,-1)
n(0,0,-1)
v(52.375,0,-1)
n(0,0,-1)
v(56.875,0,-1)
n(0,0,-1)
v(53.125,-0.5,-1)
n(0,0,-1)
v(56.875,0,-1)
n(0,0,-1)
v(53.375,-1,-1)
n(0,0,-1)
v(53.375,-1,-1)
n(0,0,-1)
v(56.875,0,-1)
n(0,0,-1)
v(55.625,-0.75,-1)
n(0,0,-1)
v(53.375,-1,-1)
n(0,0,-1)
v(55.625,-0.75,-1)
n(0,0,-1)
v(54.625,-4.75,-1)
n(0,0,-1)
v(55.625,-0.75,-1)
n(0,0,-1)
v(56.875,0,-1)
n(0,0,-1)
v(56.125,-0.5,-1)
n(0,0,-1)
v(56.125,-0.5,-1)
n(0,0,-1)
v(56.875,0,-1)
n(0,0,-1)
v(56.625,-0.25,-1)
n(0,0,-1)
v(55.625,-1.5,-1)
n(0,0,-1)
v(54.625,-4.75,-1)
n(0,0,-1)
v(55.625,-0.75,-1)
n(0,0,-1)
v(32.875,1.5,-1)
n(0,0,-1)
v(32.125,-2.25,-1)
n(0,0,-1)
v(31.625,0.5,-1)
n(0,0,-1)
v(32.125,-2.25,-1)
n(0,0,-1)
v(32.875,1.5,-1)
n(0,0,-1)
v(32.375,-1.75,-1)
n(0,0,-1)
v(32.375,-1.75,-1)
n(0,0,-1)
v(32.875,1.5,-1)
n(0,0,-1)
v(32.875,0,-1)
n(0,0,-1)
v(32.875,0,-1)
n(0,0,-1)
v(32.875,1.5,-1)
n(0,0,-1)
v(33.625,1.25,-1)
n(0,0,-1)
v(33.625,1.25,-1)
n(0,0,-1)
v(32.875,1.5,-1)
n(0,0,-1)
v(34.125,2,-1)
n(0,0,-1)
v(33.625,1.25,-1)
n(0,0,-1)
v(34.125,2,-1)
n(0,0,-1)
v(33.875,1.5,-1)
n(0,0,-1)
v(33.875,1.5,-1)
n(0,0,-1)
v(34.125,2,-1)
n(0,0,-1)
v(34.375,1.75,-1)
n(0,0,-1)
v(34.375,1.75,-1)
n(0,0,-1)
v(34.125,2,-1)
n(0,0,-1)
v(35.375,2.25,-1)
n(0,0,-1)
v(34.375,1.75,-1)
n(0,0,-1)
v(35.375,2.25,-1)
n(0,0,-1)
v(34.625,1.5,-1)
n(0,0,-1)
v(34.625,1.5,-1)
n(0,0,-1)
v(35.375,2.25,-1)
n(0,0,-1)
v(34.625,1.25,-1)
n(0,0,-1)
v(34.625,1.25,-1)
n(0,0,-1)
v(35.375,2.25,-1)
n(0,0,-1)
v(35.125,-1.25,-1)
n(0,0,-1)
v(35.125,-1.25,-1)
n(0,0,-1)
v(35.375,2.25,-1)
n(0,0,-1)
v(36.375,0,-1)
n(0,0,-1)
v(36.375,0,-1)
n(0,0,-1)
v(35.375,2.25,-1)
n(0,0,-1)
v(36.375,2,-1)
n(0,0,-1)
v(36.375,0,-1)
n(0,0,-1)
v(36.375,2,-1)
n(0,0,-1)
v(36.625,1,-1)
n(0,0,-1)
v(32.375,-1.75,-1)
n(0,0,-1)
v(33.625,-2,-1)
n(0,0,-1)
v(32.125,-2.25,-1)
n(0,0,-1)
v(33.625,-2,-1)
n(0,0,-1)
v(32.375,-1.75,-1)
n(0,0,-1)
v(33.125,-1.5,-1)
n(0,0,-1)
v(33.625,-2,-1)
n(0,0,-1)
v(33.125,-1.5,-1)
n(0,0,-1)
v(33.875,-1,-1)
n(0,0,-1)
v(33.625,-2,-1)
n(0,0,-1)
v(33.875,-1,-1)
n(0,0,-1)
v(35.125,-1.25,-1)
n(0,0,-1)
v(35.125,-1.25,-1)
n(0,0,-1)
v(33.875,-1,-1)
n(0,0,-1)
v(34.375,0,-1)
n(0,0,-1)
v(35.125,-1.25,-1)
n(0,0,-1)
v(34.375,0,-1)
n(0,0,-1)
v(34.625,1.25,-1)
n(0,0,-1)
v(25.875,2.25,-1)
n(0,0,-1)
v(25.375,1,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(22.375,1.5,-1)
n(0,0,-1)
v(22.375,1.75,-1)
n(0,0,-1)
v(22.375,1.5,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(23.125,1.5,-1)
n(0,0,-1)
v(23.125,1.5,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(23.125,1.25,-1)
n(0,0,-1)
v(23.125,1.25,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(23.375,1,-1)
n(0,0,-1)
v(23.375,1,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(23.875,-4.25,-1)
n(0,0,-1)
v(23.375,1,-1)
n(0,0,-1)
v(23.875,-4.25,-1)
n(0,0,-1)
v(23.375,-6.5,-1)
n(0,0,-1)
v(23.875,-4.25,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(25.125,0.25,-1)
n(0,0,-1)
v(25.125,0.25,-1)
n(0,0,-1)
v(25.375,2.25,-1)
n(0,0,-1)
v(25.375,1,-1)
n(0,0,-1)
v(25.125,0.25,-1)
n(0,0,-1)
v(25.375,1,-1)
n(0,0,-1)
v(25.875,1,-1)
n(0,0,-1)
v(25.875,1,-1)
n(0,0,-1)
v(25.375,1,-1)
n(0,0,-1)
v(26.625,2,-1)
n(0,0,-1)
v(25.875,1,-1)
n(0,0,-1)
v(26.625,2,-1)
n(0,0,-1)
v(26.375,1.25,-1)
n(0,0,-1)
v(26.375,1.25,-1)
n(0,0,-1)
v(26.625,2,-1)
n(0,0,-1)
v(26.875,1,-1)
n(0,0,-1)
v(26.875,1,-1)
n(0,0,-1)
v(26.625,2,-1)
n(0,0,-1)
v(27.625,2.25,-1)
n(0,0,-1)
v(26.875,1,-1)
n(0,0,-1)
v(27.625,2.25,-1)
n(0,0,-1)
v(27.125,0,-1)
n(0,0,-1)
v(23.875,-4.25,-1)
n(0,0,-1)
v(23.875,-5,-1)
n(0,0,-1)
v(23.375,-6.5,-1)
n(0,0,-1)
v(23.875,-5,-1)
n(0,0,-1)
v(23.875,-4.25,-1)
n(0,0,-1)
v(24.375,-4.75,-1)
n(0,0,-1)
v(23.875,-5,-1)
n(0,0,-1)
v(24.375,-4.75,-1)
n(0,0,-1)
v(24.375,-5.5,-1)
n(0,0,-1)
v(24.375,-5.5,-1)
n(0,0,-1)
v(24.375,-4.75,-1)
n(0,0,-1)
v(24.875,-5,-1)
n(0,0,-1)
v(24.375,-5.5,-1)
n(0,0,-1)
v(24.875,-5,-1)
n(0,0,-1)
v(24.875,-5.5,-1)
n(0,0,-1)
v(24.875,-5.5,-1)
n(0,0,-1)
v(24.875,-5,-1)
n(0,0,-1)
v(25.125,-4.75,-1)
n(0,0,-1)
v(24.875,-5.5,-1)
n(0,0,-1)
v(25.125,-4.75,-1)
n(0,0,-1)
v(26.375,-5,-1)
n(0,0,-1)
v(26.375,-5,-1)
n(0,0,-1)
v(25.125,-4.75,-1)
n(0,0,-1)
v(25.875,-4.25,-1)
n(0,0,-1)
v(26.375,-5,-1)
n(0,0,-1)
v(25.875,-4.25,-1)
n(0,0,-1)
v(26.375,-3.25,-1)
n(0,0,-1)
v(26.375,-5,-1)
n(0,0,-1)
v(26.375,-3.25,-1)
n(0,0,-1)
v(26.875,-1.5,-1)
n(0,0,-1)
v(26.375,-5,-1)
n(0,0,-1)
v(26.875,-1.5,-1)
n(0,0,-1)
v(27.625,-4,-1)
n(0,0,-1)
v(27.625,-4,-1)
n(0,0,-1)
v(26.875,-1.5,-1)
n(0,0,-1)
v(27.125,0,-1)
n(0,0,-1)
v(27.625,-4,-1)
n(0,0,-1)
v(27.125,0,-1)
n(0,0,-1)
v(27.625,2.25,-1)
n(0,0,-1)
v(27.625,-4,-1)
n(0,0,-1)
v(27.625,2.25,-1)
n(0,0,-1)
v(28.375,2,-1)
n(0,0,-1)
v(27.625,-4,-1)
n(0,0,-1)
v(28.375,2,-1)
n(0,0,-1)
v(28.875,-2,-1)
n(0,0,-1)
v(28.875,-2,-1)
n(0,0,-1)
v(28.375,2,-1)
n(0,0,-1)
v(28.875,1.75,-1)
n(0,0,-1)
v(28.875,-2,-1)
n(0,0,-1)
v(28.875,1.75,-1)
n(0,0,-1)
v(29.125,0.25,-1)
n(0,0,-1)
v(23.375,1,-1)
n(0,0,-1)
v(23.125,-7.75,-1)
n(0,0,-1)
v(23.125,0,-1)
n(0,0,-1)
v(23.125,-7.75,-1)
n(0,0,-1)
v(23.375,1,-1)
n(0,0,-1)
v(23.375,-6.5,-1)
n(0,0,-1)
v(20.625,-8.25,-1)
n(0,0,-1)
v(19.625,-8.75,-1)
n(0,0,-1)
v(19.625,-8.5,-1)
n(0,0,-1)
v(19.625,-8.75,-1)
n(0,0,-1)
v(20.625,-8.25,-1)
n(0,0,-1)
v(24.125,-8.75,-1)
n(0,0,-1)
v(24.125,-8.75,-1)
n(0,0,-1)
v(20.625,-8.25,-1)
n(0,0,-1)
v(20.875,-7.75,-1)
n(0,0,-1)
v(24.125,-8.75,-1)
n(0,0,-1)
v(20.875,-7.75,-1)
n(0,0,-1)
v(23.625,-8.5,-1)
n(0,0,-1)
v(23.625,-8.5,-1)
n(0,0,-1)
v(20.875,-7.75,-1)
n(0,0,-1)
v(21.125,-6.75,-1)
n(0,0,-1)
v(23.625,-8.5,-1)
n(0,0,-1)
v(21.125,-6.75,-1)
n(0,0,-1)
v(23.375,-8.25,-1)
n(0,0,-1)
v(23.375,-8.25,-1)
n(0,0,-1)
v(21.125,-6.75,-1)
n(0,0,-1)
v(23.125,-7.75,-1)
n(0,0,-1)
v(23.125,-7.75,-1)
n(0,0,-1)
v(21.125,-6.75,-1)
n(0,0,-1)
v(23.125,0,-1)
n(0,0,-1)
v(24.125,-8.75,-1)
n(0,0,-1)
v(23.625,-8.5,-1)
n(0,0,-1)
v(24.125,-8.5,-1)
n(0,0,-1)
v(38.375,-0.25,-1)
n(0,0,-1)
v(38.875,-5.25,-1)
n(0,0,-1)
v(36.875,-5.25,-1)
n(0,0,-1)
v(38.875,-5.25,-1)
n(0,0,-1)
v(38.375,-0.25,-1)
n(0,0,-1)
v(38.625,1,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(37.875,1.5,-1)
n(0,0,-1)
v(37.875,1.75,-1)
n(0,0,-1)
v(37.875,1.5,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(38.375,1.5,-1)
n(0,0,-1)
v(38.375,1.5,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(38.625,1.25,-1)
n(0,0,-1)
v(38.625,1.25,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(38.625,1,-1)
n(0,0,-1)
v(38.625,1,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(38.875,-5.25,-1)
n(0,0,-1)
v(38.875,-5.25,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(39.625,-2.75,-1)
n(0,0,-1)
v(39.625,-2.75,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(40.125,-1.25,-1)
n(0,0,-1)
v(40.125,-1.25,-1)
n(0,0,-1)
v(40.625,2.25,-1)
n(0,0,-1)
v(41.125,2.25,-1)
n(0,0,-1)
v(40.125,-1.25,-1)
n(0,0,-1)
v(40.625,-1,-1)
n(0,0,-1)
v(39.625,-2.75,-1)
n(0,0,-1)
v(40.625,-1,-1)
n(0,0,-1)
v(40.125,-1.25,-1)
n(0,0,-1)
v(40.875,0,-1)
n(0,0,-1)
v(40.625,-1,-1)
n(0,0,-1)
v(40.875,0,-1)
n(0,0,-1)
v(41.625,0.25,-1)
n(0,0,-1)
v(41.625,0.25,-1)
n(0,0,-1)
v(40.875,0,-1)
n(0,0,-1)
v(41.625,0.75,-1)
n(0,0,-1)
v(41.625,0.25,-1)
n(0,0,-1)
v(41.625,0.75,-1)
n(0,0,-1)
v(42.375,1.5,-1)
n(0,0,-1)
v(41.625,0.25,-1)
n(0,0,-1)
v(42.375,1.5,-1)
n(0,0,-1)
v(42.375,0.75,-1)
n(0,0,-1)
v(42.375,0.75,-1)
n(0,0,-1)
v(42.375,1.5,-1)
n(0,0,-1)
v(42.875,2,-1)
n(0,0,-1)
v(42.375,0.75,-1)
n(0,0,-1)
v(42.875,2,-1)
n(0,0,-1)
v(42.625,0.75,-1)
n(0,0,-1)
v(42.625,0.75,-1)
n(0,0,-1)
v(42.875,2,-1)
n(0,0,-1)
v(42.625,0.5,-1)
n(0,0,-1)
v(42.625,0.5,-1)
n(0,0,-1)
v(42.875,2,-1)
n(0,0,-1)
v(42.625,0,-1)
n(0,0,-1)
v(43.875,2.25,-1)
n(0,0,-1)
v(43.375,-4.25,-1)
n(0,0,-1)
v(42.875,2,-1)
n(0,0,-1)
v(43.375,-4.25,-1)
n(0,0,-1)
v(43.875,2.25,-1)
n(0,0,-1)
v(43.625,-3.75,-1)
n(0,0,-1)
v(43.625,-3.75,-1)
n(0,0,-1)
v(43.875,2.25,-1)
n(0,0,-1)
v(44.625,-0.25,-1)
n(0,0,-1)
v(44.625,-0.25,-1)
n(0,0,-1)
v(43.875,2.25,-1)
n(0,0,-1)
v(44.625,1.75,-1)
n(0,0,-1)
v(44.625,-0.25,-1)
n(0,0,-1)
v(44.625,1.75,-1)
n(0,0,-1)
v(44.875,0.75,-1)
n(0,0,-1)
v(41.625,-3.5,-1)
n(0,0,-1)
v(41.625,-5.25,-1)
n(0,0,-1)
v(41.375,-4.5,-1)
n(0,0,-1)
v(41.625,-5.25,-1)
n(0,0,-1)
v(41.625,-3.5,-1)
n(0,0,-1)
v(42.625,0,-1)
n(0,0,-1)
v(41.625,-5.25,-1)
n(0,0,-1)
v(42.625,0,-1)
n(0,0,-1)
v(42.375,-5.5,-1)
n(0,0,-1)
v(42.375,-5.5,-1)
n(0,0,-1)
v(42.625,0,-1)
n(0,0,-1)
v(43.375,-4.25,-1)
n(0,0,-1)
v(42.375,-5.5,-1)
n(0,0,-1)
v(43.375,-4.25,-1)
n(0,0,-1)
v(43.875,-5,-1)
n(0,0,-1)
v(43.375,-4.25,-1)
n(0,0,-1)
v(42.625,0,-1)
n(0,0,-1)
v(42.875,2,-1)
n(0,0,-1)
v(43.875,-5,-1)
n(0,0,-1)
v(43.375,-4.25,-1)
n(0,0,-1)
v(43.625,-4.5,-1)
n(0,0,-1)
v(43.875,-5,-1)
n(0,0,-1)
v(43.625,-4.5,-1)
n(0,0,-1)
v(43.875,-4.5,-1)
n(0,0,-1)
v(43.875,-5,-1)
n(0,0,-1)
v(43.875,-4.5,-1)
n(0,0,-1)
v(44.625,-3.75,-1)
n(0,0,-1)
v(43.875,-5,-1)
n(0,0,-1)
v(44.625,-3.75,-1)
n(0,0,-1)
v(45.125,-3.75,-1)
n(0,0,-1)
v(45.125,-3.75,-1)
n(0,0,-1)
v(44.625,-3.75,-1)
n(0,0,-1)
v(44.875,-3.5,-1)
n(0,0,-1)
v(30.375,-2,-1)
n(0,0,-1)
v(30.375,-4.25,-1)
n(0,0,-1)
v(30.125,-3.25,-1)
n(0,0,-1)
v(30.375,-4.25,-1)
n(0,0,-1)
v(30.375,-2,-1)
n(0,0,-1)
v(30.875,-0.75,-1)
n(0,0,-1)
v(30.375,-4.25,-1)
n(0,0,-1)
v(30.875,-0.75,-1)
n(0,0,-1)
v(30.625,-4.75,-1)
n(0,0,-1)
v(30.625,-4.75,-1)
n(0,0,-1)
v(30.875,-0.75,-1)
n(0,0,-1)
v(31.375,-5.25,-1)
n(0,0,-1)
v(31.375,-5.25,-1)
n(0,0,-1)
v(30.875,-0.75,-1)
n(0,0,-1)
v(31.625,0.5,-1)
n(0,0,-1)
v(31.375,-5.25,-1)
n(0,0,-1)
v(31.625,0.5,-1)
n(0,0,-1)
v(32.375,-5.5,-1)
n(0,0,-1)
v(32.375,-5.5,-1)
n(0,0,-1)
v(31.625,0.5,-1)
n(0,0,-1)
v(32.125,-3.25,-1)
n(0,0,-1)
v(32.125,-3.25,-1)
n(0,0,-1)
v(31.625,0.5,-1)
n(0,0,-1)
v(32.125,-2.25,-1)
n(0,0,-1)
v(32.375,-5.5,-1)
n(0,0,-1)
v(32.125,-3.25,-1)
n(0,0,-1)
v(32.375,-4.25,-1)
n(0,0,-1)
v(32.375,-5.5,-1)
n(0,0,-1)
v(32.375,-4.25,-1)
n(0,0,-1)
v(33.375,-4.5,-1)
n(0,0,-1)
v(32.375,-5.5,-1)
n(0,0,-1)
v(33.375,-4.5,-1)
n(0,0,-1)
v(34.125,-5,-1)
n(0,0,-1)
v(34.125,-5,-1)
n(0,0,-1)
v(33.375,-4.5,-1)
n(0,0,-1)
v(34.375,-4.25,-1)
n(0,0,-1)
v(34.125,-5,-1)
n(0,0,-1)
v(34.375,-4.25,-1)
n(0,0,-1)
v(35.125,-4.5,-1)
n(0,0,-1)
v(35.125,-4.5,-1)
n(0,0,-1)
v(34.375,-4.25,-1)
n(0,0,-1)
v(35.625,-3.5,-1)
n(0,0,-1)
v(35.125,-4.5,-1)
n(0,0,-1)
v(35.625,-3.5,-1)
n(0,0,-1)
v(35.875,-3.75,-1)
n(0,0,-1)
v(8.375,2,-1)
n(0,0,-1)
v(8.125,0.75,-1)
n(0,0,-1)
v(8.125,1.25,-1)
n(0,0,-1)
v(8.125,0.75,-1)
n(0,0,-1)
v(8.375,2,-1)
n(0,0,-1)
v(8.625,0.25,-1)
n(0,0,-1)
v(8.625,0.25,-1)
n(0,0,-1)
v(8.375,2,-1)
n(0,0,-1)
v(9.125,2.25,-1)
n(0,0,-1)
v(8.625,0.25,-1)
n(0,0,-1)
v(9.125,2.25,-1)
n(0,0,-1)
v(9.125,-0.25,-1)
n(0,0,-1)
v(9.125,-0.25,-1)
n(0,0,-1)
v(9.125,2.25,-1)
n(0,0,-1)
v(9.875,2,-1)
n(0,0,-1)
v(9.125,-0.25,-1)
n(0,0,-1)
v(9.875,2,-1)
n(0,0,-1)
v(9.375,-1.25,-1)
n(0,0,-1)
v(9.375,-1.25,-1)
n(0,0,-1)
v(9.875,2,-1)
n(0,0,-1)
v(9.875,-0.25,-1)
n(0,0,-1)
v(9.875,-0.25,-1)
n(0,0,-1)
v(9.875,2,-1)
n(0,0,-1)
v(10.125,1,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(3.375,1.25,-1)
n(0,0,-1)
v(3.375,1.5,-1)
n(0,0,-1)
v(3.375,1.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(3.625,1.25,-1)
n(0,0,-1)
v(3.625,1.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.375,1,-1)
n(0,0,-1)
v(4.375,1,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.625,0.5,-1)
n(0,0,-1)
v(4.625,0.5,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.625,-0.25,-1)
n(0,0,-1)
v(4.625,-0.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.875,-3.25,-1)
n(0,0,-1)
v(4.875,-3.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.875,-4.5,-1)
n(0,0,-1)
v(4.875,-4.5,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.875,-5.25,-1)
n(0,0,-1)
v(4.875,-5.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.875,-6,-1)
n(0,0,-1)
v(4.875,-6,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(4.875,-7.25,-1)
n(0,0,-1)
v(4.875,-7.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(5.625,-6.5,-1)
n(0,0,-1)
v(5.625,-6.5,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(6.625,-4.25,-1)
n(0,0,-1)
v(5.625,-6.5,-1)
n(0,0,-1)
v(6.625,-4.25,-1)
n(0,0,-1)
v(6.875,-4.75,-1)
n(0,0,-1)
v(6.625,-4.25,-1)
n(0,0,-1)
v(5.875,2.25,-1)
n(0,0,-1)
v(6.125,1.5,-1)
n(0,0,-1)
v(6.625,-4.25,-1)
n(0,0,-1)
v(6.125,1.5,-1)
n(0,0,-1)
v(6.375,0,-1)
n(0,0,-1)
v(6.875,-4.75,-1)
n(0,0,-1)
v(6.625,-4.25,-1)
n(0,0,-1)
v(7.625,-3,-1)
n(0,0,-1)
v(6.875,-4.75,-1)
n(0,0,-1)
v(7.625,-3,-1)
n(0,0,-1)
v(8.375,-2.75,-1)
n(0,0,-1)
v(8.375,-2.75,-1)
n(0,0,-1)
v(7.625,-3,-1)
n(0,0,-1)
v(8.625,-1.25,-1)
n(0,0,-1)
v(8.375,-2.75,-1)
n(0,0,-1)
v(8.625,-1.25,-1)
n(0,0,-1)
v(9.375,-1.25,-1)
n(0,0,-1)
v(9.375,-1.25,-1)
n(0,0,-1)
v(8.625,-1.25,-1)
n(0,0,-1)
v(9.125,-0.25,-1)
n(0,0,-1)
v(1.625,-7,-1)
n(0,0,-1)
v(1.625,-8.5,-1)
n(0,0,-1)
v(1.375,-7.75,-1)
n(0,0,-1)
v(1.625,-8.5,-1)
n(0,0,-1)
v(1.625,-7,-1)
n(0,0,-1)
v(2.375,-6.75,-1)
n(0,0,-1)
v(1.625,-8.5,-1)
n(0,0,-1)
v(2.375,-6.75,-1)
n(0,0,-1)
v(2.375,-8.75,-1)
n(0,0,-1)
v(2.375,-8.75,-1)
n(0,0,-1)
v(2.375,-6.75,-1)
n(0,0,-1)
v(3.125,-7.25,-1)
n(0,0,-1)
v(2.375,-8.75,-1)
n(0,0,-1)
v(3.125,-7.25,-1)
n(0,0,-1)
v(3.125,-8.5,-1)
n(0,0,-1)
v(3.125,-8.5,-1)
n(0,0,-1)
v(3.125,-7.25,-1)
n(0,0,-1)
v(3.625,-7.5,-1)
n(0,0,-1)
v(3.125,-8.5,-1)
n(0,0,-1)
v(3.625,-7.5,-1)
n(0,0,-1)
v(4.125,-8,-1)
n(0,0,-1)
v(4.125,-8,-1)
n(0,0,-1)
v(3.625,-7.5,-1)
n(0,0,-1)
v(3.875,-7.5,-1)
n(0,0,-1)
v(4.125,-8,-1)
n(0,0,-1)
v(3.875,-7.5,-1)
n(0,0,-1)
v(4.625,-7,-1)
n(0,0,-1)
v(4.125,-8,-1)
n(0,0,-1)
v(4.625,-7,-1)
n(0,0,-1)
v(4.875,-7.25,-1)
n(0,0,-1)
v(4.875,-7.25,-1)
n(0,0,-1)
v(4.625,-7,-1)
n(0,0,-1)
v(4.875,-6,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(-4.375,5.25,-1)
n(0,0,-1)
v(-4.375,5.5,-1)
n(0,0,-1)
v(-4.375,5.25,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(-3.375,5,-1)
n(0,0,-1)
v(-3.375,5,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(-3.125,4.5,-1)
n(0,0,-1)
v(-3.125,4.5,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(-0.875,4.75,-1)
n(0,0,-1)
v(-3.125,4.5,-1)
n(0,0,-1)
v(-0.875,4.75,-1)
n(0,0,-1)
v(-3.125,-3.25,-1)
n(0,0,-1)
v(-3.125,-3.25,-1)
n(0,0,-1)
v(-0.875,4.75,-1)
n(0,0,-1)
v(-2.125,0.25,-1)
n(0,0,-1)
v(-0.875,4.75,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(-0.375,4.75,-1)
n(0,0,-1)
v(-0.375,4.75,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(0.875,4.25,-1)
n(0,0,-1)
v(0.875,4.25,-1)
n(0,0,-1)
v(-0.125,5.5,-1)
n(0,0,-1)
v(1.375,5.5,-1)
n(0,0,-1)
v(0.875,4.25,-1)
n(0,0,-1)
v(1.375,5.5,-1)
n(0,0,-1)
v(1.375,3.25,-1)
n(0,0,-1)
v(1.375,3.25,-1)
n(0,0,-1)
v(1.375,5.5,-1)
n(0,0,-1)
v(2.375,5,-1)
n(0,0,-1)
v(-2.125,0.25,-1)
n(0,0,-1)
v(-2.125,-0.25,-1)
n(0,0,-1)
v(-3.125,-3.25,-1)
n(0,0,-1)
v(-2.125,-0.25,-1)
n(0,0,-1)
v(-2.125,0.25,-1)
n(0,0,-1)
v(-1.125,0.25,-1)
n(0,0,-1)
v(-2.125,-0.25,-1)
n(0,0,-1)
v(-1.125,0.25,-1)
n(0,0,-1)
v(-0.625,-0.5,-1)
n(0,0,-1)
v(-0.625,-0.5,-1)
n(0,0,-1)
v(-1.125,0.25,-1)
n(0,0,-1)
v(-0.125,0.5,-1)
n(0,0,-1)
v(-0.625,-0.5,-1)
n(0,0,-1)
v(-0.125,0.5,-1)
n(0,0,-1)
v(1.125,-0.25,-1)
n(0,0,-1)
v(1.125,-0.25,-1)
n(0,0,-1)
v(-0.125,0.5,-1)
n(0,0,-1)
v(0.625,1.25,-1)
n(0,0,-1)
v(1.125,-0.25,-1)
n(0,0,-1)
v(0.625,1.25,-1)
n(0,0,-1)
v(1.125,2.25,-1)
n(0,0,-1)
v(1.125,-0.25,-1)
n(0,0,-1)
v(1.125,2.25,-1)
n(0,0,-1)
v(1.375,3.25,-1)
n(0,0,-1)
v(1.125,-0.25,-1)
n(0,0,-1)
v(1.375,3.25,-1)
n(0,0,-1)
v(2.375,0.5,-1)
n(0,0,-1)
v(2.375,0.5,-1)
n(0,0,-1)
v(1.375,3.25,-1)
n(0,0,-1)
v(2.375,5,-1)
n(0,0,-1)
v(2.375,0.5,-1)
n(0,0,-1)
v(2.375,5,-1)
n(0,0,-1)
v(3.375,4.25,-1)
n(0,0,-1)
v(2.375,0.5,-1)
n(0,0,-1)
v(3.375,4.25,-1)
n(0,0,-1)
v(3.375,1.75,-1)
n(0,0,-1)
v(3.375,1.75,-1)
n(0,0,-1)
v(3.375,4.25,-1)
n(0,0,-1)
v(3.625,3,-1)
n(0,0,-1)
v(-5.375,-3.25,-1)
n(0,0,-1)
v(-5.625,-4.5,-1)
n(0,0,-1)
v(-5.625,-4,-1)
n(0,0,-1)
v(-6.125,-5,-1)
n(0,0,-1)
v(-7.125,-5.25,-1)
n(0,0,-1)
v(-7.125,-5,-1)
n(0,0,-1)
v(-7.125,-5.25,-1)
n(0,0,-1)
v(-6.125,-5,-1)
n(0,0,-1)
v(-1.875,-5.25,-1)
n(0,0,-1)
v(-1.875,-5.25,-1)
n(0,0,-1)
v(-6.125,-5,-1)
n(0,0,-1)
v(-5.625,-4.5,-1)
n(0,0,-1)
v(-1.875,-5.25,-1)
n(0,0,-1)
v(-5.625,-4.5,-1)
n(0,0,-1)
v(-2.625,-5,-1)
n(0,0,-1)
v(-2.625,-5,-1)
n(0,0,-1)
v(-5.625,-4.5,-1)
n(0,0,-1)
v(-5.375,-3.25,-1)
n(0,0,-1)
v(-2.625,-5,-1)
n(0,0,-1)
v(-5.375,-3.25,-1)
n(0,0,-1)
v(-2.875,-4.75,-1)
n(0,0,-1)
v(-2.875,-4.75,-1)
n(0,0,-1)
v(-5.375,-3.25,-1)
n(0,0,-1)
v(-3.125,-4.25,-1)
n(0,0,-1)
v(-3.125,-4.25,-1)
n(0,0,-1)
v(-5.375,-3.25,-1)
n(0,0,-1)
v(-3.375,3.5,-1)
n(0,0,-1)
v(-3.125,-4.25,-1)
n(0,0,-1)
v(-3.375,3.5,-1)
n(0,0,-1)
v(-3.125,4.5,-1)
n(0,0,-1)
v(-3.125,-4.25,-1)
n(0,0,-1)
v(-3.125,4.5,-1)
n(0,0,-1)
v(-3.125,-3.25,-1)
n(0,0,-1)
v(-1.875,-5.25,-1)
n(0,0,-1)
v(-2.625,-5,-1)
n(0,0,-1)
v(-1.875,-5,-1)
n(0,0,-1)
v(13.125,3.5,-1)
n(0,0,-1)
v(13.625,0,-1)
n(0,0,-1)
v(13.125,-2.75,-1)
n(0,0,-1)
v(13.625,0,-1)
n(0,0,-1)
v(13.125,3.5,-1)
n(0,0,-1)
v(14.625,4.75,-1)
n(0,0,-1)
v(13.125,-2.75,-1)
n(0,0,-1)
v(13.625,0,-1)
n(0,0,-1)
v(13.375,-1.5,-1)
n(0,0,-1)
v(13.625,0,-1)
n(0,0,-1)
v(14.625,4.75,-1)
n(0,0,-1)
v(14.125,1.5,-1)
n(0,0,-1)
v(14.125,1.5,-1)
n(0,0,-1)
v(14.625,4.75,-1)
n(0,0,-1)
v(14.625,2.5,-1)
n(0,0,-1)
v(14.625,2.5,-1)
n(0,0,-1)
v(14.625,4.75,-1)
n(0,0,-1)
v(15.125,3.5,-1)
n(0,0,-1)
v(15.125,3.5,-1)
n(0,0,-1)
v(14.625,4.75,-1)
n(0,0,-1)
v(16.375,5.5,-1)
n(0,0,-1)
v(15.125,3.5,-1)
n(0,0,-1)
v(16.375,5.5,-1)
n(0,0,-1)
v(15.625,4,-1)
n(0,0,-1)
v(15.625,4,-1)
n(0,0,-1)
v(16.375,5.5,-1)
n(0,0,-1)
v(16.625,4.75,-1)
n(0,0,-1)
v(16.625,4.75,-1)
n(0,0,-1)
v(16.375,5.5,-1)
n(0,0,-1)
v(17.875,5.75,-1)
n(0,0,-1)
v(16.625,4.75,-1)
n(0,0,-1)
v(17.875,5.75,-1)
n(0,0,-1)
v(17.875,5,-1)
n(0,0,-1)
v(17.875,5,-1)
n(0,0,-1)
v(17.875,5.75,-1)
n(0,0,-1)
v(18.875,5.5,-1)
n(0,0,-1)
v(17.875,5,-1)
n(0,0,-1)
v(18.875,5.5,-1)
n(0,0,-1)
v(18.375,4.75,-1)
n(0,0,-1)
v(18.375,4.75,-1)
n(0,0,-1)
v(18.875,5.5,-1)
n(0,0,-1)
v(18.875,4.5,-1)
n(0,0,-1)
v(18.875,4.5,-1)
n(0,0,-1)
v(18.875,5.5,-1)
n(0,0,-1)
v(19.875,5.25,-1)
n(0,0,-1)
v(18.875,4.5,-1)
n(0,0,-1)
v(19.875,5.25,-1)
n(0,0,-1)
v(19.375,4,-1)
n(0,0,-1)
v(19.375,4,-1)
n(0,0,-1)
v(19.875,5.25,-1)
n(0,0,-1)
v(19.375,3,-1)
n(0,0,-1)
v(11.125,0,-1)
n(0,0,-1)
v(11.375,-3.5,-1)
n(0,0,-1)
v(10.875,-1.75,-1)
n(0,0,-1)
v(11.375,-3.5,-1)
n(0,0,-1)
v(11.125,0,-1)
n(0,0,-1)
v(11.875,1.75,-1)
n(0,0,-1)
v(11.375,-3.5,-1)
n(0,0,-1)
v(11.875,1.75,-1)
n(0,0,-1)
v(12.375,-5,-1)
n(0,0,-1)
v(12.375,-5,-1)
n(0,0,-1)
v(11.875,1.75,-1)
n(0,0,-1)
v(13.125,3.5,-1)
n(0,0,-1)
v(12.375,-5,-1)
n(0,0,-1)
v(13.125,3.5,-1)
n(0,0,-1)
v(13.125,-2.75,-1)
n(0,0,-1)
v(12.375,-5,-1)
n(0,0,-1)
v(13.125,-2.75,-1)
n(0,0,-1)
v(13.375,-5.5,-1)
n(0,0,-1)
v(13.375,-5.5,-1)
n(0,0,-1)
v(13.125,-2.75,-1)
n(0,0,-1)
v(13.625,-4.25,-1)
n(0,0,-1)
v(13.375,-5.5,-1)
n(0,0,-1)
v(13.625,-4.25,-1)
n(0,0,-1)
v(14.625,-5.5,-1)
n(0,0,-1)
v(14.625,-5.5,-1)
n(0,0,-1)
v(13.625,-4.25,-1)
n(0,0,-1)
v(14.125,-4.75,-1)
n(0,0,-1)
v(14.625,-5.5,-1)
n(0,0,-1)
v(14.125,-4.75,-1)
n(0,0,-1)
v(14.875,-4.75,-1)
n(0,0,-1)
v(14.625,-5.5,-1)
n(0,0,-1)
v(14.875,-4.75,-1)
n(0,0,-1)
v(16.125,-5.25,-1)
n(0,0,-1)
v(16.125,-5.25,-1)
n(0,0,-1)
v(14.875,-4.75,-1)
n(0,0,-1)
v(15.625,-4.5,-1)
n(0,0,-1)
v(16.125,-5.25,-1)
n(0,0,-1)
v(15.625,-4.5,-1)
n(0,0,-1)
v(16.375,-4.25,-1)
n(0,0,-1)
v(16.125,-5.25,-1)
n(0,0,-1)
v(16.375,-4.25,-1)
n(0,0,-1)
v(17.625,-4.75,-1)
n(0,0,-1)
v(17.625,-4.75,-1)
n(0,0,-1)
v(16.375,-4.25,-1)
n(0,0,-1)
v(17.625,-2.75,-1)
n(0,0,-1)
v(17.625,-4.75,-1)
n(0,0,-1)
v(17.625,-2.75,-1)
n(0,0,-1)
v(18.625,-1,-1)
n(0,0,-1)
v(17.625,-4.75,-1)
n(0,0,-1)
v(18.625,-1,-1)
n(0,0,-1)
v(19.125,-3.75,-1)
n(0,0,-1)
v(19.125,-3.75,-1)
n(0,0,-1)
v(18.625,-1,-1)
n(0,0,-1)
v(19.125,1.25,-1)
n(0,0,-1)
v(19.125,-3.75,-1)
n(0,0,-1)
v(19.125,1.25,-1)
n(0,0,-1)
v(19.375,3,-1)
n(0,0,-1)
v(19.125,-3.75,-1)
n(0,0,-1)
v(19.375,3,-1)
n(0,0,-1)
v(20.375,-2,-1)
n(0,0,-1)
v(20.375,-2,-1)
n(0,0,-1)
v(19.375,3,-1)
n(0,0,-1)
v(19.875,5.25,-1)
n(0,0,-1)
v(20.375,-2,-1)
n(0,0,-1)
v(19.875,5.25,-1)
n(0,0,-1)
v(21.125,4,-1)
n(0,0,-1)
v(20.375,-2,-1)
n(0,0,-1)
v(21.125,4,-1)
n(0,0,-1)
v(21.375,0,-1)
n(0,0,-1)
v(21.375,0,-1)
n(0,0,-1)
v(21.125,4,-1)
n(0,0,-1)
v(21.625,3,-1)
n(0,0,-1)
v(21.375,0,-1)
n(0,0,-1)
v(21.625,3,-1)
n(0,0,-1)
v(21.625,2,-1)
n(0,0,1)
v(64.375,5.5,1)
n(0,0,1)
v(59.375,5.5,1)
n(0,0,1)
v(59.375,5.25,1)
n(0,0,1)
v(59.375,5.25,1)
n(0,0,1)
v(60.125,5,1)
n(0,0,1)
v(64.375,5.5,1)
n(0,0,1)
v(60.125,5,1)
n(0,0,1)
v(60.375,4.5,1)
n(0,0,1)
v(64.375,5.5,1)
n(0,0,1)
v(60.375,4.5,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(64.375,5.5,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(63.375,5,1)
n(0,0,1)
v(64.375,5.5,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(62.875,4.25,1)
n(0,0,1)
v(63.375,5,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(62.625,3.5,1)
n(0,0,1)
v(62.875,4.25,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(60.625,-3.5,1)
n(0,0,1)
v(62.625,3.5,1)
n(0,0,1)
v(60.625,-3.5,1)
n(0,0,1)
v(60.875,-2.75,1)
n(0,0,1)
v(62.625,3.5,1)
n(0,0,1)
v(63.375,5,1)
n(0,0,1)
v(64.375,5.25,1)
n(0,0,1)
v(64.375,5.5,1)
n(0,0,1)
v(57.875,-4.75,1)
n(0,0,1)
v(56.625,-5,1)
n(0,0,1)
v(56.625,-5.25,1)
n(0,0,1)
v(56.625,-5.25,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(57.875,-4.75,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(58.125,-4.25,1)
n(0,0,1)
v(57.875,-4.75,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(61.375,-4.5,1)
n(0,0,1)
v(58.125,-4.25,1)
n(0,0,1)
v(61.375,-4.5,1)
n(0,0,1)
v(60.625,-4.25,1)
n(0,0,1)
v(58.125,-4.25,1)
n(0,0,1)
v(60.625,-4.25,1)
n(0,0,1)
v(58.375,-3.25,1)
n(0,0,1)
v(58.125,-4.25,1)
n(0,0,1)
v(60.625,-4.25,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(58.375,-3.25,1)
n(0,0,1)
v(60.625,-4.25,1)
n(0,0,1)
v(60.625,-3.5,1)
n(0,0,1)
v(60.375,3.5,1)
n(0,0,1)
v(60.625,-4.25,1)
n(0,0,1)
v(60.625,-4,1)
n(0,0,1)
v(60.625,-3.5,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(62.625,-4.25,1)
n(0,0,1)
v(61.375,-4.5,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(63.875,-4,1)
n(0,0,1)
v(62.625,-4.25,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(64.875,-3.25,1)
n(0,0,1)
v(63.875,-4,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(65.875,-2.25,1)
n(0,0,1)
v(64.875,-3.25,1)
n(0,0,1)
v(65.125,-5.25,1)
n(0,0,1)
v(66.125,-2.25,1)
n(0,0,1)
v(65.875,-2.25,1)
n(0,0,1)
v(49.375,4.25,1)
n(0,0,1)
v(48.125,3.25,1)
n(0,0,1)
v(48.875,-1.5,1)
n(0,0,1)
v(48.875,-1.5,1)
n(0,0,1)
v(49.125,0.25,1)
n(0,0,1)
v(49.375,4.25,1)
n(0,0,1)
v(49.125,0.25,1)
n(0,0,1)
v(49.625,1.75,1)
n(0,0,1)
v(49.375,4.25,1)
n(0,0,1)
v(49.625,1.75,1)
n(0,0,1)
v(51.625,5.25,1)
n(0,0,1)
v(49.375,4.25,1)
n(0,0,1)
v(49.625,1.75,1)
n(0,0,1)
v(50.625,3.25,1)
n(0,0,1)
v(51.625,5.25,1)
n(0,0,1)
v(50.625,3.25,1)
n(0,0,1)
v(51.625,4.25,1)
n(0,0,1)
v(51.625,5.25,1)
n(0,0,1)
v(51.625,4.25,1)
n(0,0,1)
v(53.875,5.75,1)
n(0,0,1)
v(51.625,5.25,1)
n(0,0,1)
v(51.625,4.25,1)
n(0,0,1)
v(52.875,4.75,1)
n(0,0,1)
v(53.875,5.75,1)
n(0,0,1)
v(52.875,4.75,1)
n(0,0,1)
v(53.875,5,1)
n(0,0,1)
v(53.875,5.75,1)
n(0,0,1)
v(53.875,5,1)
n(0,0,1)
v(54.875,5.75,1)
n(0,0,1)
v(53.875,5.75,1)
n(0,0,1)
v(53.875,5,1)
n(0,0,1)
v(54.875,4.75,1)
n(0,0,1)
v(54.875,5.75,1)
n(0,0,1)
v(54.875,4.75,1)
n(0,0,1)
v(55.875,5.5,1)
n(0,0,1)
v(54.875,5.75,1)
n(0,0,1)
v(54.875,4.75,1)
n(0,0,1)
v(55.625,4.25,1)
n(0,0,1)
v(55.875,5.5,1)
n(0,0,1)
v(55.625,4.25,1)
n(0,0,1)
v(56.125,3.5,1)
n(0,0,1)
v(55.875,5.5,1)
n(0,0,1)
v(56.125,3.5,1)
n(0,0,1)
v(56.375,5.25,1)
n(0,0,1)
v(55.875,5.5,1)
n(0,0,1)
v(56.125,3.5,1)
n(0,0,1)
v(56.375,2.25,1)
n(0,0,1)
v(56.375,5.25,1)
n(0,0,1)
v(56.375,2.25,1)
n(0,0,1)
v(57.125,5.75,1)
n(0,0,1)
v(56.375,5.25,1)
n(0,0,1)
v(56.375,2.25,1)
n(0,0,1)
v(56.625,2.25,1)
n(0,0,1)
v(57.125,5.75,1)
n(0,0,1)
v(56.625,2.25,1)
n(0,0,1)
v(57.375,5.75,1)
n(0,0,1)
v(57.125,5.75,1)
n(0,0,1)
v(46.875,0.75,1)
n(0,0,1)
v(46.625,-0.75,1)
n(0,0,1)
v(46.875,-2.25,1)
n(0,0,1)
v(46.875,-2.25,1)
n(0,0,1)
v(47.375,2,1)
n(0,0,1)
v(46.875,0.75,1)
n(0,0,1)
v(46.875,-2.25,1)
n(0,0,1)
v(47.375,-3.25,1)
n(0,0,1)
v(47.375,2,1)
n(0,0,1)
v(47.375,-3.25,1)
n(0,0,1)
v(48.125,3.25,1)
n(0,0,1)
v(47.375,2,1)
n(0,0,1)
v(47.375,-3.25,1)
n(0,0,1)
v(47.875,-4.25,1)
n(0,0,1)
v(48.125,3.25,1)
n(0,0,1)
v(47.875,-4.25,1)
n(0,0,1)
v(48.875,-5,1)
n(0,0,1)
v(48.125,3.25,1)
n(0,0,1)
v(48.875,-5,1)
n(0,0,1)
v(48.875,-1.5,1)
n(0,0,1)
v(48.125,3.25,1)
n(0,0,1)
v(48.875,-5,1)
n(0,0,1)
v(49.125,-3,1)
n(0,0,1)
v(48.875,-1.5,1)
n(0,0,1)
v(48.875,-5,1)
n(0,0,1)
v(49.875,-5.5,1)
n(0,0,1)
v(49.125,-3,1)
n(0,0,1)
v(49.875,-5.5,1)
n(0,0,1)
v(49.625,-4,1)
n(0,0,1)
v(49.125,-3,1)
n(0,0,1)
v(49.875,-5.5,1)
n(0,0,1)
v(50.375,-4.5,1)
n(0,0,1)
v(49.625,-4,1)
n(0,0,1)
v(49.875,-5.5,1)
n(0,0,1)
v(51.125,-5.5,1)
n(0,0,1)
v(50.375,-4.5,1)
n(0,0,1)
v(51.125,-5.5,1)
n(0,0,1)
v(51.375,-4.75,1)
n(0,0,1)
v(50.375,-4.5,1)
n(0,0,1)
v(51.125,-5.5,1)
n(0,0,1)
v(52.875,-5.25,1)
n(0,0,1)
v(51.375,-4.75,1)
n(0,0,1)
v(52.875,-5.25,1)
n(0,0,1)
v(52.375,-4.5,1)
n(0,0,1)
v(51.375,-4.75,1)
n(0,0,1)
v(52.875,-5.25,1)
n(0,0,1)
v(53.125,-1.75,1)
n(0,0,1)
v(52.375,-4.5,1)
n(0,0,1)
v(52.875,-5.25,1)
n(0,0,1)
v(54.625,-4.75,1)
n(0,0,1)
v(53.125,-1.75,1)
n(0,0,1)
v(54.625,-4.75,1)
n(0,0,1)
v(53.375,-1,1)
n(0,0,1)
v(53.125,-1.75,1)
n(0,0,1)
v(52.375,0,1)
n(0,0,1)
v(52.125,-0.25,1)
n(0,0,1)
v(53.125,-0.5,1)
n(0,0,1)
v(53.125,-0.5,1)
n(0,0,1)
v(56.875,0,1)
n(0,0,1)
v(52.375,0,1)
n(0,0,1)
v(53.125,-0.5,1)
n(0,0,1)
v(53.375,-1,1)
n(0,0,1)
v(56.875,0,1)
n(0,0,1)
v(53.375,-1,1)
n(0,0,1)
v(55.625,-0.75,1)
n(0,0,1)
v(56.875,0,1)
n(0,0,1)
v(53.375,-1,1)
n(0,0,1)
v(54.625,-4.75,1)
n(0,0,1)
v(55.625,-0.75,1)
n(0,0,1)
v(55.625,-0.75,1)
n(0,0,1)
v(56.125,-0.5,1)
n(0,0,1)
v(56.875,0,1)
n(0,0,1)
v(56.125,-0.5,1)
n(0,0,1)
v(56.625,-0.25,1)
n(0,0,1)
v(56.875,0,1)
n(0,0,1)
v(55.625,-1.5,1)
n(0,0,1)
v(55.625,-0.75,1)
n(0,0,1)
v(54.625,-4.75,1)
n(0,0,1)
v(32.875,1.5,1)
n(0,0,1)
v(31.625,0.5,1)
n(0,0,1)
v(32.125,-2.25,1)
n(0,0,1)
v(32.125,-2.25,1)
n(0,0,1)
v(32.375,-1.75,1)
n(0,0,1)
v(32.875,1.5,1)
n(0,0,1)
v(32.375,-1.75,1)
n(0,0,1)
v(32.875,0,1)
n(0,0,1)
v(32.875,1.5,1)
n(0,0,1)
v(32.875,0,1)
n(0,0,1)
v(33.625,1.25,1)
n(0,0,1)
v(32.875,1.5,1)
n(0,0,1)
v(33.625,1.25,1)
n(0,0,1)
v(34.125,2,1)
n(0,0,1)
v(32.875,1.5,1)
n(0,0,1)
v(33.625,1.25,1)
n(0,0,1)
v(33.875,1.5,1)
n(0,0,1)
v(34.125,2,1)
n(0,0,1)
v(33.875,1.5,1)
n(0,0,1)
v(34.375,1.75,1)
n(0,0,1)
v(34.125,2,1)
n(0,0,1)
v(34.375,1.75,1)
n(0,0,1)
v(35.375,2.25,1)
n(0,0,1)
v(34.125,2,1)
n(0,0,1)
v(34.375,1.75,1)
n(0,0,1)
v(34.625,1.5,1)
n(0,0,1)
v(35.375,2.25,1)
n(0,0,1)
v(34.625,1.5,1)
n(0,0,1)
v(34.625,1.25,1)
n(0,0,1)
v(35.375,2.25,1)
n(0,0,1)
v(34.625,1.25,1)
n(0,0,1)
v(35.125,-1.25,1)
n(0,0,1)
v(35.375,2.25,1)
n(0,0,1)
v(35.125,-1.25,1)
n(0,0,1)
v(36.375,0,1)
n(0,0,1)
v(35.375,2.25,1)
n(0,0,1)
v(36.375,0,1)
n(0,0,1)
v(36.375,2,1)
n(0,0,1)
v(35.375,2.25,1)
n(0,0,1)
v(36.375,0,1)
n(0,0,1)
v(36.625,1,1)
n(0,0,1)
v(36.375,2,1)
n(0,0,1)
v(32.375,-1.75,1)
n(0,0,1)
v(32.125,-2.25,1)
n(0,0,1)
v(33.625,-2,1)
n(0,0,1)
v(33.625,-2,1)
n(0,0,1)
v(33.125,-1.5,1)
n(0,0,1)
v(32.375,-1.75,1)
n(0,0,1)
v(33.625,-2,1)
n(0,0,1)
v(33.875,-1,1)
n(0,0,1)
v(33.125,-1.5,1)
n(0,0,1)
v(33.625,-2,1)
n(0,0,1)
v(35.125,-1.25,1)
n(0,0,1)
v(33.875,-1,1)
n(0,0,1)
v(35.125,-1.25,1)
n(0,0,1)
v(34.375,0,1)
n(0,0,1)
v(33.875,-1,1)
n(0,0,1)
v(35.125,-1.25,1)
n(0,0,1)
v(34.625,1.25,1)
n(0,0,1)
v(34.375,0,1)
n(0,0,1)
v(25.875,2.25,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(25.375,1,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(22.375,1.75,1)
n(0,0,1)
v(22.375,1.5,1)
n(0,0,1)
v(22.375,1.5,1)
n(0,0,1)
v(23.125,1.5,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(23.125,1.5,1)
n(0,0,1)
v(23.125,1.25,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(23.125,1.25,1)
n(0,0,1)
v(23.375,1,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(23.375,1,1)
n(0,0,1)
v(23.875,-4.25,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(23.375,1,1)
n(0,0,1)
v(23.375,-6.5,1)
n(0,0,1)
v(23.875,-4.25,1)
n(0,0,1)
v(23.875,-4.25,1)
n(0,0,1)
v(25.125,0.25,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(25.125,0.25,1)
n(0,0,1)
v(25.375,1,1)
n(0,0,1)
v(25.375,2.25,1)
n(0,0,1)
v(25.125,0.25,1)
n(0,0,1)
v(25.875,1,1)
n(0,0,1)
v(25.375,1,1)
n(0,0,1)
v(25.875,1,1)
n(0,0,1)
v(26.625,2,1)
n(0,0,1)
v(25.375,1,1)
n(0,0,1)
v(25.875,1,1)
n(0,0,1)
v(26.375,1.25,1)
n(0,0,1)
v(26.625,2,1)
n(0,0,1)
v(26.375,1.25,1)
n(0,0,1)
v(26.875,1,1)
n(0,0,1)
v(26.625,2,1)
n(0,0,1)
v(26.875,1,1)
n(0,0,1)
v(27.625,2.25,1)
n(0,0,1)
v(26.625,2,1)
n(0,0,1)
v(26.875,1,1)
n(0,0,1)
v(27.125,0,1)
n(0,0,1)
v(27.625,2.25,1)
n(0,0,1)
v(23.875,-4.25,1)
n(0,0,1)
v(23.375,-6.5,1)
n(0,0,1)
v(23.875,-5,1)
n(0,0,1)
v(23.875,-5,1)
n(0,0,1)
v(24.375,-4.75,1)
n(0,0,1)
v(23.875,-4.25,1)
n(0,0,1)
v(23.875,-5,1)
n(0,0,1)
v(24.375,-5.5,1)
n(0,0,1)
v(24.375,-4.75,1)
n(0,0,1)
v(24.375,-5.5,1)
n(0,0,1)
v(24.875,-5,1)
n(0,0,1)
v(24.375,-4.75,1)
n(0,0,1)
v(24.375,-5.5,1)
n(0,0,1)
v(24.875,-5.5,1)
n(0,0,1)
v(24.875,-5,1)
n(0,0,1)
v(24.875,-5.5,1)
n(0,0,1)
v(25.125,-4.75,1)
n(0,0,1)
v(24.875,-5,1)
n(0,0,1)
v(24.875,-5.5,1)
n(0,0,1)
v(26.375,-5,1)
n(0,0,1)
v(25.125,-4.75,1)
n(0,0,1)
v(26.375,-5,1)
n(0,0,1)
v(25.875,-4.25,1)
n(0,0,1)
v(25.125,-4.75,1)
n(0,0,1)
v(26.375,-5,1)
n(0,0,1)
v(26.375,-3.25,1)
n(0,0,1)
v(25.875,-4.25,1)
n(0,0,1)
v(26.375,-5,1)
n(0,0,1)
v(26.875,-1.5,1)
n(0,0,1)
v(26.375,-3.25,1)
n(0,0,1)
v(26.375,-5,1)
n(0,0,1)
v(27.625,-4,1)
n(0,0,1)
v(26.875,-1.5,1)
n(0,0,1)
v(27.625,-4,1)
n(0,0,1)
v(27.125,0,1)
n(0,0,1)
v(26.875,-1.5,1)
n(0,0,1)
v(27.625,-4,1)
n(0,0,1)
v(27.625,2.25,1)
n(0,0,1)
v(27.125,0,1)
n(0,0,1)
v(27.625,-4,1)
n(0,0,1)
v(28.375,2,1)
n(0,0,1)
v(27.625,2.25,1)
n(0,0,1)
v(27.625,-4,1)
n(0,0,1)
v(28.875,-2,1)
n(0,0,1)
v(28.375,2,1)
n(0,0,1)
v(28.875,-2,1)
n(0,0,1)
v(28.875,1.75,1)
n(0,0,1)
v(28.375,2,1)
n(0,0,1)
v(28.875,-2,1)
n(0,0,1)
v(29.125,0.25,1)
n(0,0,1)
v(28.875,1.75,1)
n(0,0,1)
v(23.375,1,1)
n(0,0,1)
v(23.125,0,1)
n(0,0,1)
v(23.125,-7.75,1)
n(0,0,1)
v(23.125,-7.75,1)
n(0,0,1)
v(23.375,-6.5,1)
n(0,0,1)
v(23.375,1,1)
n(0,0,1)
v(20.625,-8.25,1)
n(0,0,1)
v(19.625,-8.5,1)
n(0,0,1)
v(19.625,-8.75,1)
n(0,0,1)
v(19.625,-8.75,1)
n(0,0,1)
v(24.125,-8.75,1)
n(0,0,1)
v(20.625,-8.25,1)
n(0,0,1)
v(24.125,-8.75,1)
n(0,0,1)
v(20.875,-7.75,1)
n(0,0,1)
v(20.625,-8.25,1)
n(0,0,1)
v(24.125,-8.75,1)
n(0,0,1)
v(23.625,-8.5,1)
n(0,0,1)
v(20.875,-7.75,1)
n(0,0,1)
v(23.625,-8.5,1)
n(0,0,1)
v(21.125,-6.75,1)
n(0,0,1)
v(20.875,-7.75,1)
n(0,0,1)
v(23.625,-8.5,1)
n(0,0,1)
v(23.375,-8.25,1)
n(0,0,1)
v(21.125,-6.75,1)
n(0,0,1)
v(23.375,-8.25,1)
n(0,0,1)
v(23.125,-7.75,1)
n(0,0,1)
v(21.125,-6.75,1)
n(0,0,1)
v(23.125,-7.75,1)
n(0,0,1)
v(23.125,0,1)
n(0,0,1)
v(21.125,-6.75,1)
n(0,0,1)
v(24.125,-8.75,1)
n(0,0,1)
v(24.125,-8.5,1)
n(0,0,1)
v(23.625,-8.5,1)
n(0,0,1)
v(38.375,-0.25,1)
n(0,0,1)
v(36.875,-5.25,1)
n(0,0,1)
v(38.875,-5.25,1)
n(0,0,1)
v(38.875,-5.25,1)
n(0,0,1)
v(38.625,1,1)
n(0,0,1)
v(38.375,-0.25,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(37.875,1.75,1)
n(0,0,1)
v(37.875,1.5,1)
n(0,0,1)
v(37.875,1.5,1)
n(0,0,1)
v(38.375,1.5,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(38.375,1.5,1)
n(0,0,1)
v(38.625,1.25,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(38.625,1.25,1)
n(0,0,1)
v(38.625,1,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(38.625,1,1)
n(0,0,1)
v(38.875,-5.25,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(38.875,-5.25,1)
n(0,0,1)
v(39.625,-2.75,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(39.625,-2.75,1)
n(0,0,1)
v(40.125,-1.25,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(40.125,-1.25,1)
n(0,0,1)
v(41.125,2.25,1)
n(0,0,1)
v(40.625,2.25,1)
n(0,0,1)
v(40.125,-1.25,1)
n(0,0,1)
v(39.625,-2.75,1)
n(0,0,1)
v(40.625,-1,1)
n(0,0,1)
v(40.625,-1,1)
n(0,0,1)
v(40.875,0,1)
n(0,0,1)
v(40.125,-1.25,1)
n(0,0,1)
v(40.625,-1,1)
n(0,0,1)
v(41.625,0.25,1)
n(0,0,1)
v(40.875,0,1)
n(0,0,1)
v(41.625,0.25,1)
n(0,0,1)
v(41.625,0.75,1)
n(0,0,1)
v(40.875,0,1)
n(0,0,1)
v(41.625,0.25,1)
n(0,0,1)
v(42.375,1.5,1)
n(0,0,1)
v(41.625,0.75,1)
n(0,0,1)
v(41.625,0.25,1)
n(0,0,1)
v(42.375,0.75,1)
n(0,0,1)
v(42.375,1.5,1)
n(0,0,1)
v(42.375,0.75,1)
n(0,0,1)
v(42.875,2,1)
n(0,0,1)
v(42.375,1.5,1)
n(0,0,1)
v(42.375,0.75,1)
n(0,0,1)
v(42.625,0.75,1)
n(0,0,1)
v(42.875,2,1)
n(0,0,1)
v(42.625,0.75,1)
n(0,0,1)
v(42.625,0.5,1)
n(0,0,1)
v(42.875,2,1)
n(0,0,1)
v(42.625,0.5,1)
n(0,0,1)
v(42.625,0,1)
n(0,0,1)
v(42.875,2,1)
n(0,0,1)
v(43.875,2.25,1)
n(0,0,1)
v(42.875,2,1)
n(0,0,1)
v(43.375,-4.25,1)
n(0,0,1)
v(43.375,-4.25,1)
n(0,0,1)
v(43.625,-3.75,1)
n(0,0,1)
v(43.875,2.25,1)
n(0,0,1)
v(43.625,-3.75,1)
n(0,0,1)
v(44.625,-0.25,1)
n(0,0,1)
v(43.875,2.25,1)
n(0,0,1)
v(44.625,-0.25,1)
n(0,0,1)
v(44.625,1.75,1)
n(0,0,1)
v(43.875,2.25,1)
n(0,0,1)
v(44.625,-0.25,1)
n(0,0,1)
v(44.875,0.75,1)
n(0,0,1)
v(44.625,1.75,1)
n(0,0,1)
v(41.625,-3.5,1)
n(0,0,1)
v(41.375,-4.5,1)
n(0,0,1)
v(41.625,-5.25,1)
n(0,0,1)
v(41.625,-5.25,1)
n(0,0,1)
v(42.625,0,1)
n(0,0,1)
v(41.625,-3.5,1)
n(0,0,1)
v(41.625,-5.25,1)
n(0,0,1)
v(42.375,-5.5,1)
n(0,0,1)
v(42.625,0,1)
n(0,0,1)
v(42.375,-5.5,1)
n(0,0,1)
v(43.375,-4.25,1)
n(0,0,1)
v(42.625,0,1)
n(0,0,1)
v(42.375,-5.5,1)
n(0,0,1)
v(43.875,-5,1)
n(0,0,1)
v(43.375,-4.25,1)
n(0,0,1)
v(43.375,-4.25,1)
n(0,0,1)
v(42.875,2,1)
n(0,0,1)
v(42.625,0,1)
n(0,0,1)
v(43.875,-5,1)
n(0,0,1)
v(43.625,-4.5,1)
n(0,0,1)
v(43.375,-4.25,1)
n(0,0,1)
v(43.875,-5,1)
n(0,0,1)
v(43.875,-4.5,1)
n(0,0,1)
v(43.625,-4.5,1)
n(0,0,1)
v(43.875,-5,1)
n(0,0,1)
v(44.625,-3.75,1)
n(0,0,1)
v(43.875,-4.5,1)
n(0,0,1)
v(43.875,-5,1)
n(0,0,1)
v(45.125,-3.75,1)
n(0,0,1)
v(44.625,-3.75,1)
n(0,0,1)
v(45.125,-3.75,1)
n(0,0,1)
v(44.875,-3.5,1)
n(0,0,1)
v(44.625,-3.75,1)
n(0,0,1)
v(30.375,-2,1)
n(0,0,1)
v(30.125,-3.25,1)
n(0,0,1)
v(30.375,-4.25,1)
n(0,0,1)
v(30.375,-4.25,1)
n(0,0,1)
v(30.875,-0.75,1)
n(0,0,1)
v(30.375,-2,1)
n(0,0,1)
v(30.375,-4.25,1)
n(0,0,1)
v(30.625,-4.75,1)
n(0,0,1)
v(30.875,-0.75,1)
n(0,0,1)
v(30.625,-4.75,1)
n(0,0,1)
v(31.375,-5.25,1)
n(0,0,1)
v(30.875,-0.75,1)
n(0,0,1)
v(31.375,-5.25,1)
n(0,0,1)
v(31.625,0.5,1)
n(0,0,1)
v(30.875,-0.75,1)
n(0,0,1)
v(31.375,-5.25,1)
n(0,0,1)
v(32.375,-5.5,1)
n(0,0,1)
v(31.625,0.5,1)
n(0,0,1)
v(32.375,-5.5,1)
n(0,0,1)
v(32.125,-3.25,1)
n(0,0,1)
v(31.625,0.5,1)
n(0,0,1)
v(32.125,-3.25,1)
n(0,0,1)
v(32.125,-2.25,1)
n(0,0,1)
v(31.625,0.5,1)
n(0,0,1)
v(32.375,-5.5,1)
n(0,0,1)
v(32.375,-4.25,1)
n(0,0,1)
v(32.125,-3.25,1)
n(0,0,1)
v(32.375,-5.5,1)
n(0,0,1)
v(33.375,-4.5,1)
n(0,0,1)
v(32.375,-4.25,1)
n(0,0,1)
v(32.375,-5.5,1)
n(0,0,1)
v(34.125,-5,1)
n(0,0,1)
v(33.375,-4.5,1)
n(0,0,1)
v(34.125,-5,1)
n(0,0,1)
v(34.375,-4.25,1)
n(0,0,1)
v(33.375,-4.5,1)
n(0,0,1)
v(34.125,-5,1)
n(0,0,1)
v(35.125,-4.5,1)
n(0,0,1)
v(34.375,-4.25,1)
n(0,0,1)
v(35.125,-4.5,1)
n(0,0,1)
v(35.625,-3.5,1)
n(0,0,1)
v(34.375,-4.25,1)
n(0,0,1)
v(35.125,-4.5,1)
n(0,0,1)
v(35.875,-3.75,1)
n(0,0,1)
v(35.625,-3.5,1)
n(0,0,1)
v(8.375,2,1)
n(0,0,1)
v(8.125,1.25,1)
n(0,0,1)
v(8.125,0.75,1)
n(0,0,1)
v(8.125,0.75,1)
n(0,0,1)
v(8.625,0.25,1)
n(0,0,1)
v(8.375,2,1)
n(0,0,1)
v(8.625,0.25,1)
n(0,0,1)
v(9.125,2.25,1)
n(0,0,1)
v(8.375,2,1)
n(0,0,1)
v(8.625,0.25,1)
n(0,0,1)
v(9.125,-0.25,1)
n(0,0,1)
v(9.125,2.25,1)
n(0,0,1)
v(9.125,-0.25,1)
n(0,0,1)
v(9.875,2,1)
n(0,0,1)
v(9.125,2.25,1)
n(0,0,1)
v(9.125,-0.25,1)
n(0,0,1)
v(9.375,-1.25,1)
n(0,0,1)
v(9.875,2,1)
n(0,0,1)
v(9.375,-1.25,1)
n(0,0,1)
v(9.875,-0.25,1)
n(0,0,1)
v(9.875,2,1)
n(0,0,1)
v(9.875,-0.25,1)
n(0,0,1)
v(10.125,1,1)
n(0,0,1)
v(9.875,2,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(3.375,1.5,1)
n(0,0,1)
v(3.375,1.25,1)
n(0,0,1)
v(3.375,1.25,1)
n(0,0,1)
v(3.625,1.25,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(3.625,1.25,1)
n(0,0,1)
v(4.375,1,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.375,1,1)
n(0,0,1)
v(4.625,0.5,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.625,0.5,1)
n(0,0,1)
v(4.625,-0.25,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.625,-0.25,1)
n(0,0,1)
v(4.875,-3.25,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.875,-3.25,1)
n(0,0,1)
v(4.875,-4.5,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.875,-4.5,1)
n(0,0,1)
v(4.875,-5.25,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.875,-5.25,1)
n(0,0,1)
v(4.875,-6,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.875,-6,1)
n(0,0,1)
v(4.875,-7.25,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(4.875,-7.25,1)
n(0,0,1)
v(5.625,-6.5,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(5.625,-6.5,1)
n(0,0,1)
v(6.625,-4.25,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(5.625,-6.5,1)
n(0,0,1)
v(6.875,-4.75,1)
n(0,0,1)
v(6.625,-4.25,1)
n(0,0,1)
v(6.625,-4.25,1)
n(0,0,1)
v(6.125,1.5,1)
n(0,0,1)
v(5.875,2.25,1)
n(0,0,1)
v(6.625,-4.25,1)
n(0,0,1)
v(6.375,0,1)
n(0,0,1)
v(6.125,1.5,1)
n(0,0,1)
v(6.875,-4.75,1)
n(0,0,1)
v(7.625,-3,1)
n(0,0,1)
v(6.625,-4.25,1)
n(0,0,1)
v(6.875,-4.75,1)
n(0,0,1)
v(8.375,-2.75,1)
n(0,0,1)
v(7.625,-3,1)
n(0,0,1)
v(8.375,-2.75,1)
n(0,0,1)
v(8.625,-1.25,1)
n(0,0,1)
v(7.625,-3,1)
n(0,0,1)
v(8.375,-2.75,1)
n(0,0,1)
v(9.375,-1.25,1)
n(0,0,1)
v(8.625,-1.25,1)
n(0,0,1)
v(9.375,-1.25,1)
n(0,0,1)
v(9.125,-0.25,1)
n(0,0,1)
v(8.625,-1.25,1)
n(0,0,1)
v(1.625,-7,1)
n(0,0,1)
v(1.375,-7.75,1)
n(0,0,1)
v(1.625,-8.5,1)
n(0,0,1)
v(1.625,-8.5,1)
n(0,0,1)
v(2.375,-6.75,1)
n(0,0,1)
v(1.625,-7,1)
n(0,0,1)
v(1.625,-8.5,1)
n(0,0,1)
v(2.375,-8.75,1)
n(0,0,1)
v(2.375,-6.75,1)
n(0,0,1)
v(2.375,-8.75,1)
n(0,0,1)
v(3.125,-7.25,1)
n(0,0,1)
v(2.375,-6.75,1)
n(0,0,1)
v(2.375,-8.75,1)
n(0,0,1)
v(3.125,-8.5,1)
n(0,0,1)
v(3.125,-7.25,1)
n(0,0,1)
v(3.125,-8.5,1)
n(0,0,1)
v(3.625,-7.5,1)
n(0,0,1)
v(3.125,-7.25,1)
n(0,0,1)
v(3.125,-8.5,1)
n(0,0,1)
v(4.125,-8,1)
n(0,0,1)
v(3.625,-7.5,1)
n(0,0,1)
v(4.125,-8,1)
n(0,0,1)
v(3.875,-7.5,1)
n(0,0,1)
v(3.625,-7.5,1)
n(0,0,1)
v(4.125,-8,1)
n(0,0,1)
v(4.625,-7,1)
n(0,0,1)
v(3.875,-7.5,1)
n(0,0,1)
v(4.125,-8,1)
n(0,0,1)
v(4.875,-7.25,1)
n(0,0,1)
v(4.625,-7,1)
n(0,0,1)
v(4.875,-7.25,1)
n(0,0,1)
v(4.875,-6,1)
n(0,0,1)
v(4.625,-7,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(-4.375,5.5,1)
n(0,0,1)
v(-4.375,5.25,1)
n(0,0,1)
v(-4.375,5.25,1)
n(0,0,1)
v(-3.375,5,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(-3.375,5,1)
n(0,0,1)
v(-3.125,4.5,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(-3.125,4.5,1)
n(0,0,1)
v(-0.875,4.75,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(-3.125,4.5,1)
n(0,0,1)
v(-3.125,-3.25,1)
n(0,0,1)
v(-0.875,4.75,1)
n(0,0,1)
v(-3.125,-3.25,1)
n(0,0,1)
v(-2.125,0.25,1)
n(0,0,1)
v(-0.875,4.75,1)
n(0,0,1)
v(-0.875,4.75,1)
n(0,0,1)
v(-0.375,4.75,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(-0.375,4.75,1)
n(0,0,1)
v(0.875,4.25,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(0.875,4.25,1)
n(0,0,1)
v(1.375,5.5,1)
n(0,0,1)
v(-0.125,5.5,1)
n(0,0,1)
v(0.875,4.25,1)
n(0,0,1)
v(1.375,3.25,1)
n(0,0,1)
v(1.375,5.5,1)
n(0,0,1)
v(1.375,3.25,1)
n(0,0,1)
v(2.375,5,1)
n(0,0,1)
v(1.375,5.5,1)
n(0,0,1)
v(-2.125,0.25,1)
n(0,0,1)
v(-3.125,-3.25,1)
n(0,0,1)
v(-2.125,-0.25,1)
n(0,0,1)
v(-2.125,-0.25,1)
n(0,0,1)
v(-1.125,0.25,1)
n(0,0,1)
v(-2.125,0.25,1)
n(0,0,1)
v(-2.125,-0.25,1)
n(0,0,1)
v(-0.625,-0.5,1)
n(0,0,1)
v(-1.125,0.25,1)
n(0,0,1)
v(-0.625,-0.5,1)
n(0,0,1)
v(-0.125,0.5,1)
n(0,0,1)
v(-1.125,0.25,1)
n(0,0,1)
v(-0.625,-0.5,1)
n(0,0,1)
v(1.125,-0.25,1)
n(0,0,1)
v(-0.125,0.5,1)
n(0,0,1)
v(1.125,-0.25,1)
n(0,0,1)
v(0.625,1.25,1)
n(0,0,1)
v(-0.125,0.5,1)
n(0,0,1)
v(1.125,-0.25,1)
n(0,0,1)
v(1.125,2.25,1)
n(0,0,1)
v(0.625,1.25,1)
n(0,0,1)
v(1.125,-0.25,1)
n(0,0,1)
v(1.375,3.25,1)
n(0,0,1)
v(1.125,2.25,1)
n(0,0,1)
v(1.125,-0.25,1)
n(0,0,1)
v(2.375,0.5,1)
n(0,0,1)
v(1.375,3.25,1)
n(0,0,1)
v(2.375,0.5,1)
n(0,0,1)
v(2.375,5,1)
n(0,0,1)
v(1.375,3.25,1)
n(0,0,1)
v(2.375,0.5,1)
n(0,0,1)
v(3.375,4.25,1)
n(0,0,1)
v(2.375,5,1)
n(0,0,1)
v(2.375,0.5,1)
n(0,0,1)
v(3.375,1.75,1)
n(0,0,1)
v(3.375,4.25,1)
n(0,0,1)
v(3.375,1.75,1)
n(0,0,1)
v(3.625,3,1)
n(0,0,1)
v(3.375,4.25,1)
n(0,0,1)
v(-5.375,-3.25,1)
n(0,0,1)
v(-5.625,-4,1)
n(0,0,1)
v(-5.625,-4.5,1)
n(0,0,1)
v(-6.125,-5,1)
n(0,0,1)
v(-7.125,-5,1)
n(0,0,1)
v(-7.125,-5.25,1)
n(0,0,1)
v(-7.125,-5.25,1)
n(0,0,1)
v(-1.875,-5.25,1)
n(0,0,1)
v(-6.125,-5,1)
n(0,0,1)
v(-1.875,-5.25,1)
n(0,0,1)
v(-5.625,-4.5,1)
n(0,0,1)
v(-6.125,-5,1)
n(0,0,1)
v(-1.875,-5.25,1)
n(0,0,1)
v(-2.625,-5,1)
n(0,0,1)
v(-5.625,-4.5,1)
n(0,0,1)
v(-2.625,-5,1)
n(0,0,1)
v(-5.375,-3.25,1)
n(0,0,1)
v(-5.625,-4.5,1)
n(0,0,1)
v(-2.625,-5,1)
n(0,0,1)
v(-2.875,-4.75,1)
n(0,0,1)
v(-5.375,-3.25,1)
n(0,0,1)
v(-2.875,-4.75,1)
n(0,0,1)
v(-3.125,-4.25,1)
n(0,0,1)
v(-5.375,-3.25,1)
n(0,0,1)
v(-3.125,-4.25,1)
n(0,0,1)
v(-3.375,3.5,1)
n(0,0,1)
v(-5.375,-3.25,1)
n(0,0,1)
v(-3.125,-4.25,1)
n(0,0,1)
v(-3.125,4.5,1)
n(0,0,1)
v(-3.375,3.5,1)
n(0,0,1)
v(-3.125,-4.25,1)
n(0,0,1)
v(-3.125,-3.25,1)
n(0,0,1)
v(-3.125,4.5,1)
n(0,0,1)
v(-1.875,-5.25,1)
n(0,0,1)
v(-1.875,-5,1)
n(0,0,1)
v(-2.625,-5,1)
n(0,0,1)
v(13.125,3.5,1)
n(0,0,1)
v(13.125,-2.75,1)
n(0,0,1)
v(13.625,0,1)
n(0,0,1)
v(13.625,0,1)
n(0,0,1)
v(14.625,4.75,1)
n(0,0,1)
v(13.125,3.5,1)
n(0,0,1)
v(13.125,-2.75,1)
n(0,0,1)
v(13.375,-1.5,1)
n(0,0,1)
v(13.625,0,1)
n(0,0,1)
v(13.625,0,1)
n(0,0,1)
v(14.125,1.5,1)
n(0,0,1)
v(14.625,4.75,1)
n(0,0,1)
v(14.125,1.5,1)
n(0,0,1)
v(14.625,2.5,1)
n(0,0,1)
v(14.625,4.75,1)
n(0,0,1)
v(14.625,2.5,1)
n(0,0,1)
v(15.125,3.5,1)
n(0,0,1)
v(14.625,4.75,1)
n(0,0,1)
v(15.125,3.5,1)
n(0,0,1)
v(16.375,5.5,1)
n(0,0,1)
v(14.625,4.75,1)
n(0,0,1)
v(15.125,3.5,1)
n(0,0,1)
v(15.625,4,1)
n(0,0,1)
v(16.375,5.5,1)
n(0,0,1)
v(15.625,4,1)
n(0,0,1)
v(16.625,4.75,1)
n(0,0,1)
v(16.375,5.5,1)
n(0,0,1)
v(16.625,4.75,1)
n(0,0,1)
v(17.875,5.75,1)
n(0,0,1)
v(16.375,5.5,1)
n(0,0,1)
v(16.625,4.75,1)
n(0,0,1)
v(17.875,5,1)
n(0,0,1)
v(17.875,5.75,1)
n(0,0,1)
v(17.875,5,1)
n(0,0,1)
v(18.875,5.5,1)
n(0,0,1)
v(17.875,5.75,1)
n(0,0,1)
v(17.875,5,1)
n(0,0,1)
v(18.375,4.75,1)
n(0,0,1)
v(18.875,5.5,1)
n(0,0,1)
v(18.375,4.75,1)
n(0,0,1)
v(18.875,4.5,1)
n(0,0,1)
v(18.875,5.5,1)
n(0,0,1)
v(18.875,4.5,1)
n(0,0,1)
v(19.875,5.25,1)
n(0,0,1)
v(18.875,5.5,1)
n(0,0,1)
v(18.875,4.5,1)
n(0,0,1)
v(19.375,4,1)
n(0,0,1)
v(19.875,5.25,1)
n(0,0,1)
v(19.375,4,1)
n(0,0,1)
v(19.375,3,1)
n(0,0,1)
v(19.875,5.25,1)
n(0,0,1)
v(11.125,0,1)
n(0,0,1)
v(10.875,-1.75,1)
n(0,0,1)
v(11.375,-3.5,1)
n(0,0,1)
v(11.375,-3.5,1)
n(0,0,1)
v(11.875,1.75,1)
n(0,0,1)
v(11.125,0,1)
n(0,0,1)
v(11.375,-3.5,1)
n(0,0,1)
v(12.375,-5,1)
n(0,0,1)
v(11.875,1.75,1)
n(0,0,1)
v(12.375,-5,1)
n(0,0,1)
v(13.125,3.5,1)
n(0,0,1)
v(11.875,1.75,1)
n(0,0,1)
v(12.375,-5,1)
n(0,0,1)
v(13.125,-2.75,1)
n(0,0,1)
v(13.125,3.5,1)
n(0,0,1)
v(12.375,-5,1)
n(0,0,1)
v(13.375,-5.5,1)
n(0,0,1)
v(13.125,-2.75,1)
n(0,0,1)
v(13.375,-5.5,1)
n(0,0,1)
v(13.625,-4.25,1)
n(0,0,1)
v(13.125,-2.75,1)
n(0,0,1)
v(13.375,-5.5,1)
n(0,0,1)
v(14.625,-5.5,1)
n(0,0,1)
v(13.625,-4.25,1)
n(0,0,1)
v(14.625,-5.5,1)
n(0,0,1)
v(14.125,-4.75,1)
n(0,0,1)
v(13.625,-4.25,1)
n(0,0,1)
v(14.625,-5.5,1)
n(0,0,1)
v(14.875,-4.75,1)
n(0,0,1)
v(14.125,-4.75,1)
n(0,0,1)
v(14.625,-5.5,1)
n(0,0,1)
v(16.125,-5.25,1)
n(0,0,1)
v(14.875,-4.75,1)
n(0,0,1)
v(16.125,-5.25,1)
n(0,0,1)
v(15.625,-4.5,1)
n(0,0,1)
v(14.875,-4.75,1)
n(0,0,1)
v(16.125,-5.25,1)
n(0,0,1)
v(16.375,-4.25,1)
n(0,0,1)
v(15.625,-4.5,1)
n(0,0,1)
v(16.125,-5.25,1)
n(0,0,1)
v(17.625,-4.75,1)
n(0,0,1)
v(16.375,-4.25,1)
n(0,0,1)
v(17.625,-4.75,1)
n(0,0,1)
v(17.625,-2.75,1)
n(0,0,1)
v(16.375,-4.25,1)
n(0,0,1)
v(17.625,-4.75,1)
n(0,0,1)
v(18.625,-1,1)
n(0,0,1)
v(17.625,-2.75,1)
n(0,0,1)
v(17.625,-4.75,1)
n(0,0,1)
v(19.125,-3.75,1)
n(0,0,1)
v(18.625,-1,1)
n(0,0,1)
v(19.125,-3.75,1)
n(0,0,1)
v(19.125,1.25,1)
n(0,0,1)
v(18.625,-1,1)
n(0,0,1)
v(19.125,-3.75,1)
n(0,0,1)
v(19.375,3,1)
n(0,0,1)
v(19.125,1.25,1)
n(0,0,1)
v(19.125,-3.75,1)
n(0,0,1)
v(20.375,-2,1)
n(0,0,1)
v(19.375,3,1)
n(0,0,1)
v(20.375,-2,1)
n(0,0,1)
v(19.875,5.25,1)
n(0,0,1)
v(19.375,3,1)
n(0,0,1)
v(20.375,-2,1)
n(0,0,1)
v(21.125,4,1)
n(0,0,1)
v(19.875,5.25,1)
n(0,0,1)
v(20.375,-2,1)
n(0,0,1)
v(21.375,0,1)
n(0,0,1)
v(21.125,4,1)
n(0,0,1)
v(21.375,0,1)
n(0,0,1)
v(21.625,3,1)
n(0,0,1)
v(21.125,4,1)
n(0,0,1)
v(21.375,0,1)
n(0,0,1)
v(21.625,2,1)
n(0,0,1)
v(21.625,3,1)
n(-0.948683,-0.316228,0)
v(-2.125,-0.25,-1)
n(-0.948683,-0.316228,0)
v(-2.125,-0.25,1)
n(-0.987087,-0.160182,0)
v(-3.125,-3.25,-1)
n(-0.987087,-0.160182,0)
v(-3.125,-3.25,-1)
n(-0.948683,-0.316228,0)
v(-2.125,-0.25,1)
n(-0.987087,-0.160182,0)
v(-3.125,-3.25,1)
n(-0.987087,-0.160182,0)
v(-3.125,-3.25,-1)
n(-0.987087,-0.160182,0)
v(-3.125,-3.25,1)
n(-0.973249,0.229753,0)
v(-3.125,-4.25,-1)
n(-0.973249,0.229753,0)
v(-3.125,-4.25,-1)
n(-0.987087,-0.160182,0)
v(-3.125,-3.25,1)
n(-0.973249,0.229753,0)
v(-3.125,-4.25,1)
n(-0.973249,0.229753,0)
v(-3.125,-4.25,-1)
n(-0.973249,0.229753,0)
v(-3.125,-4.25,1)
n(-0.811242,0.58471,0)
v(-2.875,-4.75,-1)
n(-0.811242,0.58471,0)
v(-2.875,-4.75,-1)
n(-0.973249,0.229753,0)
v(-3.125,-4.25,1)
n(-0.811242,0.58471,0)
v(-2.875,-4.75,1)
n(-0.811242,0.58471,0)
v(-2.875,-4.75,-1)
n(-0.811242,0.58471,0)
v(-2.875,-4.75,1)
n(-0.382683,0.92388,0)
v(-2.625,-5,-1)
n(-0.382683,0.92388,0)
v(-2.625,-5,-1)
n(-0.811242,0.58471,0)
v(-2.875,-4.75,1)
n(-0.382683,0.92388,0)
v(-2.625,-5,1)
n(-0.382683,0.92388,0)
v(-2.625,-5,-1)
n(-0.382683,0.92388,0)
v(-2.625,-5,1)
n(0,1,0)
v(-1.875,-5,-1)
n(0,1,0)
v(-1.875,-5,-1)
n(-0.382683,0.92388,0)
v(-2.625,-5,1)
n(0,1,0)
v(-1.875,-5,1)
n(-1,0,0)
v(-1.875,-5,-1)
n(-1,0,0)
v(-1.875,-5,1)
n(-1,0,0)
v(-1.875,-5.25,-1)
n(-1,0,0)
v(-1.875,-5.25,-1)
n(-1,0,0)
v(-1.875,-5,1)
n(-1,0,0)
v(-1.875,-5.25,1)
n(0,-1,0)
v(-1.875,-5.25,-1)
n(0,-1,0)
v(-1.875,-5.25,1)
n(0,-1,0)
v(-7.125,-5.25,-1)
n(0,-1,0)
v(-7.125,-5.25,-1)
n(0,-1,0)
v(-1.875,-5.25,1)
n(0,-1,0)
v(-7.125,-5.25,1)
n(1,0,0)
v(-7.125,-5.25,-1)
n(1,0,0)
v(-7.125,-5.25,1)
n(1,0,0)
v(-7.125,-5,-1)
n(1,0,0)
v(-7.125,-5,-1)
n(1,0,0)
v(-7.125,-5.25,1)
n(1,0,0)
v(-7.125,-5,1)
n(0,1,0)
v(-7.125,-5,-1)
n(0,1,0)
v(-7.125,-5,1)
n(0,1,0)
v(-6.125,-5,-1)
n(0,1,0)
v(-6.125,-5,-1)
n(0,1,0)
v(-7.125,-5,1)
n(0,1,0)
v(-6.125,-5,1)
n(0.707107,0.707107,0)
v(-6.125,-5,-1)
n(0.707107,0.707107,0)
v(-6.125,-5,1)
n(0.707107,0.707107,0)
v(-5.625,-4.5,-1)
n(0.707107,0.707107,0)
v(-5.625,-4.5,-1)
n(0.707107,0.707107,0)
v(-6.125,-5,1)
n(0.707107,0.707107,0)
v(-5.625,-4.5,1)
n(1,0,0)
v(-5.625,-4.5,-1)
n(1,0,0)
v(-5.625,-4.5,1)
n(0.987087,0.160182,0)
v(-5.625,-4,-1)
n(0.987087,0.160182,0)
v(-5.625,-4,-1)
n(1,0,0)
v(-5.625,-4.5,1)
n(0.987087,0.160182,0)
v(-5.625,-4,1)
n(0.987087,0.160182,0)
v(-5.625,-4,-1)
n(0.987087,0.160182,0)
v(-5.625,-4,1)
n(0.953876,0.300201,0)
v(-5.375,-3.25,-1)
n(0.953876,0.300201,0)
v(-5.375,-3.25,-1)
n(0.987087,0.160182,0)
v(-5.625,-4,1)
n(0.953876,0.300201,0)
v(-5.375,-3.25,1)
n(0.953876,0.300201,0)
v(-5.375,-3.25,-1)
n(0.953876,0.300201,0)
v(-5.375,-3.25,1)
n(0.964694,0.263373,0)
v(-3.375,3.5,-1)
n(0.964694,0.263373,0)
v(-3.375,3.5,-1)
n(0.953876,0.300201,0)
v(-5.375,-3.25,1)
n(0.964694,0.263373,0)
v(-3.375,3.5,1)
n(0.964694,0.263373,0)
v(-3.375,3.5,-1)
n(0.964694,0.263373,0)
v(-3.375,3.5,1)
n(0.994029,-0.109117,0)
v(-3.125,4.5,-1)
n(0.994029,-0.109117,0)
v(-3.125,4.5,-1)
n(0.964694,0.263373,0)
v(-3.375,3.5,1)
n(0.994029,-0.109117,0)
v(-3.125,4.5,1)
n(0.994029,-0.109117,0)
v(-3.125,4.5,-1)
n(0.994029,-0.109117,0)
v(-3.125,4.5,1)
n(0.894427,-0.447214,0)
v(-3.375,5,-1)
n(0.894427,-0.447214,0)
v(-3.375,5,-1)
n(0.994029,-0.109117,0)
v(-3.125,4.5,1)
n(0.894427,-0.447214,0)
v(-3.375,5,1)
n(0.242536,-0.970143,0)
v(-3.375,5,-1)
n(0.242536,-0.970143,0)
v(-3.375,5,1)
n(0.242536,-0.970143,0)
v(-4.375,5.25,-1)
n(0.242536,-0.970143,0)
v(-4.375,5.25,-1)
n(0.242536,-0.970143,0)
v(-3.375,5,1)
n(0.242536,-0.970143,0)
v(-4.375,5.25,1)
n(1,0,0)
v(-4.375,5.25,-1)
n(1,0,0)
v(-4.375,5.25,1)
n(1,0,0)
v(-4.375,5.5,-1)
n(1,0,0)
v(-4.375,5.5,-1)
n(1,0,0)
v(-4.375,5.25,1)
n(1,0,0)
v(-4.375,5.5,1)
n(0,1,0)
v(-4.375,5.5,-1)
n(0,1,0)
v(-4.375,5.5,1)
n(0,1,0)
v(-0.125,5.5,-1)
n(0,1,0)
v(-0.125,5.5,-1)
n(0,1,0)
v(-4.375,5.5,1)
n(0,1,0)
v(-0.125,5.5,1)
n(0,1,0)
v(-0.125,5.5,-1)
n(0,1,0)
v(-0.125,5.5,1)
n(-0.229753,0.973249,0)
v(1.375,5.5,-1)
n(-0.229753,0.973249,0)
v(1.375,5.5,-1)
n(0,1,0)
v(-0.125,5.5,1)
n(-0.229753,0.973249,0)
v(1.375,5.5,1)
n(-0.229753,0.973249,0)
v(1.375,5.5,-1)
n(-0.229753,0.973249,0)
v(1.375,5.5,1)
n(-0.525731,0.850651,0)
v(2.375,5,-1)
n(-0.525731,0.850651,0)
v(2.375,5,-1)
n(-0.229753,0.973249,0)
v(1.375,5.5,1)
n(-0.525731,0.850651,0)
v(2.375,5,1)
n(-0.525731,0.850651,0)
v(2.375,5,-1)
n(-0.525731,0.850651,0)
v(2.375,5,1)
n(-0.846007,0.533172,0)
v(3.375,4.25,-1)
n(-0.846007,0.533172,0)
v(3.375,4.25,-1)
n(-0.525731,0.850651,0)
v(2.375,5,1)
n(-0.846007,0.533172,0)
v(3.375,4.25,1)
n(-0.846007,0.533172,0)
v(3.375,4.25,-1)
n(-0.846007,0.533172,0)
v(3.375,4.25,1)
n(-1,0,0)
v(3.625,3,-1)
n(-1,0,0)
v(3.625,3,-1)
n(-0.846007,0.533172,0)
v(3.375,4.25,1)
n(-1,0,0)
v(3.625,3,1)
n(-1,0,0)
v(3.625,3,-1)
n(-1,0,0)
v(3.625,3,1)
n(-0.906419,-0.422379,0)
v(3.375,1.75,-1)
n(-0.906419,-0.422379,0)
v(3.375,1.75,-1)
n(-1,0,0)
v(3.625,3,1)
n(-0.906419,-0.422379,0)
v(3.375,1.75,1)
n(-0.906419,-0.422379,0)
v(3.375,1.75,-1)
n(-0.906419,-0.422379,0)
v(3.375,1.75,1)
n(-0.658059,-0.752967,0)
v(2.375,0.5,-1)
n(-0.658059,-0.752967,0)
v(2.375,0.5,-1)
n(-0.906419,-0.422379,0)
v(3.375,1.75,1)
n(-0.658059,-0.752967,0)
v(2.375,0.5,1)
n(-0.658059,-0.752967,0)
v(2.375,0.5,-1)
n(-0.658059,-0.752967,0)
v(2.375,0.5,1)
n(-0.334579,-0.942368,0)
v(1.125,-0.25,-1)
n(-0.334579,-0.942368,0)
v(1.125,-0.25,-1)
n(-0.658059,-0.752967,0)
v(2.375,0.5,1)
n(-0.334579,-0.942368,0)
v(1.125,-0.25,1)
n(-0.334579,-0.942368,0)
v(1.125,-0.25,-1)
n(-0.334579,-0.942368,0)
v(1.125,-0.25,1)
n(0.0116255,-0.999932,0)
v(-0.625,-0.5,-1)
n(0.0116255,-0.999932,0)
v(-0.625,-0.5,-1)
n(-0.334579,-0.942368,0)
v(1.125,-0.25,1)
n(0.0116255,-0.999932,0)
v(-0.625,-0.5,1)
n(0.0116255,-0.999932,0)
v(-0.625,-0.5,-1)
n(0.0116255,-0.999932,0)
v(-0.625,-0.5,1)
n(0.164399,-0.986394,0)
v(-2.125,-0.25,-1)
n(0.164399,-0.986394,0)
v(-2.125,-0.25,-1)
n(0.0116255,-0.999932,0)
v(-0.625,-0.5,1)
n(0.164399,-0.986394,0)
v(-2.125,-0.25,1)
n(0,0,0)
v(-2.125,-0.25,-1)
n(0,0,0)
v(-2.125,-0.25,1)
n(0,0,0)
v(-2.125,-0.25,-1)
n(0,0,0)
v(-2.125,-0.25,-1)
n(0,0,0)
v(-2.125,-0.25,1)
n(0,0,0)
v(-2.125,-0.25,1)
n(0,1,0)
v(-2.125,0.25,-1)
n(0,1,0)
v(-2.125,0.25,1)
n(0.122183,0.992508,0)
v(-1.125,0.25,-1)
n(0.122183,0.992508,0)
v(-1.125,0.25,-1)
n(0,1,0)
v(-2.125,0.25,1)
n(0.122183,0.992508,0)
v(-1.125,0.25,1)
n(0.122183,0.992508,0)
v(-1.125,0.25,-1)
n(0.122183,0.992508,0)
v(-1.125,0.25,1)
n(0.492699,0.8702,0)
v(-0.125,0.5,-1)
n(0.492699,0.8702,0)
v(-0.125,0.5,-1)
n(0.122183,0.992508,0)
v(-1.125,0.25,1)
n(0.492699,0.8702,0)
v(-0.125,0.5,1)
n(0.492699,0.8702,0)
v(-0.125,0.5,-1)
n(0.492699,0.8702,0)
v(-0.125,0.5,1)
n(0.811242,0.58471,0)
v(0.625,1.25,-1)
n(0.811242,0.58471,0)
v(0.625,1.25,-1)
n(0.492699,0.8702,0)
v(-0.125,0.5,1)
n(0.811242,0.58471,0)
v(0.625,1.25,1)
n(0.811242,0.58471,0)
v(0.625,1.25,-1)
n(0.811242,0.58471,0)
v(0.625,1.25,1)
n(0.937885,0.346946,0)
v(1.125,2.25,-1)
n(0.937885,0.346946,0)
v(1.125,2.25,-1)
n(0.811242,0.58471,0)
v(0.625,1.25,1)
n(0.937885,0.346946,0)
v(1.125,2.25,1)
n(0.937885,0.346946,0)
v(1.125,2.25,-1)
n(0.937885,0.346946,0)
v(1.125,2.25,1)
n(0.994029,-0.109117,0)
v(1.375,3.25,-1)
n(0.994029,-0.109117,0)
v(1.375,3.25,-1)
n(0.937885,0.346946,0)
v(1.125,2.25,1)
n(0.994029,-0.109117,0)
v(1.375,3.25,1)
n(0.994029,-0.109117,0)
v(1.375,3.25,-1)
n(0.994029,-0.109117,0)
v(1.375,3.25,1)
n(0.67711,-0.735882,0)
v(0.875,4.25,-1)
n(0.67711,-0.735882,0)
v(0.875,4.25,-1)
n(0.994029,-0.109117,0)
v(1.375,3.25,1)
n(0.67711,-0.735882,0)
v(0.875,4.25,1)
n(0.67711,-0.735882,0)
v(0.875,4.25,-1)
n(0.67711,-0.735882,0)
v(0.875,4.25,1)
n(0.189108,-0.981956,0)
v(-0.375,4.75,-1)
n(0.189108,-0.981956,0)
v(-0.375,4.75,-1)
n(0.67711,-0.735882,0)
v(0.875,4.25,1)
n(0.189108,-0.981956,0)
v(-0.375,4.75,1)
n(0.189108,-0.981956,0)
v(-0.375,4.75,-1)
n(0.189108,-0.981956,0)
v(-0.375,4.75,1)
n(0,-1,0)
v(-0.875,4.75,-1)
n(0,-1,0)
v(-0.875,4.75,-1)
n(0.189108,-0.981956,0)
v(-0.375,4.75,1)
n(0,-1,0)
v(-0.875,4.75,1)
n(-0.963518,-0.267644,0)
v(-0.875,4.75,-1)
n(-0.963518,-0.267644,0)
v(-0.875,4.75,1)
n(-0.963518,-0.267644,0)
v(-2.125,0.25,-1)
n(-0.963518,-0.267644,0)
v(-2.125,0.25,-1)
n(-0.963518,-0.267644,0)
v(-0.875,4.75,1)
n(-0.963518,-0.267644,0)
v(-2.125,0.25,1)
n(0.287348,0.957826,0)
v(3.375,1.5,-1)
n(0.287348,0.957826,0)
v(3.375,1.5,1)
n(0.287348,0.957826,0)
v(5.875,2.25,-1)
n(0.287348,0.957826,0)
v(5.875,2.25,-1)
n(0.287348,0.957826,0)
v(3.375,1.5,1)
n(0.287348,0.957826,0)
v(5.875,2.25,1)
n(-0.948683,0.316228,0)
v(5.875,2.25,-1)
n(-0.948683,0.316228,0)
v(5.875,2.25,1)
n(-0.970512,0.241052,0)
v(6.125,1.5,-1)
n(-0.970512,0.241052,0)
v(6.125,1.5,-1)
n(-0.948683,0.316228,0)
v(5.875,2.25,1)
n(-0.970512,0.241052,0)
v(6.125,1.5,1)
n(-0.970512,0.241052,0)
v(6.125,1.5,-1)
n(-0.970512,0.241052,0)
v(6.125,1.5,1)
n(-0.99374,0.111719,0)
v(6.375,0,-1)
n(-0.99374,0.111719,0)
v(6.375,0,-1)
n(-0.970512,0.241052,0)
v(6.125,1.5,1)
n(-0.99374,0.111719,0)
v(6.375,0,1)
n(-0.99374,0.111719,0)
v(6.375,0,-1)
n(-0.99374,0.111719,0)
v(6.375,0,1)
n(-0.998274,0.058722,0)
v(6.625,-4.25,-1)
n(-0.998274,0.058722,0)
v(6.625,-4.25,-1)
n(-0.99374,0.111719,0)
v(6.375,0,1)
n(-0.998274,0.058722,0)
v(6.625,-4.25,1)
n(0.780869,0.624695,0)
v(6.625,-4.25,-1)
n(0.780869,0.624695,0)
v(6.625,-4.25,1)
n(0.827058,0.562117,0)
v(7.625,-3,-1)
n(0.827058,0.562117,0)
v(7.625,-3,-1)
n(0.780869,0.624695,0)
v(6.625,-4.25,1)
n(0.827058,0.562117,0)
v(7.625,-3,1)
n(0.827058,0.562117,0)
v(7.625,-3,-1)
n(0.827058,0.562117,0)
v(7.625,-3,1)
n(0.881675,0.471858,0)
v(8.625,-1.25,-1)
n(0.881675,0.471858,0)
v(8.625,-1.25,-1)
n(0.827058,0.562117,0)
v(7.625,-3,1)
n(0.881675,0.471858,0)
v(8.625,-1.25,1)
n(0.881675,0.471858,0)
v(8.625,-1.25,-1)
n(0.881675,0.471858,0)
v(8.625,-1.25,1)
n(0.894427,0.447214,0)
v(9.125,-0.25,-1)
n(0.894427,0.447214,0)
v(9.125,-0.25,-1)
n(0.881675,0.471858,0)
v(8.625,-1.25,1)
n(0.894427,0.447214,0)
v(9.125,-0.25,1)
n(0.707107,-0.707107,0)
v(9.125,-0.25,-1)
n(0.707107,-0.707107,0)
v(9.125,-0.25,1)
n(0.707107,-0.707107,0)
v(8.625,0.25,-1)
n(0.707107,-0.707107,0)
v(8.625,0.25,-1)
n(0.707107,-0.707107,0)
v(9.125,-0.25,1)
n(0.707107,-0.707107,0)
v(8.625,0.25,1)
n(0.707107,-0.707107,0)
v(8.625,0.25,-1)
n(0.707107,-0.707107,0)
v(8.625,0.25,1)
n(0.707107,-0.707107,0)
v(8.125,0.75,-1)
n(0.707107,-0.707107,0)
v(8.125,0.75,-1)
n(0.707107,-0.707107,0)
v(8.625,0.25,1)
n(0.707107,-0.707107,0)
v(8.125,0.75,1)
n(1,0,0)
v(8.125,0.75,-1)
n(1,0,0)
v(8.125,0.75,1)
n(0.987087,0.160182,0)
v(8.125,1.25,-1)
n(0.987087,0.160182,0)
v(8.125,1.25,-1)
n(1,0,0)
v(8.125,0.75,1)
n(0.987087,0.160182,0)
v(8.125,1.25,1)
n(0.987087,0.160182,0)
v(8.125,1.25,-1)
n(0.987087,0.160182,0)
v(8.125,1.25,1)
n(0.948683,0.316228,0)
v(8.375,2,-1)
n(0.948683,0.316228,0)
v(8.375,2,-1)
n(0.987087,0.160182,0)
v(8.125,1.25,1)
n(0.948683,0.316228,0)
v(8.375,2,1)
n(0.316228,0.948683,0)
v(8.375,2,-1)
n(0.316228,0.948683,0)
v(8.375,2,1)
n(0,1,0)
v(9.125,2.25,-1)
n(0,1,0)
v(9.125,2.25,-1)
n(0.316228,0.948683,0)
v(8.375,2,1)
n(0,1,0)
v(9.125,2.25,1)
n(0,1,0)
v(9.125,2.25,-1)
n(0,1,0)
v(9.125,2.25,1)
n(-0.316228,0.948683,0)
v(9.875,2,-1)
n(-0.316228,0.948683,0)
v(9.875,2,-1)
n(0,1,0)
v(9.125,2.25,1)
n(-0.316228,0.948683,0)
v(9.875,2,1)
n(-0.970143,0.242536,0)
v(9.875,2,-1)
n(-0.970143,0.242536,0)
v(9.875,2,1)
n(-0.999717,0.0237893,0)
v(10.125,1,-1)
n(-0.999717,0.0237893,0)
v(10.125,1,-1)
n(-0.970143,0.242536,0)
v(9.875,2,1)
n(-0.999717,0.0237893,0)
v(10.125,1,1)
n(-0.999717,0.0237893,0)
v(10.125,1,-1)
n(-0.999717,0.0237893,0)
v(10.125,1,1)
n(-0.945873,-0.324536,0)
v(9.875,-0.25,-1)
n(-0.945873,-0.324536,0)
v(9.875,-0.25,-1)
n(-0.999717,0.0237893,0)
v(10.125,1,1)
n(-0.945873,-0.324536,0)
v(9.875,-0.25,1)
n(-0.945873,-0.324536,0)
v(9.875,-0.25,-1)
n(-0.945873,-0.324536,0)
v(9.875,-0.25,1)
n(-0.86491,-0.501927,0)
v(9.375,-1.25,-1)
n(-0.86491,-0.501927,0)
v(9.375,-1.25,-1)
n(-0.945873,-0.324536,0)
v(9.875,-0.25,1)
n(-0.86491,-0.501927,0)
v(9.375,-1.25,1)
n(-0.86491,-0.501927,0)
v(9.375,-1.25,-1)
n(-0.86491,-0.501927,0)
v(9.375,-1.25,1)
n(-0.816339,-0.577573,0)
v(8.375,-2.75,-1)
n(-0.816339,-0.577573,0)
v(8.375,-2.75,-1)
n(-0.86491,-0.501927,0)
v(9.375,-1.25,1)
n(-0.816339,-0.577573,0)
v(8.375,-2.75,1)
n(-0.816339,-0.577573,0)
v(8.375,-2.75,-1)
n(-0.816339,-0.577573,0)
v(8.375,-2.75,1)
n(-0.806921,-0.590659,0)
v(6.875,-4.75,-1)
n(-0.806921,-0.590659,0)
v(6.875,-4.75,-1)
n(-0.816339,-0.577573,0)
v(8.375,-2.75,1)
n(-0.806921,-0.590659,0)
v(6.875,-4.75,1)
n(-0.806921,-0.590659,0)
v(6.875,-4.75,-1)
n(-0.806921,-0.590659,0)
v(6.875,-4.75,1)
n(-0.76302,-0.646375,0)
v(5.625,-6.5,-1)
n(-0.76302,-0.646375,0)
v(5.625,-6.5,-1)
n(-0.806921,-0.590659,0)
v(6.875,-4.75,1)
n(-0.76302,-0.646375,0)
v(5.625,-6.5,1)
n(-0.76302,-0.646375,0)
v(5.625,-6.5,-1)
n(-0.76302,-0.646375,0)
v(5.625,-6.5,1)
n(-0.707107,-0.707107,0)
v(4.875,-7.25,-1)
n(-0.707107,-0.707107,0)
v(4.875,-7.25,-1)
n(-0.76302,-0.646375,0)
v(5.625,-6.5,1)
n(-0.707107,-0.707107,0)
v(4.875,-7.25,1)
n(-0.707107,-0.707107,0)
v(4.875,-7.25,-1)
n(-0.707107,-0.707107,0)
v(4.875,-7.25,1)
n(-0.58471,-0.811242,0)
v(4.125,-8,-1)
n(-0.58471,-0.811242,0)
v(4.125,-8,-1)
n(-0.707107,-0.707107,0)
v(4.875,-7.25,1)
n(-0.58471,-0.811242,0)
v(4.125,-8,1)
n(-0.58471,-0.811242,0)
v(4.125,-8,-1)
n(-0.58471,-0.811242,0)
v(4.125,-8,1)
n(-0.382683,-0.92388,0)
v(3.125,-8.5,-1)
n(-0.382683,-0.92388,0)
v(3.125,-8.5,-1)
n(-0.58471,-0.811242,0)
v(4.125,-8,1)
n(-0.382683,-0.92388,0)
v(3.125,-8.5,1)
n(-0.382683,-0.92388,0)
v(3.125,-8.5,-1)
n(-0.382683,-0.92388,0)
v(3.125,-8.5,1)
n(0,-1,0)
v(2.375,-8.75,-1)
n(0,-1,0)
v(2.375,-8.75,-1)
n(-0.382683,-0.92388,0)
v(3.125,-8.5,1)
n(0,-1,0)
v(2.375,-8.75,1)
n(0,-1,0)
v(2.375,-8.75,-1)
n(0,-1,0)
v(2.375,-8.75,1)
n(0.316228,-0.948683,0)
v(1.625,-8.5,-1)
n(0.316228,-0.948683,0)
v(1.625,-8.5,-1)
n(0,-1,0)
v(2.375,-8.75,1)
n(0.316228,-0.948683,0)
v(1.625,-8.5,1)
n(0.948683,-0.316228,0)
v(1.625,-8.5,-1)
n(0.948683,-0.316228,0)
v(1.625,-8.5,1)
n(1,0,0)
v(1.375,-7.75,-1)
n(1,0,0)
v(1.375,-7.75,-1)
n(0.948683,-0.316228,0)
v(1.625,-8.5,1)
n(1,0,0)
v(1.375,-7.75,1)
n(1,0,0)
v(1.375,-7.75,-1)
n(1,0,0)
v(1.375,-7.75,1)
n(0.948683,0.316228,0)
v(1.625,-7,-1)
n(0.948683,0.316228,0)
v(1.625,-7,-1)
n(1,0,0)
v(1.375,-7.75,1)
n(0.948683,0.316228,0)
v(1.625,-7,1)
n(0.316228,0.948683,0)
v(1.625,-7,-1)
n(0.316228,0.948683,0)
v(1.625,-7,1)
n(0.316228,0.948683,0)
v(2.375,-6.75,-1)
n(0.316228,0.948683,0)
v(2.375,-6.75,-1)
n(0.316228,0.948683,0)
v(1.625,-7,1)
n(0.316228,0.948683,0)
v(2.375,-6.75,1)
n(-0.5547,0.83205,0)
v(2.375,-6.75,-1)
n(-0.5547,0.83205,0)
v(2.375,-6.75,1)
n(-0.501927,0.86491,0)
v(3.125,-7.25,-1)
n(-0.501927,0.86491,0)
v(3.125,-7.25,-1)
n(-0.5547,0.83205,0)
v(2.375,-6.75,1)
n(-0.501927,0.86491,0)
v(3.125,-7.25,1)
n(-0.501927,0.86491,0)
v(3.125,-7.25,-1)
n(-0.501927,0.86491,0)
v(3.125,-7.25,1)
n(-0.229753,0.973249,0)
v(3.625,-7.5,-1)
n(-0.229753,0.973249,0)
v(3.625,-7.5,-1)
n(-0.501927,0.86491,0)
v(3.125,-7.25,1)
n(-0.229753,0.973249,0)
v(3.625,-7.5,1)
n(-0.229753,0.973249,0)
v(3.625,-7.5,-1)
n(-0.229753,0.973249,0)
v(3.625,-7.5,1)
n(0.289784,0.957092,0)
v(3.875,-7.5,-1)
n(0.289784,0.957092,0)
v(3.875,-7.5,-1)
n(-0.229753,0.973249,0)
v(3.625,-7.5,1)
n(0.289784,0.957092,0)
v(3.875,-7.5,1)
n(0.289784,0.957092,0)
v(3.875,-7.5,-1)
n(0.289784,0.957092,0)
v(3.875,-7.5,1)
n(0.817416,0.576048,0)
v(4.625,-7,-1)
n(0.817416,0.576048,0)
v(4.625,-7,-1)
n(0.289784,0.957092,0)
v(3.875,-7.5,1)
n(0.817416,0.576048,0)
v(4.625,-7,1)
n(0.817416,0.576048,0)
v(4.625,-7,-1)
n(0.817416,0.576048,0)
v(4.625,-7,1)
n(0.992508,0.122183,0)
v(4.875,-6,-1)
n(0.992508,0.122183,0)
v(4.875,-6,-1)
n(0.817416,0.576048,0)
v(4.625,-7,1)
n(0.992508,0.122183,0)
v(4.875,-6,1)
n(0.992508,0.122183,0)
v(4.875,-6,-1)
n(0.992508,0.122183,0)
v(4.875,-6,1)
n(1,0,0)
v(4.875,-5.25,-1)
n(1,0,0)
v(4.875,-5.25,-1)
n(0.992508,0.122183,0)
v(4.875,-6,1)
n(1,0,0)
v(4.875,-5.25,1)
n(1,0,0)
v(4.875,-5.25,-1)
n(1,0,0)
v(4.875,-5.25,1)
n(1,0,0)
v(4.875,-4.5,-1)
n(1,0,0)
v(4.875,-4.5,-1)
n(1,0,0)
v(4.875,-5.25,1)
n(1,0,0)
v(4.875,-4.5,1)
n(1,0,0)
v(4.875,-4.5,-1)
n(1,0,0)
v(4.875,-4.5,1)
n(0.999136,-0.0415586,0)
v(4.875,-3.25,-1)
n(0.999136,-0.0415586,0)
v(4.875,-3.25,-1)
n(1,0,0)
v(4.875,-4.5,1)
n(0.999136,-0.0415586,0)
v(4.875,-3.25,1)
n(0.999136,-0.0415586,0)
v(4.875,-3.25,-1)
n(0.999136,-0.0415586,0)
v(4.875,-3.25,1)
n(0.999136,-0.0415586,0)
v(4.625,-0.25,-1)
n(0.999136,-0.0415586,0)
v(4.625,-0.25,-1)
n(0.999136,-0.0415586,0)
v(4.875,-3.25,1)
n(0.999136,-0.0415586,0)
v(4.625,-0.25,1)
n(0.999136,-0.0415586,0)
v(4.625,-0.25,-1)
n(0.999136,-0.0415586,0)
v(4.625,-0.25,1)
n(0.973249,-0.229753,0)
v(4.625,0.5,-1)
n(0.973249,-0.229753,0)
v(4.625,0.5,-1)
n(0.999136,-0.0415586,0)
v(4.625,-0.25,1)
n(0.973249,-0.229753,0)
v(4.625,0.5,1)
n(0.973249,-0.229753,0)
v(4.625,0.5,-1)
n(0.973249,-0.229753,0)
v(4.625,0.5,1)
n(0.894427,-0.447214,0)
v(4.375,1,-1)
n(0.894427,-0.447214,0)
v(4.375,1,-1)
n(0.973249,-0.229753,0)
v(4.625,0.5,1)
n(0.894427,-0.447214,0)
v(4.375,1,1)
n(0.316228,-0.948683,0)
v(4.375,1,-1)
n(0.316228,-0.948683,0)
v(4.375,1,1)
n(0.160182,-0.987087,0)
v(3.625,1.25,-1)
n(0.160182,-0.987087,0)
v(3.625,1.25,-1)
n(0.316228,-0.948683,0)
v(4.375,1,1)
n(0.160182,-0.987087,0)
v(3.625,1.25,1)
n(0.160182,-0.987087,0)
v(3.625,1.25,-1)
n(0.160182,-0.987087,0)
v(3.625,1.25,1)
n(0,-1,0)
v(3.375,1.25,-1)
n(0,-1,0)
v(3.375,1.25,-1)
n(0.160182,-0.987087,0)
v(3.625,1.25,1)
n(0,-1,0)
v(3.375,1.25,1)
n(1,0,0)
v(3.375,1.25,-1)
n(1,0,0)
v(3.375,1.25,1)
n(1,0,0)
v(3.375,1.5,-1)
n(1,0,0)
v(3.375,1.5,-1)
n(1,0,0)
v(3.375,1.25,1)
n(1,0,0)
v(3.375,1.5,1)
n(0,-1,0)
v(14.625,-5.5,-1)
n(0,-1,0)
v(14.625,-5.5,1)
n(0.229753,-0.973249,0)
v(13.375,-5.5,-1)
n(0.229753,-0.973249,0)
v(13.375,-5.5,-1)
n(0,-1,0)
v(14.625,-5.5,1)
n(0.229753,-0.973249,0)
v(13.375,-5.5,1)
n(0.229753,-0.973249,0)
v(13.375,-5.5,-1)
n(0.229753,-0.973249,0)
v(13.375,-5.5,1)
n(0.661803,-0.749678,0)
v(12.375,-5,-1)
n(0.661803,-0.749678,0)
v(12.375,-5,-1)
n(0.229753,-0.973249,0)
v(13.375,-5.5,1)
n(0.661803,-0.749678,0)
v(12.375,-5,1)
n(0.661803,-0.749678,0)
v(12.375,-5,-1)
n(0.661803,-0.749678,0)
v(12.375,-5,1)
n(0.907648,-0.419733,0)
v(11.375,-3.5,-1)
n(0.907648,-0.419733,0)
v(11.375,-3.5,-1)
n(0.661803,-0.749678,0)
v(12.375,-5,1)
n(0.907648,-0.419733,0)
v(11.375,-3.5,1)
n(0.907648,-0.419733,0)
v(11.375,-3.5,-1)
n(0.907648,-0.419733,0)
v(11.375,-3.5,1)
n(0.997675,-0.0681484,0)
v(10.875,-1.75,-1)
n(0.997675,-0.0681484,0)
v(10.875,-1.75,-1)
n(0.907648,-0.419733,0)
v(11.375,-3.5,1)
n(0.997675,-0.0681484,0)
v(10.875,-1.75,1)
n(0.997675,-0.0681484,0)
v(10.875,-1.75,-1)
n(0.997675,-0.0681484,0)
v(10.875,-1.75,1)
n(0.96286,0.270001,0)
v(11.125,0,-1)
n(0.96286,0.270001,0)
v(11.125,0,-1)
n(0.997675,-0.0681484,0)
v(10.875,-1.75,1)
n(0.96286,0.270001,0)
v(11.125,0,1)
n(0.96286,0.270001,0)
v(11.125,0,-1)
n(0.96286,0.270001,0)
v(11.125,0,1)
n(0.871487,0.490419,0)
v(11.875,1.75,-1)
n(0.871487,0.490419,0)
v(11.875,1.75,-1)
n(0.96286,0.270001,0)
v(11.125,0,1)
n(0.871487,0.490419,0)
v(11.875,1.75,1)
n(0.871487,0.490419,0)
v(11.875,1.75,-1)
n(0.871487,0.490419,0)
v(11.875,1.75,1)
n(0.732946,0.680287,0)
v(13.125,3.5,-1)
n(0.732946,0.680287,0)
v(13.125,3.5,-1)
n(0.871487,0.490419,0)
v(11.875,1.75,1)
n(0.732946,0.680287,0)
v(13.125,3.5,1)
n(0.732946,0.680287,0)
v(13.125,3.5,-1)
n(0.732946,0.680287,0)
v(13.125,3.5,1)
n(0.522529,0.852621,0)
v(14.625,4.75,-1)
n(0.522529,0.852621,0)
v(14.625,4.75,-1)
n(0.732946,0.680287,0)
v(13.125,3.5,1)
n(0.522529,0.852621,0)
v(14.625,4.75,1)
n(0.522529,0.852621,0)
v(14.625,4.75,-1)
n(0.522529,0.852621,0)
v(14.625,4.75,1)
n(0.281177,0.959656,0)
v(16.375,5.5,-1)
n(0.281177,0.959656,0)
v(16.375,5.5,-1)
n(0.522529,0.852621,0)
v(14.625,4.75,1)
n(0.281177,0.959656,0)
v(16.375,5.5,1)
n(0.281177,0.959656,0)
v(16.375,5.5,-1)
n(0.281177,0.959656,0)
v(16.375,5.5,1)
n(-0.0399044,0.999204,0)
v(17.875,5.75,-1)
n(-0.0399044,0.999204,0)
v(17.875,5.75,-1)
n(0.281177,0.959656,0)
v(16.375,5.5,1)
n(-0.0399044,0.999204,0)
v(17.875,5.75,1)
n(-0.0399044,0.999204,0)
v(17.875,5.75,-1)
n(-0.0399044,0.999204,0)
v(17.875,5.75,1)
n(-0.242536,0.970143,0)
v(18.875,5.5,-1)
n(-0.242536,0.970143,0)
v(18.875,5.5,-1)
n(-0.0399044,0.999204,0)
v(17.875,5.75,1)
n(-0.242536,0.970143,0)
v(18.875,5.5,1)
n(-0.242536,0.970143,0)
v(18.875,5.5,-1)
n(-0.242536,0.970143,0)
v(18.875,5.5,1)
n(-0.492699,0.8702,0)
v(19.875,5.25,-1)
n(-0.492699,0.8702,0)
v(19.875,5.25,-1)
n(-0.242536,0.970143,0)
v(18.875,5.5,1)
n(-0.492699,0.8702,0)
v(19.875,5.25,1)
n(-0.492699,0.8702,0)
v(19.875,5.25,-1)
n(-0.492699,0.8702,0)
v(19.875,5.25,1)
n(-0.811242,0.58471,0)
v(21.125,4,-1)
n(-0.811242,0.58471,0)
v(21.125,4,-1)
n(-0.492699,0.8702,0)
v(19.875,5.25,1)
n(-0.811242,0.58471,0)
v(21.125,4,1)
n(-0.811242,0.58471,0)
v(21.125,4,-1)
n(-0.811242,0.58471,0)
v(21.125,4,1)
n(-0.973249,0.229753,0)
v(21.625,3,-1)
n(-0.973249,0.229753,0)
v(21.625,3,-1)
n(-0.811242,0.58471,0)
v(21.125,4,1)
n(-0.973249,0.229753,0)
v(21.625,3,1)
n(-0.973249,0.229753,0)
v(21.625,3,-1)
n(-0.973249,0.229753,0)
v(21.625,3,1)
n(-0.998068,-0.0621374,0)
v(21.625,2,-1)
n(-0.998068,-0.0621374,0)
v(21.625,2,-1)
n(-0.973249,0.229753,0)
v(21.625,3,1)
n(-0.998068,-0.0621374,0)
v(21.625,2,1)
n(-0.998068,-0.0621374,0)
v(21.625,2,-1)
n(-0.998068,-0.0621374,0)
v(21.625,2,1)
n(-0.957092,-0.289784,0)
v(21.375,0,-1)
n(-0.957092,-0.289784,0)
v(21.375,0,-1)
n(-0.998068,-0.0621374,0)
v(21.625,2,1)
n(-0.957092,-0.289784,0)
v(21.375,0,1)
n(-0.957092,-0.289784,0)
v(21.375,0,-1)
n(-0.957092,-0.289784,0)
v(21.375,0,1)
n(-0.856705,-0.515806,0)
v(20.375,-2,-1)
n(-0.856705,-0.515806,0)
v(20.375,-2,-1)
n(-0.957092,-0.289784,0)
v(21.375,0,1)
n(-0.856705,-0.515806,0)
v(20.375,-2,1)
n(-0.856705,-0.515806,0)
v(20.375,-2,-1)
n(-0.856705,-0.515806,0)
v(20.375,-2,1)
n(-0.695614,-0.718415,0)
v(19.125,-3.75,-1)
n(-0.695614,-0.718415,0)
v(19.125,-3.75,-1)
n(-0.856705,-0.515806,0)
v(20.375,-2,1)
n(-0.695614,-0.718415,0)
v(19.125,-3.75,1)
n(-0.695614,-0.718415,0)
v(19.125,-3.75,-1)
n(-0.695614,-0.718415,0)
v(19.125,-3.75,1)
n(-0.439351,-0.898315,0)
v(17.625,-4.75,-1)
n(-0.439351,-0.898315,0)
v(17.625,-4.75,-1)
n(-0.695614,-0.718415,0)
v(19.125,-3.75,1)
n(-0.439351,-0.898315,0)
v(17.625,-4.75,1)
n(-0.439351,-0.898315,0)
v(17.625,-4.75,-1)
n(-0.439351,-0.898315,0)
v(17.625,-4.75,1)
n(-0.241052,-0.970512,0)
v(16.125,-5.25,-1)
n(-0.241052,-0.970512,0)
v(16.125,-5.25,-1)
n(-0.439351,-0.898315,0)
v(17.625,-4.75,1)
n(-0.241052,-0.970512,0)
v(16.125,-5.25,1)
n(-0.241052,-0.970512,0)
v(16.125,-5.25,-1)
n(-0.241052,-0.970512,0)
v(16.125,-5.25,1)
n(-0.164399,-0.986394,0)
v(14.625,-5.5,-1)
n(-0.164399,-0.986394,0)
v(14.625,-5.5,-1)
n(-0.241052,-0.970512,0)
v(16.125,-5.25,1)
n(-0.164399,-0.986394,0)
v(14.625,-5.5,1)
n(0,0,0)
v(14.625,-5.5,-1)
n(0,0,0)
v(14.625,-5.5,1)
n(0,0,0)
v(14.625,-5.5,-1)
n(0,0,0)
v(14.625,-5.5,-1)
n(0,0,0)
v(14.625,-5.5,1)
n(0,0,0)
v(14.625,-5.5,1)
n(-0.196116,-0.980581,0)
v(17.875,5,-1)
n(-0.196116,-0.980581,0)
v(17.875,5,1)
n(-0.40817,-0.912906,0)
v(16.625,4.75,-1)
n(-0.40817,-0.912906,0)
v(16.625,4.75,-1)
n(-0.196116,-0.980581,0)
v(17.875,5,1)
n(-0.40817,-0.912906,0)
v(16.625,4.75,1)
n(-0.40817,-0.912906,0)
v(16.625,4.75,-1)
n(-0.40817,-0.912906,0)
v(16.625,4.75,1)
n(-0.655202,-0.755454,0)
v(15.625,4,-1)
n(-0.655202,-0.755454,0)
v(15.625,4,-1)
n(-0.40817,-0.912906,0)
v(16.625,4.75,1)
n(-0.655202,-0.755454,0)
v(15.625,4,1)
n(-0.655202,-0.755454,0)
v(15.625,4,-1)
n(-0.655202,-0.755454,0)
v(15.625,4,1)
n(-0.811242,-0.58471,0)
v(15.125,3.5,-1)
n(-0.811242,-0.58471,0)
v(15.125,3.5,-1)
n(-0.655202,-0.755454,0)
v(15.625,4,1)
n(-0.811242,-0.58471,0)
v(15.125,3.5,1)
n(-0.811242,-0.58471,0)
v(15.125,3.5,-1)
n(-0.811242,-0.58471,0)
v(15.125,3.5,1)
n(-0.894427,-0.447214,0)
v(14.625,2.5,-1)
n(-0.894427,-0.447214,0)
v(14.625,2.5,-1)
n(-0.811242,-0.58471,0)
v(15.125,3.5,1)
n(-0.894427,-0.447214,0)
v(14.625,2.5,1)
n(-0.894427,-0.447214,0)
v(14.625,2.5,-1)
n(-0.894427,-0.447214,0)
v(14.625,2.5,1)
n(-0.92388,-0.382683,0)
v(14.125,1.5,-1)
n(-0.92388,-0.382683,0)
v(14.125,1.5,-1)
n(-0.894427,-0.447214,0)
v(14.625,2.5,1)
n(-0.92388,-0.382683,0)
v(14.125,1.5,1)
n(-0.92388,-0.382683,0)
v(14.125,1.5,-1)
n(-0.92388,-0.382683,0)
v(14.125,1.5,1)
n(-0.970512,-0.241052,0)
v(13.625,0,-1)
n(-0.970512,-0.241052,0)
v(13.625,0,-1)
n(-0.92388,-0.382683,0)
v(14.125,1.5,1)
n(-0.970512,-0.241052,0)
v(13.625,0,1)
n(-0.970512,-0.241052,0)
v(13.625,0,-1)
n(-0.970512,-0.241052,0)
v(13.625,0,1)
n(-0.983615,-0.180281,0)
v(13.375,-1.5,-1)
n(-0.983615,-0.180281,0)
v(13.375,-1.5,-1)
n(-0.970512,-0.241052,0)
v(13.625,0,1)
n(-0.983615,-0.180281,0)
v(13.375,-1.5,1)
n(-0.983615,-0.180281,0)
v(13.375,-1.5,-1)
n(-0.983615,-0.180281,0)
v(13.375,-1.5,1)
n(-0.998068,0.0621374,0)
v(13.125,-2.75,-1)
n(-0.998068,0.0621374,0)
v(13.125,-2.75,-1)
n(-0.983615,-0.180281,0)
v(13.375,-1.5,1)
n(-0.998068,0.0621374,0)
v(13.125,-2.75,1)
n(-0.998068,0.0621374,0)
v(13.125,-2.75,-1)
n(-0.998068,0.0621374,0)
v(13.125,-2.75,1)
n(-0.850651,0.525731,0)
v(13.625,-4.25,-1)
n(-0.850651,0.525731,0)
v(13.625,-4.25,-1)
n(-0.998068,0.0621374,0)
v(13.125,-2.75,1)
n(-0.850651,0.525731,0)
v(13.625,-4.25,1)
n(-0.850651,0.525731,0)
v(13.625,-4.25,-1)
n(-0.850651,0.525731,0)
v(13.625,-4.25,1)
n(-0.382683,0.92388,0)
v(14.125,-4.75,-1)
n(-0.382683,0.92388,0)
v(14.125,-4.75,-1)
n(-0.850651,0.525731,0)
v(13.625,-4.25,1)
n(-0.382683,0.92388,0)
v(14.125,-4.75,1)
n(-0.382683,0.92388,0)
v(14.125,-4.75,-1)
n(-0.382683,0.92388,0)
v(14.125,-4.75,1)
n(0.160182,0.987087,0)
v(14.875,-4.75,-1)
n(0.160182,0.987087,0)
v(14.875,-4.75,-1)
n(-0.382683,0.92388,0)
v(14.125,-4.75,1)
n(0.160182,0.987087,0)
v(14.875,-4.75,1)
n(0.160182,0.987087,0)
v(14.875,-4.75,-1)
n(0.160182,0.987087,0)
v(14.875,-4.75,1)
n(0.316228,0.948683,0)
v(15.625,-4.5,-1)
n(0.316228,0.948683,0)
v(15.625,-4.5,-1)
n(0.160182,0.987087,0)
v(14.875,-4.75,1)
n(0.316228,0.948683,0)
v(15.625,-4.5,1)
n(0.316228,0.948683,0)
v(15.625,-4.5,-1)
n(0.316228,0.948683,0)
v(15.625,-4.5,1)
n(0.563738,0.825954,0)
v(16.375,-4.25,-1)
n(0.563738,0.825954,0)
v(16.375,-4.25,-1)
n(0.316228,0.948683,0)
v(15.625,-4.5,1)
n(0.563738,0.825954,0)
v(16.375,-4.25,1)
n(0.563738,0.825954,0)
v(16.375,-4.25,-1)
n(0.563738,0.825954,0)
v(16.375,-4.25,1)
n(0.821396,0.570358,0)
v(17.625,-2.75,-1)
n(0.821396,0.570358,0)
v(17.625,-2.75,-1)
n(0.563738,0.825954,0)
v(16.375,-4.25,1)
n(0.821396,0.570358,0)
v(17.625,-2.75,1)
n(0.821396,0.570358,0)
v(17.625,-2.75,-1)
n(0.821396,0.570358,0)
v(17.625,-2.75,1)
n(0.932722,0.360597,0)
v(18.625,-1,-1)
n(0.932722,0.360597,0)
v(18.625,-1,-1)
n(0.821396,0.570358,0)
v(17.625,-2.75,1)
n(0.932722,0.360597,0)
v(18.625,-1,1)
n(0.932722,0.360597,0)
v(18.625,-1,-1)
n(0.932722,0.360597,0)
v(18.625,-1,1)
n(0.983793,0.179308,0)
v(19.125,1.25,-1)
n(0.983793,0.179308,0)
v(19.125,1.25,-1)
n(0.932722,0.360597,0)
v(18.625,-1,1)
n(0.983793,0.179308,0)
v(19.125,1.25,1)
n(0.983793,0.179308,0)
v(19.125,1.25,-1)
n(0.983793,0.179308,0)
v(19.125,1.25,1)
n(0.997484,0.070889,0)
v(19.375,3,-1)
n(0.997484,0.070889,0)
v(19.375,3,-1)
n(0.983793,0.179308,0)
v(19.125,1.25,1)
n(0.997484,0.070889,0)
v(19.375,3,1)
n(0.997484,0.070889,0)
v(19.375,3,-1)
n(0.997484,0.070889,0)
v(19.375,3,1)
n(1,0,0)
v(19.375,4,-1)
n(1,0,0)
v(19.375,4,-1)
n(0.997484,0.070889,0)
v(19.375,3,1)
n(1,0,0)
v(19.375,4,1)
n(0.707107,-0.707107,0)
v(19.375,4,-1)
n(0.707107,-0.707107,0)
v(19.375,4,1)
n(0.58471,-0.811242,0)
v(18.875,4.5,-1)
n(0.58471,-0.811242,0)
v(18.875,4.5,-1)
n(0.707107,-0.707107,0)
v(19.375,4,1)
n(0.58471,-0.811242,0)
v(18.875,4.5,1)
n(0.58471,-0.811242,0)
v(18.875,4.5,-1)
n(0.58471,-0.811242,0)
v(18.875,4.5,1)
n(0.447214,-0.894427,0)
v(18.375,4.75,-1)
n(0.447214,-0.894427,0)
v(18.375,4.75,-1)
n(0.58471,-0.811242,0)
v(18.875,4.5,1)
n(0.447214,-0.894427,0)
v(18.375,4.75,1)
n(0.447214,-0.894427,0)
v(18.375,4.75,-1)
n(0.447214,-0.894427,0)
v(18.375,4.75,1)
n(0.447214,-0.894427,0)
v(17.875,5,-1)
n(0.447214,-0.894427,0)
v(17.875,5,-1)
n(0.447214,-0.894427,0)
v(18.375,4.75,1)
n(0.447214,-0.894427,0)
v(17.875,5,1)
n(0,0,0)
v(17.875,5,-1)
n(0,0,0)
v(17.875,5,1)
n(0,0,0)
v(17.875,5,-1)
n(0,0,0)
v(17.875,5,-1)
n(0,0,0)
v(17.875,5,1)
n(0,0,0)
v(17.875,5,1)
n(0.164399,0.986394,0)
v(22.375,1.75,-1)
n(0.164399,0.986394,0)
v(22.375,1.75,1)
n(0.0824805,0.996593,0)
v(25.375,2.25,-1)
n(0.0824805,0.996593,0)
v(25.375,2.25,-1)
n(0.164399,0.986394,0)
v(22.375,1.75,1)
n(0.0824805,0.996593,0)
v(25.375,2.25,1)
n(0.0824805,0.996593,0)
v(25.375,2.25,-1)
n(0.0824805,0.996593,0)
v(25.375,2.25,1)
n(0,1,0)
v(25.875,2.25,-1)
n(0,1,0)
v(25.875,2.25,-1)
n(0.0824805,0.996593,0)
v(25.375,2.25,1)
n(0,1,0)
v(25.875,2.25,1)
n(-0.928477,-0.371391,0)
v(25.875,2.25,-1)
n(-0.928477,-0.371391,0)
v(25.875,2.25,1)
n(-0.928477,-0.371391,0)
v(25.375,1,-1)
n(-0.928477,-0.371391,0)
v(25.375,1,-1)
n(-0.928477,-0.371391,0)
v(25.875,2.25,1)
n(-0.928477,-0.371391,0)
v(25.375,1,1)
n(0.624695,0.780869,0)
v(25.375,1,-1)
n(0.624695,0.780869,0)
v(25.375,1,1)
n(0.443822,0.896115,0)
v(26.625,2,-1)
n(0.443822,0.896115,0)
v(26.625,2,-1)
n(0.624695,0.780869,0)
v(25.375,1,1)
n(0.443822,0.896115,0)
v(26.625,2,1)
n(0.443822,0.896115,0)
v(26.625,2,-1)
n(0.443822,0.896115,0)
v(26.625,2,1)
n(-0.0383765,0.999263,0)
v(27.625,2.25,-1)
n(-0.0383765,0.999263,0)
v(27.625,2.25,-1)
n(0.443822,0.896115,0)
v(26.625,2,1)
n(-0.0383765,0.999263,0)
v(27.625,2.25,1)
n(-0.0383765,0.999263,0)
v(27.625,2.25,-1)
n(-0.0383765,0.999263,0)
v(27.625,2.25,1)
n(-0.382683,0.92388,0)
v(28.375,2,-1)
n(-0.382683,0.92388,0)
v(28.375,2,-1)
n(-0.0383765,0.999263,0)
v(27.625,2.25,1)
n(-0.382683,0.92388,0)
v(28.375,2,1)
n(-0.382683,0.92388,0)
v(28.375,2,-1)
n(-0.382683,0.92388,0)
v(28.375,2,1)
n(-0.447214,0.894427,0)
v(28.875,1.75,-1)
n(-0.447214,0.894427,0)
v(28.875,1.75,-1)
n(-0.382683,0.92388,0)
v(28.375,2,1)
n(-0.447214,0.894427,0)
v(28.875,1.75,1)
n(-0.986394,0.164399,0)
v(28.875,1.75,-1)
n(-0.986394,0.164399,0)
v(28.875,1.75,1)
n(-0.999629,0.0272424,0)
v(29.125,0.25,-1)
n(-0.999629,0.0272424,0)
v(29.125,0.25,-1)
n(-0.986394,0.164399,0)
v(28.875,1.75,1)
n(-0.999629,0.0272424,0)
v(29.125,0.25,1)
n(-0.999629,0.0272424,0)
v(29.125,0.25,-1)
n(-0.999629,0.0272424,0)
v(29.125,0.25,1)
n(-0.944532,-0.328418,0)
v(28.875,-2,-1)
n(-0.944532,-0.328418,0)
v(28.875,-2,-1)
n(-0.999629,0.0272424,0)
v(29.125,0.25,1)
n(-0.944532,-0.328418,0)
v(28.875,-2,1)
n(-0.944532,-0.328418,0)
v(28.875,-2,-1)
n(-0.944532,-0.328418,0)
v(28.875,-2,1)
n(-0.746954,-0.664876,0)
v(27.625,-4,-1)
n(-0.746954,-0.664876,0)
v(27.625,-4,-1)
n(-0.944532,-0.328418,0)
v(28.875,-2,1)
n(-0.746954,-0.664876,0)
v(27.625,-4,1)
n(-0.746954,-0.664876,0)
v(27.625,-4,-1)
n(-0.746954,-0.664876,0)
v(27.625,-4,1)
n(-0.477885,-0.878422,0)
v(26.375,-5,-1)
n(-0.477885,-0.878422,0)
v(26.375,-5,-1)
n(-0.746954,-0.664876,0)
v(27.625,-4,1)
n(-0.477885,-0.878422,0)
v(26.375,-5,1)
n(-0.477885,-0.878422,0)
v(26.375,-5,-1)
n(-0.477885,-0.878422,0)
v(26.375,-5,1)
n(-0.160182,-0.987087,0)
v(24.875,-5.5,-1)
n(-0.160182,-0.987087,0)
v(24.875,-5.5,-1)
n(-0.477885,-0.878422,0)
v(26.375,-5,1)
n(-0.160182,-0.987087,0)
v(24.875,-5.5,1)
n(-0.160182,-0.987087,0)
v(24.875,-5.5,-1)
n(-0.160182,-0.987087,0)
v(24.875,-5.5,1)
n(0,-1,0)
v(24.375,-5.5,-1)
n(0,-1,0)
v(24.375,-5.5,-1)
n(-0.160182,-0.987087,0)
v(24.875,-5.5,1)
n(0,-1,0)
v(24.375,-5.5,1)
n(0.707107,-0.707107,0)
v(24.375,-5.5,-1)
n(0.707107,-0.707107,0)
v(24.375,-5.5,1)
n(0.707107,-0.707107,0)
v(23.875,-5,-1)
n(0.707107,-0.707107,0)
v(23.875,-5,-1)
n(0.707107,-0.707107,0)
v(24.375,-5.5,1)
n(0.707107,-0.707107,0)
v(23.875,-5,1)
n(-0.948683,-0.316228,0)
v(23.875,-5,-1)
n(-0.948683,-0.316228,0)
v(23.875,-5,1)
n(-0.9665,-0.256668,0)
v(23.375,-6.5,-1)
n(-0.9665,-0.256668,0)
v(23.375,-6.5,-1)
n(-0.948683,-0.316228,0)
v(23.875,-5,1)
n(-0.9665,-0.256668,0)
v(23.375,-6.5,1)
n(-0.9665,-0.256668,0)
v(23.375,-6.5,-1)
n(-0.9665,-0.256668,0)
v(23.375,-6.5,1)
n(-0.991152,0.132733,0)
v(23.125,-7.75,-1)
n(-0.991152,0.132733,0)
v(23.125,-7.75,-1)
n(-0.9665,-0.256668,0)
v(23.375,-6.5,1)
n(-0.991152,0.132733,0)
v(23.125,-7.75,1)
n(-0.991152,0.132733,0)
v(23.125,-7.75,-1)
n(-0.991152,0.132733,0)
v(23.125,-7.75,1)
n(-0.811242,0.58471,0)
v(23.375,-8.25,-1)
n(-0.811242,0.58471,0)
v(23.375,-8.25,-1)
n(-0.991152,0.132733,0)
v(23.125,-7.75,1)
n(-0.811242,0.58471,0)
v(23.375,-8.25,1)
n(-0.811242,0.58471,0)
v(23.375,-8.25,-1)
n(-0.811242,0.58471,0)
v(23.375,-8.25,1)
n(-0.707107,0.707107,0)
v(23.625,-8.5,-1)
n(-0.707107,0.707107,0)
v(23.625,-8.5,-1)
n(-0.811242,0.58471,0)
v(23.375,-8.25,1)
n(-0.707107,0.707107,0)
v(23.625,-8.5,1)
n(0,1,0)
v(23.625,-8.5,-1)
n(0,1,0)
v(23.625,-8.5,1)
n(0,1,0)
v(24.125,-8.5,-1)
n(0,1,0)
v(24.125,-8.5,-1)
n(0,1,0)
v(23.625,-8.5,1)
n(0,1,0)
v(24.125,-8.5,1)
n(-1,0,0)
v(24.125,-8.5,-1)
n(-1,0,0)
v(24.125,-8.5,1)
n(-1,0,0)
v(24.125,-8.75,-1)
n(-1,0,0)
v(24.125,-8.75,-1)
n(-1,0,0)
v(24.125,-8.5,1)
n(-1,0,0)
v(24.125,-8.75,1)
n(0,-1,0)
v(24.125,-8.75,-1)
n(0,-1,0)
v(24.125,-8.75,1)
n(0,-1,0)
v(19.625,-8.75,-1)
n(0,-1,0)
v(19.625,-8.75,-1)
n(0,-1,0)
v(24.125,-8.75,1)
n(0,-1,0)
v(19.625,-8.75,1)
n(1,0,0)
v(19.625,-8.75,-1)
n(1,0,0)
v(19.625,-8.75,1)
n(1,0,0)
v(19.625,-8.5,-1)
n(1,0,0)
v(19.625,-8.5,-1)
n(1,0,0)
v(19.625,-8.75,1)
n(1,0,0)
v(19.625,-8.5,1)
n(0.242536,0.970143,0)
v(19.625,-8.5,-1)
n(0.242536,0.970143,0)
v(19.625,-8.5,1)
n(0.242536,0.970143,0)
v(20.625,-8.25,-1)
n(0.242536,0.970143,0)
v(20.625,-8.25,-1)
n(0.242536,0.970143,0)
v(19.625,-8.5,1)
n(0.242536,0.970143,0)
v(20.625,-8.25,1)
n(0.894427,0.447214,0)
v(20.625,-8.25,-1)
n(0.894427,0.447214,0)
v(20.625,-8.25,1)
n(0.937885,0.346946,0)
v(20.875,-7.75,-1)
n(0.937885,0.346946,0)
v(20.875,-7.75,-1)
n(0.894427,0.447214,0)
v(20.625,-8.25,1)
n(0.937885,0.346946,0)
v(20.875,-7.75,1)
n(0.937885,0.346946,0)
v(20.875,-7.75,-1)
n(0.937885,0.346946,0)
v(20.875,-7.75,1)
n(0.964694,0.263373,0)
v(21.125,-6.75,-1)
n(0.964694,0.263373,0)
v(21.125,-6.75,-1)
n(0.937885,0.346946,0)
v(20.875,-7.75,1)
n(0.964694,0.263373,0)
v(21.125,-6.75,1)
n(0.964694,0.263373,0)
v(21.125,-6.75,-1)
n(0.964694,0.263373,0)
v(21.125,-6.75,1)
n(0.964694,0.263373,0)
v(23.125,0,-1)
n(0.964694,0.263373,0)
v(23.125,0,-1)
n(0.964694,0.263373,0)
v(21.125,-6.75,1)
n(0.964694,0.263373,0)
v(23.125,0,1)
n(0.964694,0.263373,0)
v(23.125,0,-1)
n(0.964694,0.263373,0)
v(23.125,0,1)
n(0.970143,0.242536,0)
v(23.375,1,-1)
n(0.970143,0.242536,0)
v(23.375,1,-1)
n(0.964694,0.263373,0)
v(23.125,0,1)
n(0.970143,0.242536,0)
v(23.375,1,1)
n(0.707107,-0.707107,0)
v(23.375,1,-1)
n(0.707107,-0.707107,0)
v(23.375,1,1)
n(0.707107,-0.707107,0)
v(23.125,1.25,-1)
n(0.707107,-0.707107,0)
v(23.125,1.25,-1)
n(0.707107,-0.707107,0)
v(23.375,1,1)
n(0.707107,-0.707107,0)
v(23.125,1.25,1)
n(1,0,0)
v(23.125,1.25,-1)
n(1,0,0)
v(23.125,1.25,1)
n(1,0,0)
v(23.125,1.5,-1)
n(1,0,0)
v(23.125,1.5,-1)
n(1,0,0)
v(23.125,1.25,1)
n(1,0,0)
v(23.125,1.5,1)
n(0,-1,0)
v(23.125,1.5,-1)
n(0,-1,0)
v(23.125,1.5,1)
n(0,-1,0)
v(22.375,1.5,-1)
n(0,-1,0)
v(22.375,1.5,-1)
n(0,-1,0)
v(23.125,1.5,1)
n(0,-1,0)
v(22.375,1.5,1)
n(1,0,0)
v(22.375,1.5,-1)
n(1,0,0)
v(22.375,1.5,1)
n(1,0,0)
v(22.375,1.75,-1)
n(1,0,0)
v(22.375,1.75,-1)
n(1,0,0)
v(22.375,1.5,1)
n(1,0,0)
v(22.375,1.75,1)
n(-0.707107,0.707107,0)
v(23.875,-4.25,-1)
n(-0.707107,0.707107,0)
v(23.875,-4.25,1)
n(-0.58471,0.811242,0)
v(24.375,-4.75,-1)
n(-0.58471,0.811242,0)
v(24.375,-4.75,-1)
n(-0.707107,0.707107,0)
v(23.875,-4.25,1)
n(-0.58471,0.811242,0)
v(24.375,-4.75,1)
n(-0.58471,0.811242,0)
v(24.375,-4.75,-1)
n(-0.58471,0.811242,0)
v(24.375,-4.75,1)
n(-0.447214,0.894427,0)
v(24.875,-5,-1)
n(-0.447214,0.894427,0)
v(24.875,-5,-1)
n(-0.58471,0.811242,0)
v(24.375,-4.75,1)
n(-0.447214,0.894427,0)
v(24.875,-5,1)
n(0.707107,0.707107,0)
v(24.875,-5,-1)
n(0.707107,0.707107,0)
v(24.875,-5,1)
n(0.633989,0.773342,0)
v(25.125,-4.75,-1)
n(0.633989,0.773342,0)
v(25.125,-4.75,-1)
n(0.707107,0.707107,0)
v(24.875,-5,1)
n(0.633989,0.773342,0)
v(25.125,-4.75,1)
n(0.633989,0.773342,0)
v(25.125,-4.75,-1)
n(0.633989,0.773342,0)
v(25.125,-4.75,1)
n(0.749678,0.661803,0)
v(25.875,-4.25,-1)
n(0.749678,0.661803,0)
v(25.875,-4.25,-1)
n(0.633989,0.773342,0)
v(25.125,-4.75,1)
n(0.749678,0.661803,0)
v(25.875,-4.25,1)
n(0.749678,0.661803,0)
v(25.875,-4.25,-1)
n(0.749678,0.661803,0)
v(25.875,-4.25,1)
n(0.931975,0.362523,0)
v(26.375,-3.25,-1)
n(0.931975,0.362523,0)
v(26.375,-3.25,-1)
n(0.749678,0.661803,0)
v(25.875,-4.25,1)
n(0.931975,0.362523,0)
v(26.375,-3.25,1)
n(0.931975,0.362523,0)
v(26.375,-3.25,-1)
n(0.931975,0.362523,0)
v(26.375,-3.25,1)
n(0.97552,0.219912,0)
v(26.875,-1.5,-1)
n(0.97552,0.219912,0)
v(26.875,-1.5,-1)
n(0.931975,0.362523,0)
v(26.375,-3.25,1)
n(0.97552,0.219912,0)
v(26.875,-1.5,1)
n(0.97552,0.219912,0)
v(26.875,-1.5,-1)
n(0.97552,0.219912,0)
v(26.875,-1.5,1)
n(0.999204,-0.0399044,0)
v(27.125,0,-1)
n(0.999204,-0.0399044,0)
v(27.125,0,-1)
n(0.97552,0.219912,0)
v(26.875,-1.5,1)
n(0.999204,-0.0399044,0)
v(27.125,0,1)
n(0.999204,-0.0399044,0)
v(27.125,0,-1)
n(0.999204,-0.0399044,0)
v(27.125,0,1)
n(0.970143,-0.242536,0)
v(26.875,1,-1)
n(0.970143,-0.242536,0)
v(26.875,1,-1)
n(0.999204,-0.0399044,0)
v(27.125,0,1)
n(0.970143,-0.242536,0)
v(26.875,1,1)
n(0.447214,-0.894427,0)
v(26.875,1,-1)
n(0.447214,-0.894427,0)
v(26.875,1,1)
n(0.447214,-0.894427,0)
v(26.375,1.25,-1)
n(0.447214,-0.894427,0)
v(26.375,1.25,-1)
n(0.447214,-0.894427,0)
v(26.875,1,1)
n(0.447214,-0.894427,0)
v(26.375,1.25,1)
n(-0.447214,-0.894427,0)
v(26.375,1.25,-1)
n(-0.447214,-0.894427,0)
v(26.375,1.25,1)
n(-0.58471,-0.811242,0)
v(25.875,1,-1)
n(-0.58471,-0.811242,0)
v(25.875,1,-1)
n(-0.447214,-0.894427,0)
v(26.375,1.25,1)
n(-0.58471,-0.811242,0)
v(25.875,1,1)
n(-0.58471,-0.811242,0)
v(25.875,1,-1)
n(-0.58471,-0.811242,0)
v(25.875,1,1)
n(-0.863729,-0.503956,0)
v(25.125,0.25,-1)
n(-0.863729,-0.503956,0)
v(25.125,0.25,-1)
n(-0.58471,-0.811242,0)
v(25.875,1,1)
n(-0.863729,-0.503956,0)
v(25.125,0.25,1)
n(-0.863729,-0.503956,0)
v(25.125,0.25,-1)
n(-0.863729,-0.503956,0)
v(25.125,0.25,1)
n(-0.963518,-0.267644,0)
v(23.875,-4.25,-1)
n(-0.963518,-0.267644,0)
v(23.875,-4.25,-1)
n(-0.863729,-0.503956,0)
v(25.125,0.25,1)
n(-0.963518,-0.267644,0)
v(23.875,-4.25,1)
n(-1,0,0)
v(32.125,-2.25,-1)
n(-1,0,0)
v(32.125,-2.25,1)
n(-0.992508,0.122183,0)
v(32.125,-3.25,-1)
n(-0.992508,0.122183,0)
v(32.125,-3.25,-1)
n(-1,0,0)
v(32.125,-2.25,1)
n(-0.992508,0.122183,0)
v(32.125,-3.25,1)
n(-0.992508,0.122183,0)
v(32.125,-3.25,-1)
n(-0.992508,0.122183,0)
v(32.125,-3.25,1)
n(-0.970143,0.242536,0)
v(32.375,-4.25,-1)
n(-0.970143,0.242536,0)
v(32.375,-4.25,-1)
n(-0.992508,0.122183,0)
v(32.125,-3.25,1)
n(-0.970143,0.242536,0)
v(32.375,-4.25,1)
n(-0.242536,0.970143,0)
v(32.375,-4.25,-1)
n(-0.242536,0.970143,0)
v(32.375,-4.25,1)
n(0,1,0)
v(33.375,-4.5,-1)
n(0,1,0)
v(33.375,-4.5,-1)
n(-0.242536,0.970143,0)
v(32.375,-4.25,1)
n(0,1,0)
v(33.375,-4.5,1)
n(0,1,0)
v(33.375,-4.5,-1)
n(0,1,0)
v(33.375,-4.5,1)
n(0.382683,0.92388,0)
v(34.375,-4.25,-1)
n(0.382683,0.92388,0)
v(34.375,-4.25,-1)
n(0,1,0)
v(33.375,-4.5,1)
n(0.382683,0.92388,0)
v(34.375,-4.25,1)
n(0.382683,0.92388,0)
v(34.375,-4.25,-1)
n(0.382683,0.92388,0)
v(34.375,-4.25,1)
n(0.514496,0.857493,0)
v(35.625,-3.5,-1)
n(0.514496,0.857493,0)
v(35.625,-3.5,-1)
n(0.382683,0.92388,0)
v(34.375,-4.25,1)
n(0.514496,0.857493,0)
v(35.625,-3.5,1)
n(-0.707107,0.707107,0)
v(35.625,-3.5,-1)
n(-0.707107,0.707107,0)
v(35.625,-3.5,1)
n(-0.707107,0.707107,0)
v(35.875,-3.75,-1)
n(-0.707107,0.707107,0)
v(35.875,-3.75,-1)
n(-0.707107,0.707107,0)
v(35.625,-3.5,1)
n(-0.707107,0.707107,0)
v(35.875,-3.75,1)
n(-0.707107,-0.707107,0)
v(35.875,-3.75,-1)
n(-0.707107,-0.707107,0)
v(35.875,-3.75,1)
n(-0.58471,-0.811242,0)
v(35.125,-4.5,-1)
n(-0.58471,-0.811242,0)
v(35.125,-4.5,-1)
n(-0.707107,-0.707107,0)
v(35.875,-3.75,1)
n(-0.58471,-0.811242,0)
v(35.125,-4.5,1)
n(-0.58471,-0.811242,0)
v(35.125,-4.5,-1)
n(-0.58471,-0.811242,0)
v(35.125,-4.5,1)
n(-0.362523,-0.931975,0)
v(34.125,-5,-1)
n(-0.362523,-0.931975,0)
v(34.125,-5,-1)
n(-0.58471,-0.811242,0)
v(35.125,-4.5,1)
n(-0.362523,-0.931975,0)
v(34.125,-5,1)
n(-0.362523,-0.931975,0)
v(34.125,-5,-1)
n(-0.362523,-0.931975,0)
v(34.125,-5,1)
n(-0.0166597,-0.999861,0)
v(32.375,-5.5,-1)
n(-0.0166597,-0.999861,0)
v(32.375,-5.5,-1)
n(-0.362523,-0.931975,0)
v(34.125,-5,1)
n(-0.0166597,-0.999861,0)
v(32.375,-5.5,1)
n(-0.0166597,-0.999861,0)
v(32.375,-5.5,-1)
n(-0.0166597,-0.999861,0)
v(32.375,-5.5,1)
n(0.404554,-0.914514,0)
v(31.375,-5.25,-1)
n(0.404554,-0.914514,0)
v(31.375,-5.25,-1)
n(-0.0166597,-0.999861,0)
v(32.375,-5.5,1)
n(0.404554,-0.914514,0)
v(31.375,-5.25,1)
n(0.404554,-0.914514,0)
v(31.375,-5.25,-1)
n(0.404554,-0.914514,0)
v(31.375,-5.25,1)
n(0.749678,-0.661803,0)
v(30.625,-4.75,-1)
n(0.749678,-0.661803,0)
v(30.625,-4.75,-1)
n(0.404554,-0.914514,0)
v(31.375,-5.25,1)
n(0.749678,-0.661803,0)
v(30.625,-4.75,1)
n(0.749678,-0.661803,0)
v(30.625,-4.75,-1)
n(0.749678,-0.661803,0)
v(30.625,-4.75,1)
n(0.937885,-0.346946,0)
v(30.375,-4.25,-1)
n(0.937885,-0.346946,0)
v(30.375,-4.25,-1)
n(0.749678,-0.661803,0)
v(30.625,-4.75,1)
n(0.937885,-0.346946,0)
v(30.375,-4.25,1)
n(0.937885,-0.346946,0)
v(30.375,-4.25,-1)
n(0.937885,-0.346946,0)
v(30.375,-4.25,1)
n(0.999717,-0.0237893,0)
v(30.125,-3.25,-1)
n(0.999717,-0.0237893,0)
v(30.125,-3.25,-1)
n(0.937885,-0.346946,0)
v(30.375,-4.25,1)
n(0.999717,-0.0237893,0)
v(30.125,-3.25,1)
n(0.999717,-0.0237893,0)
v(30.125,-3.25,-1)
n(0.999717,-0.0237893,0)
v(30.125,-3.25,1)
n(0.958543,0.284947,0)
v(30.375,-2,-1)
n(0.958543,0.284947,0)
v(30.375,-2,-1)
n(0.999717,-0.0237893,0)
v(30.125,-3.25,1)
n(0.958543,0.284947,0)
v(30.375,-2,1)
n(0.958543,0.284947,0)
v(30.375,-2,-1)
n(0.958543,0.284947,0)
v(30.375,-2,1)
n(0.895847,0.444363,0)
v(30.875,-0.75,-1)
n(0.895847,0.444363,0)
v(30.875,-0.75,-1)
n(0.958543,0.284947,0)
v(30.375,-2,1)
n(0.895847,0.444363,0)
v(30.875,-0.75,1)
n(0.895847,0.444363,0)
v(30.875,-0.75,-1)
n(0.895847,0.444363,0)
v(30.875,-0.75,1)
n(0.752967,0.658059,0)
v(31.625,0.5,-1)
n(0.752967,0.658059,0)
v(31.625,0.5,-1)
n(0.895847,0.444363,0)
v(30.875,-0.75,1)
n(0.752967,0.658059,0)
v(31.625,0.5,1)
n(0.752967,0.658059,0)
v(31.625,0.5,-1)
n(0.752967,0.658059,0)
v(31.625,0.5,1)
n(0.503482,0.864006,0)
v(32.875,1.5,-1)
n(0.503482,0.864006,0)
v(32.875,1.5,-1)
n(0.752967,0.658059,0)
v(31.625,0.5,1)
n(0.503482,0.864006,0)
v(32.875,1.5,1)
n(0.503482,0.864006,0)
v(32.875,1.5,-1)
n(0.503482,0.864006,0)
v(32.875,1.5,1)
n(0.284947,0.958543,0)
v(34.125,2,-1)
n(0.284947,0.958543,0)
v(34.125,2,-1)
n(0.503482,0.864006,0)
v(32.875,1.5,1)
n(0.284947,0.958543,0)
v(34.125,2,1)
n(0.284947,0.958543,0)
v(34.125,2,-1)
n(0.284947,0.958543,0)
v(34.125,2,1)
n(-0.0237893,0.999717,0)
v(35.375,2.25,-1)
n(-0.0237893,0.999717,0)
v(35.375,2.25,-1)
n(0.284947,0.958543,0)
v(34.125,2,1)
n(-0.0237893,0.999717,0)
v(35.375,2.25,1)
n(-0.0237893,0.999717,0)
v(35.375,2.25,-1)
n(-0.0237893,0.999717,0)
v(35.375,2.25,1)
n(-0.242536,0.970143,0)
v(36.375,2,-1)
n(-0.242536,0.970143,0)
v(36.375,2,-1)
n(-0.0237893,0.999717,0)
v(35.375,2.25,1)
n(-0.242536,0.970143,0)
v(36.375,2,1)
n(-0.970143,0.242536,0)
v(36.375,2,-1)
n(-0.970143,0.242536,0)
v(36.375,2,1)
n(-1,0,0)
v(36.625,1,-1)
n(-1,0,0)
v(36.625,1,-1)
n(-0.970143,0.242536,0)
v(36.375,2,1)
n(-1,0,0)
v(36.625,1,1)
n(-1,0,0)
v(36.625,1,-1)
n(-1,0,0)
v(36.625,1,1)
n(-0.8702,-0.492699,0)
v(36.375,0,-1)
n(-0.8702,-0.492699,0)
v(36.375,0,-1)
n(-1,0,0)
v(36.625,1,1)
n(-0.8702,-0.492699,0)
v(36.375,0,1)
n(-0.8702,-0.492699,0)
v(36.375,0,-1)
n(-0.8702,-0.492699,0)
v(36.375,0,1)
n(-0.58471,-0.811242,0)
v(35.125,-1.25,-1)
n(-0.58471,-0.811242,0)
v(35.125,-1.25,-1)
n(-0.8702,-0.492699,0)
v(36.375,0,1)
n(-0.58471,-0.811242,0)
v(35.125,-1.25,1)
n(-0.58471,-0.811242,0)
v(35.125,-1.25,-1)
n(-0.58471,-0.811242,0)
v(35.125,-1.25,1)
n(-0.309244,-0.950983,0)
v(33.625,-2,-1)
n(-0.309244,-0.950983,0)
v(33.625,-2,-1)
n(-0.58471,-0.811242,0)
v(35.125,-1.25,1)
n(-0.309244,-0.950983,0)
v(33.625,-2,1)
n(-0.309244,-0.950983,0)
v(33.625,-2,-1)
n(-0.309244,-0.950983,0)
v(33.625,-2,1)
n(-0.164399,-0.986394,0)
v(32.125,-2.25,-1)
n(-0.164399,-0.986394,0)
v(32.125,-2.25,-1)
n(-0.309244,-0.950983,0)
v(33.625,-2,1)
n(-0.164399,-0.986394,0)
v(32.125,-2.25,1)
n(0,0,0)
v(32.125,-2.25,-1)
n(0,0,0)
v(32.125,-2.25,1)
n(0,0,0)
v(32.125,-2.25,-1)
n(0,0,0)
v(32.125,-2.25,-1)
n(0,0,0)
v(32.125,-2.25,1)
n(0,0,0)
v(32.125,-2.25,1)
n(0.316228,0.948683,0)
v(32.375,-1.75,-1)
n(0.316228,0.948683,0)
v(32.375,-1.75,1)
n(0.439351,0.898315,0)
v(33.125,-1.5,-1)
n(0.439351,0.898315,0)
v(33.125,-1.5,-1)
n(0.316228,0.948683,0)
v(32.375,-1.75,1)
n(0.439351,0.898315,0)
v(33.125,-1.5,1)
n(0.439351,0.898315,0)
v(33.125,-1.5,-1)
n(0.439351,0.898315,0)
v(33.125,-1.5,1)
n(0.749678,0.661803,0)
v(33.875,-1,-1)
n(0.749678,0.661803,0)
v(33.875,-1,-1)
n(0.439351,0.898315,0)
v(33.125,-1.5,1)
n(0.749678,0.661803,0)
v(33.875,-1,1)
n(0.749678,0.661803,0)
v(33.875,-1,-1)
n(0.749678,0.661803,0)
v(33.875,-1,1)
n(0.945873,0.324536,0)
v(34.375,0,-1)
n(0.945873,0.324536,0)
v(34.375,0,-1)
n(0.749678,0.661803,0)
v(33.875,-1,1)
n(0.945873,0.324536,0)
v(34.375,0,1)
n(0.945873,0.324536,0)
v(34.375,0,-1)
n(0.945873,0.324536,0)
v(34.375,0,1)
n(0.995133,0.0985376,0)
v(34.625,1.25,-1)
n(0.995133,0.0985376,0)
v(34.625,1.25,-1)
n(0.945873,0.324536,0)
v(34.375,0,1)
n(0.995133,0.0985376,0)
v(34.625,1.25,1)
n(0.995133,0.0985376,0)
v(34.625,1.25,-1)
n(0.995133,0.0985376,0)
v(34.625,1.25,1)
n(1,0,0)
v(34.625,1.5,-1)
n(1,0,0)
v(34.625,1.5,-1)
n(0.995133,0.0985376,0)
v(34.625,1.25,1)
n(1,0,0)
v(34.625,1.5,1)
n(0.707107,-0.707107,0)
v(34.625,1.5,-1)
n(0.707107,-0.707107,0)
v(34.625,1.5,1)
n(0.707107,-0.707107,0)
v(34.375,1.75,-1)
n(0.707107,-0.707107,0)
v(34.375,1.75,-1)
n(0.707107,-0.707107,0)
v(34.625,1.5,1)
n(0.707107,-0.707107,0)
v(34.375,1.75,1)
n(-0.447214,-0.894427,0)
v(34.375,1.75,-1)
n(-0.447214,-0.894427,0)
v(34.375,1.75,1)
n(-0.58471,-0.811242,0)
v(33.875,1.5,-1)
n(-0.58471,-0.811242,0)
v(33.875,1.5,-1)
n(-0.447214,-0.894427,0)
v(34.375,1.75,1)
n(-0.58471,-0.811242,0)
v(33.875,1.5,1)
n(-0.58471,-0.811242,0)
v(33.875,1.5,-1)
n(-0.58471,-0.811242,0)
v(33.875,1.5,1)
n(-0.788205,-0.615412,0)
v(33.625,1.25,-1)
n(-0.788205,-0.615412,0)
v(33.625,1.25,-1)
n(-0.58471,-0.811242,0)
v(33.875,1.5,1)
n(-0.788205,-0.615412,0)
v(33.625,1.25,1)
n(-0.788205,-0.615412,0)
v(33.625,1.25,-1)
n(-0.788205,-0.615412,0)
v(33.625,1.25,1)
n(-0.917376,-0.398022,0)
v(32.875,0,-1)
n(-0.917376,-0.398022,0)
v(32.875,0,-1)
n(-0.788205,-0.615412,0)
v(33.625,1.25,1)
n(-0.917376,-0.398022,0)
v(32.875,0,1)
n(-0.917376,-0.398022,0)
v(32.875,0,-1)
n(-0.917376,-0.398022,0)
v(32.875,0,1)
n(-0.961524,-0.274721,0)
v(32.375,-1.75,-1)
n(-0.961524,-0.274721,0)
v(32.375,-1.75,-1)
n(-0.917376,-0.398022,0)
v(32.875,0,1)
n(-0.961524,-0.274721,0)
v(32.375,-1.75,1)
n(0,0,0)
v(32.375,-1.75,-1)
n(0,0,0)
v(32.375,-1.75,1)
n(0,0,0)
v(32.375,-1.75,-1)
n(0,0,0)
v(32.375,-1.75,-1)
n(0,0,0)
v(32.375,-1.75,1)
n(0,0,0)
v(32.375,-1.75,1)
n(-0.961524,-0.274721,0)
v(41.125,2.25,-1)
n(-0.961524,-0.274721,0)
v(41.125,2.25,1)
n(-0.961524,-0.274721,0)
v(40.125,-1.25,-1)
n(-0.961524,-0.274721,0)
v(40.125,-1.25,-1)
n(-0.961524,-0.274721,0)
v(41.125,2.25,1)
n(-0.961524,-0.274721,0)
v(40.125,-1.25,1)
n(0.857493,0.514496,0)
v(40.125,-1.25,-1)
n(0.857493,0.514496,0)
v(40.125,-1.25,1)
n(0.788205,0.615412,0)
v(40.875,0,-1)
n(0.788205,0.615412,0)
v(40.875,0,-1)
n(0.857493,0.514496,0)
v(40.125,-1.25,1)
n(0.788205,0.615412,0)
v(40.875,0,1)
n(0.788205,0.615412,0)
v(40.875,0,-1)
n(0.788205,0.615412,0)
v(40.875,0,1)
n(0.707107,0.707107,0)
v(41.625,0.75,-1)
n(0.707107,0.707107,0)
v(41.625,0.75,-1)
n(0.788205,0.615412,0)
v(40.875,0,1)
n(0.707107,0.707107,0)
v(41.625,0.75,1)
n(0.707107,0.707107,0)
v(41.625,0.75,-1)
n(0.707107,0.707107,0)
v(41.625,0.75,1)
n(0.707107,0.707107,0)
v(42.375,1.5,-1)
n(0.707107,0.707107,0)
v(42.375,1.5,-1)
n(0.707107,0.707107,0)
v(41.625,0.75,1)
n(0.707107,0.707107,0)
v(42.375,1.5,1)
n(0.707107,0.707107,0)
v(42.375,1.5,-1)
n(0.707107,0.707107,0)
v(42.375,1.5,1)
n(0.492699,0.8702,0)
v(42.875,2,-1)
n(0.492699,0.8702,0)
v(42.875,2,-1)
n(0.707107,0.707107,0)
v(42.375,1.5,1)
n(0.492699,0.8702,0)
v(42.875,2,1)
n(0.492699,0.8702,0)
v(42.875,2,-1)
n(0.492699,0.8702,0)
v(42.875,2,1)
n(0.242536,0.970143,0)
v(43.875,2.25,-1)
n(0.242536,0.970143,0)
v(43.875,2.25,-1)
n(0.492699,0.8702,0)
v(42.875,2,1)
n(0.242536,0.970143,0)
v(43.875,2.25,1)
n(-0.5547,0.83205,0)
v(43.875,2.25,-1)
n(-0.5547,0.83205,0)
v(43.875,2.25,1)
n(-0.817416,0.576048,0)
v(44.625,1.75,-1)
n(-0.817416,0.576048,0)
v(44.625,1.75,-1)
n(-0.5547,0.83205,0)
v(43.875,2.25,1)
n(-0.817416,0.576048,0)
v(44.625,1.75,1)
n(-0.817416,0.576048,0)
v(44.625,1.75,-1)
n(-0.817416,0.576048,0)
v(44.625,1.75,1)
n(-1,0,0)
v(44.875,0.75,-1)
n(-1,0,0)
v(44.875,0.75,-1)
n(-0.817416,0.576048,0)
v(44.625,1.75,1)
n(-1,0,0)
v(44.875,0.75,1)
n(-1,0,0)
v(44.875,0.75,-1)
n(-1,0,0)
v(44.875,0.75,1)
n(-0.965967,-0.258664,0)
v(44.625,-0.25,-1)
n(-0.965967,-0.258664,0)
v(44.625,-0.25,-1)
n(-1,0,0)
v(44.875,0.75,1)
n(-0.965967,-0.258664,0)
v(44.625,-0.25,1)
n(-0.965967,-0.258664,0)
v(44.625,-0.25,-1)
n(-0.965967,-0.258664,0)
v(44.625,-0.25,1)
n(-0.931975,-0.362523,0)
v(43.625,-3.75,-1)
n(-0.931975,-0.362523,0)
v(43.625,-3.75,-1)
n(-0.965967,-0.258664,0)
v(44.625,-0.25,1)
n(-0.931975,-0.362523,0)
v(43.625,-3.75,1)
n(-0.931975,-0.362523,0)
v(43.625,-3.75,-1)
n(-0.931975,-0.362523,0)
v(43.625,-3.75,1)
n(-0.894427,-0.447214,0)
v(43.375,-4.25,-1)
n(-0.894427,-0.447214,0)
v(43.375,-4.25,-1)
n(-0.931975,-0.362523,0)
v(43.625,-3.75,1)
n(-0.894427,-0.447214,0)
v(43.375,-4.25,1)
n(-0.707107,0.707107,0)
v(43.375,-4.25,-1)
n(-0.707107,0.707107,0)
v(43.375,-4.25,1)
n(-0.707107,0.707107,0)
v(43.625,-4.5,-1)
n(-0.707107,0.707107,0)
v(43.625,-4.5,-1)
n(-0.707107,0.707107,0)
v(43.375,-4.25,1)
n(-0.707107,0.707107,0)
v(43.625,-4.5,1)
n(0,0,0)
v(43.625,-4.5,-1)
n(0,0,0)
v(43.625,-4.5,1)
n(0,0,0)
v(43.625,-4.5,-1)
n(0,0,0)
v(43.625,-4.5,-1)
n(0,0,0)
v(43.625,-4.5,1)
n(0,0,0)
v(43.625,-4.5,1)
n(0,1,0)
v(43.625,-4.5,-1)
n(0,1,0)
v(43.625,-4.5,1)
n(0.382683,0.92388,0)
v(43.875,-4.5,-1)
n(0.382683,0.92388,0)
v(43.875,-4.5,-1)
n(0,1,0)
v(43.625,-4.5,1)
n(0.382683,0.92388,0)
v(43.875,-4.5,1)
n(0.382683,0.92388,0)
v(43.875,-4.5,-1)
n(0.382683,0.92388,0)
v(43.875,-4.5,1)
n(0.707107,0.707107,0)
v(44.625,-3.75,-1)
n(0.707107,0.707107,0)
v(44.625,-3.75,-1)
n(0.382683,0.92388,0)
v(43.875,-4.5,1)
n(0.707107,0.707107,0)
v(44.625,-3.75,1)
n(0.707107,0.707107,0)
v(44.625,-3.75,-1)
n(0.707107,0.707107,0)
v(44.625,-3.75,1)
n(0.707107,0.707107,0)
v(44.875,-3.5,-1)
n(0.707107,0.707107,0)
v(44.875,-3.5,-1)
n(0.707107,0.707107,0)
v(44.625,-3.75,1)
n(0.707107,0.707107,0)
v(44.875,-3.5,1)
n(-0.707107,0.707107,0)
v(44.875,-3.5,-1)
n(-0.707107,0.707107,0)
v(44.875,-3.5,1)
n(-0.707107,0.707107,0)
v(45.125,-3.75,-1)
n(-0.707107,0.707107,0)
v(45.125,-3.75,-1)
n(-0.707107,0.707107,0)
v(44.875,-3.5,1)
n(-0.707107,0.707107,0)
v(45.125,-3.75,1)
n(-0.707107,-0.707107,0)
v(45.125,-3.75,-1)
n(-0.707107,-0.707107,0)
v(45.125,-3.75,1)
n(-0.525731,-0.850651,0)
v(43.875,-5,-1)
n(-0.525731,-0.850651,0)
v(43.875,-5,-1)
n(-0.707107,-0.707107,0)
v(45.125,-3.75,1)
n(-0.525731,-0.850651,0)
v(43.875,-5,1)
n(-0.525731,-0.850651,0)
v(43.875,-5,-1)
n(-0.525731,-0.850651,0)
v(43.875,-5,1)
n(0,-1,0)
v(42.375,-5.5,-1)
n(0,-1,0)
v(42.375,-5.5,-1)
n(-0.525731,-0.850651,0)
v(43.875,-5,1)
n(0,-1,0)
v(42.375,-5.5,1)
n(0,-1,0)
v(42.375,-5.5,-1)
n(0,-1,0)
v(42.375,-5.5,1)
n(0.316228,-0.948683,0)
v(41.625,-5.25,-1)
n(0.316228,-0.948683,0)
v(41.625,-5.25,-1)
n(0,-1,0)
v(42.375,-5.5,1)
n(0.316228,-0.948683,0)
v(41.625,-5.25,1)
n(0.948683,-0.316228,0)
v(41.625,-5.25,-1)
n(0.948683,-0.316228,0)
v(41.625,-5.25,1)
n(0.999263,-0.0383765,0)
v(41.375,-4.5,-1)
n(0.999263,-0.0383765,0)
v(41.375,-4.5,-1)
n(0.948683,-0.316228,0)
v(41.625,-5.25,1)
n(0.999263,-0.0383765,0)
v(41.375,-4.5,1)
n(0.999263,-0.0383765,0)
v(41.375,-4.5,-1)
n(0.999263,-0.0383765,0)
v(41.375,-4.5,1)
n(0.965967,0.258664,0)
v(41.625,-3.5,-1)
n(0.965967,0.258664,0)
v(41.625,-3.5,-1)
n(0.999263,-0.0383765,0)
v(41.375,-4.5,1)
n(0.965967,0.258664,0)
v(41.625,-3.5,1)
n(0.965967,0.258664,0)
v(41.625,-3.5,-1)
n(0.965967,0.258664,0)
v(41.625,-3.5,1)
n(0.990334,0.138701,0)
v(42.625,0,-1)
n(0.990334,0.138701,0)
v(42.625,0,-1)
n(0.965967,0.258664,0)
v(41.625,-3.5,1)
n(0.990334,0.138701,0)
v(42.625,0,1)
n(0.990334,0.138701,0)
v(42.625,0,-1)
n(0.990334,0.138701,0)
v(42.625,0,1)
n(1,0,0)
v(42.625,0.5,-1)
n(1,0,0)
v(42.625,0.5,-1)
n(0.990334,0.138701,0)
v(42.625,0,1)
n(1,0,0)
v(42.625,0.5,1)
n(1,0,0)
v(42.625,0.5,-1)
n(1,0,0)
v(42.625,0.5,1)
n(1,0,0)
v(42.625,0.75,-1)
n(1,0,0)
v(42.625,0.75,-1)
n(1,0,0)
v(42.625,0.5,1)
n(1,0,0)
v(42.625,0.75,1)
n(0,-1,0)
v(42.625,0.75,-1)
n(0,-1,0)
v(42.625,0.75,1)
n(-0.289784,-0.957092,0)
v(42.375,0.75,-1)
n(-0.289784,-0.957092,0)
v(42.375,0.75,-1)
n(0,-1,0)
v(42.625,0.75,1)
n(-0.289784,-0.957092,0)
v(42.375,0.75,1)
n(-0.289784,-0.957092,0)
v(42.375,0.75,-1)
n(-0.289784,-0.957092,0)
v(42.375,0.75,1)
n(-0.675785,-0.737099,0)
v(41.625,0.25,-1)
n(-0.675785,-0.737099,0)
v(41.625,0.25,-1)
n(-0.289784,-0.957092,0)
v(42.375,0.75,1)
n(-0.675785,-0.737099,0)
v(41.625,0.25,1)
n(-0.675785,-0.737099,0)
v(41.625,0.25,-1)
n(-0.675785,-0.737099,0)
v(41.625,0.25,1)
n(-0.827058,-0.562117,0)
v(40.625,-1,-1)
n(-0.827058,-0.562117,0)
v(40.625,-1,-1)
n(-0.675785,-0.737099,0)
v(41.625,0.25,1)
n(-0.827058,-0.562117,0)
v(40.625,-1,1)
n(-0.827058,-0.562117,0)
v(40.625,-1,-1)
n(-0.827058,-0.562117,0)
v(40.625,-1,1)
n(-0.918984,-0.394296,0)
v(39.625,-2.75,-1)
n(-0.918984,-0.394296,0)
v(39.625,-2.75,-1)
n(-0.827058,-0.562117,0)
v(40.625,-1,1)
n(-0.918984,-0.394296,0)
v(39.625,-2.75,1)
n(-0.918984,-0.394296,0)
v(39.625,-2.75,-1)
n(-0.918984,-0.394296,0)
v(39.625,-2.75,1)
n(-0.957826,-0.287348,0)
v(38.875,-5.25,-1)
n(-0.957826,-0.287348,0)
v(38.875,-5.25,-1)
n(-0.918984,-0.394296,0)
v(39.625,-2.75,1)
n(-0.957826,-0.287348,0)
v(38.875,-5.25,1)
n(0,-1,0)
v(38.875,-5.25,-1)
n(0,-1,0)
v(38.875,-5.25,1)
n(0,-1,0)
v(36.875,-5.25,-1)
n(0,-1,0)
v(36.875,-5.25,-1)
n(0,-1,0)
v(38.875,-5.25,1)
n(0,-1,0)
v(36.875,-5.25,1)
n(0.957826,0.287348,0)
v(36.875,-5.25,-1)
n(0.957826,0.287348,0)
v(36.875,-5.25,1)
n(0.970276,0.242,0)
v(38.375,-0.25,-1)
n(0.970276,0.242,0)
v(38.375,-0.25,-1)
n(0.957826,0.287348,0)
v(36.875,-5.25,1)
n(0.970276,0.242,0)
v(38.375,-0.25,1)
n(0.970276,0.242,0)
v(38.375,-0.25,-1)
n(0.970276,0.242,0)
v(38.375,-0.25,1)
n(0.995133,0.0985376,0)
v(38.625,1,-1)
n(0.995133,0.0985376,0)
v(38.625,1,-1)
n(0.970276,0.242,0)
v(38.375,-0.25,1)
n(0.995133,0.0985376,0)
v(38.625,1,1)
n(0.995133,0.0985376,0)
v(38.625,1,-1)
n(0.995133,0.0985376,0)
v(38.625,1,1)
n(1,0,0)
v(38.625,1.25,-1)
n(1,0,0)
v(38.625,1.25,-1)
n(0.995133,0.0985376,0)
v(38.625,1,1)
n(1,0,0)
v(38.625,1.25,1)
n(0.707107,-0.707107,0)
v(38.625,1.25,-1)
n(0.707107,-0.707107,0)
v(38.625,1.25,1)
n(0.707107,-0.707107,0)
v(38.375,1.5,-1)
n(0.707107,-0.707107,0)
v(38.375,1.5,-1)
n(0.707107,-0.707107,0)
v(38.625,1.25,1)
n(0.707107,-0.707107,0)
v(38.375,1.5,1)
n(0,-1,0)
v(38.375,1.5,-1)
n(0,-1,0)
v(38.375,1.5,1)
n(0,-1,0)
v(37.875,1.5,-1)
n(0,-1,0)
v(37.875,1.5,-1)
n(0,-1,0)
v(38.375,1.5,1)
n(0,-1,0)
v(37.875,1.5,1)
n(1,0,0)
v(37.875,1.5,-1)
n(1,0,0)
v(37.875,1.5,1)
n(1,0,0)
v(37.875,1.75,-1)
n(1,0,0)
v(37.875,1.75,-1)
n(1,0,0)
v(37.875,1.5,1)
n(1,0,0)
v(37.875,1.75,1)
n(0.178885,0.98387,0)
v(37.875,1.75,-1)
n(0.178885,0.98387,0)
v(37.875,1.75,1)
n(0.0898056,0.995959,0)
v(40.625,2.25,-1)
n(0.0898056,0.995959,0)
v(40.625,2.25,-1)
n(0.178885,0.98387,0)
v(37.875,1.75,1)
n(0.0898056,0.995959,0)
v(40.625,2.25,1)
n(0.0898056,0.995959,0)
v(40.625,2.25,-1)
n(0.0898056,0.995959,0)
v(40.625,2.25,1)
n(0,1,0)
v(41.125,2.25,-1)
n(0,1,0)
v(41.125,2.25,-1)
n(0.0898056,0.995959,0)
v(40.625,2.25,1)
n(0,1,0)
v(41.125,2.25,1)
n(-0.977802,-0.209529,0)
v(57.375,5.75,-1)
n(-0.977802,-0.209529,0)
v(57.375,5.75,1)
n(-0.977802,-0.209529,0)
v(56.625,2.25,-1)
n(-0.977802,-0.209529,0)
v(56.625,2.25,-1)
n(-0.977802,-0.209529,0)
v(57.375,5.75,1)
n(-0.977802,-0.209529,0)
v(56.625,2.25,1)
n(0,-1,0)
v(56.625,2.25,-1)
n(0,-1,0)
v(56.625,2.25,1)
n(0,-1,0)
v(56.375,2.25,-1)
n(0,-1,0)
v(56.375,2.25,-1)
n(0,-1,0)
v(56.625,2.25,1)
n(0,-1,0)
v(56.375,2.25,1)
n(0.980581,-0.196116,0)
v(56.375,2.25,-1)
n(0.980581,-0.196116,0)
v(56.375,2.25,1)
n(0.92388,-0.382683,0)
v(56.125,3.5,-1)
n(0.92388,-0.382683,0)
v(56.125,3.5,-1)
n(0.980581,-0.196116,0)
v(56.375,2.25,1)
n(0.92388,-0.382683,0)
v(56.125,3.5,1)
n(0.92388,-0.382683,0)
v(56.125,3.5,-1)
n(0.92388,-0.382683,0)
v(56.125,3.5,1)
n(0.707107,-0.707107,0)
v(55.625,4.25,-1)
n(0.707107,-0.707107,0)
v(55.625,4.25,-1)
n(0.92388,-0.382683,0)
v(56.125,3.5,1)
n(0.707107,-0.707107,0)
v(55.625,4.25,1)
n(0.707107,-0.707107,0)
v(55.625,4.25,-1)
n(0.707107,-0.707107,0)
v(55.625,4.25,1)
n(0.404554,-0.914514,0)
v(54.875,4.75,-1)
n(0.404554,-0.914514,0)
v(54.875,4.75,-1)
n(0.707107,-0.707107,0)
v(55.625,4.25,1)
n(0.404554,-0.914514,0)
v(54.875,4.75,1)
n(0.404554,-0.914514,0)
v(54.875,4.75,-1)
n(0.404554,-0.914514,0)
v(54.875,4.75,1)
n(0,-1,0)
v(53.875,5,-1)
n(0,-1,0)
v(53.875,5,-1)
n(0.404554,-0.914514,0)
v(54.875,4.75,1)
n(0,-1,0)
v(53.875,5,1)
n(0,-1,0)
v(53.875,5,-1)
n(0,-1,0)
v(53.875,5,1)
n(-0.307669,-0.951493,0)
v(52.875,4.75,-1)
n(-0.307669,-0.951493,0)
v(52.875,4.75,-1)
n(0,-1,0)
v(53.875,5,1)
n(-0.307669,-0.951493,0)
v(52.875,4.75,1)
n(-0.307669,-0.951493,0)
v(52.875,4.75,-1)
n(-0.307669,-0.951493,0)
v(52.875,4.75,1)
n(-0.550491,-0.834841,0)
v(51.625,4.25,-1)
n(-0.550491,-0.834841,0)
v(51.625,4.25,-1)
n(-0.307669,-0.951493,0)
v(52.875,4.75,1)
n(-0.550491,-0.834841,0)
v(51.625,4.25,1)
n(-0.550491,-0.834841,0)
v(51.625,4.25,-1)
n(-0.550491,-0.834841,0)
v(51.625,4.25,1)
n(-0.773342,-0.633989,0)
v(50.625,3.25,-1)
n(-0.773342,-0.633989,0)
v(50.625,3.25,-1)
n(-0.550491,-0.834841,0)
v(51.625,4.25,1)
n(-0.773342,-0.633989,0)
v(50.625,3.25,1)
n(-0.773342,-0.633989,0)
v(50.625,3.25,-1)
n(-0.773342,-0.633989,0)
v(50.625,3.25,1)
n(-0.898315,-0.439351,0)
v(49.625,1.75,-1)
n(-0.898315,-0.439351,0)
v(49.625,1.75,-1)
n(-0.773342,-0.633989,0)
v(50.625,3.25,1)
n(-0.898315,-0.439351,0)
v(49.625,1.75,1)
n(-0.898315,-0.439351,0)
v(49.625,1.75,-1)
n(-0.898315,-0.439351,0)
v(49.625,1.75,1)
n(-0.973249,-0.229753,0)
v(49.125,0.25,-1)
n(-0.973249,-0.229753,0)
v(49.125,0.25,-1)
n(-0.898315,-0.439351,0)
v(49.625,1.75,1)
n(-0.973249,-0.229753,0)
v(49.125,0.25,1)
n(-0.973249,-0.229753,0)
v(49.125,0.25,-1)
n(-0.973249,-0.229753,0)
v(49.125,0.25,1)
n(-0.999932,0.0116255,0)
v(48.875,-1.5,-1)
n(-0.999932,0.0116255,0)
v(48.875,-1.5,-1)
n(-0.973249,-0.229753,0)
v(49.125,0.25,1)
n(-0.999932,0.0116255,0)
v(48.875,-1.5,1)
n(-0.999932,0.0116255,0)
v(48.875,-1.5,-1)
n(-0.999932,0.0116255,0)
v(48.875,-1.5,1)
n(-0.950983,0.309244,0)
v(49.125,-3,-1)
n(-0.950983,0.309244,0)
v(49.125,-3,-1)
n(-0.999932,0.0116255,0)
v(48.875,-1.5,1)
n(-0.950983,0.309244,0)
v(49.125,-3,1)
n(-0.950983,0.309244,0)
v(49.125,-3,-1)
n(-0.950983,0.309244,0)
v(49.125,-3,1)
n(-0.749678,0.661803,0)
v(49.625,-4,-1)
n(-0.749678,0.661803,0)
v(49.625,-4,-1)
n(-0.950983,0.309244,0)
v(49.125,-3,1)
n(-0.749678,0.661803,0)
v(49.625,-4,1)
n(-0.749678,0.661803,0)
v(49.625,-4,-1)
n(-0.749678,0.661803,0)
v(49.625,-4,1)
n(-0.404554,0.914514,0)
v(50.375,-4.5,-1)
n(-0.404554,0.914514,0)
v(50.375,-4.5,-1)
n(-0.749678,0.661803,0)
v(49.625,-4,1)
n(-0.404554,0.914514,0)
v(50.375,-4.5,1)
n(-0.404554,0.914514,0)
v(50.375,-4.5,-1)
n(-0.404554,0.914514,0)
v(50.375,-4.5,1)
n(0,1,0)
v(51.375,-4.75,-1)
n(0,1,0)
v(51.375,-4.75,-1)
n(-0.404554,0.914514,0)
v(50.375,-4.5,1)
n(0,1,0)
v(51.375,-4.75,1)
n(0,1,0)
v(51.375,-4.75,-1)
n(0,1,0)
v(51.375,-4.75,1)
n(0.242536,0.970143,0)
v(52.375,-4.5,-1)
n(0.242536,0.970143,0)
v(52.375,-4.5,-1)
n(0,1,0)
v(51.375,-4.75,1)
n(0.242536,0.970143,0)
v(52.375,-4.5,1)
n(0.964764,0.263117,0)
v(52.375,-4.5,-1)
n(0.964764,0.263117,0)
v(52.375,-4.5,1)
n(0.957092,0.289784,0)
v(53.125,-1.75,-1)
n(0.957092,0.289784,0)
v(53.125,-1.75,-1)
n(0.964764,0.263117,0)
v(52.375,-4.5,1)
n(0.957092,0.289784,0)
v(53.125,-1.75,1)
n(0.957092,0.289784,0)
v(53.125,-1.75,-1)
n(0.957092,0.289784,0)
v(53.125,-1.75,1)
n(0.948683,0.316228,0)
v(53.375,-1,-1)
n(0.948683,0.316228,0)
v(53.375,-1,-1)
n(0.957092,0.289784,0)
v(53.125,-1.75,1)
n(0.948683,0.316228,0)
v(53.375,-1,1)
n(0.894427,-0.447214,0)
v(53.375,-1,-1)
n(0.894427,-0.447214,0)
v(53.375,-1,1)
n(0.894427,-0.447214,0)
v(53.125,-0.5,-1)
n(0.894427,-0.447214,0)
v(53.125,-0.5,-1)
n(0.894427,-0.447214,0)
v(53.375,-1,1)
n(0.894427,-0.447214,0)
v(53.125,-0.5,1)
n(0.242536,-0.970143,0)
v(53.125,-0.5,-1)
n(0.242536,-0.970143,0)
v(53.125,-0.5,1)
n(0.242536,-0.970143,0)
v(52.125,-0.25,-1)
n(0.242536,-0.970143,0)
v(52.125,-0.25,-1)
n(0.242536,-0.970143,0)
v(53.125,-0.5,1)
n(0.242536,-0.970143,0)
v(52.125,-0.25,1)
n(0.707107,0.707107,0)
v(52.125,-0.25,-1)
n(0.707107,0.707107,0)
v(52.125,-0.25,1)
n(0.382683,0.92388,0)
v(52.375,0,-1)
n(0.382683,0.92388,0)
v(52.375,0,-1)
n(0.707107,0.707107,0)
v(52.125,-0.25,1)
n(0.382683,0.92388,0)
v(52.375,0,1)
n(0.382683,0.92388,0)
v(52.375,0,-1)
n(0.382683,0.92388,0)
v(52.375,0,1)
n(0,1,0)
v(56.875,0,-1)
n(0,1,0)
v(56.875,0,-1)
n(0.382683,0.92388,0)
v(52.375,0,1)
n(0,1,0)
v(56.875,0,1)
n(-0.707107,-0.707107,0)
v(56.875,0,-1)
n(-0.707107,-0.707107,0)
v(56.875,0,1)
n(-0.58471,-0.811242,0)
v(56.625,-0.25,-1)
n(-0.58471,-0.811242,0)
v(56.625,-0.25,-1)
n(-0.707107,-0.707107,0)
v(56.875,0,1)
n(-0.58471,-0.811242,0)
v(56.625,-0.25,1)
n(-0.58471,-0.811242,0)
v(56.625,-0.25,-1)
n(-0.58471,-0.811242,0)
v(56.625,-0.25,1)
n(-0.447214,-0.894427,0)
v(56.125,-0.5,-1)
n(-0.447214,-0.894427,0)
v(56.125,-0.5,-1)
n(-0.58471,-0.811242,0)
v(56.625,-0.25,1)
n(-0.447214,-0.894427,0)
v(56.125,-0.5,1)
n(-0.447214,-0.894427,0)
v(56.125,-0.5,-1)
n(-0.447214,-0.894427,0)
v(56.125,-0.5,1)
n(-0.447214,-0.894427,0)
v(55.625,-0.75,-1)
n(-0.447214,-0.894427,0)
v(55.625,-0.75,-1)
n(-0.447214,-0.894427,0)
v(56.125,-0.5,1)
n(-0.447214,-0.894427,0)
v(55.625,-0.75,1)
n(-1,0,0)
v(55.625,-0.75,-1)
n(-1,0,0)
v(55.625,-0.75,1)
n(-0.988883,-0.148696,0)
v(55.625,-1.5,-1)
n(-0.988883,-0.148696,0)
v(55.625,-1.5,-1)
n(-1,0,0)
v(55.625,-0.75,1)
n(-0.988883,-0.148696,0)
v(55.625,-1.5,1)
n(-0.988883,-0.148696,0)
v(55.625,-1.5,-1)
n(-0.988883,-0.148696,0)
v(55.625,-1.5,1)
n(-0.955779,-0.294086,0)
v(54.625,-4.75,-1)
n(-0.955779,-0.294086,0)
v(54.625,-4.75,-1)
n(-0.988883,-0.148696,0)
v(55.625,-1.5,1)
n(-0.955779,-0.294086,0)
v(54.625,-4.75,1)
n(-0.274721,-0.961524,0)
v(54.625,-4.75,-1)
n(-0.274721,-0.961524,0)
v(54.625,-4.75,1)
n(-0.208556,-0.97801,0)
v(52.875,-5.25,-1)
n(-0.208556,-0.97801,0)
v(52.875,-5.25,-1)
n(-0.274721,-0.961524,0)
v(54.625,-4.75,1)
n(-0.208556,-0.97801,0)
v(52.875,-5.25,1)
n(-0.208556,-0.97801,0)
v(52.875,-5.25,-1)
n(-0.208556,-0.97801,0)
v(52.875,-5.25,1)
n(-0.070889,-0.997484,0)
v(51.125,-5.5,-1)
n(-0.070889,-0.997484,0)
v(51.125,-5.5,-1)
n(-0.208556,-0.97801,0)
v(52.875,-5.25,1)
n(-0.070889,-0.997484,0)
v(51.125,-5.5,1)
n(-0.070889,-0.997484,0)
v(51.125,-5.5,-1)
n(-0.070889,-0.997484,0)
v(51.125,-5.5,1)
n(0.229753,-0.973249,0)
v(49.875,-5.5,-1)
n(0.229753,-0.973249,0)
v(49.875,-5.5,-1)
n(-0.070889,-0.997484,0)
v(51.125,-5.5,1)
n(0.229753,-0.973249,0)
v(49.875,-5.5,1)
n(0.229753,-0.973249,0)
v(49.875,-5.5,-1)
n(0.229753,-0.973249,0)
v(49.875,-5.5,1)
n(0.525731,-0.850651,0)
v(48.875,-5,-1)
n(0.525731,-0.850651,0)
v(48.875,-5,-1)
n(0.229753,-0.973249,0)
v(49.875,-5.5,1)
n(0.525731,-0.850651,0)
v(48.875,-5,1)
n(0.525731,-0.850651,0)
v(48.875,-5,-1)
n(0.525731,-0.850651,0)
v(48.875,-5,1)
n(0.767752,-0.640747,0)
v(47.875,-4.25,-1)
n(0.767752,-0.640747,0)
v(47.875,-4.25,-1)
n(0.525731,-0.850651,0)
v(48.875,-5,1)
n(0.767752,-0.640747,0)
v(47.875,-4.25,1)
n(0.767752,-0.640747,0)
v(47.875,-4.25,-1)
n(0.767752,-0.640747,0)
v(47.875,-4.25,1)
n(0.894427,-0.447214,0)
v(47.375,-3.25,-1)
n(0.894427,-0.447214,0)
v(47.375,-3.25,-1)
n(0.767752,-0.640747,0)
v(47.875,-4.25,1)
n(0.894427,-0.447214,0)
v(47.375,-3.25,1)
n(0.894427,-0.447214,0)
v(47.375,-3.25,-1)
n(0.894427,-0.447214,0)
v(47.375,-3.25,1)
n(0.950983,-0.309244,0)
v(46.875,-2.25,-1)
n(0.950983,-0.309244,0)
v(46.875,-2.25,-1)
n(0.894427,-0.447214,0)
v(47.375,-3.25,1)
n(0.950983,-0.309244,0)
v(46.875,-2.25,1)
n(0.950983,-0.309244,0)
v(46.875,-2.25,-1)
n(0.950983,-0.309244,0)
v(46.875,-2.25,1)
n(1,0,0)
v(46.625,-0.75,-1)
n(1,0,0)
v(46.625,-0.75,-1)
n(0.950983,-0.309244,0)
v(46.875,-2.25,1)
n(1,0,0)
v(46.625,-0.75,1)
n(1,0,0)
v(46.625,-0.75,-1)
n(1,0,0)
v(46.625,-0.75,1)
n(0.963013,0.269455,0)
v(46.875,0.75,-1)
n(0.963013,0.269455,0)
v(46.875,0.75,-1)
n(1,0,0)
v(46.625,-0.75,1)
n(0.963013,0.269455,0)
v(46.875,0.75,1)
n(0.963013,0.269455,0)
v(46.875,0.75,-1)
n(0.963013,0.269455,0)
v(46.875,0.75,1)
n(0.895847,0.444363,0)
v(47.375,2,-1)
n(0.895847,0.444363,0)
v(47.375,2,-1)
n(0.963013,0.269455,0)
v(46.875,0.75,1)
n(0.895847,0.444363,0)
v(47.375,2,1)
n(0.895847,0.444363,0)
v(47.375,2,-1)
n(0.895847,0.444363,0)
v(47.375,2,1)
n(0.752967,0.658059,0)
v(48.125,3.25,-1)
n(0.752967,0.658059,0)
v(48.125,3.25,-1)
n(0.895847,0.444363,0)
v(47.375,2,1)
n(0.752967,0.658059,0)
v(48.125,3.25,1)
n(0.752967,0.658059,0)
v(48.125,3.25,-1)
n(0.752967,0.658059,0)
v(48.125,3.25,1)
n(0.519685,0.854358,0)
v(49.375,4.25,-1)
n(0.519685,0.854358,0)
v(49.375,4.25,-1)
n(0.752967,0.658059,0)
v(48.125,3.25,1)
n(0.519685,0.854358,0)
v(49.375,4.25,1)
n(0.519685,0.854358,0)
v(49.375,4.25,-1)
n(0.519685,0.854358,0)
v(49.375,4.25,1)
n(0.313092,0.949723,0)
v(51.625,5.25,-1)
n(0.313092,0.949723,0)
v(51.625,5.25,-1)
n(0.519685,0.854358,0)
v(49.375,4.25,1)
n(0.313092,0.949723,0)
v(51.625,5.25,1)
n(0.313092,0.949723,0)
v(51.625,5.25,-1)
n(0.313092,0.949723,0)
v(51.625,5.25,1)
n(0.109117,0.994029,0)
v(53.875,5.75,-1)
n(0.109117,0.994029,0)
v(53.875,5.75,-1)
n(0.313092,0.949723,0)
v(51.625,5.25,1)
n(0.109117,0.994029,0)
v(53.875,5.75,1)
n(0.109117,0.994029,0)
v(53.875,5.75,-1)
n(0.109117,0.994029,0)
v(53.875,5.75,1)
n(-0.122183,0.992508,0)
v(54.875,5.75,-1)
n(-0.122183,0.992508,0)
v(54.875,5.75,-1)
n(0.109117,0.994029,0)
v(53.875,5.75,1)
n(-0.122183,0.992508,0)
v(54.875,5.75,1)
n(-0.122183,0.992508,0)
v(54.875,5.75,-1)
n(-0.122183,0.992508,0)
v(54.875,5.75,1)
n(-0.346946,0.937885,0)
v(55.875,5.5,-1)
n(-0.346946,0.937885,0)
v(55.875,5.5,-1)
n(-0.122183,0.992508,0)
v(54.875,5.75,1)
n(-0.346946,0.937885,0)
v(55.875,5.5,1)
n(-0.346946,0.937885,0)
v(55.875,5.5,-1)
n(-0.346946,0.937885,0)
v(55.875,5.5,1)
n(-0.447214,0.894427,0)
v(56.375,5.25,-1)
n(-0.447214,0.894427,0)
v(56.375,5.25,-1)
n(-0.346946,0.937885,0)
v(55.875,5.5,1)
n(-0.447214,0.894427,0)
v(56.375,5.25,1)
n(0.5547,0.83205,0)
v(56.375,5.25,-1)
n(0.5547,0.83205,0)
v(56.375,5.25,1)
n(0.289784,0.957092,0)
v(57.125,5.75,-1)
n(0.289784,0.957092,0)
v(57.125,5.75,-1)
n(0.5547,0.83205,0)
v(56.375,5.25,1)
n(0.289784,0.957092,0)
v(57.125,5.75,1)
n(0.289784,0.957092,0)
v(57.125,5.75,-1)
n(0.289784,0.957092,0)
v(57.125,5.75,1)
n(0,1,0)
v(57.375,5.75,-1)
n(0,1,0)
v(57.375,5.75,-1)
n(0.289784,0.957092,0)
v(57.125,5.75,1)
n(0,1,0)
v(57.375,5.75,1)
n(0,-1,0)
v(65.125,-5.25,-1)
n(0,-1,0)
v(65.125,-5.25,1)
n(0,-1,0)
v(56.625,-5.25,-1)
n(0,-1,0)
v(56.625,-5.25,-1)
n(0,-1,0)
v(65.125,-5.25,1)
n(0,-1,0)
v(56.625,-5.25,1)
n(1,0,0)
v(56.625,-5.25,-1)
n(1,0,0)
v(56.625,-5.25,1)
n(1,0,0)
v(56.625,-5,-1)
n(1,0,0)
v(56.625,-5,-1)
n(1,0,0)
v(56.625,-5.25,1)
n(1,0,0)
v(56.625,-5,1)
n(0.196116,0.980581,0)
v(56.625,-5,-1)
n(0.196116,0.980581,0)
v(56.625,-5,1)
n(0.196116,0.980581,0)
v(57.875,-4.75,-1)
n(0.196116,0.980581,0)
v(57.875,-4.75,-1)
n(0.196116,0.980581,0)
v(56.625,-5,1)
n(0.196116,0.980581,0)
v(57.875,-4.75,1)
n(0.894427,0.447214,0)
v(57.875,-4.75,-1)
n(0.894427,0.447214,0)
v(57.875,-4.75,1)
n(0.937885,0.346946,0)
v(58.125,-4.25,-1)
n(0.937885,0.346946,0)
v(58.125,-4.25,-1)
n(0.894427,0.447214,0)
v(57.875,-4.75,1)
n(0.937885,0.346946,0)
v(58.125,-4.25,1)
n(0.937885,0.346946,0)
v(58.125,-4.25,-1)
n(0.937885,0.346946,0)
v(58.125,-4.25,1)
n(0.964694,0.263373,0)
v(58.375,-3.25,-1)
n(0.964694,0.263373,0)
v(58.375,-3.25,-1)
n(0.937885,0.346946,0)
v(58.125,-4.25,1)
n(0.964694,0.263373,0)
v(58.375,-3.25,1)
n(0.964694,0.263373,0)
v(58.375,-3.25,-1)
n(0.964694,0.263373,0)
v(58.375,-3.25,1)
n(0.989646,0.14353,0)
v(60.375,3.5,-1)
n(0.989646,0.14353,0)
v(60.375,3.5,-1)
n(0.964694,0.263373,0)
v(58.375,-3.25,1)
n(0.989646,0.14353,0)
v(60.375,3.5,1)
n(0.989646,0.14353,0)
v(60.375,3.5,-1)
n(0.989646,0.14353,0)
v(60.375,3.5,1)
n(0.973249,-0.229753,0)
v(60.375,4.5,-1)
n(0.973249,-0.229753,0)
v(60.375,4.5,-1)
n(0.989646,0.14353,0)
v(60.375,3.5,1)
n(0.973249,-0.229753,0)
v(60.375,4.5,1)
n(0.973249,-0.229753,0)
v(60.375,4.5,-1)
n(0.973249,-0.229753,0)
v(60.375,4.5,1)
n(0.894427,-0.447214,0)
v(60.125,5,-1)
n(0.894427,-0.447214,0)
v(60.125,5,-1)
n(0.973249,-0.229753,0)
v(60.375,4.5,1)
n(0.894427,-0.447214,0)
v(60.125,5,1)
n(0.316228,-0.948683,0)
v(60.125,5,-1)
n(0.316228,-0.948683,0)
v(60.125,5,1)
n(0.316228,-0.948683,0)
v(59.375,5.25,-1)
n(0.316228,-0.948683,0)
v(59.375,5.25,-1)
n(0.316228,-0.948683,0)
v(60.125,5,1)
n(0.316228,-0.948683,0)
v(59.375,5.25,1)
n(1,0,0)
v(59.375,5.25,-1)
n(1,0,0)
v(59.375,5.25,1)
n(1,0,0)
v(59.375,5.5,-1)
n(1,0,0)
v(59.375,5.5,-1)
n(1,0,0)
v(59.375,5.25,1)
n(1,0,0)
v(59.375,5.5,1)
n(0,1,0)
v(59.375,5.5,-1)
n(0,1,0)
v(59.375,5.5,1)
n(0,1,0)
v(64.375,5.5,-1)
n(0,1,0)
v(64.375,5.5,-1)
n(0,1,0)
v(59.375,5.5,1)
n(0,1,0)
v(64.375,5.5,1)
n(-1,0,0)
v(64.375,5.5,-1)
n(-1,0,0)
v(64.375,5.5,1)
n(-1,0,0)
v(64.375,5.25,-1)
n(-1,0,0)
v(64.375,5.25,-1)
n(-1,0,0)
v(64.375,5.5,1)
n(-1,0,0)
v(64.375,5.25,1)
n(-0.242536,-0.970143,0)
v(64.375,5.25,-1)
n(-0.242536,-0.970143,0)
v(64.375,5.25,1)
n(-0.576048,-0.817416,0)
v(63.375,5,-1)
n(-0.576048,-0.817416,0)
v(63.375,5,-1)
n(-0.242536,-0.970143,0)
v(64.375,5.25,1)
n(-0.576048,-0.817416,0)
v(63.375,5,1)
n(-0.576048,-0.817416,0)
v(63.375,5,-1)
n(-0.576048,-0.817416,0)
v(63.375,5,1)
n(-0.898315,-0.439351,0)
v(62.875,4.25,-1)
n(-0.898315,-0.439351,0)
v(62.875,4.25,-1)
n(-0.576048,-0.817416,0)
v(63.375,5,1)
n(-0.898315,-0.439351,0)
v(62.875,4.25,1)
n(-0.898315,-0.439351,0)
v(62.875,4.25,-1)
n(-0.898315,-0.439351,0)
v(62.875,4.25,1)
n(-0.956108,-0.293016,0)
v(62.625,3.5,-1)
n(-0.956108,-0.293016,0)
v(62.625,3.5,-1)
n(-0.898315,-0.439351,0)
v(62.875,4.25,1)
n(-0.956108,-0.293016,0)
v(62.625,3.5,1)
n(-0.956108,-0.293016,0)
v(62.625,3.5,-1)
n(-0.956108,-0.293016,0)
v(62.625,3.5,1)
n(-0.956108,-0.293016,0)
v(60.875,-2.75,-1)
n(-0.956108,-0.293016,0)
v(60.875,-2.75,-1)
n(-0.956108,-0.293016,0)
v(62.625,3.5,1)
n(-0.956108,-0.293016,0)
v(60.875,-2.75,1)
n(-0.956108,-0.293016,0)
v(60.875,-2.75,-1)
n(-0.956108,-0.293016,0)
v(60.875,-2.75,1)
n(-0.987087,-0.160182,0)
v(60.625,-3.5,-1)
n(-0.987087,-0.160182,0)
v(60.625,-3.5,-1)
n(-0.956108,-0.293016,0)
v(60.875,-2.75,1)
n(-0.987087,-0.160182,0)
v(60.625,-3.5,1)
n(-0.987087,-0.160182,0)
v(60.625,-3.5,-1)
n(-0.987087,-0.160182,0)
v(60.625,-3.5,1)
n(-1,0,0)
v(60.625,-4,-1)
n(-1,0,0)
v(60.625,-4,-1)
n(-0.987087,-0.160182,0)
v(60.625,-3.5,1)
n(-1,0,0)
v(60.625,-4,1)
n(-1,0,0)
v(60.625,-4,-1)
n(-1,0,0)
v(60.625,-4,1)
n(-1,0,0)
v(60.625,-4.25,-1)
n(-1,0,0)
v(60.625,-4.25,-1)
n(-1,0,0)
v(60.625,-4,1)
n(-1,0,0)
v(60.625,-4.25,1)
n(-0.316228,0.948683,0)
v(60.625,-4.25,-1)
n(-0.316228,0.948683,0)
v(60.625,-4.25,1)
n(-0.0621374,0.998068,0)
v(61.375,-4.5,-1)
n(-0.0621374,0.998068,0)
v(61.375,-4.5,-1)
n(-0.316228,0.948683,0)
v(60.625,-4.25,1)
n(-0.0621374,0.998068,0)
v(61.375,-4.5,1)
n(-0.0621374,0.998068,0)
v(61.375,-4.5,-1)
n(-0.0621374,0.998068,0)
v(61.375,-4.5,1)
n(0.196116,0.980581,0)
v(62.625,-4.25,-1)
n(0.196116,0.980581,0)
v(62.625,-4.25,-1)
n(-0.0621374,0.998068,0)
v(61.375,-4.5,1)
n(0.196116,0.980581,0)
v(62.625,-4.25,1)
n(0.196116,0.980581,0)
v(62.625,-4.25,-1)
n(0.196116,0.980581,0)
v(62.625,-4.25,1)
n(0.40817,0.912906,0)
v(63.875,-4,-1)
n(0.40817,0.912906,0)
v(63.875,-4,-1)
n(0.196116,0.980581,0)
v(62.625,-4.25,1)
n(0.40817,0.912906,0)
v(63.875,-4,1)
n(0.40817,0.912906,0)
v(63.875,-4,-1)
n(0.40817,0.912906,0)
v(63.875,-4,1)
n(0.655202,0.755454,0)
v(64.875,-3.25,-1)
n(0.655202,0.755454,0)
v(64.875,-3.25,-1)
n(0.40817,0.912906,0)
v(63.875,-4,1)
n(0.655202,0.755454,0)
v(64.875,-3.25,1)
n(0.655202,0.755454,0)
v(64.875,-3.25,-1)
n(0.655202,0.755454,0)
v(64.875,-3.25,1)
n(0.707107,0.707107,0)
v(65.875,-2.25,-1)
n(0.707107,0.707107,0)
v(65.875,-2.25,-1)
n(0.655202,0.755454,0)
v(64.875,-3.25,1)
n(0.707107,0.707107,0)
v(65.875,-2.25,1)
n(0,1,0)
v(65.875,-2.25,-1)
n(0,1,0)
v(65.875,-2.25,1)
n(0,1,0)
v(66.125,-2.25,-1)
n(0,1,0)
v(66.125,-2.25,-1)
n(0,1,0)
v(65.875,-2.25,1)
n(0,1,0)
v(66.125,-2.25,1)
n(-0.948683,-0.316228,0)
v(66.125,-2.25,-1)
n(-0.948683,-0.316228,0)
v(66.125,-2.25,1)
n(-0.948683,-0.316228,0)
v(65.125,-5.25,-1)
n(-0.948683,-0.316228,0)
v(65.125,-5.25,-1)
n(-0.948683,-0.316228,0)
v(66.125,-2.25,1)
n(-0.948683,-0.316228,0)
v(65.125,-5.25,1)
glEnd()
| 17.235884 | 78 | 0.524413 | 39,136 | 127,287 | 1.705054 | 0.008458 | 0.110387 | 0.158702 | 0.109997 | 0.995669 | 0.995669 | 0.995669 | 0.995669 | 0.995669 | 0.995669 | 0 | 0.52534 | 0.116257 | 127,287 | 7,384 | 79 | 17.238218 | 0.067864 | 0.000542 | 0 | 0.998374 | 0 | 0 | 0.00081 | 0.000228 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000136 | false | 0 | 0.000271 | 0 | 0.000407 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
601cbcf54ab4b96346e277ff4d5425f37c360ec9 | 2,240 | py | Python | test/test_tools.py | amuritna/phenny | c01f409f41db125fe3f50093ed1ec3454f95a529 | ["EFL-2.0"] | 7 | 2018-10-29T18:01:47.000Z | 2022-01-21T04:13:46.000Z | test/test_tools.py | amuritna/phenny | c01f409f41db125fe3f50093ed1ec3454f95a529 | ["EFL-2.0"] | 225 | 2018-03-08T10:41:50.000Z | 2021-11-01T19:51:17.000Z | test/test_tools.py | amuritna/phenny | c01f409f41db125fe3f50093ed1ec3454f95a529 | ["EFL-2.0"] | 44 | 2018-03-19T15:30:15.000Z | 2020-07-29T08:47:45.000Z | """
Tests for phenny's tools.py
"""
import unittest
import tools
class ToolsTest(unittest.TestCase):
def test_break_up(self):
text = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut " \
"labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea " \
"rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor " \
"sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna " \
"aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd " \
"gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet."
lines = [
"Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut",
"labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et justo duo",
"dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit",
"amet. Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor",
"invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero eos et accusam et",
"justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum",
"dolor sit amet."
]
self.assertEqual(tools.break_up(text, max_length=100), lines)
def test_truncate_short(self):
text = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut."
self.assertEqual(tools.truncate(text, max_length=100), text)
def test_truncate_long(self):
text = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut " \
"labore et dolore magna aliquyam erat, sed diam voluptua."
truncated = "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt..."
self.assertEqual(tools.truncate(text, max_length=100), truncated)
| 58.947368 | 120 | 0.702232 | 315 | 2,240 | 4.961905 | 0.209524 | 0.053743 | 0.105566 | 0.126679 | 0.868842 | 0.868842 | 0.868842 | 0.868842 | 0.81254 | 0.81254 | 0 | 0.005251 | 0.234821 | 2,240 | 37 | 121 | 60.540541 | 0.906651 | 0.012054 | 0 | 0.071429 | 0 | 0.107143 | 0.692971 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 1 | 0.107143 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
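The test_tools.py row above pins down the observable behaviour of phenny's tools.break_up (greedy word wrapping, every line kept under max_length characters) and tools.truncate (long text cut at a word boundary with a trailing "..."). The sketch below is not phenny's implementation; it is a minimal stand-in written only from the expectations encoded in these tests.

def break_up(text, max_length=100):
    # Greedy word wrap: keep adding words while the line stays below max_length.
    lines, current = [], ''
    for word in text.split():
        candidate = word if not current else current + ' ' + word
        if len(candidate) < max_length:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

def truncate(text, max_length=100):
    # Return the text unchanged if it fits; otherwise cut at a word boundary
    # so the result, including the trailing ellipsis, fits in max_length.
    if len(text) <= max_length:
        return text
    cut = text[:max_length - 3].rsplit(' ', 1)[0].rstrip()
    return cut + '...'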
60fc9f36ac80e0bcbdac509f5f8d8c0602a01ffb | 3,995 | py | Python | tests/conftest.py | hbldh/autopil | 73841f05a88694761983519f042e64d94a191234 | [
"MIT"
] | 3 | 2016-08-30T09:35:18.000Z | 2019-02-26T13:36:28.000Z | tests/conftest.py | hbldh/autopil | 73841f05a88694761983519f042e64d94a191234 | [
"MIT"
] | 7 | 2018-07-06T07:17:45.000Z | 2019-10-22T21:29:19.000Z | tests/conftest.py | hbldh/autopil | 73841f05a88694761983519f042e64d94a191234 | [
"MIT"
] | 1 | 2020-04-24T10:16:56.000Z | 2020-04-24T10:16:56.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
conftest
"""
import io
from itertools import chain
import pytest
from PIL import Image
from PIL.Image import open as pil_open
import piexif
@pytest.fixture()
def base_img():
return [
[0, 0, 255, 255, 255, 255, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 255, 255, 0, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
]
def _save_with_exif_and_return_PIL_image(mat, orientation_value):
    # Build a greyscale ('L') image from the nested list of pixel rows.
    i = Image.new('L', (len(mat[0]), len(mat)))
    i.putdata(list(chain(*mat)))
    # Attach the requested EXIF Orientation tag (valid values are 1-8).
    exif = {'0th': {piexif.ImageIFD.Orientation: orientation_value}}
    with io.BytesIO() as b:
        i.save(b, format='jpeg', exif=piexif.dump(exif), quality=100, subsampling=0)
        b.seek(0)
        i = pil_open(b)
        i.load()
        # JPEG compression is lossy, so restore the exact pixel values after
        # the round trip; the EXIF block written above stays on the image.
        i.putdata(list(chain(*mat)))
    return i
@pytest.fixture()
def image_with_rotation_value_1():
mat = [
[0, 0, 255, 255, 255, 255, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 255, 255, 0, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
]
return _save_with_exif_and_return_PIL_image(mat, 1)
@pytest.fixture()
def image_with_rotation_value_2():
mat = [
[0, 0, 255, 255, 255, 255, 0, 0][::-1],
[0, 0, 255, 0, 0, 0, 0, 0][::-1],
[0, 0, 255, 255, 255, 0, 0, 0][::-1],
[0, 0, 255, 0, 0, 0, 0, 0][::-1],
[0, 0, 255, 0, 0, 0, 0, 0][::-1],
[0, 0, 255, 0, 0, 0, 0, 0][::-1],
]
return _save_with_exif_and_return_PIL_image(mat, 2)
@pytest.fixture()
def image_with_rotation_value_3():
mat = [
[0, 0, 0, 0, 0, 255, 0, 0],
[0, 0, 0, 0, 0, 255, 0, 0],
[0, 0, 0, 0, 0, 255, 0, 0],
[0, 0, 0, 255, 255, 255, 0, 0],
[0, 0, 0, 0, 0, 255, 0, 0],
[0, 0, 255, 255, 255, 255, 0, 0]
]
return _save_with_exif_and_return_PIL_image(mat, 3)
@pytest.fixture()
def image_with_rotation_value_4():
mat = [
[0, 0, 0, 0, 0, 255, 0, 0][::-1],
[0, 0, 0, 0, 0, 255, 0, 0][::-1],
[0, 0, 0, 0, 0, 255, 0, 0][::-1],
[0, 0, 0, 255, 255, 255, 0, 0][::-1],
[0, 0, 0, 0, 0, 255, 0, 0][::-1],
[0, 0, 255, 255, 255, 255, 0, 0][::-1]
]
return _save_with_exif_and_return_PIL_image(mat, 4)
@pytest.fixture()
def image_with_rotation_value_5():
mat = [
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[255, 255, 255, 255, 255, 255],
[255, 0, 255, 0, 0, 0],
[255, 0, 255, 0, 0, 0],
[255, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
]
return _save_with_exif_and_return_PIL_image(mat, 5)
@pytest.fixture()
def image_with_rotation_value_6():
mat = [
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[255, 0, 0, 0, 0, 0],
[255, 0, 255, 0, 0, 0],
[255, 0, 255, 0, 0, 0],
[255, 255, 255, 255, 255, 255],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
]
return _save_with_exif_and_return_PIL_image(mat, 6)
@pytest.fixture()
def image_with_rotation_value_7():
mat = [
[0, 0, 0, 0, 0, 0][::-1],
[0, 0, 0, 0, 0, 0][::-1],
[255, 0, 0, 0, 0, 0][::-1],
[255, 0, 255, 0, 0, 0][::-1],
[255, 0, 255, 0, 0, 0][::-1],
[255, 255, 255, 255, 255, 255][::-1],
[0, 0, 0, 0, 0, 0][::-1],
[0, 0, 0, 0, 0, 0][::-1],
]
return _save_with_exif_and_return_PIL_image(mat, 7)
@pytest.fixture()
def image_with_rotation_value_8():
mat = [
[0, 0, 0, 0, 0, 0][::-1],
[0, 0, 0, 0, 0, 0][::-1],
[255, 255, 255, 255, 255, 255][::-1],
[255, 0, 255, 0, 0, 0][::-1],
[255, 0, 255, 0, 0, 0][::-1],
[255, 0, 0, 0, 0, 0][::-1],
[0, 0, 0, 0, 0, 0][::-1],
[0, 0, 0, 0, 0, 0][::-1],
]
return _save_with_exif_and_return_PIL_image(mat, 8)
| 26.456954 | 84 | 0.459074 | 714 | 3,995 | 2.428571 | 0.082633 | 0.296424 | 0.32699 | 0.318339 | 0.785467 | 0.762399 | 0.762399 | 0.587082 | 0.554787 | 0.524221 | 0 | 0.249087 | 0.314643 | 3,995 | 150 | 85 | 26.633333 | 0.384222 | 0.012766 | 0 | 0.609756 | 0 | 0 | 0.002033 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081301 | false | 0 | 0.04878 | 0.00813 | 0.211382 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
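The conftest.py row above builds the same small glyph in all eight EXIF orientation variants so that orientation-aware image code can be exercised against every rotation and mirror case. A test consuming one of these fixtures might read the tag back as sketched here; the test name is an assumption, while the piexif calls mirror the ones the fixture itself uses (piexif.load also accepts the raw EXIF bytes PIL exposes via image.info['exif']).

import piexif

def test_orientation_tag_survives_the_jpeg_round_trip(image_with_rotation_value_6):
    # The reopened JPEG still carries the EXIF block the fixture wrote.
    exif = piexif.load(image_with_rotation_value_6.info['exif'])
    assert exif['0th'][piexif.ImageIFD.Orientation] == 6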
7157816fc0a4ba2554efce817d246a357e3237f0 | 275 | py | Python | src/deep_dialog/agents/__init__.py | shivanipoddariiith/DialogueAgentRL | fb04c12e675c0f15d61d66bc8edf74a841c6a053 | [
"MIT"
] | 3 | 2019-07-10T06:19:40.000Z | 2020-12-01T10:27:50.000Z | src/deep_dialog/agents/__init__.py | shivanipods/DialogueAgentRL | fb04c12e675c0f15d61d66bc8edf74a841c6a053 | [
"MIT"
] | null | null | null | src/deep_dialog/agents/__init__.py | shivanipods/DialogueAgentRL | fb04c12e675c0f15d61d66bc8edf74a841c6a053 | [
"MIT"
] | 1 | 2019-03-11T13:02:57.000Z | 2019-03-11T13:02:57.000Z | from .agent_cmd import *
from .agent_baselines import *
from .agent_dqn import *
from .agent_dqn_keras import *
from .agent_dqn_botlzmann import *
from .agent_bbqn import *
from .agent_a2c import *
from .agent_a2c_adverserial import *
from .agent_recurrent import *
| 27.5 | 37 | 0.770909 | 39 | 275 | 5.128205 | 0.307692 | 0.405 | 0.6 | 0.27 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008696 | 0.163636 | 275 | 9 | 38 | 30.555556 | 0.86087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
716b8865a60216b8c80bf970201813e6cffa042b | 408 | py | Python | etk/knowledge_graph/__init__.py | donaq/etk | 2084003ae70acc9b6751ddadc29db935c95a0a52 | [
"MIT"
] | 77 | 2017-03-09T19:17:12.000Z | 2022-02-02T15:55:19.000Z | etk/knowledge_graph/__init__.py | donaq/etk | 2084003ae70acc9b6751ddadc29db935c95a0a52 | [
"MIT"
] | 145 | 2017-04-07T19:07:53.000Z | 2021-11-19T01:16:20.000Z | etk/knowledge_graph/__init__.py | donaq/etk | 2084003ae70acc9b6751ddadc29db935c95a0a52 | [
"MIT"
] | 72 | 2017-04-18T20:54:36.000Z | 2022-02-17T07:38:45.000Z | from etk.knowledge_graph.graph import Graph
from etk.knowledge_graph.knowledge_graph import KnowledgeGraph
from etk.knowledge_graph.namespacemanager import NamespaceManager
from etk.knowledge_graph.node import Node, URI, BNode, Literal, LiteralType
from etk.knowledge_graph.ontology import Ontology
from etk.knowledge_graph.schema import KGSchema
from etk.knowledge_graph.subject import Subject, Reification
| 51 | 75 | 0.872549 | 55 | 408 | 6.327273 | 0.309091 | 0.321839 | 0.321839 | 0.422414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080882 | 408 | 7 | 76 | 58.285714 | 0.928 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
718c50e7dd43a9d0ee53989b103d127a77734cb2 | 7,535 | py | Python | evalme/tests/test_classification.py | heartexlabs/label-studio-evalme | 48f7a5226346b6e074edb4717b84122cc089bc7a | [
"MIT"
] | 3 | 2020-04-11T13:01:57.000Z | 2021-05-19T13:53:16.000Z | evalme/tests/test_classification.py | heartexlabs/label-studio-evalme | 48f7a5226346b6e074edb4717b84122cc089bc7a | [
"MIT"
] | 28 | 2020-05-21T01:34:44.000Z | 2022-03-21T15:39:16.000Z | evalme/tests/test_classification.py | heartexlabs/label-studio-evalme | 48f7a5226346b6e074edb4717b84122cc089bc7a | [
"MIT"
] | 1 | 2020-05-21T17:43:26.000Z | 2020-05-21T17:43:26.000Z | import pytest
from evalme.classification import ClassificationEvalItem, ChoicesEvalItem, naive
@pytest.mark.ClassificationEvalItem
def test_not_matching():
test_data = [[
{
"from_name": "labels",
"id": "6rhBThcT1F",
"image_rotation": 0,
"original_height": 5852,
"original_width": 3902,
"to_name": "image",
"type": "polygonlabels",
"value": {
"points": [
[
43.333333333333336,
31.822222222222223
],
[
34.8,
40.977777777777774
],
[
38.266666666666666,
56.62222222222222
],
[
61.2,
56.53333333333333
],
[
65.6,
74.57777777777778
],
[
89.73333333333333,
74.57777777777778
],
[
86.13333333333334,
39.55555555555556
]
],
"polygonlabels": [
"Clothing"
]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
]]
obj = ChoicesEvalItem(raw_data=test_data[0])
obj1 = ChoicesEvalItem(raw_data=test_data[1])
assert obj1.exact_match(obj) == 0
assert obj.exact_match(obj1) == 0
def test_not_matching_per_label():
test_data = [[
{
"from_name": "labels",
"id": "6rhBThcT1F",
"image_rotation": 0,
"original_height": 5852,
"original_width": 3902,
"to_name": "image",
"type": "polygonlabels",
"value": {
"points": [
[
43.333333333333336,
31.822222222222223
],
[
34.8,
40.977777777777774
],
[
38.266666666666666,
56.62222222222222
],
[
61.2,
56.53333333333333
],
[
65.6,
74.57777777777778
],
[
89.73333333333333,
74.57777777777778
],
[
86.13333333333334,
39.55555555555556
]
],
"polygonlabels": [
"Clothing"
]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
]]
obj = ChoicesEvalItem(raw_data=test_data[0])
obj1 = ChoicesEvalItem(raw_data=test_data[1])
assert obj1.exact_match(obj, per_label=True) == {'Error': 0}
assert obj.exact_match(obj1, per_label=True) == {'Error': 0}
def test_matching_type():
test_data = [[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
]]
obj = ChoicesEvalItem(raw_data=test_data[0])
obj1 = ChoicesEvalItem(raw_data=test_data[1])
assert obj1.exact_match(obj) == 1
assert obj.exact_match(obj1) == 1
def test_matching_type_per_label():
test_data = [[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
]]
obj = ChoicesEvalItem(raw_data=test_data[0])
obj1 = ChoicesEvalItem(raw_data=test_data[1])
assert obj1.exact_match(obj, per_label=True) == {"Accessories": 1}
assert obj.exact_match(obj1, per_label=True) == {"Accessories": 1}
def test_naive_matching():
test_data = [[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories"]
}
}
]]
assert naive(test_data[0], test_data[1]) == 1
def test_naive_matching_per_label():
test_data = [[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories", "1", "2"]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories", "1", "2"]
}
}
]]
assert naive(test_data[0], test_data[1], per_label=True) == {"Accessories\\1\\2": 1}
def test_naive_not_matching():
test_data = [[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories1"]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories2"]
}
}
]]
assert naive(test_data[0], test_data[1]) == 0
def test_naive_not_matching_per_label():
test_data = [[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories1"]
}
}
],
[
{
"from_name": "labels",
"to_name": "image",
"type": "choices",
"value": {
"choices": ["Accessories2"]
}
}
]]
assert naive(test_data[0], test_data[1], per_label=True) == {"Accessories1": 0}
| 26.814947 | 88 | 0.355806 | 496 | 7,535 | 5.183468 | 0.145161 | 0.074679 | 0.087126 | 0.093349 | 0.927655 | 0.890704 | 0.871256 | 0.865811 | 0.827305 | 0.818748 | 0 | 0.125313 | 0.522362 | 7,535 | 280 | 89 | 26.910714 | 0.589053 | 0 | 0 | 0.631179 | 0 | 0 | 0.157266 | 0 | 0 | 0 | 0 | 0 | 0.045627 | 1 | 0.030418 | false | 0 | 0.007605 | 0 | 0.038023 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
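The evalme tests above fix the contract of naive and ChoicesEvalItem.exact_match for choice results: identical choice lists score 1, differing lists score 0, and per_label=True keys the score by the choices joined with a backslash (with 'Error' used when the two annotations are not comparable). The stand-in below satisfies only the plain choices cases shown in the tests and is not evalme's actual logic.

def naive_choices_match(raw_a, raw_b, per_label=False):
    # Compare the 'choices' list of the first result in each annotation.
    choices_a = raw_a[0]['value'].get('choices', [])
    choices_b = raw_b[0]['value'].get('choices', [])
    score = 1 if choices_a == choices_b else 0
    if not per_label:
        return score
    # Per-label output is keyed by the first annotation's choices joined with '\'.
    return {'\\'.join(choices_a): score}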
e080ee5635b8898cc30d892fffff596b5bd99f36 | 125 | py | Python | tests/__init__.py | NextGenTechBar/twandora | f626717a5580f82250bbe66d4ebc357e0882382c | [
"MIT"
] | 1 | 2015-03-09T19:07:25.000Z | 2015-03-09T19:07:25.000Z | tests/__init__.py | NextGenTechBar/twandora | f626717a5580f82250bbe66d4ebc357e0882382c | [
"MIT"
] | null | null | null | tests/__init__.py | NextGenTechBar/twandora | f626717a5580f82250bbe66d4ebc357e0882382c | [
"MIT"
] | null | null | null | import os
from unittest import TestLoader
def discover_suite():
return TestLoader().discover(os.path.dirname(__file__))
| 20.833333 | 59 | 0.784 | 16 | 125 | 5.8125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 125 | 5 | 60 | 25 | 0.845455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
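discover_suite above is a one-line wrapper around unittest's file-based discovery rooted at the tests/ package. A minimal runner, assuming only that the package is importable as tests, looks like this:

import unittest
from tests import discover_suite

if __name__ == '__main__':
    # Collect every test module found next to tests/__init__.py and run it.
    unittest.TextTestRunner(verbosity=2).run(discover_suite())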
e08725f2c9b54d0bc936bb2ec78f652c434bed45 | 12,773 | py | Python | core/controllers/skill_mastery_test.py | oswalgopal/oppia | 7513e8eca5adc278974ad266b0ea3f59a646983d | [
"Apache-2.0"
] | 2 | 2020-03-28T18:32:45.000Z | 2021-02-07T18:29:31.000Z | core/controllers/skill_mastery_test.py | gitter-badger/oppia | 7d8e659264582d7ce74bc6c139e597b82bca0e04 | [
"Apache-2.0"
] | 35 | 2019-02-23T20:31:21.000Z | 2019-08-19T12:32:13.000Z | core/controllers/skill_mastery_test.py | gitter-badger/oppia | 7d8e659264582d7ce74bc6c139e597b82bca0e04 | [
"Apache-2.0"
] | 1 | 2021-01-28T05:20:56.000Z | 2021-01-28T05:20:56.000Z | # Copyright 2019 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the Question Player controller."""
from __future__ import absolute_import # pylint: disable=import-only-modules
from __future__ import unicode_literals # pylint: disable=import-only-modules
from core.domain import skill_services
from core.tests import test_utils
import feconf
class SkillMasteryDataHandlerTest(test_utils.GenericTestBase):
"""Tests update skill mastery degree."""
def setUp(self):
"""Completes the setup for SkillMasteryDataHandler."""
super(SkillMasteryDataHandlerTest, self).setUp()
self.signup(self.NEW_USER_EMAIL, self.NEW_USER_USERNAME)
self.user_id = self.get_user_id_from_email(self.NEW_USER_EMAIL)
self.skill_id_1 = skill_services.get_new_skill_id()
self.save_new_skill(
self.skill_id_1, self.user_id, description='Skill Description 1')
self.skill_id_2 = skill_services.get_new_skill_id()
self.save_new_skill(
self.skill_id_2, self.user_id, description='Skill Description 2')
self.degree_of_mastery_1 = 0.3
self.degree_of_mastery_2 = 0.5
def test_get_with_valid_skill_ids_list(self):
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_1, self.degree_of_mastery_1)
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_2, self.degree_of_mastery_2)
skill_ids = [self.skill_id_1, self.skill_id_2]
self.login(self.NEW_USER_EMAIL)
response_json = self.get_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
params={
'comma_separated_skill_ids': ','.join(skill_ids)
})
degrees_of_mastery = {
self.skill_id_1: self.degree_of_mastery_1,
self.skill_id_2: self.degree_of_mastery_2
}
self.assertEqual(
response_json['degrees_of_mastery'], degrees_of_mastery)
self.logout()
def test_get_with_skill_without_skill_mastery(self):
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_1, self.degree_of_mastery_1)
skill_ids = [self.skill_id_1, self.skill_id_2]
self.login(self.NEW_USER_EMAIL)
response_json = self.get_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
params={
'comma_separated_skill_ids': ','.join(skill_ids)
})
degrees_of_mastery = {
self.skill_id_1: self.degree_of_mastery_1,
self.skill_id_2: None
}
self.assertEqual(
response_json['degrees_of_mastery'], degrees_of_mastery)
self.logout()
def test_get_with_no_skill_ids_returns_400(self):
self.login(self.NEW_USER_EMAIL)
json_response = self.get_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
expected_status_int=400)
self.assertEqual(
json_response['error'],
'Expected request to contain parameter comma_separated_skill_ids.')
self.logout()
def test_get_with_invalid_skill_ids_returns_400(self):
skill_ids = ['invalid_skill_id']
self.login(self.NEW_USER_EMAIL)
json_response = self.get_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
params={
'comma_separated_skill_ids': ','.join(skill_ids)
}, expected_status_int=400)
self.assertEqual(
json_response['error'],
'Invalid skill ID invalid_skill_id')
self.logout()
def test_get_with_nonexistent_skill_ids_returns_404(self):
skill_id_3 = skill_services.get_new_skill_id()
skill_ids = [self.skill_id_1, skill_id_3]
self.login(self.NEW_USER_EMAIL)
self.get_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
params={
'comma_separated_skill_ids': ','.join(skill_ids)
}, expected_status_int=404)
self.logout()
def test_put_with_valid_skill_mastery_dict(self):
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_1, self.degree_of_mastery_1)
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_2, self.degree_of_mastery_2)
payload = {}
mastery_change_per_skill = {
self.skill_id_1: 0.3,
self.skill_id_2: -0.3
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token)
degrees_of_mastery = {
self.skill_id_1: 0.6,
self.skill_id_2: 0.2
}
self.assertEqual(
skill_services.get_multi_user_skill_mastery(
self.user_id, [self.skill_id_1, self.skill_id_2]),
degrees_of_mastery)
self.logout()
def test_put_with_skill_with_no_skill_mastery(self):
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_1, self.degree_of_mastery_1)
payload = {}
mastery_change_per_skill = {
self.skill_id_1: 0.3,
self.skill_id_2: 0.3
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token)
degrees_of_mastery = {
self.skill_id_1: 0.6,
self.skill_id_2: 0.3
}
self.assertEqual(
skill_services.get_multi_user_skill_mastery(
self.user_id, [self.skill_id_1, self.skill_id_2]),
degrees_of_mastery)
self.logout()
def test_put_with_skill_mastery_lower_than_zero(self):
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_1, self.degree_of_mastery_1)
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_2, self.degree_of_mastery_2)
payload = {}
mastery_change_per_skill = {
self.skill_id_1: -0.5,
self.skill_id_2: 0.3
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token)
degrees_of_mastery = {
self.skill_id_1: 0.0,
self.skill_id_2: 0.8
}
self.assertEqual(
skill_services.get_multi_user_skill_mastery(
self.user_id, [self.skill_id_1, self.skill_id_2]),
degrees_of_mastery)
self.logout()
def test_put_with_skill_mastery_higher_than_one(self):
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_1, self.degree_of_mastery_1)
skill_services.create_user_skill_mastery(
self.user_id, self.skill_id_2, self.degree_of_mastery_2)
payload = {}
mastery_change_per_skill = {
self.skill_id_1: 0.9,
self.skill_id_2: 0.3
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token)
degrees_of_mastery = {
self.skill_id_1: 1.0,
self.skill_id_2: 0.8
}
self.assertEqual(
skill_services.get_multi_user_skill_mastery(
self.user_id, [self.skill_id_1, self.skill_id_2]),
degrees_of_mastery)
self.logout()
def test_put_with_invalid_type_returns_400(self):
payload = {}
mastery_change_per_skill = [self.skill_id_1, self.skill_id_2]
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
json_response = self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=400)
self.assertEqual(
json_response['error'],
'Expected payload to contain mastery_change_per_skill as a dict.'
)
self.logout()
def test_put_with_no_mastery_change_per_skill_returns_400(self):
payload = {}
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
json_response = self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=400)
self.assertEqual(
json_response['error'],
'Expected payload to contain mastery_change_per_skill as a dict.'
)
self.logout()
def test_put_with_invalid_skill_ids_returns_400(self):
payload = {}
mastery_change_per_skill = {
'invalid_skill_id': 0.3
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
json_response = self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=400)
self.assertEqual(
json_response['error'], 'Invalid skill ID invalid_skill_id')
self.logout()
def test_put_with_nonexistent_skill_ids_returns_404(self):
skill_id_3 = skill_services.get_new_skill_id()
payload = {}
mastery_change_per_skill = {
self.skill_id_1: 0.3,
self.skill_id_2: 0.5,
skill_id_3: 0.6
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=404)
self.logout()
def test_put_with_invalid_type_of_degree_of_mastery_returns_400(self):
payload = {}
mastery_change_per_skill = {
self.skill_id_1: 0.1,
self.skill_id_2: {}
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
self.login(self.NEW_USER_EMAIL)
csrf_token = self.get_new_csrf_token()
json_response = self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=400)
self.assertEqual(
json_response['error'],
'Expected degree of mastery of skill %s to be a number, '
'received %s.' % (self.skill_id_2, '{}'))
mastery_change_per_skill = {
self.skill_id_1: 0.1,
self.skill_id_2: True
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
json_response = self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=400)
self.assertEqual(
json_response['error'],
'Expected degree of mastery of skill %s to be a number, '
'received %s.' % (self.skill_id_2, 'True'))
self.logout()
def test_put_with_no_logged_in_user_returns_401(self):
payload = {}
mastery_change_per_skill = {
self.skill_id_1: 0.3,
self.skill_id_2: 0.5
}
payload['mastery_change_per_skill'] = mastery_change_per_skill
csrf_token = self.get_new_csrf_token()
json_response = self.put_json(
'%s' % feconf.SKILL_MASTERY_DATA_URL,
payload, csrf_token=csrf_token, expected_status_int=401)
self.assertEqual(
json_response['error'],
'You must be logged in to access this resource.')
| 34.243968 | 79 | 0.642684 | 1,688 | 12,773 | 4.420616 | 0.099526 | 0.06848 | 0.089922 | 0.092871 | 0.840927 | 0.818145 | 0.789333 | 0.779148 | 0.770571 | 0.754757 | 0 | 0.021183 | 0.271902 | 12,773 | 372 | 80 | 34.336022 | 0.781183 | 0.060753 | 0 | 0.725979 | 0 | 0 | 0.080555 | 0.034595 | 0 | 0 | 0 | 0 | 0.049822 | 1 | 0.05694 | false | 0 | 0.017794 | 0 | 0.078292 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
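The 'lower_than_zero' and 'higher_than_one' cases in the Oppia tests above imply that the handler clamps the updated mastery to the [0, 1] interval before saving it. The handler itself is not part of this row; the snippet below only restates that implied arithmetic so the expected values are easy to check.

def apply_mastery_change(current, change):
    # Clamp the updated degree of mastery to the [0.0, 1.0] range,
    # matching the expectations asserted in the tests above.
    return min(1.0, max(0.0, current + change))

assert apply_mastery_change(0.3, -0.5) == 0.0   # floor at zero
assert apply_mastery_change(0.3, 0.9) == 1.0    # ceiling at one
assert apply_mastery_change(0.5, 0.3) == 0.8    # unclamped case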
e0892c722d55047e33bcb8a3bdcfcfda15224f21 | 14,558 | py | Python | key_set/base.py | eturino/key_set.py | ee9a8e27012789ae46657eef7ac057412c33a313 | [
"Apache-2.0"
] | null | null | null | key_set/base.py | eturino/key_set.py | ee9a8e27012789ae46657eef7ac057412c33a313 | [
"Apache-2.0"
] | null | null | null | key_set/base.py | eturino/key_set.py | ee9a8e27012789ae46657eef7ac057412c33a313 | [
"Apache-2.0"
] | null | null | null | from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Any, List, Set, TypeVar, Union
from .enum import KeySetType
TKS = TypeVar('TKS', List[str], Set[str])
class KeySet(ABC):  # Inherit from ABC (abstract base class)
"""Base class for all KeySets."""
@abstractmethod
def key_set_type(self) -> KeySetType:
"""Returns the KeySetType that defines this class."""
pass
@abstractmethod
def elements(self) -> set[str]:
"""Returns a copy of the set of elements that this KeySet includes.
        For the ALL and NONE sets, which have no concrete elements, this is an empty set.
"""
pass
def represents_all(self) -> bool:
"""Returns true if the set is a ALL KeySet."""
return False
def represents_none(self) -> bool:
"""Returns true if the set is a NONE KeySet."""
return False
def represents_some(self) -> bool:
"""Returns true if the set is a SOME KeySet."""
return False
def represents_all_except_some(self) -> bool:
"""Returns true if the set is a ALL_EXCEPT_SOME KeySet."""
return False
def __contains__(self, item: str) -> bool:
"""Returns True if the set represented by this includes the elem."""
return self.includes(item)
@abstractmethod
def includes(self, _elem: str) -> bool:
"""Returns True if the set represented by this includes the elem."""
pass
@abstractmethod
def invert(self) -> KeySet:
"""Returns a new KeySet that represents the inverse Set of this one.
All <-> None
Some <-> AllExceptSome
"""
pass
@abstractmethod
def clone(self) -> KeySet:
"""Returns a new KeySet that represents the same Set of this one."""
pass
@abstractmethod
def intersect(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that represents the intersection (A ∩ B)."""
pass
@abstractmethod
def union(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the elements of both (A U B)."""
pass
@abstractmethod
def difference(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the diff (A - B)."""
pass
class KeySetAll(KeySet):
"""Represents the ALL sets: 𝕌 (the entirety of possible keys)."""
def __eq__(self, other: Any) -> bool:
"""Returns True if `other` is KeySetAll.."""
if not isinstance(other, KeySet):
# don't attempt to compare against unrelated types
return NotImplemented
return isinstance(other, KeySetAll)
def __len__(self) -> int:
"""Returns 0 (since we do not know, but maybe should be infinity)."""
return 0
def __str__(self) -> str:
"""Returns str()."""
return '<KeySetAll>'
def __repr__(self) -> str:
"""Returns repr()."""
return 'KeySetAll()'
def key_set_type(self) -> KeySetType:
"""Returns the KeySetType that describes the set."""
return KeySetType.ALL
def elements(self) -> set[str]:
"""Returns an empty set."""
return set()
def represents_all(self) -> bool:
"""Returns true if the set is a ALL KeySet."""
return True
def invert(self) -> KeySetNone:
"""Returns a new KeySet NONE."""
return KeySetNone()
def clone(self) -> KeySetAll:
"""Returns a new KeySet that represents the same Set of this one."""
return KeySetAll()
def includes(self, _elem: str) -> bool:
"""Returns True if the set represented by this includes the elem."""
return True
def intersect(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that represents the intersection (A ∩ B)."""
return other.clone()
def union(self, _other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the elements of both (A U B)."""
return self.clone()
def difference(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the diff (A - B)."""
if other.represents_all():
return KeySetNone()
if other.represents_none():
return self.clone()
if other.represents_some():
return KeySetAllExceptSome(other.elements())
if other.represents_all_except_some():
return KeySetSome(other.elements())
return NotImplemented
class KeySetNone(KeySet):
"""Represents the NONE sets: ø (empty set)."""
def __eq__(self, other: Any) -> bool:
"""Returns True if `other` is KeySetNone..."""
if not isinstance(other, KeySet):
# don't attempt to compare against unrelated types
return NotImplemented
return isinstance(other, KeySetNone)
def __len__(self) -> int:
"""Returns 0."""
return 0
def __str__(self) -> str:
"""Returns str()."""
return '<KeySetNone>'
def __repr__(self) -> str:
"""Returns repr()."""
return 'KeySetNone()'
def key_set_type(self) -> KeySetType:
"""Returns the KeySetType that describes the set."""
return KeySetType.NONE
def elements(self) -> set[str]:
"""Returns an empty set."""
return set()
def represents_none(self) -> bool:
"""Returns true if the set is a NONE KeySet."""
return True
def invert(self) -> KeySetAll:
"""Returns a new KeySet ALL."""
return KeySetAll()
def clone(self) -> KeySetNone:
"""Returns a new KeySet that represents the same Set of this one."""
return KeySetNone()
def includes(self, _elem: str) -> bool:
"""Returns True if the set represented by this includes the elem."""
return False
def intersect(self, _other: KeySet) -> KeySetNone:
"""Returns a new KeySet that represents the intersection (A ∩ B)."""
return self.clone()
def union(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the elements of both (A U B)."""
return other.clone()
def difference(self, _other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the diff (A - B)."""
return self.clone()
class KeySetSome(KeySet):
"""Represents the SOME sets: a concrete set (`A ⊂ 𝕌`)."""
def __init__(self, elements: TKS):
"""Requires the set of elements of the concrete set."""
self._elements = set(elements)
def __eq__(self, other: Any) -> bool:
"""Returns True if `other` is KeySetSome."""
if not isinstance(other, KeySet):
# don't attempt to compare against unrelated types
return NotImplemented
if not isinstance(other, KeySetSome):
return False
return self._elements == other.elements()
def __len__(self) -> int:
"""Returns the length of the elements in the set."""
return len(self._elements)
def __str__(self) -> str:
"""Returns str()."""
keys = ','.join(sorted([x for x in self._elements]))
return f'<KeySetSome ({keys})>'
def __repr__(self) -> str:
"""Returns repr()."""
keys = ','.join([f'\'{x}\'' for x in self._elements])
return f'KeySetSome([{keys}])'
def key_set_type(self) -> KeySetType:
"""Returns the KeySetType that describes the set."""
return KeySetType.SOME
def elements(self) -> set[str]:
"""Returns a copy of the set of the elements of the concrete set."""
return set(self._elements)
def represents_some(self) -> bool:
"""Returns true if the set is a SOME KeySet."""
return True
def invert(self) -> KeySetAllExceptSome:
"""Returns a new KeySet ALL_EXCEPT_SOME."""
return KeySetAllExceptSome(self.elements())
def clone(self) -> KeySetSome:
"""Returns a new KeySet that represents the same Set of this one."""
return KeySetSome(self.elements())
def includes(self, elem: str) -> bool:
"""Returns True if the set represented by this includes the elem."""
return elem in self._elements
def intersect(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that represents the intersection (A ∩ B)."""
if other.represents_all():
return self.clone()
if other.represents_none():
return other.clone()
if other.represents_some():
elems = self._elements.intersection(other.elements())
return build_some_or_none(elems)
if other.represents_all_except_some():
elems = self._elements.difference(other.elements())
return build_some_or_none(elems)
return NotImplemented
def union(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the elements of both (A U B)."""
if other.represents_all():
return other.clone()
if other.represents_none():
return self.clone()
if other.represents_some():
elems = self._elements.union(other.elements())
return build_some_or_none(elems)
if other.represents_all_except_some():
elems = other.elements().difference(self._elements)
return build_all_except_some_or_all(elems)
return NotImplemented
def difference(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the diff (A - B)."""
if other.represents_all():
return KeySetNone()
if other.represents_none():
return self.clone()
if other.represents_some():
elems = self._elements.difference(other.elements())
return build_some_or_none(elems)
if other.represents_all_except_some():
elems = self._elements.intersection(other.elements())
return build_some_or_none(elems)
return NotImplemented
class KeySetAllExceptSome(KeySet):
"""Represents the ALL_EXCEPT_SOME sets: the complementary of a concrete set.
Includes all the elements except the given ones (`A' = {x ∈ 𝕌 | x ∉ A}`).
"""
def __init__(self, elements: TKS):
"""Requires the set of elements of the concrete set."""
self._elements = set(elements)
def __eq__(self, other: Any) -> bool:
"""Returns True if `other` is KeySetAllExceptSome."""
if not isinstance(other, KeySet):
# don't attempt to compare against unrelated types
return NotImplemented
if not isinstance(other, KeySetAllExceptSome):
return False
return self._elements == other.elements()
def __len__(self) -> int:
"""Returns the length of the elements in the exclusion."""
return len(self._elements)
def __str__(self) -> str:
"""Returns str()."""
keys = ','.join(sorted([x for x in self._elements]))
return f'<KeySetAllExceptSome ({keys})>'
def __repr__(self) -> str:
"""Returns repr()."""
keys = ','.join([f'\'{x}\'' for x in self._elements])
return f'KeySetAllExceptSome([{keys}])'
def key_set_type(self) -> KeySetType:
"""Returns the KeySetType that describes the set."""
return KeySetType.ALL_EXCEPT_SOME
def elements(self) -> set[str]:
"""Returns a copy of the set of the elements of the concrete set."""
return set(self._elements)
def represents_all_except_some(self) -> bool:
"""Returns true if the set is a ALL_EXCEPT_SOME KeySet."""
return True
def invert(self) -> KeySetSome:
"""Returns a new KeySet SOME."""
return KeySetSome(self.elements())
def clone(self) -> KeySetAllExceptSome:
"""Returns a new KeySet that represents the same Set of this one."""
return KeySetAllExceptSome(self.elements())
def includes(self, elem: str) -> bool:
"""Returns True if the set represented by this includes the elem."""
return elem not in self._elements
def intersect(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that represents the intersection (A ∩ B)."""
if other.represents_all():
return self.clone()
if other.represents_none():
return other.clone()
if other.represents_some():
elems = other.elements().difference(self._elements)
return build_some_or_none(elems)
if other.represents_all_except_some():
elems = self._elements.union(other.elements())
return build_all_except_some_or_all(elems)
return NotImplemented
def union(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the elements of both (A U B)."""
if other.represents_all():
return other.clone()
if other.represents_none():
return self.clone()
if other.represents_some():
elems = self._elements.difference(other.elements())
return build_all_except_some_or_all(elems)
if other.represents_all_except_some():
elems = other.elements().intersection(self._elements)
return build_all_except_some_or_all(elems)
return NotImplemented
def difference(self, other: KeySet) -> KeySet:
"""Returns a new KeySet that contains the diff (A - B)."""
if other.represents_all():
return KeySetNone()
if other.represents_none():
return self.clone()
if other.represents_some():
elems = self._elements.union(other.elements())
return build_all_except_some_or_all(elems)
if other.represents_all_except_some():
other_elems = other.elements()
surviving = [e for e in other_elems if e not in self._elements]
return build_some_or_none(surviving)
return NotImplemented
TS = Union[KeySetSome, KeySetNone]
TAES = Union[KeySetAllExceptSome, KeySetAll]
def build_all() -> KeySetAll:
"""Returns ALL."""
return KeySetAll()
def build_none() -> KeySetNone:
"""Returns NONE."""
return KeySetNone()
def build_some_or_none(seq: TKS) -> TS:
"""Returns NONE if seq is blank, or SOME otherwise."""
if len(seq) > 0:
return KeySetSome(seq)
else:
return KeySetNone()
def build_all_except_some_or_all(seq: TKS) -> TAES:
"""Returns ALL if seq is blank, or ALL_EXCEPT_SOME otherwise."""
if len(seq) > 0:
return KeySetAllExceptSome(seq)
else:
return KeySetAll()
| 33.086364 | 80 | 0.611622 | 1,764 | 14,558 | 4.912698 | 0.077664 | 0.025848 | 0.054927 | 0.049042 | 0.813293 | 0.78883 | 0.754327 | 0.732979 | 0.730672 | 0.711862 | 0 | 0.000569 | 0.275587 | 14,558 | 439 | 81 | 33.161731 | 0.820406 | 0.270573 | 0 | 0.774704 | 0 | 0 | 0.015221 | 0.002848 | 0 | 0 | 0 | 0 | 0 | 1 | 0.280632 | false | 0.031621 | 0.01581 | 0 | 0.699605 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
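key_set/base.py above defines the full KeySet algebra, so a short usage sketch shows how the four kinds compose. The behaviour follows directly from the methods in the row; only the import path, taken from the file's location in the repository, is an assumption.

from key_set.base import KeySetAllExceptSome, KeySetSome, build_all, build_none

some = KeySetSome({'a', 'b'})
all_but_b = KeySetAllExceptSome({'b'})

# SOME intersected with ALL_EXCEPT_SOME keeps only the non-excluded keys.
assert some.intersect(all_but_b) == KeySetSome({'a'})

# invert() flips SOME <-> ALL_EXCEPT_SOME and ALL <-> NONE.
assert some.invert() == KeySetAllExceptSome({'a', 'b'})
assert build_all().invert() == build_none()

# Subtracting ALL from anything yields NONE.
assert some.difference(build_all()) == build_none()

# Membership goes through __contains__, so `in` works directly.
assert 'c' in all_but_b and 'b' not in all_but_b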
e0e617bc4e813ff67a28d7acf18db6ca2fbbf076 | 3,653 | py | Python | istatistic/migrations/0001_initial.py | gzy403999903/itam | 9ced83fecf10a70686d0a2a5159effdea03eca6c | [
"Artistic-1.0-cl8"
] | 79 | 2018-05-28T09:13:31.000Z | 2022-03-22T08:55:21.000Z | istatistic/migrations/0001_initial.py | tracyzk/itam | 9ced83fecf10a70686d0a2a5159effdea03eca6c | [
"Artistic-1.0-cl8"
] | 1 | 2018-11-16T07:40:12.000Z | 2018-11-16T08:40:11.000Z | istatistic/migrations/0001_initial.py | tracyzk/itam | 9ced83fecf10a70686d0a2a5159effdea03eca6c | [
"Artistic-1.0-cl8"
] | 34 | 2018-05-28T09:13:34.000Z | 2021-10-18T06:52:55.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='DepartmentCost',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(unique=True, max_length=128, verbose_name='\u6807\u9898')),
('service_cost', models.FloatField(verbose_name='\u670d\u52a1\u6210\u672c')),
('device_used', models.FloatField(verbose_name='\u8bbe\u5907\u5360\u7528(\u53f0)')),
('renewal_cost', models.FloatField(verbose_name='\u670d\u52a1\u7eed\u4fdd\u6210\u672c')),
('repair_parts_cost', models.FloatField(verbose_name='\u7ef4\u4fee\u53ca\u914d\u4ef6\u652f\u51fa')),
('channel_cost', models.FloatField(verbose_name='\u573a\u5730\u4fe1\u9053\u652f\u51fa')),
('device_depreciation', models.FloatField(verbose_name='\u8bbe\u5907\u6298\u65e7')),
('storage_cost', models.FloatField(verbose_name='\u5b58\u50a8\u670d\u52a1\u6210\u672c')),
('node_constructed', models.FloatField(verbose_name='\u8282\u70b9\u5efa\u8bbe\u6210\u672c')),
('total_cost', models.FloatField(verbose_name='\u603b\u6210\u672c')),
('statistic_time', models.DateTimeField(default=django.utils.timezone.now, verbose_name='\u6838\u7b97\u65f6\u95f4')),
('createDate', models.DateTimeField(auto_now_add=True, verbose_name='\u521b\u5efa\u65f6\u95f4')),
('update_date', models.DateTimeField(auto_now=True, verbose_name='\u66f4\u65b0\u65f6\u95f4')),
('memo', models.TextField(null=True, verbose_name='\u5907\u6ce8', blank=True)),
],
),
migrations.CreateModel(
name='ServiceCost',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(unique=True, max_length=128, verbose_name='\u6807\u9898')),
('service_cost', models.FloatField(verbose_name='\u670d\u52a1\u6210\u672c')),
('device_used', models.FloatField(verbose_name='\u8bbe\u5907\u5360\u7528(\u53f0)')),
('renewal_cost', models.FloatField(verbose_name='\u670d\u52a1\u7eed\u4fdd\u6210\u672c')),
('repair_parts_cost', models.FloatField(verbose_name='\u7ef4\u4fee\u53ca\u914d\u4ef6\u652f\u51fa')),
('channel_cost', models.FloatField(verbose_name='\u573a\u5730\u4fe1\u9053\u652f\u51fa')),
('device_depreciation', models.FloatField(verbose_name='\u8bbe\u5907\u6298\u65e7')),
('storage_cost', models.FloatField(verbose_name='\u5b58\u50a8\u670d\u52a1\u6210\u672c')),
('node_constructed', models.FloatField(verbose_name='\u8282\u70b9\u5efa\u8bbe\u6210\u672c')),
('total_cost', models.FloatField(verbose_name='\u603b\u6210\u672c')),
('statistic_time', models.DateTimeField(default=django.utils.timezone.now, verbose_name='\u6838\u7b97\u65f6\u95f4')),
('createDate', models.DateTimeField(auto_now_add=True, verbose_name='\u521b\u5efa\u65f6\u95f4')),
('update_date', models.DateTimeField(auto_now=True, verbose_name='\u66f4\u65b0\u65f6\u95f4')),
('memo', models.TextField(null=True, verbose_name='\u5907\u6ce8', blank=True)),
],
),
]
| 66.418182 | 133 | 0.643033 | 394 | 3,653 | 5.781726 | 0.266497 | 0.144864 | 0.181738 | 0.213345 | 0.899034 | 0.899034 | 0.899034 | 0.899034 | 0.899034 | 0.899034 | 0 | 0.127217 | 0.197372 | 3,653 | 54 | 134 | 67.648148 | 0.649727 | 0.005749 | 0 | 0.791667 | 0 | 0 | 0.308815 | 0.186226 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
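The verbose_name values in the migration above are \uXXXX escapes for Chinese labels. Decoding a few of them makes the fields easier to scan; the English glosses in the comments are translations added here, not strings taken from the project.

labels = {
    'name': '\u6807\u9898',                             # 标题 ("title")
    'service_cost': '\u670d\u52a1\u6210\u672c',         # 服务成本 ("service cost")
    'device_depreciation': '\u8bbe\u5907\u6298\u65e7',  # 设备折旧 ("equipment depreciation")
    'total_cost': '\u603b\u6210\u672c',                 # 总成本 ("total cost")
    'memo': '\u5907\u6ce8',                             # 备注 ("remarks")
}
for field, label in labels.items():
    print(field, '=', label)  # the escapes print as the Chinese labels themselves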
e0e7fb7d7c302a7b4979188b57706cfbbb3d9147 | 5,344 | py | Python | test/classes/class12.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 1 | 2020-08-07T16:09:57.000Z | 2020-08-07T16:09:57.000Z | test/classes/class12.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | null | null | null | test/classes/class12.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | null | null | null | class F:
@classmethod
def meth(cls, a, b=1):
cls.a = a
cls.b = b
print(cls)
cls()
cls + 1
a.cls = 1
a.cls.__name__
cls[123]
class : meta.class.python, source.python, storage.type.class.python
: meta.class.python, source.python
F : entity.name.type.class.python, meta.class.python, source.python
: : meta.class.python, punctuation.section.class.begin.python, source.python
: meta.function.decorator.python, source.python
@ : entity.name.function.decorator.python, meta.function.decorator.python, punctuation.definition.decorator.python, source.python
classmethod : meta.function.decorator.python, source.python, support.type.python
: meta.function.python, source.python
def : meta.function.python, source.python, storage.type.function.python
: meta.function.python, source.python
meth : entity.name.function.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.begin.python, source.python
cls : meta.function.parameters.python, meta.function.python, source.python, variable.parameter.function.language.python, variable.parameter.function.language.special.cls.python
, : meta.function.parameters.python, meta.function.python, punctuation.separator.parameters.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
a : meta.function.parameters.python, meta.function.python, source.python, variable.parameter.function.language.python
, : meta.function.parameters.python, meta.function.python, punctuation.separator.parameters.python, source.python
: meta.function.parameters.python, meta.function.python, source.python
b : meta.function.parameters.python, meta.function.python, source.python, variable.parameter.function.language.python
= : keyword.operator.python, meta.function.parameters.python, meta.function.python, source.python
1 : constant.numeric.dec.python, meta.function.parameters.python, meta.function.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.end.python, source.python
: : meta.function.python, punctuation.section.function.begin.python, source.python
: source.python
cls : source.python, variable.language.special.cls.python
. : punctuation.separator.period.python, source.python
a : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
a : source.python
: source.python
cls : source.python, variable.language.special.cls.python
. : punctuation.separator.period.python, source.python
b : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
b : source.python
: source.python
print : meta.function-call.python, source.python, support.function.builtin.python
( : meta.function-call.python, punctuation.definition.arguments.begin.python, source.python
cls : meta.function-call.arguments.python, meta.function-call.python, source.python, variable.language.special.cls.python
) : meta.function-call.python, punctuation.definition.arguments.end.python, source.python
: source.python
cls : meta.function-call.python, source.python, variable.language.special.cls.python
( : meta.function-call.python, punctuation.definition.arguments.begin.python, source.python
) : meta.function-call.python, punctuation.definition.arguments.end.python, source.python
: source.python
cls : source.python, variable.language.special.cls.python
: source.python
+ : keyword.operator.arithmetic.python, source.python
: source.python
1 : constant.numeric.dec.python, source.python
: source.python
a : source.python
. : punctuation.separator.period.python, source.python
cls : source.python
: source.python
= : keyword.operator.assignment.python, source.python
: source.python
1 : constant.numeric.dec.python, source.python
: source.python
a : source.python
. : punctuation.separator.period.python, source.python
cls : source.python
. : punctuation.separator.period.python, source.python
__name__ : source.python, support.variable.magic.python
: source.python
cls : meta.item-access.python, source.python, variable.language.special.cls.python
[ : meta.item-access.python, punctuation.definition.arguments.begin.python, source.python
123 : constant.numeric.dec.python, meta.item-access.arguments.python, meta.item-access.python, source.python
] : meta.item-access.python, punctuation.definition.arguments.end.python, source.python
| 60.727273 | 186 | 0.659244 | 574 | 5,344 | 6.123693 | 0.080139 | 0.249218 | 0.312376 | 0.102418 | 0.890185 | 0.81394 | 0.748791 | 0.721195 | 0.629872 | 0.614509 | 0 | 0.002936 | 0.235217 | 5,344 | 87 | 187 | 61.425287 | 0.857108 | 0 | 0 | 0.559524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.02381 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1cfc442c553d9e257a7a43b73943bf729dd23d88 | 49,000 | py | Python | cpdb/tracker/tests/test_views.py | invinst/CPDBv2_backend | b4e96d620ff7a437500f525f7e911651e4a18ef9 | [
"Apache-2.0"
] | 25 | 2018-07-20T22:31:40.000Z | 2021-07-15T16:58:41.000Z | cpdb/tracker/tests/test_views.py | invinst/CPDBv2_backend | b4e96d620ff7a437500f525f7e911651e4a18ef9 | [
"Apache-2.0"
] | 13 | 2018-06-18T23:08:47.000Z | 2022-02-10T07:38:25.000Z | cpdb/tracker/tests/test_views.py | invinst/CPDBv2_backend | b4e96d620ff7a437500f525f7e911651e4a18ef9 | [
"Apache-2.0"
] | 6 | 2018-05-17T21:59:43.000Z | 2020-11-17T00:30:26.000Z | from datetime import datetime
from operator import itemgetter
import pytz
from django.test import override_settings
from django.urls import reverse
from rest_framework import status
from rest_framework.authtoken.models import Token
from rest_framework.test import APITestCase
from robber import expect
from freezegun import freeze_time
from urllib.parse import urlencode
from authentication.factories import AdminUserFactory
from data.factories import AttachmentFileFactory, AllegationFactory, UserFactory, OfficerFactory
from data.models import AttachmentFile
from document_cloud.factories import DocumentCrawlerFactory
from activity_log.models import ActivityLog
from activity_log.constants import ADD_TAG_TO_DOCUMENT, REMOVE_TAG_FROM_DOCUMENT
from tracker.tests.mixins import TrackerTestCaseMixin
class AttachmentAPITestCase(TrackerTestCaseMixin, APITestCase):
def test_retrieve_unauthenticated_user(self):
user = UserFactory(username='test user')
allegation = AllegationFactory(crid='456')
with freeze_time('2017-08-04 14:30:00'):
attachment = AttachmentFileFactory(
id=123,
allegation=allegation,
title='CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
show=True,
file_type='document',
pages=10,
last_updated_by=user,
views_count=100
)
with freeze_time('2017-08-05 12:00:01'):
attachment.save()
AttachmentFileFactory(
id=124,
allegation=allegation,
show=True,
file_type='document',
preview_image_url='https://assets.documentcloud.org/124/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
)
AttachmentFileFactory(
id=125,
allegation=allegation,
show=True,
file_type='document',
preview_image_url='https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
)
AttachmentFileFactory(
id=126,
allegation=allegation,
show=False,
file_type='document',
preview_image_url='https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
)
AttachmentFileFactory(
id=127,
allegation=allegation,
show=True,
file_type='audio',
preview_image_url='',
original_url='http://audio_link',
)
response = self.client.get(reverse('api-v2:attachments-detail', kwargs={'pk': '123'}))
response.data['linked_documents'] = sorted(response.data['linked_documents'], key=itemgetter('id'))
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 123,
'crid': '456',
'title': 'CR document',
'text_content': 'CHICAGO POLICE DEPARTMENT RD I HT334604',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [
{
'id': 124,
'preview_image_url': 'https://assets.documentcloud.org/124/CRID-456-CR-p1-normal.gif',
},
{
'id': 125,
'preview_image_url': 'https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
}
],
'pages': 10,
'last_updated_by': 'test user'
})
expect(
self.client.get(reverse('api-v2:attachments-detail', kwargs={'pk': '126'})).status_code
).to.be.eq(status.HTTP_404_NOT_FOUND)
def test_retrieve_authenticated_user(self):
user = UserFactory(username='test user')
allegation = AllegationFactory(crid='456')
attachment = AttachmentFileFactory(
id=123,
allegation=allegation,
title='CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
show=True,
file_type='document',
pages=10,
last_updated_by=user,
views_count=100,
downloads_count=99,
notifications_count=200,
tags=['tag123']
)
attachment.created_at = datetime(2017, 8, 4, 14, 30, 00, tzinfo=pytz.utc)
with freeze_time('2017-08-05 12:00:01'):
attachment.save()
AttachmentFileFactory(
id=124,
allegation=allegation,
show=True,
file_type='document',
preview_image_url='https://assets.documentcloud.org/124/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
tags=['tag124'],
)
AttachmentFileFactory(
id=125,
allegation=allegation,
show=True,
file_type='document',
preview_image_url='https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
tags=['tag125'],
)
AttachmentFileFactory(
id=126,
allegation=allegation,
show=False,
file_type='document',
preview_image_url='https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
)
AttachmentFileFactory(
id=127,
allegation=allegation,
show=True,
file_type='audio',
preview_image_url='',
original_url='http://audio_link',
tags=['tag127'],
)
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
response = self.client.get(reverse('api-v2:attachments-detail', kwargs={'pk': '123'}))
response.data['linked_documents'] = sorted(response.data['linked_documents'], key=itemgetter('id'))
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 123,
'crid': '456',
'title': 'CR document',
'text_content': 'CHICAGO POLICE DEPARTMENT RD I HT334604',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [
{
'id': 124,
'preview_image_url': 'https://assets.documentcloud.org/124/CRID-456-CR-p1-normal.gif',
},
{
'id': 125,
'preview_image_url': 'https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
}
],
'pages': 10,
'last_updated_by': 'test user',
'views_count': 100,
'downloads_count': 99,
'notifications_count': 200,
'tags': ['tag123'],
'next_document_id': 126,
})
expect(
self.client.get(reverse('api-v2:attachments-detail', kwargs={'pk': '126'})).status_code
).to.be.eq(status.HTTP_200_OK)
@freeze_time('2017-01-14 12:00:01')
def test_list_attachments(self):
allegation1 = AllegationFactory(crid=123)
allegation2 = AllegationFactory(crid=456)
AttachmentFileFactory(
allegation=allegation1,
id=1,
file_type='document',
title='CRID 1051117 CR',
source_type='DOCUMENTCLOUD',
preview_image_url='http://web.com/image/CRID-1051117-CR-p1-normal.gif',
views_count=1,
downloads_count=1,
url='http://document/link/1',
)
AttachmentFileFactory(
allegation=allegation1,
id=2,
file_type='audio',
title='Log 1087021 911',
source_type='COPA',
preview_image_url=None,
views_count=2,
downloads_count=2,
url='http://audio/link/2',
)
AttachmentFileFactory(
allegation=allegation2,
id=3,
file_type='video',
title='Log 1086127 Body Worn Camera #1',
source_type='COPA',
preview_image_url=None,
views_count=3,
downloads_count=3,
url='http://video/link/3',
)
AttachmentFileFactory(id=4, allegation=allegation2, show=False)
expected_data = {
'count': 3,
'next': None,
'previous': None,
'results': [
{
'id': 1,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'CRID 1051117 CR',
'source_type': 'DOCUMENTCLOUD',
'preview_image_url': 'http://web.com/image/CRID-1051117-CR-p1-normal.gif',
'crid': '123',
'show': True,
'documents_count': 2,
'file_type': 'document',
'url': 'http://document/link/1',
},
{
'id': 2,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'Log 1087021 911',
'source_type': 'COPA',
'preview_image_url': None,
'crid': '123',
'show': True,
'documents_count': 2,
'file_type': 'audio',
'url': 'http://audio/link/2',
},
{
'id': 3,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'Log 1086127 Body Worn Camera #1',
'source_type': 'COPA',
'preview_image_url': None,
'crid': '456',
'show': True,
'documents_count': 1,
'file_type': 'video',
'url': 'http://video/link/3',
}
]
}
url = reverse('api-v2:attachments-list', kwargs={})
response = self.client.get(url)
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq(expected_data)
@freeze_time('2017-01-14 12:00:01')
def test_list_attachments_authenticated_user(self):
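        # With token auth, the hidden attachment (id 4, show=False) is included
        # and each result also carries views_count and downloads_count.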
allegation1 = AllegationFactory(crid=123)
allegation2 = AllegationFactory(crid=456)
AttachmentFileFactory(
allegation=allegation1,
id=1,
file_type='document',
title='CRID 1051117 CR',
source_type='DOCUMENTCLOUD',
preview_image_url='http://web.com/image/CRID-1051117-CR-p1-normal.gif',
views_count=1,
downloads_count=1,
url='http://document/link/1',
)
AttachmentFileFactory(
allegation=allegation1,
id=2,
file_type='audio',
title='Log 1087021 911',
source_type='COPA',
preview_image_url=None,
views_count=2,
downloads_count=2,
url='http://audio/link/2',
)
AttachmentFileFactory(
allegation=allegation2,
id=3,
file_type='video',
title='Log 1086127 Body Worn Camera #1',
source_type='COPA',
preview_image_url=None,
views_count=3,
downloads_count=3,
url='http://video/link/3',
)
AttachmentFileFactory(
allegation=allegation2,
id=4,
file_type='video',
title='Log 1086127 Body Worn Camera #1',
source_type='COPA',
preview_image_url=None,
views_count=3,
downloads_count=3,
url='http://video/link/4',
show=False
)
expected_data = {
'count': 4,
'next': None,
'previous': None,
'results': [
{
'id': 1,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'CRID 1051117 CR',
'source_type': 'DOCUMENTCLOUD',
'preview_image_url': 'http://web.com/image/CRID-1051117-CR-p1-normal.gif',
'views_count': 1,
'downloads_count': 1,
'crid': '123',
'show': True,
'documents_count': 2,
'file_type': 'document',
'url': 'http://document/link/1',
},
{
'id': 2,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'Log 1087021 911',
'source_type': 'COPA',
'preview_image_url': None,
'views_count': 2,
'downloads_count': 2,
'crid': '123',
'show': True,
'documents_count': 2,
'file_type': 'audio',
'url': 'http://audio/link/2',
},
{
'id': 3,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'Log 1086127 Body Worn Camera #1',
'source_type': 'COPA',
'preview_image_url': None,
'views_count': 3,
'downloads_count': 3,
'crid': '456',
'show': True,
'documents_count': 1,
'file_type': 'video',
'url': 'http://video/link/3',
},
{
'id': 4,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'Log 1086127 Body Worn Camera #1',
'source_type': 'COPA',
'preview_image_url': None,
'views_count': 3,
'downloads_count': 3,
'crid': '456',
'show': False,
'documents_count': 1,
'file_type': 'video',
'url': 'http://video/link/4',
}
]
}
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
base_url = reverse('api-v2:attachments-list')
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
response = self.client.get(base_url)
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq(expected_data)
def test_update_attachment_visibility(self):
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
AttachmentFileFactory(id=1)
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
response = self.client.patch(url, {'show': False}, format='json')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(AttachmentFile.objects.get(pk=1).show).to.be.false()
def test_update_attachment_bad_request(self):
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
AttachmentFileFactory(id=1)
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
response = self.client.patch(url, {}, format='json')
expect(response.status_code).to.eq(status.HTTP_400_BAD_REQUEST)
def test_update_attachment_bad_request_with_error(self):
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
AttachmentFileFactory(id=1)
expected_data = {
'message': {
'tags': ['Ensure this field has no more than 20 characters.']
}
}
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
response = self.client.patch(url, {'tags': ['this is a tag with more than 20 characters']}, format='json')
expect(response.status_code).to.eq(status.HTTP_400_BAD_REQUEST)
expect(response.data).to.eq(expected_data)
def test_update_attachment_with_invalid_pk(self):
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
AttachmentFileFactory(id=1)
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
url = reverse('api-v2:attachments-detail', kwargs={'pk': '2'})
response = self.client.patch(url, {}, format='json')
expect(response.status_code).to.eq(status.HTTP_404_NOT_FOUND)
def test_update_attachment_title(self):
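        # A successful title PATCH should stamp the requesting admin as
        # last_updated_by and mark the attachment as manually_updated.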
admin_user = AdminUserFactory(username='Test admin user')
token, _ = Token.objects.get_or_create(user=admin_user)
attachment = AttachmentFileFactory(
id=1,
show=True,
title='CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
last_updated_by=None,
allegation=AllegationFactory(crid='456'),
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
file_type='document',
pages=10,
views_count=100,
downloads_count=99,
notifications_count=200,
manually_updated=False,
)
attachment.created_at = datetime(2017, 8, 4, 14, 30, 00, tzinfo=pytz.utc)
attachment.save()
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
with freeze_time('2017-08-05 12:00:01'):
response = self.client.patch(url, {'title': 'New title'}, format='json')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 1,
'crid': '456',
'title': 'New title',
'text_content': 'CHICAGO POLICE DEPARTMENT RD I HT334604',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [],
'pages': 10,
'last_updated_by': 'Test admin user',
'views_count': 100,
'downloads_count': 99,
'notifications_count': 200,
'tags': [],
'next_document_id': None,
})
updated_attachment = AttachmentFile.objects.get(pk=1)
expect(updated_attachment.last_updated_by_id).to.eq(admin_user.id)
expect(updated_attachment.manually_updated).to.be.true()
def test_update_attachment_tags(self):
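        # Adding tags should create one ADD_TAG_TO_DOCUMENT activity log entry
        # per newly added tag (tag2 and tag3 here; tag1 already exists).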
admin_user = AdminUserFactory(id=1, username='Test admin user')
token, _ = Token.objects.get_or_create(user=admin_user)
attachment = AttachmentFileFactory(
id=1,
show=True,
title='CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
last_updated_by=None,
allegation=AllegationFactory(crid='456'),
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
file_type='document',
pages=10,
views_count=100,
downloads_count=99,
notifications_count=200,
manually_updated=False,
tags=['tag1']
)
attachment.created_at = datetime(2017, 8, 4, 14, 30, 00, tzinfo=pytz.utc)
attachment.save()
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
with freeze_time('2017-08-05 12:00:01'):
response = self.client.patch(url, {'tags': ['tag1', 'tag2', 'tag3']}, format='json')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 1,
'crid': '456',
'title': 'CR document',
'text_content': 'CHICAGO POLICE DEPARTMENT RD I HT334604',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [],
'pages': 10,
'last_updated_by': 'Test admin user',
'views_count': 100,
'downloads_count': 99,
'notifications_count': 200,
'tags': ['tag1', 'tag2', 'tag3'],
'next_document_id': None,
})
updated_attachment = AttachmentFile.objects.get(pk=1)
expect(updated_attachment.last_updated_by_id).to.eq(admin_user.id)
expect(updated_attachment.manually_updated).to.be.true()
activity_logs = ActivityLog.objects.all().order_by('data')
expect(activity_logs.count()).to.eq(2)
activity_log_1 = activity_logs[0]
expect(activity_log_1.action_type).to.eq(ADD_TAG_TO_DOCUMENT)
expect(activity_log_1.user_id).to.eq(1)
expect(activity_log_1.data).to.eq('tag2')
activity_log_2 = activity_logs[1]
expect(activity_log_2.action_type).to.eq(ADD_TAG_TO_DOCUMENT)
expect(activity_log_2.user_id).to.eq(1)
expect(activity_log_2.data).to.eq('tag3')
updated_name = updated_attachment.tags.names()
expect(updated_name).to.have.length(3)
expect(updated_name).to.contain('tag1', 'tag2', 'tag3')
def test_remove_attachment_tags(self):
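        # Dropping tags from the PATCH payload should create one
        # REMOVE_TAG_FROM_DOCUMENT activity log entry per removed tag.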
admin_user = AdminUserFactory(id=1, username='Test admin user')
token, _ = Token.objects.get_or_create(user=admin_user)
attachment = AttachmentFileFactory(
id=1,
show=True,
title='CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
last_updated_by=None,
allegation=AllegationFactory(crid='456'),
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
file_type='document',
pages=10,
views_count=100,
downloads_count=99,
notifications_count=200,
manually_updated=False,
tags=['tag1', 'tag2', 'tag3']
)
attachment.created_at = datetime(2017, 8, 4, 14, 30, 00, tzinfo=pytz.utc)
attachment.save()
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
with freeze_time('2017-08-05 12:00:01'):
response = self.client.patch(url, {'tags': ['tag1']}, format='json')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 1,
'crid': '456',
'title': 'CR document',
'text_content': 'CHICAGO POLICE DEPARTMENT RD I HT334604',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [],
'pages': 10,
'last_updated_by': 'Test admin user',
'views_count': 100,
'downloads_count': 99,
'notifications_count': 200,
'tags': ['tag1'],
'next_document_id': None
})
updated_attachment = AttachmentFile.objects.get(pk=1)
expect(updated_attachment.last_updated_by_id).to.eq(admin_user.id)
expect(updated_attachment.manually_updated).to.be.true()
activity_logs = ActivityLog.objects.all().order_by('data')
expect(activity_logs.count()).to.eq(2)
activity_log_1 = activity_logs[0]
expect(activity_log_1.action_type).to.eq(REMOVE_TAG_FROM_DOCUMENT)
expect(activity_log_1.user_id).to.eq(1)
expect(activity_log_1.data).to.eq('tag2')
activity_log_2 = activity_logs[1]
expect(activity_log_2.action_type).to.eq(REMOVE_TAG_FROM_DOCUMENT)
expect(activity_log_2.user_id).to.eq(1)
expect(activity_log_2.data).to.eq('tag3')
def test_update_attachment_title_no_change(self):
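        # PATCHing with the same title must not record last_updated_by or set
        # manually_updated, since nothing actually changed.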
admin_user = AdminUserFactory(username='Test admin user')
token, _ = Token.objects.get_or_create(user=admin_user)
attachment = AttachmentFileFactory(
id=1,
show=True,
title='No changed CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
last_updated_by=None,
allegation=AllegationFactory(crid='456'),
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
file_type='document',
pages=10,
views_count=100,
downloads_count=99,
notifications_count=200,
manually_updated=False,
)
attachment.created_at = datetime(2017, 8, 4, 14, 30, 00, tzinfo=pytz.utc)
attachment.save()
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
with freeze_time('2017-08-05 12:00:01'):
response = self.client.patch(url, {'title': 'No changed CR document'}, format='json')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 1,
'crid': '456',
'title': 'No changed CR document',
'text_content': 'CHICAGO POLICE DEPARTMENT RD I HT334604',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [],
'pages': 10,
'last_updated_by': None,
'views_count': 100,
'downloads_count': 99,
'notifications_count': 200,
'tags': [],
'next_document_id': None,
})
updated_attachment = AttachmentFile.objects.get(pk=1)
expect(updated_attachment.last_updated_by_id).to.be.none()
expect(updated_attachment.manually_updated).to.be.false()
def test_update_attachment_text_content(self):
admin_user = AdminUserFactory(username='Test admin user')
token, _ = Token.objects.get_or_create(user=admin_user)
attachment = AttachmentFileFactory(
id=1,
show=True,
title='CR document',
text_content='CHICAGO POLICE DEPARTMENT RD I HT334604',
last_updated_by=None,
allegation=AllegationFactory(crid='456'),
url='http://foo.com',
preview_image_url='https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
original_url='https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
source_type='DOCUMENTCLOUD',
file_type='document',
pages=10,
views_count=100,
downloads_count=99,
notifications_count=200,
manually_updated=False,
)
attachment.created_at = datetime(2017, 8, 4, 14, 30, 00, tzinfo=pytz.utc)
attachment.save()
url = reverse('api-v2:attachments-detail', kwargs={'pk': '1'})
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
with freeze_time('2017-08-05 12:00:01'):
response = self.client.patch(
url,
{'text_content': 'New text content'},
format='json'
)
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'id': 1,
'crid': '456',
'title': 'CR document',
'text_content': 'New text content',
'url': 'http://foo.com',
'preview_image_url': 'https://assets.documentcloud.org/CRID-456-CR-p1-normal.gif',
'original_url': 'https://www.documentcloud.org/documents/1-CRID-123456-CR.html',
'created_at': '2017-08-04T09:30:00-05:00',
'updated_at': '2017-08-05T07:00:01-05:00',
'crawler_name': 'Document Cloud',
'linked_documents': [],
'pages': 10,
'last_updated_by': 'Test admin user',
'views_count': 100,
'downloads_count': 99,
'notifications_count': 200,
'tags': [],
'next_document_id': None,
})
updated_attachment = AttachmentFile.objects.get(pk=1)
expect(updated_attachment.last_updated_by).to.eq(admin_user)
expect(updated_attachment.manually_updated).to.be.true()
@freeze_time('2017-01-14 12:00:01')
def test_attachments_filtered_by_cr_unauthenticated_user(self):
allegation1 = AllegationFactory(crid='1')
allegation2 = AllegationFactory(crid='2')
AttachmentFileFactory(
id=1,
file_type='document',
title='CRID 1051117 CR',
source_type='DOCUMENTCLOUD',
preview_image_url='http://web.com/image/CRID-1051117-CR-p1-normal.gif',
views_count=1,
downloads_count=1,
allegation=allegation1,
url='http://document/link/1',
)
AttachmentFileFactory(
id=2,
file_type='audio',
title='Log 1087021 911',
source_type='COPA',
preview_image_url=None,
views_count=2,
downloads_count=2,
allegation=allegation2,
url='http://audio/link/2',
)
base_url = reverse('api-v2:attachments-list')
query_string = urlencode({'crid': allegation1.crid})
url = f'{base_url}?{query_string}'
response = self.client.get(url)
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'count': 1,
'next': None,
'previous': None,
'results': [
{
'id': 1,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'CRID 1051117 CR',
'source_type': 'DOCUMENTCLOUD',
'preview_image_url': 'http://web.com/image/CRID-1051117-CR-p1-normal.gif',
'crid': '1',
'show': True,
'documents_count': 1,
'file_type': 'document',
'url': 'http://document/link/1',
}
]
})
@freeze_time('2017-01-14 12:00:01')
def test_attachments_filtered_by_cr_authenticated_user(self):
allegation1 = AllegationFactory(crid='1')
allegation2 = AllegationFactory(crid='2')
AttachmentFileFactory(
id=1,
file_type='document',
title='CRID 1051117 CR',
source_type='DOCUMENTCLOUD',
preview_image_url='http://web.com/image/CRID-1051117-CR-p1-normal.gif',
views_count=1,
downloads_count=1,
allegation=allegation1,
url='http://document/link/1',
)
AttachmentFileFactory(
id=2,
file_type='audio',
title='Log 1087021 911',
source_type='COPA',
preview_image_url=None,
views_count=2,
downloads_count=2,
allegation=allegation2,
url='http://audio/link/2',
)
AttachmentFileFactory(
id=3,
file_type='document',
title='CRID 1051117 CR',
source_type='DOCUMENTCLOUD',
preview_image_url='http://web.com/image/CRID-1051117-CR-p1-normal.gif',
views_count=1,
downloads_count=1,
allegation=allegation1,
url='http://document/link/3',
show=False,
)
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
base_url = reverse('api-v2:attachments-list')
query_string = urlencode({'crid': allegation1.crid})
url = f'{base_url}?{query_string}'
response = self.client.get(url)
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data).to.eq({
'count': 2,
'next': None,
'previous': None,
'results': [
{
'id': 1,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'CRID 1051117 CR',
'source_type': 'DOCUMENTCLOUD',
'preview_image_url': 'http://web.com/image/CRID-1051117-CR-p1-normal.gif',
'views_count': 1,
'downloads_count': 1,
'crid': '1',
'show': True,
'documents_count': 1,
'file_type': 'document',
'url': 'http://document/link/1',
},
{
'id': 3,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'CRID 1051117 CR',
'source_type': 'DOCUMENTCLOUD',
'preview_image_url': 'http://web.com/image/CRID-1051117-CR-p1-normal.gif',
'views_count': 1,
'downloads_count': 1,
'crid': '1',
'show': False,
'documents_count': 1,
'file_type': 'document',
'url': 'http://document/link/3',
}
]
})
@freeze_time('2017-01-14 12:00:01')
def test_attachments_full_text_search(self):
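        # The `match` parameter searches crid, title and text_content; hidden
        # attachments (ids 55-88, show=False) never appear for anonymous users.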
allegation_1 = AllegationFactory(crid=111333)
allegation_2 = AllegationFactory(crid=123456)
AttachmentFileFactory(
id=11,
allegation=allegation_1,
show=True,
)
AttachmentFileFactory(
id=22,
title='summary report',
show=True,
)
AttachmentFileFactory(
id=33,
title='document title',
text_content='document content',
show=True,
)
AttachmentFileFactory(
id=44,
title='This is a title',
text_content='This is a content.',
source_type='DOCUMENTCLOUD',
allegation=allegation_2,
show=True,
file_type='document',
preview_image_url='https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
url='https://www.documentcloud.org/documents/1-CRID-123-CR.html',
)
AttachmentFileFactory(
id=55,
allegation=allegation_1,
show=False,
)
AttachmentFileFactory(
id=66,
title='summary report',
show=False,
)
AttachmentFileFactory(
id=77,
title='document title',
text_content='document content',
show=False,
)
AttachmentFileFactory(
id=88,
title='This is a title',
text_content='This is a content.',
show=False,
)
base_url = reverse('api-v2:attachments-list')
self.refresh_index()
response = self.client.get(f'{base_url}?match=11133')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(1)
expect(response.data['results'][0]['id']).to.eq(11)
response = self.client.get(f'{base_url}?match=summary')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(1)
expect(response.data['results'][0]['id']).to.eq(22)
response = self.client.get(f'{base_url}?match=document')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(1)
expect(response.data['results'][0]['id']).to.eq(33)
response = self.client.get(f'{base_url}?crid=123456')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(1)
expect(response.data['results'][0]).to.eq({
'id': 44,
'created_at': '2017-01-14T06:00:01-06:00',
'title': 'This is a title',
'source_type': 'DOCUMENTCLOUD',
'crid': '123456',
'show': True,
'file_type': 'document',
'url': 'https://www.documentcloud.org/documents/1-CRID-123-CR.html',
'preview_image_url': 'https://assets.documentcloud.org/125/CRID-456-CR-p1-normal.gif',
'documents_count': 1,
})
def test_attachments_full_text_search_match_multiple_fields(self):
allegation = AllegationFactory(crid=123456)
AttachmentFileFactory(
id=11,
allegation=allegation
)
AttachmentFileFactory(
id=22,
title='Title 123456'
)
AttachmentFileFactory(
id=33,
title='document title',
text_content='document content 12345'
)
AttachmentFileFactory(
id=44,
title='document title',
text_content='document content'
)
expected_ids = [11, 22, 33]
base_url = reverse('api-v2:attachments-list')
self.refresh_index()
response = self.client.get(f'{base_url}?match=12345')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(3)
expect(expected_ids).to.contain(*[result['id'] for result in response.data['results']])
def test_attachments_full_text_search_with_pagination(self):
allegation = AllegationFactory(crid=111333)
AttachmentFileFactory(
id=11,
title='summary',
allegation=allegation
)
AttachmentFileFactory(
id=22,
title='summary report'
)
AttachmentFileFactory(
id=33,
title='summary report title',
text_content='document content'
)
base_url = reverse('api-v2:attachments-list')
self.refresh_index()
response = self.client.get(f'{base_url}?match=summary&limit=2&offset=2')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(3)
expect(response.data['next']).to.be.none()
expect(response.data['previous']).to.contain(f'{base_url}?limit=2&match=summary')
expect(len(response.data['results'])).to.eq(1)
def test_attachments_full_text_search_as_admin(self):
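        # Admin search results include hidden attachments: id 22 (show=False)
        # is returned alongside id 11 for the same `match` query.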
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
AttachmentFileFactory(
id=11,
title='document title 1',
text_content='document content 1',
show=True,
)
AttachmentFileFactory(
id=22,
title='document title 2',
text_content='document content 2',
show=False,
)
AttachmentFileFactory(
id=33,
title='this is title',
text_content='this is content',
show=False,
)
base_url = reverse('api-v2:attachments-list')
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
expected_ids = [11, 22]
self.refresh_index()
response = self.client.get(f'{base_url}?match=document')
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(2)
expect(expected_ids).to.contain(*[result['id'] for result in response.data['results']])
def test_get_attachments_as_admin(self):
admin_user = AdminUserFactory()
token, _ = Token.objects.get_or_create(user=admin_user)
AttachmentFileFactory(id=133, show=False)
base_url = reverse('api-v2:attachments-list')
self.client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
response = self.client.get(base_url)
expect(response.status_code).to.eq(status.HTTP_200_OK)
expect(response.data['count']).to.eq(1)
expect(response.data['results'][0]['id']).to.eq(133)
def test_tags(self):
AttachmentFileFactory(tags=['chicago', 'tactical'])
AttachmentFileFactory(tags=['tactical', 'twitter', 'another tag'])
AttachmentFileFactory()
OfficerFactory(tags=['officer_tag1', 'officer_tag2'])
url = reverse('api-v2:attachments-tags')
response = self.client.get(url)
expect(response.data).to.eq(['another tag', 'chicago', 'tactical', 'twitter'])
class DocumentCrawlersViewSetTestCase(APITestCase):
@override_settings(TIME_ZONE='UTC')
def setUp(self):
with freeze_time(datetime(2018, 3, 3, 12, 0, 1, tzinfo=pytz.utc)):
DocumentCrawlerFactory(
id=1,
source_type='DOCUMENTCLOUD',
status='Failed',
num_documents=5,
num_new_documents=1,
num_updated_documents=4,
num_successful_run=0
)
with freeze_time(datetime(2018, 4, 3, 12, 0, 1, tzinfo=pytz.utc)):
DocumentCrawlerFactory(
id=2,
source_type='DOCUMENTCLOUD',
status='Success',
num_documents=5,
num_new_documents=1,
num_updated_documents=4,
num_successful_run=1
)
with freeze_time(datetime(2018, 3, 3, 12, 0, 10, tzinfo=pytz.utc)):
DocumentCrawlerFactory(
id=3,
source_type='PORTAL_COPA',
status='Failed',
num_documents=7,
num_new_documents=1,
num_updated_documents=5,
num_successful_run=0
)
with freeze_time(datetime(2018, 4, 3, 12, 0, 10, tzinfo=pytz.utc)):
DocumentCrawlerFactory(
id=4,
source_type='PORTAL_COPA',
status='Success',
num_documents=6,
num_new_documents=2,
num_updated_documents=4,
num_successful_run=1
)
with freeze_time(datetime(2018, 3, 3, 12, 0, 20, tzinfo=pytz.utc)):
DocumentCrawlerFactory(
id=5,
source_type='SUMMARY_REPORTS_COPA',
status='Failed',
num_documents=3,
num_new_documents=1,
num_updated_documents=1,
num_successful_run=0
)
with freeze_time(datetime(2018, 4, 3, 12, 0, 20, tzinfo=pytz.utc)):
DocumentCrawlerFactory(
id=6,
source_type='SUMMARY_REPORTS_COPA',
status='Success',
num_documents=4,
num_new_documents=1,
num_updated_documents=3,
num_successful_run=1
)
def test_list(self):
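        # Crawler runs are listed newest first, so the three April 2018 runs
        # (ids 6, 4, 2) fill the first page of size 3.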
url = reverse('api-v2:document-crawlers-list')
response = self.client.get(url, {'limit': 3})
expect(response.data['results']).to.eq([
{
'id': 6,
'crawler_name': 'SUMMARY_REPORTS_COPA',
'status': 'Success',
'num_documents': 4,
'num_new_documents': 1,
'recent_run_at': '2018-04-03',
'num_successful_run': 1
},
{
'id': 4,
'crawler_name': 'PORTAL_COPA',
'status': 'Success',
'num_documents': 6,
'num_new_documents': 2,
'recent_run_at': '2018-04-03',
'num_successful_run': 1
},
{
'id': 2,
'crawler_name': 'DOCUMENTCLOUD',
'status': 'Success',
'num_documents': 5,
'num_new_documents': 1,
'recent_run_at': '2018-04-03',
'num_successful_run': 1
}
])
def test_pagination_list(self):
url = reverse('api-v2:document-crawlers-list')
response = self.client.get(url, {'limit': 3, 'offset': 3})
expect(response.data['results']).to.eq([
{
'id': 5,
'crawler_name': 'SUMMARY_REPORTS_COPA',
'status': 'Failed',
'num_documents': 3,
'num_new_documents': 1,
'recent_run_at': '2018-03-03',
'num_successful_run': 0
},
{
'id': 3,
'crawler_name': 'PORTAL_COPA',
'status': 'Failed',
'num_documents': 7,
'num_new_documents': 1,
'recent_run_at': '2018-03-03',
'num_successful_run': 0
},
{
'id': 1,
'crawler_name': 'DOCUMENTCLOUD',
'status': 'Failed',
'num_documents': 5,
'num_new_documents': 1,
'recent_run_at': '2018-03-03',
'num_successful_run': 0
}
])
| 38.401254 | 114 | 0.545102 | 5,228 | 49,000 | 4.928653 | 0.054705 | 0.010867 | 0.029107 | 0.018163 | 0.902589 | 0.886948 | 0.87352 | 0.857065 | 0.842279 | 0.82823 | 0 | 0.066935 | 0.325878 | 49,000 | 1,275 | 115 | 38.431373 | 0.713127 | 0 | 0 | 0.761168 | 0 | 0.010309 | 0.250755 | 0.030633 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020619 | false | 0 | 0.015464 | 0 | 0.037801 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e826b714e4035a5a49bdf84f8eaaa081522fb969 | 16,657 | py | Python | python/examples/depthwise_conv/ops.py | ThomasRaoux/iree-llvm-sandbox | bbe5de6b5b3fe642d2910135e1f98154cfd8ee4c | [
"Apache-2.0"
] | null | null | null | python/examples/depthwise_conv/ops.py | ThomasRaoux/iree-llvm-sandbox | bbe5de6b5b3fe642d2910135e1f98154cfd8ee4c | [
"Apache-2.0"
] | null | null | null | python/examples/depthwise_conv/ops.py | ThomasRaoux/iree-llvm-sandbox | bbe5de6b5b3fe642d2910135e1f98154cfd8ee4c | [
"Apache-2.0"
] | null | null | null | # pytype: skip-file
from mlir.ir import *
from mlir.dialects.linalg.opdsl.lang import *
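# Naming convention for the ops below: the first suffix is the input/output
# layout (e.g. ncw = batch, channel, width; nhwc = batch, height, width,
# channel) and the second is the kernel layout (e.g. cw / hwc). Strides and
# dilations are passed as index attributes (S.SW, S.DW for the 1-D ops;
# S.SH/S.SW, S.DH/S.DW for the 2-D ops).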
@linalg_structured_op
def depthwise_conv_1d_ncw_cw(I=TensorDef(TV.T1, S.N, S.C,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.C, S.KW),
O=TensorDef(U, S.N, S.C, S.OW, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.c, D.ow, D.kw)
O[D.n, D.c, D.ow] += (cast(U, I[D.n, D.c, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.c, D.kw]))
@linalg_structured_op
def depthwise_conv_1d_ncw_wc(I=TensorDef(TV.T1, S.N, S.C,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.KW, S.C),
O=TensorDef(U, S.N, S.C, S.OW, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.c, D.ow, D.kw)
O[D.n, D.c, D.ow] += (cast(U, I[D.n, D.c, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_1d_nwc_cw(I=TensorDef(TV.T1, S.N, S.OW * S.SW + S.KW * S.DW,
S.C),
K=TensorDef(TV.T2, S.C, S.KW),
O=TensorDef(U, S.N, S.OW, S.C, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.ow, D.c, D.kw)
O[D.n, D.ow, D.c] += (cast(U, I[D.n, D.ow * S.SW + D.kw * S.DW, D.c]) *
cast(U, K[D.c, D.kw]))
@linalg_structured_op
def depthwise_conv_1d_nwc_wc(I=TensorDef(TV.T1, S.N, S.OW * S.SW + S.KW * S.DW,
S.C),
K=TensorDef(TV.T2, S.KW, S.C),
O=TensorDef(U, S.N, S.OW, S.C, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.ow, D.c, D.kw)
O[D.n, D.ow, D.c] += (cast(U, I[D.n, D.ow * S.SW + D.kw * S.DW, D.c]) *
cast(U, K[D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_1d_cnw_cw(I=TensorDef(TV.T1, S.C, S.N,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.C, S.KW),
O=TensorDef(U, S.C, S.N, S.OW, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.n, D.ow, D.kw)
O[D.c, D.n, D.ow] += (cast(U, I[D.c, D.n, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.c, D.kw]))
@linalg_structured_op
def depthwise_conv_1d_cnw_wc(I=TensorDef(TV.T1, S.C, S.N,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.KW, S.C),
O=TensorDef(U, S.C, S.N, S.OW, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.n, D.ow, D.kw)
O[D.c, D.n, D.ow] += (cast(U, I[D.c, D.n, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_1d_cwn_cw(I=TensorDef(TV.T1, S.C, S.OW * S.SW + S.KW * S.DW,
S.N),
K=TensorDef(TV.T2, S.C, S.KW),
O=TensorDef(U, S.C, S.OW, S.N, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.ow, D.n, D.kw)
O[D.c, D.ow, D.n] += (cast(U, I[D.c, D.ow * S.SW + D.kw * S.DW, D.n]) *
cast(U, K[D.c, D.kw]))
@linalg_structured_op
def depthwise_conv_1d_cwn_wc(I=TensorDef(TV.T1, S.C, S.OW * S.SW + S.KW * S.DW,
S.N),
K=TensorDef(TV.T2, S.KW, S.C),
O=TensorDef(U, S.C, S.OW, S.N, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.ow, D.n, D.kw)
O[D.c, D.ow, D.n] += (cast(U, I[D.c, D.ow * S.SW + D.kw * S.DW, D.n]) *
cast(U, K[D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_1d_wnc_cw(I=TensorDef(TV.T1, S.OW * S.SW + S.KW * S.DW, S.N,
S.C),
K=TensorDef(TV.T2, S.C, S.KW),
O=TensorDef(U, S.OW, S.N, S.C, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.ow, D.n, D.c, D.kw)
O[D.ow, D.n, D.c] += (cast(U, I[D.ow * S.SW + D.kw * S.DW, D.n, D.c]) *
cast(U, K[D.c, D.kw]))
@linalg_structured_op
def depthwise_conv_1d_wnc_wc(I=TensorDef(TV.T1, S.OW * S.SW + S.KW * S.DW, S.N,
S.C),
K=TensorDef(TV.T2, S.KW, S.C),
O=TensorDef(U, S.OW, S.N, S.C, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.ow, D.n, D.c, D.kw)
O[D.ow, D.n, D.c] += (cast(U, I[D.ow * S.SW + D.kw * S.DW, D.n, D.c]) *
cast(U, K[D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_1d_wcn_cw(I=TensorDef(TV.T1, S.OW * S.SW + S.KW * S.DW, S.C,
S.N),
K=TensorDef(TV.T2, S.C, S.KW),
O=TensorDef(U, S.OW, S.C, S.N, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.ow, D.c, D.n, D.kw)
O[D.ow, D.c, D.n] += (cast(U, I[D.ow * S.SW + D.kw * S.DW, D.c, D.n]) *
cast(U, K[D.c, D.kw]))
@linalg_structured_op
def depthwise_conv_1d_wcn_wc(I=TensorDef(TV.T1, S.OW * S.SW + S.KW * S.DW, S.C,
S.N),
K=TensorDef(TV.T2, S.KW, S.C),
O=TensorDef(U, S.OW, S.C, S.N, output=True),
strides=AttributeDef(S.SW),
dilations=AttributeDef(S.DW)):
implements(ConvolutionOpInterface)
domain(D.ow, D.c, D.n, D.kw)
O[D.ow, D.c, D.n] += (cast(U, I[D.ow * S.SW + D.kw * S.DW, D.c, D.n]) *
cast(U, K[D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_2d_nchw_chw(I=TensorDef(TV.T1, S.N, S.C,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.C, S.KH, S.KW),
O=TensorDef(U, S.N, S.C, S.OH, S.OW,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.c, D.oh, D.ow, D.kh, D.kw)
O[D.n, D.c, D.oh, D.ow] += (cast(
U, I[D.n, D.c, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.c, D.kh, D.kw]))
@linalg_structured_op
def depthwise_conv_2d_nchw_hwc(I=TensorDef(TV.T1, S.N, S.C,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.KH, S.KW, S.C),
O=TensorDef(U, S.N, S.C, S.OH, S.OW,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.c, D.oh, D.ow, D.kh, D.kw)
O[D.n, D.c, D.oh, D.ow] += (cast(
U, I[D.n, D.c, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.kh, D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_2d_nhwc_chw(I=TensorDef(TV.T1, S.N,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.C),
K=TensorDef(TV.T2, S.C, S.KH, S.KW),
O=TensorDef(U, S.N, S.OH, S.OW, S.C,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.oh, D.ow, D.c, D.kh, D.kw)
O[D.n, D.oh, D.ow, D.c] += (cast(
U, I[D.n, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.c]) *
cast(U, K[D.c, D.kh, D.kw]))
@linalg_structured_op
def depthwise_conv_2d_nhwc_hwc(I=TensorDef(TV.T1, S.N,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.C),
K=TensorDef(TV.T2, S.KH, S.KW, S.C),
O=TensorDef(U, S.N, S.OH, S.OW, S.C,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.n, D.oh, D.ow, D.c, D.kh, D.kw)
O[D.n, D.oh, D.ow, D.c] += (cast(
U, I[D.n, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.c]) *
cast(U, K[D.kh, D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_2d_cnhw_chw(I=TensorDef(TV.T1, S.C, S.N,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.C, S.KH, S.KW),
O=TensorDef(U, S.C, S.N, S.OH, S.OW,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.n, D.oh, D.ow, D.kh, D.kw)
O[D.c, D.n, D.oh, D.ow] += (cast(
U, I[D.c, D.n, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.c, D.kh, D.kw]))
@linalg_structured_op
def depthwise_conv_2d_cnhw_hwc(I=TensorDef(TV.T1, S.C, S.N,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW),
K=TensorDef(TV.T2, S.KH, S.KW, S.C),
O=TensorDef(U, S.C, S.N, S.OH, S.OW,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.n, D.oh, D.ow, D.kh, D.kw)
O[D.c, D.n, D.oh, D.ow] += (cast(
U, I[D.c, D.n, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW]) *
cast(U, K[D.kh, D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_2d_chwn_chw(I=TensorDef(TV.T1, S.C,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.N),
K=TensorDef(TV.T2, S.C, S.KH, S.KW),
O=TensorDef(U, S.C, S.OH, S.OW, S.N,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.oh, D.ow, D.n, D.kh, D.kw)
O[D.c, D.oh, D.ow, D.n] += (cast(
U, I[D.c, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.n]) *
cast(U, K[D.c, D.kh, D.kw]))
@linalg_structured_op
def depthwise_conv_2d_chwn_hwc(I=TensorDef(TV.T1, S.C,
S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.N),
K=TensorDef(TV.T2, S.KH, S.KW, S.C),
O=TensorDef(U, S.C, S.OH, S.OW, S.N,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.c, D.oh, D.ow, D.n, D.kh, D.kw)
O[D.c, D.oh, D.ow, D.n] += (cast(
U, I[D.c, D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.n]) *
cast(U, K[D.kh, D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_2d_hwnc_chw(I=TensorDef(TV.T1, S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.N, S.C),
K=TensorDef(TV.T2, S.C, S.KH, S.KW),
O=TensorDef(U, S.OH, S.OW, S.N, S.C,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.oh, D.ow, D.n, D.c, D.kh, D.kw)
O[D.oh, D.ow, D.n, D.c] += (cast(
U, I[D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.n, D.c]) *
cast(U, K[D.c, D.kh, D.kw]))
@linalg_structured_op
def depthwise_conv_2d_hwnc_hwc(I=TensorDef(TV.T1, S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.N, S.C),
K=TensorDef(TV.T2, S.KH, S.KW, S.C),
O=TensorDef(U, S.OH, S.OW, S.N, S.C,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.oh, D.ow, D.n, D.c, D.kh, D.kw)
O[D.oh, D.ow, D.n, D.c] += (cast(
U, I[D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.n, D.c]) *
cast(U, K[D.kh, D.kw, D.c]))
@linalg_structured_op
def depthwise_conv_2d_hwcn_chw(I=TensorDef(TV.T1, S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.C, S.N),
K=TensorDef(TV.T2, S.C, S.KH, S.KW),
O=TensorDef(U, S.OH, S.OW, S.C, S.N,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.oh, D.ow, D.c, D.n, D.kh, D.kw)
O[D.oh, D.ow, D.c, D.n] += (cast(
U, I[D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.c, D.n]) *
cast(U, K[D.c, D.kh, D.kw]))
@linalg_structured_op
def depthwise_conv_2d_hwcn_hwc(I=TensorDef(TV.T1, S.OH * S.SH + S.KH * S.DH,
S.OW * S.SW + S.KW * S.DW, S.C, S.N),
K=TensorDef(TV.T2, S.KH, S.KW, S.C),
O=TensorDef(U, S.OH, S.OW, S.C, S.N,
output=True),
strides=AttributeDef(S.SH, S.SW),
dilations=AttributeDef(S.DH, S.DW)):
implements(ConvolutionOpInterface)
domain(D.oh, D.ow, D.c, D.n, D.kh, D.kw)
O[D.oh, D.ow, D.c, D.n] += (cast(
U, I[D.oh * S.SH + D.kh * S.DH, D.ow * S.SW + D.kw * S.DW, D.c, D.n]) *
cast(U, K[D.kh, D.kw, D.c]))
| 47.727794 | 80 | 0.41322 | 2,606 | 16,657 | 2.585955 | 0.024175 | 0.028491 | 0.030272 | 0.074789 | 0.990058 | 0.990058 | 0.990058 | 0.990058 | 0.966315 | 0.966315 | 0 | 0.007477 | 0.421925 | 16,657 | 348 | 81 | 47.864943 | 0.692388 | 0.001021 | 0 | 0.912752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080537 | false | 0 | 0.006711 | 0 | 0.087248 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1c46fc394f4f2e4a1722f7dab063575db81ae159 | 2,360 | py | Python | rematchrSite/rematchrApp/migrations/0003_auto_20150319_1243.py | ctames/rematchr | 4a22c3e4b1c22b64008e4996bdde9d4657c5294b | [
"MIT"
] | null | null | null | rematchrSite/rematchrApp/migrations/0003_auto_20150319_1243.py | ctames/rematchr | 4a22c3e4b1c22b64008e4996bdde9d4657c5294b | [
"MIT"
] | null | null | null | rematchrSite/rematchrApp/migrations/0003_auto_20150319_1243.py | ctames/rematchr | 4a22c3e4b1c22b64008e4996bdde9d4657c5294b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('rematchrApp', '0002_auto_20150226_1336'),
]
operations = [
migrations.AddField(
model_name='conference',
name='user',
field=models.ForeignKey(null=True, to=settings.AUTH_USER_MODEL, unique=True),
preserve_default=True,
),
migrations.AddField(
model_name='reviewer',
name='doc_texts',
field=models.TextField(default=b'', blank=True),
preserve_default=True,
),
migrations.AddField(
model_name='reviewer',
name='doc_urls',
field=models.TextField(default=b'', blank=True),
preserve_default=True,
),
migrations.AlterField(
model_name='conference',
name='title',
field=models.CharField(max_length=256),
preserve_default=True,
),
migrations.AlterField(
model_name='researcher',
name='doc_texts',
field=models.TextField(default=b'', blank=True),
preserve_default=True,
),
migrations.AlterField(
model_name='researcher',
name='doc_urls',
field=models.TextField(default=b'', blank=True),
preserve_default=True,
),
migrations.AlterField(
model_name='researcher',
name='firstname',
field=models.CharField(max_length=256),
preserve_default=True,
),
migrations.AlterField(
model_name='researcher',
name='lastname',
field=models.CharField(max_length=256),
preserve_default=True,
),
migrations.AlterField(
model_name='reviewer',
name='firstname',
field=models.CharField(max_length=256),
preserve_default=True,
),
migrations.AlterField(
model_name='reviewer',
name='lastname',
field=models.CharField(max_length=256),
preserve_default=True,
),
]
| 30.649351 | 89 | 0.563559 | 211 | 2,360 | 6.104265 | 0.260664 | 0.069876 | 0.147516 | 0.20264 | 0.719721 | 0.719721 | 0.719721 | 0.719721 | 0.719721 | 0.719721 | 0 | 0.020177 | 0.327966 | 2,360 | 76 | 90 | 31.052632 | 0.791929 | 0.008898 | 0 | 0.814286 | 0 | 0 | 0.086864 | 0.009842 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.042857 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1c909d4fcbf0c7e9aae88e952da25feb47d59377 | 13,536 | py | Python | tests/model/eventsources/test_self_managed_kafka_event_source.py | hawflau/serverless-application-model | d2cf4b7e23d26cdf677c564d53bb58e6a5b6cac2 | [
"Apache-2.0"
] | 1,279 | 2020-08-25T03:33:15.000Z | 2022-03-31T09:49:22.000Z | tests/model/eventsources/test_self_managed_kafka_event_source.py | hawflau/serverless-application-model | d2cf4b7e23d26cdf677c564d53bb58e6a5b6cac2 | [
"Apache-2.0"
] | 797 | 2020-08-24T23:30:05.000Z | 2022-03-31T22:28:29.000Z | tests/model/eventsources/test_self_managed_kafka_event_source.py | hawflau/serverless-application-model | d2cf4b7e23d26cdf677c564d53bb58e6a5b6cac2 | [
"Apache-2.0"
] | 431 | 2020-08-27T20:47:26.000Z | 2022-03-31T23:57:55.000Z | from unittest import TestCase
from samtranslator.model.eventsources.pull import SelfManagedKafka
from samtranslator.model.exceptions import InvalidEventException
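# These tests cover the IAM policy statements generated for SelfManagedKafka
# event sources (SASL/BASIC auth secrets, VPC access, optional KMS decryption)
# and the InvalidEventException raised for malformed configurations.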
class SelfManagedKafkaEventSource(TestCase):
def setUp(self):
self.logical_id = "SelfManagedKafkaEvent"
self.kafka_event_source = SelfManagedKafka(self.logical_id)
def test_get_policy_arn(self):
arn = self.kafka_event_source.get_policy_arn()
expected_arn = None
self.assertEqual(arn, expected_arn)
def test_get_policy_statements(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
policy_statements = self.kafka_event_source.get_policy_statements()
expected_policy_document = [
{
"PolicyDocument": {
"Statement": [
{"Action": ["secretsmanager:GetSecretValue"], "Effect": "Allow", "Resource": "SECRET_URI"},
{
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
],
"Effect": "Allow",
"Resource": "*",
},
],
"Version": "2012-10-17",
},
"PolicyName": "SelfManagedKafkaExecutionRolePolicy",
}
]
self.assertEqual(policy_statements, expected_policy_document)
def test_get_policy_statements_with_only_auth_mechanism(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "BASIC_AUTH", "URI": "SECRET_URI"},
]
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
policy_statements = self.kafka_event_source.get_policy_statements()
expected_policy_document = [
{
"PolicyDocument": {
"Statement": [
{"Action": ["secretsmanager:GetSecretValue"], "Effect": "Allow", "Resource": "SECRET_URI"},
],
"Version": "2012-10-17",
},
"PolicyName": "SelfManagedKafkaExecutionRolePolicy",
}
]
self.assertEqual(policy_statements, expected_policy_document)
def test_get_policy_statements_with_only_vpc_config(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
policy_statements = self.kafka_event_source.get_policy_statements()
expected_policy_document = [
{
"PolicyDocument": {
"Statement": [
{
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
],
"Effect": "Allow",
"Resource": "*",
},
],
"Version": "2012-10-17",
},
"PolicyName": "SelfManagedKafkaExecutionRolePolicy",
}
]
self.assertEqual(policy_statements, expected_policy_document)
def test_get_policy_statements_with_secrets_manager_kms_key_id(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
self.kafka_event_source.SecretsManagerKmsKeyId = "SECRET_KEY"
policy_statements = self.kafka_event_source.get_policy_statements()
expected_policy_document = [
{
"PolicyDocument": {
"Statement": [
{"Action": ["secretsmanager:GetSecretValue"], "Effect": "Allow", "Resource": "SECRET_URI"},
{
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
],
"Effect": "Allow",
"Resource": "*",
},
{
"Action": ["kms:Decrypt"],
"Effect": "Allow",
"Resource": {
"Fn::Sub": "arn:${AWS::Partition}:kms:${AWS::Region}:${AWS::AccountId}:key/"
+ self.kafka_event_source.SecretsManagerKmsKeyId
},
},
],
"Version": "2012-10-17",
},
"PolicyName": "SelfManagedKafkaExecutionRolePolicy",
}
]
self.assertEqual(policy_statements, expected_policy_document)
def test_must_raise_for_missing_topics(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_empty_topics(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
self.kafka_event_source.Topics = []
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_multiple_topics(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Topics = ["Topics1", "Topics2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_missing_endpoints(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_empty_bootstrap_server(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.KafkaBootstrapServers = []
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_missing_vpc_subnet(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"},
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_missing_vpc_security_group(self):
self.kafka_event_source.SourceAccessConfigurations = [
{"Type": "SASL_SCRAM_256_AUTH", "URI": "SECRET_URI"},
{"Type": "VPC_SUBNET", "URI": "SECRET_URI"},
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_missing_source_access_configurations(self):
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_unknown_source_access_configurations_type(self):
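        # Each credential set below contains at least one invalid entry -- a
        # misspelled type (BASIC_AUT, VPC_SUB, ...) or a None type -- and must
        # raise InvalidEventException.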
test_credentials = [
[{"Type": "BASIC_AUT", "URI": "SECRET_URI"}],
[{"Type": "SASL_SCRAM_256_AUT", "URI": "SECRET_URI"}],
[{"Type": None, "URI": "SECRET_URI"}],
[{"Type": "VPC_SUB", "URI": "SECRET_URI"}, {"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"}],
[{"Type": "VPC_SUBNET", "URI": "SECRET_URI"}, {"Type": None, "URI": None}],
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
for config in test_credentials:
self.kafka_event_source.SourceAccessConfigurations = config
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
def test_must_raise_for_wrong_source_access_configurations_uri(self):
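        # Each credential set below contains a URI that is not a string (an int
        # or None), which must raise InvalidEventException.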
test_credentials = [
[{"Type": "BASIC_AUTH", "URI": 1}],
[{"Type": "SASL_SCRAM_256_AUTH", "URI": 1}],
[{"Type": "SASL_SCRAM_512_AUTH", "URI": 1}],
[{"Type": "VPC_SUBNET", "URI": None}, {"Type": "VPC_SECURITY_GROUP", "URI": "SECRET_URI"}],
[{"Type": "VPC_SUBNET", "URI": "SECRET_URI"}, {"Type": "VPC_SECURITY_GROUP", "URI": None}],
]
self.kafka_event_source.KafkaBootstrapServers = ["endpoint1", "endpoint2"]
self.kafka_event_source.Enabled = True
self.kafka_event_source.Topics = ["Topics"]
self.kafka_event_source.BatchSize = 1
for config in test_credentials:
self.kafka_event_source.SourceAccessConfigurations = config
with self.assertRaises(InvalidEventException):
self.kafka_event_source.get_policy_statements()
| 45.575758 | 115 | 0.578088 | 1,202 | 13,536 | 6.144759 | 0.090682 | 0.103574 | 0.161116 | 0.230165 | 0.904549 | 0.877471 | 0.861224 | 0.847143 | 0.847143 | 0.847143 | 0 | 0.013769 | 0.307846 | 13,536 | 296 | 116 | 45.72973 | 0.774576 | 0 | 0 | 0.688213 | 0 | 0 | 0.201093 | 0.04669 | 0 | 0 | 0 | 0 | 0.057034 | 1 | 0.060837 | false | 0 | 0.011407 | 0 | 0.076046 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
98e11c172bd8757ee9e63da44b94385356888b1a | 245 | py | Python | 4.unittest/4.1.calculator/service.py | tarathep/automation-test-course | 68ace45c2660b1d811eee0f1d38f2955a10b387c | [
"Apache-2.0"
] | null | null | null | 4.unittest/4.1.calculator/service.py | tarathep/automation-test-course | 68ace45c2660b1d811eee0f1d38f2955a10b387c | [
"Apache-2.0"
] | null | null | null | 4.unittest/4.1.calculator/service.py | tarathep/automation-test-course | 68ace45c2660b1d811eee0f1d38f2955a10b387c | [
"Apache-2.0"
] | 1 | 2020-12-13T03:16:22.000Z | 2020-12-13T03:16:22.000Z | def sum(num1, num2):
    return num1 + num2
def minus(num1, num2):
    return num1 - num2
def multiply(num1, num2):
    return num1 * num2
def divide(num1, num2):
    return num1 / num2
if __name__ == "__main__":
    print(sum(1, 2))
pass | 16.333333 | 26 | 0.628571 | 36 | 245 | 4.055556 | 0.416667 | 0.438356 | 0.383562 | 0.493151 | 0.664384 | 0.513699 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 0.24898 | 245 | 15 | 27 | 16.333333 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0.03252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0.090909 | 0 | 0.363636 | 0.727273 | 0.090909 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 7 |
c79624e03fb01b8dab01d4a8308659efa0ba4786 | 60,524 | py | Python | api/tests/opentrons/protocols/advanced_control/test_transfers.py | mrakitin/opentrons | d9c7ed23d13cdb62bd1bc397dc2871d4bd5b77e9 | [
"Apache-2.0"
] | null | null | null | api/tests/opentrons/protocols/advanced_control/test_transfers.py | mrakitin/opentrons | d9c7ed23d13cdb62bd1bc397dc2871d4bd5b77e9 | [
"Apache-2.0"
] | null | null | null | api/tests/opentrons/protocols/advanced_control/test_transfers.py | mrakitin/opentrons | d9c7ed23d13cdb62bd1bc397dc2871d4bd5b77e9 | [
"Apache-2.0"
] | null | null | null | """ Test the Transfer class and its functions """
import pytest
import opentrons.protocol_api as papi
from opentrons.protocols.context.protocol_api.protocol_context import \
ProtocolContextImplementation
from opentrons.types import Mount, TransferTipPolicy
from opentrons.protocols.advanced_control import transfers as tx
from opentrons.protocols.api_support.types import APIVersion
@pytest.fixture
def _instr_labware(ctx):
lw1 = ctx.load_labware('biorad_96_wellplate_200ul_pcr', 1)
lw2 = ctx.load_labware('corning_96_wellplate_360ul_flat', 2)
tiprack = ctx.load_labware('opentrons_96_tiprack_300ul', 3)
tiprack2 = ctx.load_labware('opentrons_96_tiprack_300ul', 4)
instr = ctx.load_instrument('p300_single', Mount.RIGHT,
tip_racks=[tiprack])
instr_multi = ctx.load_instrument(
'p300_multi', Mount.LEFT, tip_racks=[tiprack2])
return {'ctx': ctx, 'instr': instr, 'lw1': lw1, 'lw2': lw2,
'tiprack': tiprack, 'instr_multi': instr_multi}
# +++++++ Test Helper Functions ++++++++++
def test_check_if_zero():
tclass = tx.TransferPlan
assert tclass._check_volume_not_zero(APIVersion(2, 6), 0)
assert tclass._check_volume_not_zero(APIVersion(2, 6), 15)
assert not tclass._check_volume_not_zero(APIVersion(2, 8), 0)
assert tclass._check_volume_not_zero(APIVersion(2, 8), 15)
# +++++++ Test transfer types ++++++++++++
def test_default_transfers(_instr_labware):
# Transfer 100 uL from row1 of labware1 to row1 of labware2: first with
# new_tip = ONCE, then with new_tip = NEVER
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
# ========== Transfer ===========
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0], lw2.columns()[0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][3], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][3], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][4], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][4], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][5], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][5], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][6], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][6], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][7], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][7], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
# ========== Distribute ===========
dist_plan = tx.TransferPlan(
50, lw1.columns()[0][0], lw2.columns()[0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='distribute')
dist_plan_list = []
for step in dist_plan:
dist_plan_list.append(step)
exp2 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw1.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][3], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][4], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][5], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][6], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [50, lw2.columns()[0][7], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert dist_plan_list == exp2
# ========== Consolidate ===========
consd_plan = tx.TransferPlan(
50, lw1.columns()[0], lw2.columns()[0][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='consolidate')
consd_plan_list = []
for step in consd_plan:
consd_plan_list.append(step)
exp3 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][3], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][4], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][5], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw2.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][6], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [50, lw1.columns()[0][7], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert consd_plan_list == exp3
def test_uneven_transfers(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
new_tip=TransferTipPolicy.NEVER))
# ========== One-to-Many ==========
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0][0], lw2.columns()[1][:4],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer', options=options)
one_to_many_plan_list = []
for step in xfer_plan:
one_to_many_plan_list.append(step)
exp1 = [{'method': 'aspirate', 'args': [100, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[1][0], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[1][1], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[1][2], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[1][3], 1.0],
'kwargs': {}}]
assert one_to_many_plan_list == exp1
# ========== Few-to-Many ==========
xfer_plan = tx.TransferPlan(
[100, 90, 80, 70], lw1.columns()[0][:2], lw2.columns()[1][:4],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer', options=options)
few_to_many_plan_list = []
for step in xfer_plan:
few_to_many_plan_list.append(step)
exp2 = [{'method': 'aspirate', 'args': [100, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[1][0], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [90, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [90, lw2.columns()[1][1], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [80, lw1.columns()[0][1], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [80, lw2.columns()[1][2], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [70, lw1.columns()[0][1], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [70, lw2.columns()[1][3], 1.0],
'kwargs': {}}]
assert few_to_many_plan_list == exp2
# ========== Many-to-One ==========
xfer_plan = tx.TransferPlan(
[100, 90, 80, 70], lw1.columns()[0][:4], lw2.columns()[1][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer', options=options)
many_to_one_plan_list = []
for step in xfer_plan:
many_to_one_plan_list.append(step)
exp3 = [{'method': 'aspirate', 'args': [100, lw1.columns()[0][0], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[1][0], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [90, lw1.columns()[0][1], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [90, lw2.columns()[1][0], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [80, lw1.columns()[0][2], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [80, lw2.columns()[1][0], 1.0],
'kwargs': {}},
{'method': 'aspirate', 'args': [70, lw1.columns()[0][3], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [70, lw2.columns()[1][0], 1.0],
'kwargs': {}}]
assert many_to_one_plan_list == exp3
def test_location_wells(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
aspirate_loc = lw1.wells()[0].top()
# Test single-channel transfer with locations
list_of_locs = [
well.bottom(5) for col in lw2.columns()[0:11] for well in col]
xfer_plan = tx.TransferPlan(
30,
aspirate_loc,
list_of_locs,
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer')
idx_dest = 0
for step in xfer_plan:
if step['method'] == 'aspirate':
assert step['args'][1].point == aspirate_loc.point
elif step['method'] == 'dispense':
assert step['args'][1].point\
== list_of_locs[idx_dest].point
idx_dest += 1
multi_locs = [
col[0].bottom(5) for col in lw2.columns()[0:11]]
# Test multi-channel transfer with locations
xfer_plan = tx.TransferPlan(
30,
aspirate_loc,
multi_locs,
_instr_labware['instr_multi'],
max_volume=_instr_labware['instr_multi'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer')
idx_dest = 0
for step in xfer_plan:
if step['method'] == 'aspirate':
assert step['args'][1].point == aspirate_loc.point
elif step['method'] == 'dispense':
assert step['args'][1].point\
== multi_locs[idx_dest].point
idx_dest += 1
def test_no_new_tip(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
new_tip=TransferTipPolicy.NEVER))
# ========== Transfer ==========
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0], lw2.columns()[0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
for step in xfer_plan:
assert step['method'] != 'pick_up_tip'
assert step['method'] != 'drop_tip'
# ========== Distribute ===========
dist_plan = tx.TransferPlan(
30, lw1.columns()[0][0], lw2.columns()[0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='distribute',
options=options)
for step in dist_plan:
assert step['method'] != 'pick_up_tip'
assert step['method'] != 'drop_tip'
# ========== Consolidate ===========
consd_plan = tx.TransferPlan(
40, lw1.columns()[0], lw2.rows()[0][1],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
for step in consd_plan:
assert step['method'] != 'pick_up_tip'
assert step['method'] != 'drop_tip'
def test_new_tip_always(_instr_labware, monkeypatch):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
tiprack = _instr_labware['tiprack']
i_ctx = _instr_labware['instr']
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
new_tip=TransferTipPolicy.ALWAYS,
drop_tip_strategy=tx.DropTipStrategy.TRASH))
xfer_plan = tx.TransferPlan(
100,
lw1.columns()[0][1:5], lw2.columns()[0][1:5],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][1], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[0][1], 1.0],
'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}},
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][2], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[0][2], 1.0],
'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}},
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][3], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[0][3], 1.0],
'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}},
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1.columns()[0][4], 1.0],
'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2.columns()[0][4], 1.0],
'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
for cmd in xfer_plan_list:
getattr(i_ctx, cmd['method'])(*cmd['args'], **cmd['kwargs'])
assert tiprack.next_tip() == tiprack.columns()[0][4]
def test_transfer_w_touchtip_blowout(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
# ========== Transfer ==========
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
touch_tip_strategy=tx.TouchTipStrategy.ALWAYS,
blow_out_strategy=tx.BlowOutStrategy.TRASH,
new_tip=TransferTipPolicy.NEVER))
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0][:3], lw2.rows()[0][:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'aspirate',
'args': [100, lw1.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][0], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'blow_out',
'args': [_instr_labware['instr'].trash_container.wells()[0]],
'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][1], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'blow_out',
'args': [_instr_labware['instr'].trash_container.wells()[0]],
'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][2], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'blow_out',
'args': [_instr_labware['instr'].trash_container.wells()[0]],
'kwargs': {}}]
assert xfer_plan_list == exp1
# ========== Distribute ==========
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
disposal_volume=_instr_labware['instr'].min_volume,
touch_tip_strategy=tx.TouchTipStrategy.ALWAYS,
new_tip=TransferTipPolicy.NEVER))
dist_plan = tx.TransferPlan(
30, lw1.columns()[0][0], lw2.rows()[0][:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='distribute',
options=options)
dist_plan_list = []
for step in dist_plan:
dist_plan_list.append(step)
exp2 = [{'method': 'aspirate',
'args': [120, lw1.columns()[0][0], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [30, lw2.rows()[0][0], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [30, lw2.rows()[0][1], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [30, lw2.rows()[0][2], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'blow_out',
'args': [_instr_labware['instr'].trash_container.wells()[0]],
'kwargs': {}}]
assert dist_plan_list == exp2
def test_transfer_w_airgap_blowout(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
air_gap=10, blow_out_strategy=tx.BlowOutStrategy.DEST,
new_tip=TransferTipPolicy.NEVER))
# ========== Transfer ==========
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0][1:5], lw2.rows()[0][1:5],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'aspirate',
'args': [100, lw1.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'air_gap',
'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [110, lw2.rows()[0][1], 1.0], 'kwargs': {}},
{'method': 'blow_out',
'args': [lw2.rows()[0][1]], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'air_gap',
'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [110, lw2.rows()[0][2], 1.0], 'kwargs': {}},
{'method': 'blow_out',
'args': [lw2.rows()[0][2]], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][3], 1.0], 'kwargs': {}},
{'method': 'air_gap',
'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [110, lw2.rows()[0][3], 1.0], 'kwargs': {}},
{'method': 'blow_out',
'args': [lw2.rows()[0][3]], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][4], 1.0], 'kwargs': {}},
{'method': 'air_gap',
'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [110, lw2.rows()[0][4], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2.rows()[0][4]], 'kwargs': {}}]
assert xfer_plan_list == exp1
# ========== Distribute ==========
dist_plan = tx.TransferPlan(
60, lw1.columns()[1][0], lw2.rows()[1][1:6],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='distribute',
options=options)
dist_plan_list = []
for step in dist_plan:
dist_plan_list.append(step)
exp2 = [{'method': 'aspirate',
'args': [240, lw1.columns()[1][0], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [70, lw2.rows()[1][1], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [70, lw2.rows()[1][2], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [70, lw2.rows()[1][3], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [70, lw2.rows()[1][4], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2.rows()[1][4]], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][0], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [70, lw2.rows()[1][5], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2.rows()[1][5]], 'kwargs': {}}]
assert dist_plan_list == exp2
# ========== Consolidate ==========
consd_plan = tx.TransferPlan(
60, lw1.columns()[1], lw2.rows()[1][1],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='consolidate',
options=options)
consd_plan_list = []
for step in consd_plan:
consd_plan_list.append(step)
exp3 = [{'method': 'aspirate',
'args': [60, lw1.columns()[1][0], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][1], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][2], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][3], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [280, lw2.rows()[1][1], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2.rows()[1][1]], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][4], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][5], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][6], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][7], 1.0], 'kwargs': {}},
{'method': 'air_gap', 'args': [10], 'kwargs': {}},
{'method': 'dispense',
'args': [280, lw2.rows()[1][1], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2.rows()[1][1]], 'kwargs': {}}]
assert consd_plan_list == exp3
def test_touchtip_mix(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
new_tip=TransferTipPolicy.NEVER,
touch_tip_strategy=tx.TouchTipStrategy.ALWAYS,
mix_strategy=tx.MixStrategy.AFTER))
# ========== Transfer ==========
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0][1:5], lw2.rows()[0][1:5],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'aspirate',
'args': [100, lw1.columns()[0][1], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][1], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[0][1]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][2], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][2], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[0][2]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][3], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][3], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[0][3]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][4], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][4], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[0][4]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
# ========== Distribute ==========
dist_plan = tx.TransferPlan(
60, lw1.columns()[1][0], lw2.rows()[1][1:6],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='distribute',
options=options)
dist_plan_list = []
for step in dist_plan:
dist_plan_list.append(step)
exp2 = [{'method': 'aspirate',
'args': [300, lw1.columns()[1][0], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [60, lw2.rows()[1][1], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [60, lw2.rows()[1][2], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [60, lw2.rows()[1][3], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [60, lw2.rows()[1][4], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [60, lw2.rows()[1][5], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[1][5]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}}]
assert dist_plan_list == exp2
# ========== Consolidate ==========
consd_plan = tx.TransferPlan(
60, lw1.columns()[1], lw2.rows()[1][1],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='consolidate',
options=options)
consd_plan_list = []
for step in consd_plan:
consd_plan_list.append(step)
exp3 = [{'method': 'aspirate',
'args': [60, lw1.columns()[1][0], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][1], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][2], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][3], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][4], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw2.rows()[1][1], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[1][1]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][5], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][6], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [60, lw1.columns()[1][7], 1.0], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}},
{'method': 'dispense',
'args': [180, lw2.rows()[1][1], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'location': lw2.rows()[1][1]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {}}]
assert consd_plan_list == exp3
def test_all_options(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
new_tip=TransferTipPolicy.ONCE,
drop_tip_strategy=tx.DropTipStrategy.RETURN,
touch_tip_strategy=tx.TouchTipStrategy.ALWAYS,
mix_strategy=tx.MixStrategy.AFTER),
pick_up_tip=options.pick_up_tip._replace(
presses=4,
increment=2),
touch_tip=options.touch_tip._replace(
speed=1.6),
mix=options.mix._replace(
mix_after=options.mix.mix_after._replace(
repetitions=2)),
blow_out=options.blow_out._replace(
location=lw2.columns()[10][0]),
aspirate=options.aspirate._replace(
rate=1.5))
xfer_plan = tx.TransferPlan(
100, lw1.columns()[0][1:4], lw2.rows()[0][1:4],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer',
options=options)
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip',
'args': [], 'kwargs': {'presses': 4, 'increment': 2}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][1], 1.5], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {'speed': 1.6}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][1], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'repetitions': 2, 'location': lw2.rows()[0][1]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {'speed': 1.6}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][2], 1.5], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {'speed': 1.6}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][2], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'repetitions': 2, 'location': lw2.rows()[0][2]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {'speed': 1.6}},
{'method': 'aspirate',
'args': [100, lw1.columns()[0][3], 1.5], 'kwargs': {}},
{'method': 'touch_tip', 'args': [], 'kwargs': {'speed': 1.6}},
{'method': 'dispense',
'args': [100, lw2.rows()[0][3], 1.0], 'kwargs': {}},
{'method': 'mix', 'args': [], 'kwargs': {
'repetitions': 2, 'location': lw2.rows()[0][3]}},
{'method': 'touch_tip', 'args': [], 'kwargs': {'speed': 1.6}},
{'method': 'return_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
def test_oversized_distribute(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
xfer_plan = tx.TransferPlan(
700, lw1.columns()[0][0], lw2.rows()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='distribute')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
def test_oversized_consolidate(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
xfer_plan = tx.TransferPlan(
700, lw2.rows()[0][1:3],
lw1.wells_by_index()['A1'],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='consolidate')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['A1'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
def test_oversized_transfer(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
xfer_plan = tx.TransferPlan(
700, lw2.rows()[0][1:3], lw1.columns()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=_instr_labware['ctx'].api_version,
mode='transfer')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw1.wells_by_index()['B1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['B1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['B1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [300, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [300, lw1.wells_by_index()['C1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['C1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw2.wells_by_index()['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw1.wells_by_index()['C1'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
def test_multichannel_transfer_old_version(loop, hardware):
# for API version below 2.2, multichannel pipette can only
# reach row A of 384-well plates
ctx = papi.ProtocolContext(
implementation=ProtocolContextImplementation(hardware=hardware),
loop=loop,
api_version=APIVersion(2, 1)
)
lw1 = ctx.load_labware('biorad_96_wellplate_200ul_pcr', 1)
lw2 = ctx.load_labware('corning_384_wellplate_112ul_flat', 2)
tiprack = ctx.load_labware('opentrons_96_tiprack_300ul', 3)
instr_multi = ctx.load_instrument(
'p300_multi', Mount.LEFT, tip_racks=[tiprack])
xfer_plan = tx.TransferPlan(
100, lw1.rows()[0][0], [lw2.rows()[0][1], lw2.rows()[1][1]],
instr_multi,
max_volume=instr_multi.hw_pipette['working_volume'],
api_version=ctx.api_version,
mode='distribute')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.wells_by_name()['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.wells_by_index()['A2'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
# target without row limit
with pytest.raises(IndexError):
xfer_plan = tx.TransferPlan(
100, lw1.rows()[0][1], lw2.rows()[1][1],
instr_multi,
max_volume=instr_multi.hw_pipette['working_volume'],
api_version=ctx.api_version,
# todo(mm, 2021-03-17): This test intends to test mode='transfer',
# but it's always accidentally tested mode='consolidate' because of
# a quirk in how TransferPlan used to guess the mode when not
# explicitly specified. If this is changed to mode='transfer' now,
# it raises ZeroDivisionError instead of IndexError. Bug #7516.
mode='consolidate')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
def test_multichannel_transfer_locs(loop, hardware):
api_version = APIVersion(2, 2)
ctx = papi.ProtocolContext(
implementation=ProtocolContextImplementation(
api_version=api_version,
hardware=hardware
),
loop=loop,
api_version=api_version
)
lw1 = ctx.load_labware('biorad_96_wellplate_200ul_pcr', 1)
lw2 = ctx.load_labware('corning_384_wellplate_112ul_flat', 2)
tiprack = ctx.load_labware('opentrons_96_tiprack_300ul', 3)
instr_multi = ctx.load_instrument(
'p300_multi', Mount.LEFT, tip_racks=[tiprack])
# targets within row limit
xfer_plan = tx.TransferPlan(
100, lw1.rows()[0][1], lw2.rows()[1][1],
instr_multi,
max_volume=instr_multi.hw_pipette['working_volume'],
api_version=ctx.api_version,
mode='transfer')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
exp1 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1.wells_by_name()['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2.wells_by_index()['B2'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
assert xfer_plan_list == exp1
# targets outside of row limit will be skipped
xfer_plan = tx.TransferPlan(
100, lw1.rows()[0][1], [lw2.rows()[1][1], lw2.rows()[2][1]],
instr_multi,
max_volume=instr_multi.hw_pipette['working_volume'],
api_version=ctx.api_version,
mode='transfer')
xfer_plan_list = []
for step in xfer_plan:
xfer_plan_list.append(step)
assert xfer_plan_list == exp1
# no valid source or targets, raise error
with pytest.raises(RuntimeError):
assert tx.TransferPlan(
100, lw1.rows()[0][1], lw2.rows()[2][1],
instr_multi,
max_volume=instr_multi.hw_pipette['working_volume'],
api_version=ctx.api_version,
mode='transfer')
def test_zero_volume_results_in_no_transfer(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
API_VERSION = _instr_labware['ctx'].api_version
exp_no_vol = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
# ========== Transfer ===========
xfer_plan = tx.TransferPlan(
0, lw1.columns()[0], lw2.columns()[0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='transfer')
for step, expected in zip(xfer_plan, exp_no_vol):
assert step == expected
xfer_plan = tx.TransferPlan(
[100, 0, 200], lw1.wells()[0:3], lw2.wells()[0:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='transfer')
exp2 = [{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate',
'args': [100, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [100, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate',
'args': [200, lw1['C1'], 1.0], 'kwargs': {}},
{'method': 'dispense',
'args': [200, lw2['C1'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(xfer_plan, exp2):
assert step == expected
# ========== Distribute ===========
dist_plan = tx.TransferPlan(
0, lw1.columns()[0][0], lw2.rows()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='distribute')
for step, expected in zip(dist_plan, exp_no_vol):
assert step == expected
dist_plan = tx.TransferPlan(
[100, 0], lw1.columns()[0][0], lw2.rows()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='distribute')
exp3 = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2['A2'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(dist_plan, exp3):
assert step == expected
# ========== Consolidate ===========
consd_plan = tx.TransferPlan(
0, lw1.columns()[0], lw2.columns()[0][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='consolidate')
for step, expected in zip(consd_plan, exp_no_vol):
assert step == expected
cons_list = [100, 200, 300, 200, 0, 0, 100, 200]
consd_plan = tx.TransferPlan(
cons_list, lw1.columns()[0], lw2.columns()[0][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='consolidate')
exp4 = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [200, lw1['B1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [300, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [300, lw1['C1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [300, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [200, lw1['D1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1['G1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [300, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [200, lw1['H1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [200, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(consd_plan, exp4):
assert step == expected
def test_zero_volume_causes_transfer_of_disposal_vol(_instr_labware):
# This test checks the old behavior of distribute and consolidate
# with zero volumes in which case the volume aspirated/dispensed
# was the min volume + disposal volume if a zero volume was
# specified.
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
API_VERSION = APIVersion(2, 6)
blow_out = _instr_labware['instr'].trash_container.wells()[0]
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
disposal_volume=_instr_labware['instr'].min_volume))
# ========== Distribute ===========
dist_plan = tx.TransferPlan(
0, lw1.columns()[0][0], lw2.rows()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='distribute',
options=options)
exp_no_vol = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [0, lw2['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [0, lw2['A3'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [blow_out], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(dist_plan, exp_no_vol):
assert step == expected
dist_plan = tx.TransferPlan(
[100, 0], lw1.columns()[0][0], lw2.rows()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='distribute',
options=options)
exp = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [130, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [100, lw2['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [0, lw2['A3'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [blow_out], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(dist_plan, exp):
assert step == expected
# ========== Consolidate ===========
consd_plan = tx.TransferPlan(
0, lw1.columns()[0], lw2.columns()[0][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='consolidate')
exp_no_vol = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['B1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['C1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['D1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['E1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['F1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['G1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['H1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [0, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(consd_plan, exp_no_vol):
assert step == expected
cons_list = [100, 200, 300, 200, 0, 0, 100, 200]
consd_plan = tx.TransferPlan(
cons_list, lw1.columns()[0], lw2.columns()[0][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='consolidate')
exp2 = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [200, lw1['B1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [300, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [300, lw1['C1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [300, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [200, lw1['D1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['E1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [0, lw1['F1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [100, lw1['G1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [300, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [200, lw1['H1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [200, lw2['A1'], 1.0], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(consd_plan, exp2):
assert step == expected
def test_blowout_to_source(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
API_VERSION = APIVersion(2, 6)
# ========== Transfer ===========
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
disposal_volume=_instr_labware['instr'].min_volume,
blow_out_strategy=tx.BlowOutStrategy.SOURCE))
xfer_plan = tx.TransferPlan(
30, [lw1['A1'], lw1['A2']], [lw2['B1'], lw2['B2']],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='transfer',
options=options)
exp = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [30, lw2['B1'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw1['A1']], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw1['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [30, lw2['B2'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw1['A2']], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(xfer_plan, exp):
assert step == expected
# ========== Distribute ===========
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
disposal_volume=_instr_labware['instr'].min_volume,
blow_out_strategy=tx.BlowOutStrategy.SOURCE))
dist_plan = tx.TransferPlan(
30, lw1.columns()[0][0], lw2.rows()[0][1:3],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='distribute',
options=options)
exp = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [90, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [30, lw2['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [30, lw2['A3'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw1['A1']], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(dist_plan, exp):
assert step == expected
def test_blowout_to_dest(_instr_labware):
_instr_labware['ctx'].home()
lw1 = _instr_labware['lw1']
lw2 = _instr_labware['lw2']
API_VERSION = APIVersion(2, 6)
# ========== Transfer ===========
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
disposal_volume=_instr_labware['instr'].min_volume,
blow_out_strategy=tx.BlowOutStrategy.DEST))
xfer_plan = tx.TransferPlan(
30, [lw1['A1'], lw1['A2']], [lw2['B1'], lw2['B2']],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='transfer',
options=options)
exp = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [30, lw2['B1'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2['B1']], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw1['A2'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [30, lw2['B2'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw2['B2']], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(xfer_plan, exp):
assert step == expected
# ========== Consolidate ===========
options = tx.TransferOptions()
options = options._replace(
transfer=options.transfer._replace(
blow_out_strategy=tx.BlowOutStrategy.DEST))
consd_plan = tx.TransferPlan(
30, lw2.rows()[0][1:3], lw1.columns()[0][0],
_instr_labware['instr'],
max_volume=_instr_labware['instr'].hw_pipette['working_volume'],
api_version=API_VERSION,
mode='consolidate',
options=options)
exp = [
{'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw2['A2'], 1.0], 'kwargs': {}},
{'method': 'aspirate', 'args': [30, lw2['A3'], 1.0], 'kwargs': {}},
{'method': 'dispense', 'args': [60, lw1['A1'], 1.0], 'kwargs': {}},
{'method': 'blow_out', 'args': [lw1['A1']], 'kwargs': {}},
{'method': 'drop_tip', 'args': [], 'kwargs': {}}]
for step, expected in zip(consd_plan, exp):
assert step == expected
| 44.113703 | 79 | 0.511615 | 6,816 | 60,524 | 4.343457 | 0.041227 | 0.128492 | 0.061341 | 0.105928 | 0.911873 | 0.892451 | 0.88154 | 0.868637 | 0.849417 | 0.835501 | 0 | 0.054947 | 0.259087 | 60,524 | 1,371 | 80 | 44.145879 | 0.605245 | 0.031822 | 0 | 0.783429 | 0 | 0 | 0.195648 | 0.004885 | 0 | 0 | 0 | 0.000729 | 0.042658 | 1 | 0.016407 | false | 0 | 0.004922 | 0 | 0.022149 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
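Every expectation in the transfer tests above is a list of {'method', 'args', 'kwargs'} step dictionaries, and test_new_tip_always shows how such a plan is replayed against the instrument via getattr dispatch. Below is a stand-alone sketch of that dispatch loop with a dummy instrument that only records calls, so it runs without Opentrons hardware; the class and function names are illustrative, not part of the Opentrons API.

class RecordingInstrument:
    """Minimal stand-in that records every call instead of moving a pipette."""
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        def _record(*args, **kwargs):
            self.calls.append((name, args, kwargs))
        return _record


def execute_plan(instrument, plan):
    # Same dispatch the tests use: getattr(instr, step['method'])(*args, **kwargs)
    for step in plan:
        getattr(instrument, step['method'])(*step['args'], **step['kwargs'])


plan = [
    {'method': 'pick_up_tip', 'args': [], 'kwargs': {}},
    {'method': 'aspirate', 'args': [100, 'A1', 1.0], 'kwargs': {}},
    {'method': 'dispense', 'args': [100, 'B1', 1.0], 'kwargs': {}},
    {'method': 'drop_tip', 'args': [], 'kwargs': {}},
]
instr = RecordingInstrument()
execute_plan(instr, plan)
assert [c[0] for c in instr.calls] == ['pick_up_tip', 'aspirate', 'dispense', 'drop_tip']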
c79fbc0387fb7794fc25b3be4e89dd1c403ff263 | 46,460 | py | Python | python/test/feature_extractor_test.py | xinglinsky/vmaf | 55e60bd72eefef6d807bc8650f942349a19139f9 | [
"BSD-2-Clause-Patent"
] | null | null | null | python/test/feature_extractor_test.py | xinglinsky/vmaf | 55e60bd72eefef6d807bc8650f942349a19139f9 | [
"BSD-2-Clause-Patent"
] | null | null | null | python/test/feature_extractor_test.py | xinglinsky/vmaf | 55e60bd72eefef6d807bc8650f942349a19139f9 | [
"BSD-2-Clause-Patent"
] | null | null | null | from __future__ import absolute_import
import os
import unittest
import re
from vmaf.config import VmafConfig
from vmaf.core.feature_extractor import VmafFeatureExtractor, \
MomentFeatureExtractor, \
PsnrFeatureExtractor, SsimFeatureExtractor, MsSsimFeatureExtractor, \
VifFrameDifferenceFeatureExtractor, \
AnsnrFeatureExtractor, PypsnrFeatureExtractor, VmafIntegerFeatureExtractor, \
PypsnrMaxdb100FeatureExtractor
from vmaf.core.asset import Asset
from vmaf.core.result_store import FileSystemResultStore
from test.testutil import set_default_576_324_videos_for_testing, set_default_flat_1920_1080_videos_for_testing, \
set_default_576_324_10bit_videos_for_testing, set_default_576_324_12bit_videos_for_testing, \
set_default_576_324_16bit_videos_for_testing, set_default_576_324_10bit_videos_for_testing_b
__copyright__ = "Copyright 2016-2020, Netflix, Inc."
__license__ = "BSD+Patent"
class FeatureExtractorTest(unittest.TestCase):
def setUp(self) -> None:
self.verificationErrors = []
self.maxDiff = None
def tearDown(self):
if hasattr(self, 'fextractor'):
self.fextractor.remove_results()
pass
self.assertEqual([], self.verificationErrors)
def test_executor_id(self):
asset = Asset(dataset="test", content_id=0, asset_id=1,
ref_path="dir/refvideo.yuv", dis_path="dir/disvideo.yuv",
asset_dict={'width': 720, 'height': 480})
fextractor = VmafFeatureExtractor([asset], None)
self.assertEqual(fextractor.executor_id, "VMAF_feature_V0.2.7")
def test_get_log_file_path(self):
import hashlib
asset = Asset(dataset="test", content_id=0, asset_id=1,
ref_path="dir/refvideo.yuv", dis_path="dir/disvideo.yuv",
asset_dict={'width': 720, 'height': 480},
workdir_root="my_workdir_root")
fextractor = VmafFeatureExtractor([asset], None)
log_file_path = fextractor._get_log_file_path(asset)
h = hashlib.sha1("test_0_1_refvideo_720x480_vs_disvideo_720x480_q_720x480".encode("utf-8")).hexdigest()
self.assertTrue(re.match(r"^my_workdir_root/[a-zA-Z0-9-]+/VMAF_feature_V0.2.7_{}$".format(h), log_file_path))
def test_run_vmaf_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = VmafFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['VMAF_feature_vif_score'], 0.4460930625, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion_score'], 4.04982535417, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion2_score'], 3.8953518541666665, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion0_score'], 0.0, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_score'], 0.9345148541666667, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm2_score'], 0.9345148541666667, places=4) # at version 0.2.4b (ioannis adm fix), adm and adm2 are now identical
self.assertAlmostEqual(results[0]['VMAF_feature_ansnr_score'], 23.5095715208, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_num_score'], 712650.023478, places=0)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_den_score'], 1597314.95249, places=0)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_num_score'], 371.80645372916666, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_den_score'], 397.83378972916671, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_anpsnr_score'], 34.164776875, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale0_score'], 0.363420489439, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale1_score'], 0.766647542135, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale2_score'], 0.862854666902, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale3_score'], 0.915971778036, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_scale0_score'], 0.90791933424090698, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_scale1_score'], 0.8938705209242691, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_scale2_score'], 0.9300123587874962, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_scale3_score'], 0.9649663148179196, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif2_score'], 0.72722361912801026, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm3_score'], 0.9241841443734412, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_motion_score'], 4.04982535417, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_motion2_score'], 3.8953518541666665, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_ansnr_score'], 31.2714392708, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_num_score'], 1597314.86733, places=0)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_den_score'], 1597314.95249, places=0)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_num_score'], 397.83378972916671, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_den_score'], 397.83378972916671, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_anpsnr_score'], 41.9266444375, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale0_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale1_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale3_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_scale0_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_scale1_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_scale2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_scale3_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm3_score'], 1.0, places=4)
def test_run_vmaf_integer_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = VmafIntegerFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_score'], 0.44642331250000006, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_motion_score'], 4.04982535417, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_motion2_score'], 3.8953518541666665, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_score'], 0.9345148541666667, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm2_score'], 0.9345148541666667, places=4) # at version 0.2.4b (ioannis adm fix), adm and adm2 are now identical
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_ansnr_score'], 23.5095715208, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_num_score'], 713111.410502125, places=0)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_den_score'], 1597165.5464884583, places=0)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_num_score'], 371.8243668541666, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_den_score'], 397.8567857291667, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_anpsnr_score'], 34.164776875, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_scale0_score'], 0.3636620710647402, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_scale1_score'], 0.7674952820232231, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_scale2_score'], 0.8631077727416296, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif_scale3_score'], 0.9157200890843669, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale0_score'], 0.90791933424090698, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale1_score'], 0.8938705209242691, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale2_score'], 0.9300123587874962, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale3_score'], 0.9649663148179196, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_vif2_score'], 0.72749630372849, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm3_score'], 0.9241841443734412, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_motion_score'], 4.04982535417, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_motion2_score'], 3.8953518541666665, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm2_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_ansnr_score'], 31.2714392708, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_num_score'], 1597165.34910075, places=0)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_den_score'], 1597165.5464884583, places=0)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_num_score'], 397.8576817708333, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_den_score'], 397.8567857291667, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_anpsnr_score'], 41.9266444375, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_scale0_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_scale1_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_scale2_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif_scale3_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale0_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale1_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale2_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale3_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_vif2_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm3_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
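# --- Editor's note: hedged refactoring sketch, not part of the original suite. ---
# The repeated one-line ``try/except AssertionError`` pattern above collects soft
# failures in self.verificationErrors instead of stopping at the first mismatch;
# a small helper with the same behavior (the name is hypothetical) could replace it:
#
#     def _soft_assert_almost_equal(self, actual, expected, places=4):
#         try:
#             self.assertAlmostEqual(actual, expected, places=places)
#         except AssertionError as e:
#             self.verificationErrors.append(str(e))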
@unittest.skip("vifdiff alternative needed, vmaf_feature executable deprecated")
def test_run_vif_frame_difference_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = VifFrameDifferenceFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['VifDiff_feature_vifdiff_score'], 0.26745858333333333, places=4)
self.assertAlmostEqual(results[0]['VifDiff_feature_vifdiff_num_score'], 305412.7661844375, places=0)
self.assertAlmostEqual(results[0]['VifDiff_feature_vifdiff_den_score'], 1113927.6002349583, places=0)
self.assertAlmostEqual(results[1]['VifDiff_feature_vifdiff_score'], 0.9791655833333334, places=4)
self.assertAlmostEqual(results[1]['VifDiff_feature_vifdiff_num_score'], 1113926.2941030415, places=0)
self.assertAlmostEqual(results[1]['VifDiff_feature_vifdiff_den_score'], 1113927.6002349583, places=0)
def test_run_moment_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = MomentFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Moment_feature_ref1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_ref2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_refvar_score'], 1121.519917231203, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis1st_score'], 61.332006624999984, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis2nd_score'], 4798.659574041666, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_disvar_score'], 1036.837184348847, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_refvar_score'], 1121.519917231203, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_disvar_score'], 1121.519917231203, places=4)
def test_run_moment_fextractor_10bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_10bit_videos_for_testing()
self.fextractor = MomentFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Moment_feature_ref1st_score'], 59.788567297525134 * 4, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_ref2nd_score'], 4696.668388042269 * 16, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_refvar_score'], 1121.519917231203 * 16, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis1st_score'], 61.332006624999984 * 4, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis2nd_score'], 4798.659574041666 * 16, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_disvar_score'], 1036.837184348847 * 16, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref1st_score'], 59.788567297525134 * 4, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref2nd_score'], 4696.668388042269 * 16, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_refvar_score'], 1121.519917231203 * 16, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis1st_score'], 59.788567297525134 * 4, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis2nd_score'], 4696.668388042269 * 16, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_disvar_score'], 1121.519917231203 * 16, places=4)
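# --- Editor's note (hedged): the scale factors in the 10-bit expectations above
# follow from plain bit-depth scaling. Going from the 8-bit source to its 10-bit
# counterpart multiplies sample values by 2 ** (10 - 8) = 4, so the first moment
# (mean) scales by 4 while the second moment and the variance scale by 4 ** 2 = 16.
# The 16-bit test further below applies the same reasoning to the 12-bit
# expectations, using factors of 2 ** 4 = 16 and 16 ** 2 = 256.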
def test_run_moment_fextractor_12bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_12bit_videos_for_testing()
self.fextractor = MomentFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Moment_feature_ref1st_score'], 979.6711819844536, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_ref2nd_score'], 1238135.8363054413, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_refvar_score'], 278292.25886465114, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis1st_score'], 996.2818072702333, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis2nd_score'], 1255533.4389574758, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_disvar_score'], 262952.8893540034, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref1st_score'], 979.6711819844536, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref2nd_score'], 1238135.8363054413, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_refvar_score'], 278292.25886465114, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis1st_score'], 979.6711819844536, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis2nd_score'], 1238135.8363054413, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_disvar_score'], 278292.25886465114, places=4)
def test_run_moment_fextractor_16bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_16bit_videos_for_testing()
self.fextractor = MomentFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Moment_feature_ref1st_score'], 979.6711819844536 * 16.0, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_ref2nd_score'], 1238135.8363054413 * 256.0, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_refvar_score'], 278292.25886465114 * 256.0, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis1st_score'], 996.2818072702333 * 16.0, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis2nd_score'], 1255533.4389574758 * 256.0, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_disvar_score'], 262952.8893540034 * 256.0, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref1st_score'], 979.6711819844536 * 16.0, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref2nd_score'], 1238135.8363054413 * 256.0, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_refvar_score'], 278292.25886465114 * 256.0, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis1st_score'], 979.6711819844536 * 16.0, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis2nd_score'], 1238135.8363054413 * 256.0, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_disvar_score'], 278292.25886465114 * 256.0, places=4)
def test_run_psnr_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = PsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['PSNR_feature_psnr_score'], 30.755063979166664, places=4)
self.assertAlmostEqual(results[1]['PSNR_feature_psnr_score'], 60.0, places=4)
def test_run_ansnr_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = AnsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['ANSNR_feature_ansnr_score'], 23.509571520833333, places=4)
self.assertAlmostEqual(results[0]['ANSNR_feature_anpsnr_score'], 34.16477641666666, places=4)
self.assertAlmostEqual(results[1]['ANSNR_feature_ansnr_score'], 31.271439270833337, places=4)
self.assertAlmostEqual(results[1]['ANSNR_feature_anpsnr_score'], 41.926644187499996, places=4)
def test_run_ssim_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = SsimFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_score'], 0.86322654166666657, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_l_score'], 0.9981474583333334, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_c_score'], 0.96126793750000006, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_s_score'], 0.89773633333333336, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_l_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_c_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_s_score'], 1.0, places=4)
def test_run_ssim_fextractor_flat(self):
ref_path, dis_path, asset, asset_original = set_default_flat_1920_1080_videos_for_testing()
self.fextractor = SsimFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
try: self.assertAlmostEqual(results[0]['SSIM_feature_ssim_score'], 0.9087330000000001, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['SSIM_feature_ssim_l_score'], 0.9087330000000001, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['SSIM_feature_ssim_c_score'], 1.0, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['SSIM_feature_ssim_s_score'], 1.0, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['SSIM_feature_ssim_score'], 1.0, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['SSIM_feature_ssim_l_score'], 1.0, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['SSIM_feature_ssim_c_score'], 1.0, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['SSIM_feature_ssim_s_score'], 1.0, places=8)
except AssertionError as e: self.verificationErrors.append(str(e))
def test_run_ms_ssim_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = MsSsimFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_score'], 0.9632498125, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale0_score'], 0.9981474583333334, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale0_score'], 0.96126793750000006, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale0_score'], 0.89773633333333336, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale1_score'], 0.99899612500000001, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale1_score'], 0.9857694375, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale1_score'], 0.941185875, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale2_score'], 0.99923564583333324, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale2_score'], 0.997034020833, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale2_score'], 0.977992145833, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale3_score'], 0.99929210416666658, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale3_score'], 0.999588104167, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale3_score'], 0.99387125, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale4_score'], 0.99940356249999995, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale4_score'], 0.999907625, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale4_score'], 0.998222583333, places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale4_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale4_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale4_score'], 1., places=4)
def test_run_vmaf_integer_fextractor_checkerboard(self):
ref_path = VmafConfig.test_resource_path("yuv", "checkerboard_1920_1080_10_3_0_0.yuv")
dis_path = VmafConfig.test_resource_path("yuv", "checkerboard_1920_1080_10_3_10_0.yuv")
dis_path2 = VmafConfig.test_resource_path("yuv", "checkerboard_1920_1080_10_3_1_0.yuv")
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=VmafConfig.workdir_path(),
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width': 1920, 'height': 1080})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=VmafConfig.workdir_path(),
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width': 1920, 'height': 1080})
asset2 = Asset(dataset="test", content_id=0, asset_id=2,
workdir_root=VmafConfig.workdir_path(),
ref_path=ref_path,
dis_path=dis_path2,
asset_dict={'width': 1920, 'height': 1080})
self.fextractor = VmafIntegerFeatureExtractor(
[asset, asset_original, asset2],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_score'], 0.053996333333333334, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm2_score'], 0.053996333333333334, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale0_score'], 0.23738393128710478, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale1_score'], 0.08524788663335138, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale2_score'], 0.024058909404945077, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale3_score'], 0.018034879735107798, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_motion_score'], 12.554711666666668, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[0]['VMAF_integer_feature_motion2_score'], 12.554711666666668, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm2_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale0_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale1_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale2_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale3_score'], 1.0, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_motion_score'], 12.554711666666668, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[1]['VMAF_integer_feature_motion2_score'], 12.554711666666668, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_adm_score'], 0.78533833333333336, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_adm2_score'], 0.7853384465157921, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_adm_scale0_score'], 0.72132189911792899, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_adm_scale1_score'], 0.69259738857522501, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_adm_scale2_score'], 0.80415911639244586, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_adm_scale3_score'], 0.82791889676239039, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_motion_score'], 12.554711666666668, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
try: self.assertAlmostEqual(results[2]['VMAF_integer_feature_motion2_score'], 12.554711666666668, places=4)
except AssertionError as e: self.verificationErrors.append(str(e))
def test_run_vmaf_integer_fextractor_flat(self):
ref_path, dis_path, asset, asset_original = set_default_flat_1920_1080_videos_for_testing()
self.fextractor = VmafIntegerFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_score'], 1.0, places=4)
self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale0_score'], 1.0, places=4)
self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale1_score'], 1.0, places=4)
self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale2_score'], 1.0, places=4)
self.assertAlmostEqual(results[0]['VMAF_integer_feature_adm_scale3_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale0_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale1_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_integer_feature_adm_scale3_score'], 1.0, places=4)
def test_run_psnr_fextractor_proc(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
callback_dict = {
'ref_proc_callback': 'identity',
'dis_proc_callback': 'multiply',
}
asset.asset_dict.update(callback_dict)
asset_original.asset_dict.update(callback_dict)
self.fextractor = PsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['PSNR_feature_psnr_score'], 27.645446604166665, places=8)
self.assertAlmostEqual(results[1]['PSNR_feature_psnr_score'], 31.87683660416667, places=8)
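# --- Editor's note (hedged): the 'ref_proc_callback'/'dis_proc_callback' entries in
# asset_dict above appear to name preprocessing callbacks applied to the reference
# and distorted frames before scoring ('identity' presumably leaves frames
# untouched), which would explain why these PSNR expectations differ from
# test_run_psnr_fextractor earlier in the file; this reading is inferred from the
# names and the changed expectations, not confirmed against the callback code.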
def test_run_pypsnr_fextractor(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = PypsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnry_score'], 30.755063979166664, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnru_score'], 38.449441057158786, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnrv_score'], 40.9919102486235, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnry_score'], 60.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnru_score'], 60.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnrv_score'], 60.0, places=4)
def test_run_pypsnr_fextractor_10bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_10bit_videos_for_testing()
self.fextractor = PypsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnry_score'], 30.780573260053277, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnru_score'], 38.769832063651364, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnrv_score'], 41.28418847734209, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnry_score'], 72.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnru_score'], 72.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnrv_score'], 72.0, places=4)
def test_run_pypsnr_fextractor_10bit_b(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_10bit_videos_for_testing_b()
self.fextractor = PypsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnry_score'], 32.57145231892744, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnru_score'], 39.03859552689696, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnrv_score'], 41.28060001337217, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnry_score'], 72.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnru_score'], 72.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnrv_score'], 72.0, places=4)
def test_run_pypsnr_fextractor_12bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_12bit_videos_for_testing()
self.fextractor = PypsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnry_score'], 32.577817940053734, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnru_score'], 39.044961148023255, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnrv_score'], 41.28696563449846, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnry_score'], 84.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnru_score'], 84.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnrv_score'], 84.0, places=4)
def test_run_pypsnr_fextractor_16bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_16bit_videos_for_testing()
self.fextractor = PypsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnry_score'], 32.579806240311484, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnru_score'], 39.046949448281005, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnrv_score'], 41.288953934756215, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnry_score'], 108.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnru_score'], 108.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnrv_score'], 108.0, places=4)
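# --- Editor's note (hedged): the perfect-match expectations in the pypsnr tests
# (60.0, 72.0, 84.0 and 108.0 dB for 8-, 10-, 12- and 16-bit input) follow a cap of
# roughly 6 dB per bit of depth, i.e. 60 + 6 * (bitdepth - 8). The two tests below
# show that cap being overridden to 100 dB, once via optional_dict={'max_db': 100.0}
# and once via the dedicated PypsnrMaxdb100FeatureExtractor class.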
def test_run_pypsnr_fextractor_16bit_custom_max_db(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_16bit_videos_for_testing()
self.fextractor = PypsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None,
optional_dict={'max_db': 100.0}
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnry_score'], 32.579806240311484, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnru_score'], 39.046949448281005, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_feature_psnrv_score'], 41.288953934756215, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnry_score'], 100.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnru_score'], 100.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_feature_psnrv_score'], 100.0, places=4)
def test_run_pypsnr_fextractor_maxdb100_16bit(self):
ref_path, dis_path, asset, asset_original = set_default_576_324_16bit_videos_for_testing()
self.fextractor = PypsnrMaxdb100FeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None,
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Pypsnr_maxdb100_feature_psnry_score'], 32.579806240311484, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_maxdb100_feature_psnru_score'], 39.046949448281005, places=4)
self.assertAlmostEqual(results[0]['Pypsnr_maxdb100_feature_psnrv_score'], 41.288953934756215, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_maxdb100_feature_psnry_score'], 100.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_maxdb100_feature_psnru_score'], 100.0, places=4)
self.assertAlmostEqual(results[1]['Pypsnr_maxdb100_feature_psnrv_score'], 100.0, places=4)
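# --- Editor's note: hedged usage sketch, not part of the original test suite. ---
# The tests above drive the FeatureExtractor subclasses through unittest; a minimal
# standalone run of one extractor, assuming the helpers already imported by this
# module (set_default_576_324_videos_for_testing, PsnrFeatureExtractor), could look
# roughly like this:
def _example_standalone_psnr_run():
    _, _, asset, asset_original = set_default_576_324_videos_for_testing()
    fextractor = PsnrFeatureExtractor(
        [asset, asset_original],
        None, fifo_mode=False,
        result_store=None,
    )
    fextractor.run(parallelize=False)
    # Each result exposes per-feature aggregate scores keyed by feature name.
    return [r['PSNR_feature_psnr_score'] for r in fextractor.results]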
if __name__ == '__main__':
unittest.main(verbosity=2)
| 60.02584 | 182 | 0.729811 | 5,754 | 46,460 | 5.606708 | 0.059785 | 0.177707 | 0.236942 | 0.148414 | 0.919314 | 0.911503 | 0.901863 | 0.879359 | 0.842007 | 0.798704 | 0 | 0.107881 | 0.15805 | 46,460 | 773 | 183 | 60.103493 | 0.716849 | 0.002906 | 0 | 0.421138 | 0 | 0 | 0.190018 | 0.182095 | 0 | 0 | 0 | 0 | 0.569106 | 1 | 0.042276 | false | 0.001626 | 0.01626 | 0 | 0.060163 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c7ede981d0726249623ae10c046075ec83e68b0a | 829 | py | Python | nengo/utils/neurons.py | Michaeljurado24/nengo | dc0419fbe571374d0a55a7f67309dfcb254a2e88 | [
"BSD-2-Clause"
] | 762 | 2015-01-05T13:01:24.000Z | 2022-03-26T11:35:38.000Z | nengo/utils/neurons.py | Michaeljurado24/nengo | dc0419fbe571374d0a55a7f67309dfcb254a2e88 | [
"BSD-2-Clause"
] | 1,066 | 2015-01-01T15:38:41.000Z | 2022-03-20T19:26:44.000Z | nengo/utils/neurons.py | Michaeljurado24/nengo | dc0419fbe571374d0a55a7f67309dfcb254a2e88 | [
"BSD-2-Clause"
] | 205 | 2015-01-25T18:08:44.000Z | 2022-03-22T22:03:08.000Z | from nengo.exceptions import MovedError
def spikes2events(*args, **kwargs):
"""Moved to nengo_extras.neurons."""
raise MovedError(location="nengo_extras.neurons")
def _rates_isi_events(*args, **kwargs):
"""Moved to nengo_extras.neurons."""
raise MovedError(location="nengo_extras.neurons")
def rates_isi(*args, **kwargs):
"""Moved to nengo_extras.neurons."""
raise MovedError(location="nengo_extras.neurons")
def lowpass_filter(*args, **kwargs):
"""Moved to nengo_extras.neurons."""
raise MovedError(location="nengo_extras.neurons")
def rates_kernel(*args, **kwargs):
"""Moved to nengo_extras.neurons."""
raise MovedError(location="nengo_extras.neurons")
def settled_firingrate(*args, **kwargs):
"""Moved to nengo.neurons."""
raise MovedError(location="nengo.neurons")
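# --- Editor's note: hedged usage sketch, not part of the original module. ---
# Every function above is a stub; calling any of them raises MovedError pointing at
# the new location (nengo_extras.neurons, or nengo.neurons for settled_firingrate):
def _demo_moved_error():
    try:
        rates_kernel()
    except MovedError as err:
        # The exception message tells the caller where the function now lives.
        return str(err)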
| 25.90625 | 53 | 0.711701 | 99 | 829 | 5.787879 | 0.232323 | 0.191972 | 0.314136 | 0.17801 | 0.82548 | 0.726003 | 0.726003 | 0.726003 | 0.726003 | 0.726003 | 0 | 0.001395 | 0.135103 | 829 | 31 | 54 | 26.741935 | 0.797768 | 0.214717 | 0 | 0.384615 | 0 | 0 | 0.182258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.461538 | true | 0.076923 | 0.076923 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 12 |
1be5b6ca064908b5a72564069c3100aa20976069 | 33,142 | py | Python | sdk/python/pulumi_oci/jms/fleet.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/jms/fleet.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/jms/fleet.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['FleetArgs', 'Fleet']
@pulumi.input_type
class FleetArgs:
def __init__(__self__, *,
compartment_id: pulumi.Input[str],
display_name: pulumi.Input[str],
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None):
"""
The set of arguments for constructing a Fleet resource.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
:param pulumi.Input[str] display_name: (Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
:param pulumi.Input[str] description: (Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
"""
pulumi.set(__self__, "compartment_id", compartment_id)
pulumi.set(__self__, "display_name", display_name)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if description is not None:
pulumi.set(__self__, "description", description)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Input[str]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: pulumi.Input[str]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Input[str]:
"""
(Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: pulumi.Input[str]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
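# --- Editor's note: hedged usage sketch, not part of the generated provider code. ---
# FleetArgs only bundles the constructor inputs documented above; building one
# directly (every value below is a hypothetical placeholder) looks like this:
def _example_fleet_args() -> FleetArgs:
    return FleetArgs(
        compartment_id="ocid1.compartment.oc1..exampleuniqueid",  # hypothetical OCID
        display_name="example-fleet",
        description="Fleet managed through Pulumi",
        defined_tags={"foo-namespace.bar-key": "value"},
        freeform_tags={"bar-key": "value"},
    )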
@pulumi.input_type
class _FleetState:
def __init__(__self__, *,
approximate_application_count: Optional[pulumi.Input[int]] = None,
approximate_installation_count: Optional[pulumi.Input[int]] = None,
approximate_jre_count: Optional[pulumi.Input[int]] = None,
approximate_managed_instance_count: Optional[pulumi.Input[int]] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
state: Optional[pulumi.Input[str]] = None,
system_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
time_created: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Fleet resources.
:param pulumi.Input[int] approximate_application_count: The approximate count of all unique applications in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
:param pulumi.Input[int] approximate_installation_count: The approximate count of all unique Java installations in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
:param pulumi.Input[int] approximate_jre_count: The approximate count of all unique Java Runtimes in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
:param pulumi.Input[int] approximate_managed_instance_count: The approximate count of all unique managed instances in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
:param pulumi.Input[str] description: (Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
:param pulumi.Input[str] display_name: (Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
:param pulumi.Input[str] state: The lifecycle state of the Fleet.
:param pulumi.Input[Mapping[str, Any]] system_tags: System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). System tags can be viewed by users, but can only be created by the system. Example: `{"orcl-cloud.free-tier-retained": "true"}`
:param pulumi.Input[str] time_created: The creation date and time of the Fleet (formatted according to RFC3339).
"""
if approximate_application_count is not None:
pulumi.set(__self__, "approximate_application_count", approximate_application_count)
if approximate_installation_count is not None:
pulumi.set(__self__, "approximate_installation_count", approximate_installation_count)
if approximate_jre_count is not None:
pulumi.set(__self__, "approximate_jre_count", approximate_jre_count)
if approximate_managed_instance_count is not None:
pulumi.set(__self__, "approximate_managed_instance_count", approximate_managed_instance_count)
if compartment_id is not None:
pulumi.set(__self__, "compartment_id", compartment_id)
if defined_tags is not None:
pulumi.set(__self__, "defined_tags", defined_tags)
if description is not None:
pulumi.set(__self__, "description", description)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if freeform_tags is not None:
pulumi.set(__self__, "freeform_tags", freeform_tags)
if state is not None:
pulumi.set(__self__, "state", state)
if system_tags is not None:
pulumi.set(__self__, "system_tags", system_tags)
if time_created is not None:
pulumi.set(__self__, "time_created", time_created)
@property
@pulumi.getter(name="approximateApplicationCount")
def approximate_application_count(self) -> Optional[pulumi.Input[int]]:
"""
The approximate count of all unique applications in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_application_count")
@approximate_application_count.setter
def approximate_application_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "approximate_application_count", value)
@property
@pulumi.getter(name="approximateInstallationCount")
def approximate_installation_count(self) -> Optional[pulumi.Input[int]]:
"""
The approximate count of all unique Java installations in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_installation_count")
@approximate_installation_count.setter
def approximate_installation_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "approximate_installation_count", value)
@property
@pulumi.getter(name="approximateJreCount")
def approximate_jre_count(self) -> Optional[pulumi.Input[int]]:
"""
The approximate count of all unique Java Runtimes in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_jre_count")
@approximate_jre_count.setter
def approximate_jre_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "approximate_jre_count", value)
@property
@pulumi.getter(name="approximateManagedInstanceCount")
def approximate_managed_instance_count(self) -> Optional[pulumi.Input[int]]:
"""
The approximate count of all unique managed instances in the Fleet in the past seven days. This metric is provided on a best-effort basis, and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_managed_instance_count")
@approximate_managed_instance_count.setter
def approximate_managed_instance_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "approximate_managed_instance_count", value)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
"""
return pulumi.get(self, "compartment_id")
@compartment_id.setter
def compartment_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "compartment_id", value)
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
"""
return pulumi.get(self, "defined_tags")
@defined_tags.setter
def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "defined_tags", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
(Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
(Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
"""
return pulumi.get(self, "freeform_tags")
@freeform_tags.setter
def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "freeform_tags", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
"""
The lifecycle state of the Fleet.
"""
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter(name="systemTags")
def system_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). System tags can be viewed by users, but can only be created by the system. Example: `{"orcl-cloud.free-tier-retained": "true"}`
"""
return pulumi.get(self, "system_tags")
@system_tags.setter
def system_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "system_tags", value)
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> Optional[pulumi.Input[str]]:
"""
The creation date and time of the Fleet (formatted according to RFC3339).
"""
return pulumi.get(self, "time_created")
@time_created.setter
def time_created(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "time_created", value)
class Fleet(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
__props__=None):
"""
This resource provides the Fleet resource in Oracle Cloud Infrastructure Jms service.
Create a new Fleet using the information provided.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_fleet = oci.jms.Fleet("testFleet",
compartment_id=var["compartment_id"],
display_name=var["fleet_display_name"],
defined_tags={
"foo-namespace.bar-key": "value",
},
description=var["fleet_description"],
freeform_tags={
"bar-key": "value",
})
```
## Import
Fleets can be imported using the `id`, e.g.
```sh
$ pulumi import oci:jms/fleet:Fleet test_fleet "id"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
:param pulumi.Input[str] description: (Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
:param pulumi.Input[str] display_name: (Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: FleetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Fleet resource in Oracle Cloud Infrastructure Jms service.
Create a new Fleet using the information provided.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_fleet = oci.jms.Fleet("testFleet",
compartment_id=var["compartment_id"],
display_name=var["fleet_display_name"],
defined_tags={
"foo-namespace.bar-key": "value",
},
description=var["fleet_description"],
freeform_tags={
"bar-key": "value",
})
```
## Import
Fleets can be imported using the `id`, e.g.
```sh
$ pulumi import oci:jms/fleet:Fleet test_fleet "id"
```
:param str resource_name: The name of the resource.
:param FleetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(FleetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = FleetArgs.__new__(FleetArgs)
if compartment_id is None and not opts.urn:
raise TypeError("Missing required property 'compartment_id'")
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["description"] = description
if display_name is None and not opts.urn:
raise TypeError("Missing required property 'display_name'")
__props__.__dict__["display_name"] = display_name
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["approximate_application_count"] = None
__props__.__dict__["approximate_installation_count"] = None
__props__.__dict__["approximate_jre_count"] = None
__props__.__dict__["approximate_managed_instance_count"] = None
__props__.__dict__["state"] = None
__props__.__dict__["system_tags"] = None
__props__.__dict__["time_created"] = None
super(Fleet, __self__).__init__(
'oci:jms/fleet:Fleet',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
approximate_application_count: Optional[pulumi.Input[int]] = None,
approximate_installation_count: Optional[pulumi.Input[int]] = None,
approximate_jre_count: Optional[pulumi.Input[int]] = None,
approximate_managed_instance_count: Optional[pulumi.Input[int]] = None,
compartment_id: Optional[pulumi.Input[str]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
state: Optional[pulumi.Input[str]] = None,
system_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
time_created: Optional[pulumi.Input[str]] = None) -> 'Fleet':
"""
Get an existing Fleet resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
        :param pulumi.Input[int] approximate_application_count: The approximate count of all unique applications in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
        :param pulumi.Input[int] approximate_installation_count: The approximate count of all unique Java installations in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
        :param pulumi.Input[int] approximate_jre_count: The approximate count of all unique Java Runtimes in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
        :param pulumi.Input[int] approximate_managed_instance_count: The approximate count of all unique managed instances in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
:param pulumi.Input[str] compartment_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
        :param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
:param pulumi.Input[str] description: (Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
:param pulumi.Input[str] display_name: (Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
        :param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
:param pulumi.Input[str] state: The lifecycle state of the Fleet.
:param pulumi.Input[Mapping[str, Any]] system_tags: System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). System tags can be viewed by users, but can only be created by the system. Example: `{"orcl-cloud.free-tier-retained": "true"}`
:param pulumi.Input[str] time_created: The creation date and time of the Fleet (formatted according to RFC3339).
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _FleetState.__new__(_FleetState)
__props__.__dict__["approximate_application_count"] = approximate_application_count
__props__.__dict__["approximate_installation_count"] = approximate_installation_count
__props__.__dict__["approximate_jre_count"] = approximate_jre_count
__props__.__dict__["approximate_managed_instance_count"] = approximate_managed_instance_count
__props__.__dict__["compartment_id"] = compartment_id
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["description"] = description
__props__.__dict__["display_name"] = display_name
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["state"] = state
__props__.__dict__["system_tags"] = system_tags
__props__.__dict__["time_created"] = time_created
return Fleet(resource_name, opts=opts, __props__=__props__)
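    # A minimal usage sketch of the lookup above, kept as a comment so the generated
    # module is left unchanged; the OCID string is a placeholder assumption, not a
    # real resource:
    #
    #   existing = Fleet.get("existingFleet", id="ocid1.jmsfleet.oc1..exampleuniqueID")
    #   pulumi.export("existing_fleet_state", existing.state)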
@property
@pulumi.getter(name="approximateApplicationCount")
def approximate_application_count(self) -> pulumi.Output[int]:
"""
        The approximate count of all unique applications in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_application_count")
@property
@pulumi.getter(name="approximateInstallationCount")
def approximate_installation_count(self) -> pulumi.Output[int]:
"""
        The approximate count of all unique Java installations in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_installation_count")
@property
@pulumi.getter(name="approximateJreCount")
def approximate_jre_count(self) -> pulumi.Output[int]:
"""
        The approximate count of all unique Java Runtimes in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_jre_count")
@property
@pulumi.getter(name="approximateManagedInstanceCount")
def approximate_managed_instance_count(self) -> pulumi.Output[int]:
"""
        The approximate count of all unique managed instances in the Fleet in the past seven days. This metric is provided on a best-effort basis and is not taken into account when computing the resource ETag.
"""
return pulumi.get(self, "approximate_managed_instance_count")
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> pulumi.Output[str]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment of the Fleet.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
        (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`. (See [Managing Tags and Tag Namespaces](https://docs.cloud.oracle.com/iaas/Content/Tagging/Tasks/managingtagsandtagnamespaces.htm)).
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
(Updatable) The Fleet's description. If nothing is provided, the Fleet description will be null.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
(Updatable) The name of the Fleet. The displayName must be unique for Fleets in the same compartment.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
        (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`. (See [Understanding Free-form Tags](https://docs.cloud.oracle.com/iaas/Content/Tagging/Concepts/understandingfreeformtags.htm).)
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
"""
The lifecycle state of the Fleet.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="systemTags")
def system_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
System tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). System tags can be viewed by users, but can only be created by the system. Example: `{"orcl-cloud.free-tier-retained": "true"}`
"""
return pulumi.get(self, "system_tags")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> pulumi.Output[str]:
"""
The creation date and time of the Fleet (formatted according to RFC3339).
"""
return pulumi.get(self, "time_created")
| 56.172881 | 390 | 0.684479 | 4,115 | 33,142 | 5.332928 | 0.06051 | 0.055639 | 0.058009 | 0.030622 | 0.914559 | 0.892641 | 0.866348 | 0.847209 | 0.830668 | 0.804511 | 0 | 0.000651 | 0.212148 | 33,142 | 589 | 391 | 56.268251 | 0.839799 | 0.454529 | 0 | 0.609907 | 1 | 0 | 0.125211 | 0.051529 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164087 | false | 0.003096 | 0.01548 | 0 | 0.281734 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
40650950727542be3d13862129c22b8f38e89f44 | 220,674 | py | Python | applications/DEMApplication/python_scripts/DEM_benchmarks_class.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 2 | 2020-04-30T19:13:08.000Z | 2021-04-14T19:40:47.000Z | applications/DEMApplication/python_scripts/DEM_benchmarks_class.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 1 | 2020-04-30T19:19:09.000Z | 2020-05-02T14:22:36.000Z | applications/DEMApplication/python_scripts/DEM_benchmarks_class.py | AndreaVoltan/MyKratos7.0 | e977752722e8ef1b606f25618c4bf8fd04c434cc | [
"BSD-4-Clause"
] | 1 | 2020-06-12T08:51:24.000Z | 2020-06-12T08:51:24.000Z | from __future__ import print_function, absolute_import, division # makes KratosMultiphysics backward compatible with python 2.6 and 2.7
from KratosMultiphysics import * # importing the Kratos Library
from KratosMultiphysics.DEMApplication import *
import shutil
from glob import glob
from math import pi, sin, cos, tan, atan, sqrt
from os import system
import os, sys
def initialize_time_parameters(benchmark_number):
number_of_coeffs_of_restitution = 1
if benchmark_number==1:
end_time = 0.0005
        dt = 6.4e-8 # Complies with Rayleigh's condition
graph_print_interval = 0.000005
number_of_points_in_the_graphic = 6
elif benchmark_number==2:
end_time = 0.007
        dt = 3e-7 # Complies with Rayleigh's condition (to be verified)
graph_print_interval = 0.0001
number_of_points_in_the_graphic = 6
elif benchmark_number==3:
end_time = 0.00031
        dt = 8.1e-9 #1.1e-9 # Complies with Rayleigh's condition
graph_print_interval = 0.000001
number_of_points_in_the_graphic = 6
elif benchmark_number==4:
end_time = 0.0002 #0.00003
        dt = 2e-8 #1.9e-9 # Complies with Rayleigh's condition
graph_print_interval = 0.000001
number_of_points_in_the_graphic = 17
elif benchmark_number==5:
end_time = 0.0000005
        dt = 3.6e-11 #3.6e-12 # Complies with Rayleigh's condition
graph_print_interval = 0.00000005
number_of_points_in_the_graphic = 17
elif benchmark_number==6:
end_time = 0.01
        dt = 1.0e-6 #1.0e-7 # Complies with Rayleigh's condition (to be verified)
graph_print_interval = 0.00025
number_of_points_in_the_graphic = 17
elif benchmark_number==7:
end_time = 0.0005
        dt = 4.4614e-7 #4.4614e-8 # Complies with Rayleigh's condition (to be verified)
graph_print_interval = 0.000005
number_of_points_in_the_graphic = 17
elif benchmark_number==8:
end_time = 0.02
        dt = 2.0e-6 #5.0e-7 # Complies with Rayleigh's condition
graph_print_interval = 0.0001
number_of_points_in_the_graphic = 17
elif benchmark_number==9:
end_time = 0.001 #0.0005
        dt = 5.0e-8 # 3.4e-8 # Complies with Rayleigh's condition
graph_print_interval = 0.000005
number_of_points_in_the_graphic = 6
elif benchmark_number==10:
end_time = 0.00015 #0.0005
        dt = 2.0e-8 #3.6e-12 # Complies with Rayleigh's condition
graph_print_interval = 0.00001
number_of_points_in_the_graphic = 10
number_of_coeffs_of_restitution = 4
elif benchmark_number==11:
end_time = 0.00015 #0.0005
        dt = 1.0e-7 #3.6e-12 # Complies with Rayleigh's condition
graph_print_interval = 0.00001
number_of_points_in_the_graphic = 10
number_of_coeffs_of_restitution = 4
elif benchmark_number==12:
end_time = 0.1
dt = 5.0e-7
graph_print_interval = 1e-4
number_of_points_in_the_graphic = 1
elif benchmark_number==13:
end_time = 2.0
dt = 1.0e-4
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==14:
end_time = 2.0
dt = 1.0e-4
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==15:
end_time = 2.0
dt = 1.0e-4
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==16:
end_time = 1.0
dt = 0.50e-4
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==17:
end_time = 1.0
dt = 1.0e-6
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==20: # Normal compression
end_time = 0.01
dt = 1e-5
        graph_print_interval = 1e-5 # used as the output frequency for the points graph
number_of_points_in_the_graphic = 1
elif benchmark_number==21: # Normal compression with indentation
end_time = 0.01
dt = 1e-5
graph_print_interval = 1e-5
number_of_points_in_the_graphic = 1
elif benchmark_number==22: # Tensile
end_time = 0.05
dt = 1e-5
graph_print_interval = 1e-5
number_of_points_in_the_graphic = 1
elif benchmark_number==23: # Tensile with indentation
end_time = 0.05
dt = 1e-5
graph_print_interval = 1e-5
number_of_points_in_the_graphic = 1
elif benchmark_number==24: # Shear
end_time = 8e-5
dt = 1e-7
graph_print_interval = 1e-7
number_of_points_in_the_graphic = 1
elif benchmark_number==25: # Shear + radius expansion
end_time = 8e-5
dt = 1e-7
graph_print_interval = 1e-7
number_of_points_in_the_graphic = 1
elif benchmark_number==26: #
end_time = 0.1
dt = 1e-5
graph_print_interval = 1e-4
number_of_points_in_the_graphic = 1
elif benchmark_number==27: #UCS TEST
end_time = 0.05
dt = 5e-7
graph_print_interval = 5e-4
number_of_points_in_the_graphic = 1
    elif benchmark_number==28: # PENDULO3D (3D pendulum). Not ready yet
end_time = 100
dt = 1e-4
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==30:
end_time = 0.5
dt = 1.0e-3
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==31:
end_time = 0.5
dt = 1.0e-3
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==32:
end_time = 0.5
dt = 1.0e-6
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==33:
end_time = 0.5
dt = 1.0e-6
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
elif benchmark_number==40:
end_time = 1
dt = 5e-5
graph_print_interval = 1e-2
number_of_points_in_the_graphic = 1
    else: # any other benchmark (e.g. benchmark_number==68)
end_time = 1e-3
dt = 1e-6
graph_print_interval = 1e-7
number_of_points_in_the_graphic = 1
return end_time, dt, graph_print_interval, number_of_points_in_the_graphic, number_of_coeffs_of_restitution
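# Minimal usage sketch of the helper above (benchmark number 1 is chosen arbitrarily
# for illustration):
#   end_time, dt, graph_print_interval, number_of_points_in_the_graphic, number_of_coeffs_of_restitution = initialize_time_parameters(1)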
def PrintResultsMessage(test_number, it_is_success, error, elapsed_time, error_filename = 'errors.err'):
with open(error_filename, 'a') as error_file:
name = str(test_number)
error_file.write('DEM Benchmark ' + name + ':')
if it_is_success:
error_file.write(' OK!........ Test ' + name + ' SUCCESSFUL (error: '
+ str(round(error, 2)) + ', time: '
+ str(round(elapsed_time, 2)) + 's.'')\n')
else:
error_file.write(' KO!........ Test ' + name + ' FAILED (error: ' + str(error) + ')\n')
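# Illustrative call of the helper above (the numbers are made up): the line
#   PrintResultsMessage(3, False, 7.2, 41.0)
# appends "DEM Benchmark 3: KO!........ Test 3 FAILED (error: 7.2)" to errors.err.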
def GetDisplacement(node):
displacement = [0]*3
displacement[0] = node.X-node.X0
displacement[1] = node.Y-node.Y0
displacement[2] = node.Z-node.Z0
return displacement
def MeasureError(node, variable):
return sqrt(sum([node.GetSolutionStepValue(variable)[i]**2 for i in range(3)]))
def GetNodeDisplacement(node):
return sqrt(sum([GetDisplacement(node)[i]**2 for i in range(3)]))
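# Note: GetNodeDisplacement returns the Euclidean norm of the nodal displacement,
# i.e. sqrt((X - X0)**2 + (Y - Y0)**2 + (Z - Z0)**2), and MeasureError computes the
# same norm for the requested solution-step variable.
#
# Every BenchmarkN class below exposes the same interface (this is only a summary of
# the methods defined in this file, presumably driven by the benchmark run script):
#   set_initial_data(...)      sets initial velocities/properties on the model parts
#   generate_graph_points(...) optional output while the simulation runs
#   ApplyNodalRotation(...)    optional prescribed rotation during the run
#   get_final_data(...)        collects results once the run has finished
#   print_results(...)         writes data/gnuplot files and calls compute_errors
#   compute_errors(...)        compares DEM results against the paper_data references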
class Benchmark1:
def __init__(self):
self.number = 1
self.initial_normal_vel = 10.0
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
for node in modelpart.Nodes:
if node.Id == 1:
node.SetSolutionStepValue(VELOCITY_X, -self.initial_normal_vel)
else:
node.SetSolutionStepValue(VELOCITY_X, self.initial_normal_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
pass
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
normal_contact_force_outfile_name = 'variables_for_node_1.txt'
gnuplot_script_name = 'benchmark1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name, 'w')
self.gnuplot_outfile.write("set grid; plot '" + normal_contact_force_outfile_name + "' every 20 u 1:8 w lp lt -1 lw 1.5 ps 1 pt 4")
self.gnuplot_outfile.close()
#print_gnuplot_files_on_screen(gnuplot_script_name)
error1, error2, error3 = self.compute_errors(normal_contact_force_outfile_name)
it_is_success = error1 < 1.0 and error2 < 1.0 and error3 < 1.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def compute_errors(self, normal_contact_force_outfile_name):
Chung_data = []; DEM_data = []
with open('paper_data/benchmark1_graph1.dat') as inf:
for line in inf:
Chung_data.append(float(line))
with open(normal_contact_force_outfile_name) as inf:
for line in inf:
parts = line.split()
if parts[0] == '#Time':
break
for line in inf:
parts = line.split()
DEM_data.append(float(parts[7]))
error = abs(max(DEM_data) - float(Chung_data[0]))/float(Chung_data[0])
print("Error in restitution numbers =", 100*error,"%")
error1 = 100*error
error2 = error3 = 0
return error1, error2, error3
class Benchmark2:
def __init__(self):
self.number = 2
self.initial_normal_vel = -0.2
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
for node in modelpart.Nodes:
node.SetSolutionStepValue(VELOCITY_Z, self.initial_normal_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
pass
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
normal_contact_force_outfile_name = 'variables_for_node_2.txt'
gnuplot_script_name = 'benchmark2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name, 'w')
self.gnuplot_outfile.write("set grid; plot '" + normal_contact_force_outfile_name + "' every 10 u 1:10 w lp lt 3 lw 1.5 ps 1 pt 6")
self.gnuplot_outfile.close()
#print_gnuplot_files_on_screen(gnuplot_script_name)
error1, error2, error3 = self.compute_errors(normal_contact_force_outfile_name)
it_is_success = error1 < 1.0 and error2 < 1.0 and error3 < 1.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def compute_errors(self, normal_contact_force_outfile_name):
Chung_data = []; DEM_data = []
with open('paper_data/benchmark2_graph1.dat') as inf:
for line in inf:
Chung_data.append(float(line))
with open(normal_contact_force_outfile_name) as inf:
for line in inf:
parts = line.split()
if parts[0] == '#Time':
break
for line in inf:
parts = line.split()
DEM_data.append(float(parts[9]))
error = abs(max(DEM_data) - float(Chung_data[0]))/float(Chung_data[0])
print("Error in restitution numbers =", 100*error,"%")
error1 = 100*error
error2 = error3 = 0
return error1, error2, error3
class Benchmark3:
def __init__(self):
self.number = 3
self.restitution_numbers_list = []
self.initial_normal_vel = 0
self.generated_data = None
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
#number = 1.0/(number_of_points_in_the_graphic-1) * (iteration - 1)
if number_of_points_in_the_graphic == 1:
number = 0
else:
number = 1.0/(number_of_points_in_the_graphic-1) * (iteration - 1)
for node in modelpart.Nodes:
self.initial_normal_vel = node.GetSolutionStepValue(VELOCITY_Z)
modelpart.GetProperties()[1][COEFFICIENT_OF_RESTITUTION] = number
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
for node in modelpart.Nodes:
final_vel = node.GetSolutionStepValue(VELOCITY_Z)
restitution_coefficient = -final_vel / self.initial_normal_vel
self.restitution_numbers_list.append(restitution_coefficient)
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.output_filename = "benchmark3_dt_" + str(dt) + '_restitution_numbers_vector_list_data.dat'
self.generated_data = open(self.output_filename, 'w')
for i in range(0, number_of_points_in_the_graphic):
first_col = 1/(number_of_points_in_the_graphic-1) * i
self.generated_data.write("%6.4f %11.8f" % (first_col, self.restitution_numbers_list[i]) + '\n')
self.generated_data.close()
gnuplot_script_name = 'benchmark3_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name, 'w')
self.gnuplot_outfile.write("set grid; plot '" + self.output_filename + "' u 1:2 w lp lt 3 lw 1.5 ps 2 pt 4, '"\
+ self.output_filename + "' u 1:3 w lp lt 2 lw 1.5 ps 2 pt 6")
self.gnuplot_outfile.close()
self.create_gnuplot_scripts(self.output_filename, dt)
error1, error2, error3 = self.compute_errors(self.output_filename)
it_is_success = error1 < 1.0 and error2 < 1.0 and error3 < 1.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, output_filename, dt):
gnuplot_script_name_1 = 'benchmark3_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Coefficient of restitution'\nset ylabel 'Damping ratio'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:1][0:1] '" + output_filename + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark3_graph1.dat' w lp ls 1 t 'Al. oxide',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark3_graph1.dat' w lp ls 2 t 'Cast iron'\n")
self.gnuplot_outfile.close()
#print_gnuplot_files_on_screen(gnuplot_script_name_1)
def compute_errors(self, output_filename):
lines_Chung = lines_DEM = list(range(0, 6))
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark3_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split()
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
generated_data_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_Chung_data
print("Error in restitution numbers =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
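# The error measure used by compute_errors above (and by most of the analogous
# methods in the classes below) is a normalised L1 deviation from the reference
# curve, expressed as a percentage:
#   error = 100 * sum_i |DEM_i - Chung_i| / sum_i |Chung_i|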
class Benchmark4:
def __init__(self):
self.number = 4
self.initial_module_vel = 3.9
self.initial_tangential_vel = 0
self.radius = 0.0025
self.degrees = 0
self.angles_list = []
self.tangential_restitution_coefficient_list = []
self.final_angular_vel_list = []
self.rebound_angle_list = []
self.final_angular_vel_list_outfile = None
self.rebound_angle_list_outfile = None
self.tangential_restitution_coefficient_list_outfile = None
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.degrees = 90 / (number_of_points_in_the_graphic + 1) * iteration
self.initial_tangential_vel = self.initial_module_vel * sin(self.degrees * pi / 180.0)
initial_normal_vel = -self.initial_module_vel * cos(self.degrees * pi / 180.0)
for node in modelpart.Nodes:
node.SetSolutionStepValue(VELOCITY_Y, self.initial_tangential_vel)
node.SetSolutionStepValue(VELOCITY_Z, initial_normal_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
for node in modelpart.Nodes:
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_X)
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_Y)
final_normal_center_velocity = node.GetSolutionStepValue(VELOCITY_Z)
final_tangential_contact_velocity = final_tangential_center_velocity + final_angular_vel * self.radius
rebound_angle = 180 / pi * atan(final_tangential_contact_velocity / final_normal_center_velocity)
tangential_restitution_coefficient = final_tangential_center_velocity / self.initial_tangential_vel
self.final_angular_vel_list.append(final_angular_vel)
self.rebound_angle_list.append(rebound_angle)
self.tangential_restitution_coefficient_list.append(tangential_restitution_coefficient)
self.angles_list.append(self.degrees)
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.tangential_restitution_coefficient_list_outfile_name = "benchmark4_dt_" + str(dt) + '_tangential_restitution_coefficient_list_data.dat'
self.final_angular_vel_list_outfile_name = "benchmark4_dt_" + str(dt) + '_final_angular_vel_list_data.dat'
self.rebound_angle_list_outfile_name = "benchmark4_dt_" + str(dt) + '_rebound_angle_list_data.dat'
self.tangential_restitution_coefficient_list_outfile = open(self.tangential_restitution_coefficient_list_outfile_name, 'w')
self.final_angular_vel_list_outfile = open(self.final_angular_vel_list_outfile_name, 'w')
self.rebound_angle_list_outfile = open(self.rebound_angle_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.tangential_restitution_coefficient_list_outfile.write("%14.8f %14.8f" % (self.angles_list[i], self.tangential_restitution_coefficient_list[i]) + '\n')
self.final_angular_vel_list_outfile.write("%14.8f %14.8f" % (self.angles_list[i], self.final_angular_vel_list[i]) + '\n')
self.rebound_angle_list_outfile.write("%14.8f %14.8f" % (self.angles_list[i], self.rebound_angle_list[i]) + '\n')
self.tangential_restitution_coefficient_list_outfile.close()
self.final_angular_vel_list_outfile.close()
self.rebound_angle_list_outfile.close()
self.create_gnuplot_scripts(self.tangential_restitution_coefficient_list_outfile_name, self.final_angular_vel_list_outfile_name,\
self.rebound_angle_list_outfile_name, dt)
error1, error2, error3 = self.compute_errors(self.tangential_restitution_coefficient_list_outfile_name, self.final_angular_vel_list_outfile_name,\
self.rebound_angle_list_outfile_name)
it_is_success = error1 < 2.0 and error2 < 2.0 and error3 < 2.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, tangential_restitution_coefficient_list_outfile_name, final_angular_vel_list_outfile_name,\
rebound_angle_list_outfile_name, dt):
gnuplot_script_name_1 = 'benchmark4_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:90][.4:1] '" + tangential_restitution_coefficient_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph1.dat' index 0 w lp ls 1 t 'Al. oxide',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph1.dat' index 1 w lp ls 2 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph1.dat' index 2 w p pt 7 ps 2 lt -1 t 'Experimental'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark4_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Incident angle (deg)'\nset ylabel 'Final angular velocity (rad/s)'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:90][-750:0] '" + final_angular_vel_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph2.dat' index 0 w lp ls 1 t 'Al. oxide',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph2.dat' index 1 w lp ls 2 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph2.dat' index 2 w p pt 7 ps 2 lt -1 t 'Experimental'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_3 = 'benchmark4_comparison_3_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_3, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Incident angle (deg)'\nset ylabel 'Rebound angle (deg)'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:90][-30:90] '" + rebound_angle_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph3.dat' index 0 w lp ls 1 t 'Al. oxide',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph3.dat' index 1 w lp ls 2 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark4_graph3.dat' index 2 w p pt 7 ps 2 lt -1 t 'Experimental'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)
print_gnuplot_files_on_screen(gnuplot_script_name_3)'''
def compute_errors(self, tangential_restitution_coefficient_list_outfile_name, final_angular_vel_list_outfile_name, rebound_angle_list_outfile_name):
lines_Chung = list(range(17, 30)); lines_DEM = list(range(0, 8)) + list(range(9, 16, 2)) + [16]
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark4_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(tangential_restitution_coefficient_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_tangential_restitution_coefficient_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_tangential_restitution_coefficient_error+=abs(i-j)
final_tangential_restitution_coefficient_error/=summation_of_Chung_data
print("Error in tangential restitution coefficient =", 100*final_tangential_restitution_coefficient_error,"%")
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark4_graph2.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(final_angular_vel_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_angular_vel_total_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_angular_vel_total_error+=abs(i-j)
final_angular_vel_total_error/=summation_of_Chung_data
print("Error in final angular vel =", 100*final_angular_vel_total_error,"%")
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark4_graph3.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(rebound_angle_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_rebound_angle_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_rebound_angle_error+=abs(i-j)
final_rebound_angle_error/=summation_of_Chung_data
print("Error in final rebound angle =", 100*final_rebound_angle_error,"%")
error1 = 100*final_tangential_restitution_coefficient_error
error2 = 100*final_angular_vel_total_error
error3 = 100*final_rebound_angle_error
return error1, error2, error3
class Benchmark5:
def __init__(self):
self.number = 5
self.initial_normal_vel = -5.0
self.initial_tangential_vel = 0
self.radius = 0.00001
self.Vst_div_mu_per_Vcn_list = []
self.Vst_prima_div_mu_per_Vcn_prima_list = []
self.r_w1_prima_div_mu_per_Vcn_list = []
self.Vst_prima_div_mu_per_Vcn_prima_list_outfile = None
self.r_w1_prima_div_mu_per_Vcn_list_outfile = None
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
degrees = 90 / (number_of_points_in_the_graphic + 1) * iteration
self.initial_tangential_vel = -self.initial_normal_vel * tan(degrees * pi / 180.0)
for node in modelpart.Nodes:
node.SetSolutionStepValue(VELOCITY_Y, self.initial_tangential_vel)
node.SetSolutionStepValue(VELOCITY_Z, self.initial_normal_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
mu = 0.3
for node in modelpart.Nodes:
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_X)
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_Y)
final_normal_center_velocity = node.GetSolutionStepValue(VELOCITY_Z)
Vst_div_mu_per_Vcn = -self.initial_tangential_vel / (mu * self.initial_normal_vel)
Vst_prima_div_mu_per_Vcn_prima = (final_tangential_center_velocity + final_angular_vel * self.radius) / (mu * final_normal_center_velocity)
r_w1_prima_div_mu_per_Vcn = -self.radius * final_angular_vel / (mu * self.initial_normal_vel)
self.Vst_div_mu_per_Vcn_list.append(Vst_div_mu_per_Vcn)
self.Vst_prima_div_mu_per_Vcn_prima_list.append(Vst_prima_div_mu_per_Vcn_prima)
self.r_w1_prima_div_mu_per_Vcn_list.append(r_w1_prima_div_mu_per_Vcn)
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.Vst_prima_div_mu_per_Vcn_prima_list_outfile_name = "benchmark5_dt_" + str(dt) + '_Vst_prima_div_mu_per_Vcn_prima_list_data.dat'
self.r_w1_prima_div_mu_per_Vcn_list_outfile_name = "benchmark5_dt_" + str(dt) + '_r_w1_prima_div_mu_per_Vcn_list_data.dat'
self.Vst_prima_div_mu_per_Vcn_prima_list_outfile = open(self.Vst_prima_div_mu_per_Vcn_prima_list_outfile_name, 'w')
self.r_w1_prima_div_mu_per_Vcn_list_outfile = open(self.r_w1_prima_div_mu_per_Vcn_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.Vst_prima_div_mu_per_Vcn_prima_list_outfile.write("%14.8f %14.8f" % (self.Vst_div_mu_per_Vcn_list[i], self.Vst_prima_div_mu_per_Vcn_prima_list[i]) + '\n')
self.r_w1_prima_div_mu_per_Vcn_list_outfile.write("%14.8f %14.8f" % (self.Vst_div_mu_per_Vcn_list[i], self.r_w1_prima_div_mu_per_Vcn_list[i]) + '\n')
self.Vst_prima_div_mu_per_Vcn_prima_list_outfile.close()
self.r_w1_prima_div_mu_per_Vcn_list_outfile.close()
self.create_gnuplot_scripts(self.Vst_prima_div_mu_per_Vcn_prima_list_outfile_name, self.r_w1_prima_div_mu_per_Vcn_list_outfile_name, dt)
error1, error2, error3 = self.compute_errors(self.Vst_prima_div_mu_per_Vcn_prima_list_outfile_name, self.r_w1_prima_div_mu_per_Vcn_list_outfile_name)
it_is_success = error1 < 2.0 and error2 < 2.0 and error3 < 2.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, Vst_prima_div_mu_per_Vcn_prima_list_outfile_name, r_w1_prima_div_mu_per_Vcn_list_outfile_name, dt):
gnuplot_script_name_1 = 'benchmark5_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:14][-4:6] '" + Vst_prima_div_mu_per_Vcn_prima_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark5_graph1.dat' index 0 w lp ls 1 t 'Steel',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark5_graph1.dat' index 1 w lp ls 2 t 'Polyethylene',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark5_graph1.dat' index 2 w p pt 7 ps 2 lt -1 t 'FEM'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark5_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Normalized final angular velocity'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:20][-6:0] '" + r_w1_prima_div_mu_per_Vcn_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark5_graph2.dat' index 0 w lp ls 1 t 'Steel',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark5_graph2.dat' index 1 w lp ls 2 t 'Polyethylene',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark5_graph2.dat' index 2 w p pt 7 ps 2 lt -1 t 'FEM'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)'''
def compute_errors(self, Vst_prima_div_mu_per_Vcn_prima_list_outfile_name, r_w1_prima_div_mu_per_Vcn_list_outfile_name):
lines_Chung = list(range(49, 53)); lines_DEM = list(range(11, 15)) # Sliding regime for the time being
#lines_Chung = list(range(38, 53)); lines_DEM = list(range(0, 15)) # Whole diagram
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark5_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(Vst_prima_div_mu_per_Vcn_prima_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_Vst_prima_div_mu_per_Vcn_prima_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_Vst_prima_div_mu_per_Vcn_prima_error+=abs(i-j)
final_Vst_prima_div_mu_per_Vcn_prima_error/=summation_of_Chung_data
print("Error in final Vst prima div mu per Vcn prima =", 100*final_Vst_prima_div_mu_per_Vcn_prima_error,"%")
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark5_graph2.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(r_w1_prima_div_mu_per_Vcn_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_r_w1_prima_div_mu_per_Vcn_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_r_w1_prima_div_mu_per_Vcn_error+=abs(i-j)
final_r_w1_prima_div_mu_per_Vcn_error/=summation_of_Chung_data
print("Error in final r w1 prima div mu per Vcn =", 100*final_r_w1_prima_div_mu_per_Vcn_error,"%")
error1 = 100*final_Vst_prima_div_mu_per_Vcn_prima_error
error2 = 100*final_r_w1_prima_div_mu_per_Vcn_error
error3 = 0
return error1, error2, error3
class Benchmark6:
def __init__(self):
self.number = 6
self.initial_normal_vel = -0.2
self.initial_tangential_vel = 0
self.radius = 0.1
self.special_quantity_list = []
self.beta_list = []
self.Vst_div_Vcn_list = []
self.Vst_prima_div_Vcn_prima_list = []
self.beta_list_outfile = None
self.Vst_prima_div_Vcn_prima_list_outfile = None
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
degrees = 90 / (number_of_points_in_the_graphic + 1) * iteration
        self.initial_tangential_vel = -self.initial_normal_vel * tan(degrees * pi / 180.0) # This is the tangential velocity of the contact point only, along the X axis
initial_angular_vel = -self.initial_tangential_vel / self.radius # In Y axis
for node in modelpart.Nodes:
node.SetSolutionStepValue(VELOCITY_Z, self.initial_normal_vel)
node.SetSolutionStepValue(ANGULAR_VELOCITY_Y, initial_angular_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
mu = 0.4
restitution_coeff = 0.5
for node in modelpart.Nodes:
special_quantity = -3.5 * mu * (1.0 + restitution_coeff) * self.initial_normal_vel / self.initial_tangential_vel
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_Y)
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_X)
final_normal_center_velocity = node.GetSolutionStepValue(VELOCITY_Z)
beta = -(final_tangential_center_velocity - final_angular_vel * self.radius)/ self.initial_tangential_vel
Vst_div_Vcn = -self.initial_tangential_vel / self.initial_normal_vel
Vst_prima_div_Vcn_prima = (final_tangential_center_velocity - final_angular_vel * self.radius) / final_normal_center_velocity
self.special_quantity_list.append(special_quantity)
self.beta_list.append(beta)
self.Vst_div_Vcn_list.append(Vst_div_Vcn)
self.Vst_prima_div_Vcn_prima_list.append(Vst_prima_div_Vcn_prima)
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.beta_list_outfile_name = "benchmark6_dt_" + str(dt) + '_beta_list_data.dat'
self.Vst_prima_div_Vcn_prima_list_outfile_name = "benchmark6_dt_" + str(dt) + '_Vst_prima_div_Vcn_prima_data.dat'
self.beta_list_outfile = open(self.beta_list_outfile_name, 'w')
self.Vst_prima_div_Vcn_prima_list_outfile = open(self.Vst_prima_div_Vcn_prima_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.beta_list_outfile.write("%14.8f %14.8f" % (self.special_quantity_list[i], self.beta_list[i]) + '\n')
self.Vst_prima_div_Vcn_prima_list_outfile.write("%14.8f %14.8f" % (self.Vst_div_Vcn_list[i], self.Vst_prima_div_Vcn_prima_list[i]) + '\n')
self.beta_list_outfile.close()
self.Vst_prima_div_Vcn_prima_list_outfile.close()
self.create_gnuplot_scripts(self.beta_list_outfile_name, self.Vst_prima_div_Vcn_prima_list_outfile_name, dt)
error1, error2, error3 = self.compute_errors(self.beta_list_outfile_name, self.Vst_prima_div_Vcn_prima_list_outfile_name)
it_is_success = error1 < 3.0 and error2 < 3.0 and error3 < 3.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, beta_list_outfile_name, Vst_prima_div_Vcn_prima_list_outfile_name, dt):
gnuplot_script_name_1 = 'benchmark6_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:25][-1:.6] '" + beta_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark6_graph1.dat' index 0 w lp ls 1 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark6_graph1.dat' index 1 w lp ls 2 t 'Nylon'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark6_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Tangent of incident angle'\nset ylabel 'Tangent of recoil angle'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:7][-2:8] '" + Vst_prima_div_Vcn_prima_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark6_graph2.dat' index 0 w lp ls 1 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark6_graph2.dat' index 1 w lp ls 2 t 'Nylon'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)'''
def compute_errors(self, beta_list_outfile_name, Vst_prima_div_Vcn_prima_list_outfile_name):
lines_Chung = list(range(1, 7)); lines_DEM = list(range(16, 10, -1)) # Sliding regime for the time being
#lines_Chung = list(range(1, 17)); lines_DEM = list(range(0, 16)) # Whole diagram
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark6_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(beta_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_beta_list_outfile_name_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
DEM_data.reverse()
for i, j in zip(DEM_data, Chung_data):
final_beta_list_outfile_name_error+=abs(i-j)
final_beta_list_outfile_name_error/=summation_of_Chung_data
print("Error in final beta =", 100*final_beta_list_outfile_name_error,"%")
lines_Chung = list(range(13, 17)); lines_DEM = list(range(12, 16)) # Sliding regime for the time being
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark6_graph2.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(Vst_prima_div_Vcn_prima_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_Vst_prima_div_Vcn_prima_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_Vst_prima_div_Vcn_prima_error+=abs(i-j)
final_Vst_prima_div_Vcn_prima_error/=summation_of_Chung_data
print("Error in final Vst prima div Vcn =", 100*final_Vst_prima_div_Vcn_prima_error,"%")
error1 = 100*final_beta_list_outfile_name_error
error2 = 100*final_Vst_prima_div_Vcn_prima_error
error3 = 0
return error1, error2, error3
class Benchmark7:
def __init__(self):
self.number = 7
self.initial_angular_vel = 0
self.final_tangential_center_vel_list_outfile = None
self.final_angular_vel_list_outfile = None
self.initial_angular_vel_list = []
self.final_tangential_center_vel_list = []
self.final_angular_vel_list = []
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
initial_normal_vel = 0.2
radius = 0.1
degrees = 90 / (number_of_points_in_the_graphic + 1) * iteration
        self.initial_angular_vel = initial_normal_vel / radius * tan(degrees * pi / 180.0) # Based on the tangential velocity of the contact point only
for node in modelpart.Nodes:
if node.Id == 1:
node.SetSolutionStepValue(VELOCITY_X, initial_normal_vel)
node.SetSolutionStepValue(ANGULAR_VELOCITY_Y, self.initial_angular_vel)
else:
node.SetSolutionStepValue(VELOCITY_X, -initial_normal_vel)
node.SetSolutionStepValue(ANGULAR_VELOCITY_Y, -self.initial_angular_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
for node in modelpart.Nodes:
if node.Id == 1:
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_Z)
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_Y)
self.initial_angular_vel_list.append(self.initial_angular_vel)
self.final_tangential_center_vel_list.append(final_tangential_center_velocity)
self.final_angular_vel_list.append(final_angular_vel)
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.final_tangential_center_vel_list_outfile_name = "benchmark7_dt_" + str(dt) + '_final_tangential_center_vel_list_data.dat'
self.final_angular_vel_list_outfile_name = "benchmark7_dt_" + str(dt) + '_final_angular_vel_list_data.dat'
self.final_tangential_center_vel_list_outfile = open(self.final_tangential_center_vel_list_outfile_name, 'w')
self.final_angular_vel_list_outfile = open(self.final_angular_vel_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.final_tangential_center_vel_list_outfile.write("%14.8f %14.8f" % (self.initial_angular_vel_list[i], self.final_tangential_center_vel_list[i]) + '\n')
self.final_angular_vel_list_outfile.write("%14.8f %14.8f" % (self.initial_angular_vel_list[i], self.final_angular_vel_list[i]) + '\n')
self.final_tangential_center_vel_list_outfile.close()
self.final_angular_vel_list_outfile.close()
gnuplot_script_name = 'benchmark7_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name, 'w')
self.gnuplot_outfile.write("set multiplot layout 2, 1; set grid; set bmargin 0; set format x \"\"; set ytics -5, 5; set key bottom;\
plot [0:25][-10:10] '" + self.final_tangential_center_vel_list_outfile_name + "' w lp lw 1.5 ps 2 pt 4;\
set bmargin; set tmargin 0; set format x \"%g\"; set ytics 0, 5, 20; set key top;\
plot [0:25][0:25] '" + self.final_angular_vel_list_outfile_name + "' w lp lw 1.5 lt 3 ps 2 pt 6; unset multiplot")
self.gnuplot_outfile.close()
self.create_gnuplot_scripts(self.final_tangential_center_vel_list_outfile_name, self.final_angular_vel_list_outfile_name, dt)
error1, error2, error3 = self.compute_errors(self.final_tangential_center_vel_list_outfile_name, self.final_angular_vel_list_outfile_name)
it_is_success = error1 < 1.0 and error2 < 1.0 and error3 < 1.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, final_tangential_center_vel_list_outfile_name, final_angular_vel_list_outfile_name, dt):
gnuplot_script_name_1 = 'benchmark7_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:25][-10:10] '" + final_tangential_center_vel_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark7_graph1.dat' w lp ls 1 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark7_graph1.dat' w lp ls 2 t 'Copper'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark7_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Initial angular velocity (rad/s)'\nset ylabel 'Final angular velocity (rad/s)'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:25][0:25] '" + final_angular_vel_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark7_graph2.dat' w lp ls 1 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark7_graph2.dat' w lp ls 2 t 'Copper'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)'''
def compute_errors(self, final_tangential_center_vel_list_outfile_name, final_angular_vel_list_outfile_name):
lines_Chung = []; lines_DEM = []; lines_Chung = list(range(0, 17)); lines_DEM = list(range(0, 17))
Chung_data = []; DEM_data = []
i = 0
with open('paper_data/benchmark7_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split()
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(final_tangential_center_vel_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_tangential_center_vel_error = 0
for i, j in zip(DEM_data, Chung_data):
final_tangential_center_vel_error+=abs(i-j)
print("Error in final tangential center vel =", final_tangential_center_vel_error)
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark7_graph2.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split()
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(final_angular_vel_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_angular_vel_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_angular_vel_error+=abs(i-j)
final_angular_vel_error/=summation_of_Chung_data
print("Error in final angular vel =", 100*final_angular_vel_error,"%")
error1 = 100*final_tangential_center_vel_error
error2 = 100*final_angular_vel_error
error3 = 0
return error1, error2, error3
class Benchmark8:
def __init__(self):
self.number = 8
self.initial_normal_vel = 0.2
self.initial_tangential_vel = 0
self.radius = 0.1
self.special_quantity_list = []
self.beta_list = []
self.Vst_div_Vcn_list = []
self.Vst_prima_div_Vcn_prima_list = []
self.beta_list_outfile = None
self.Vst_prima_div_Vcn_prima_list_outfile = None
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
degrees = 90 - 90 / (number_of_points_in_the_graphic + 1) * iteration
        self.initial_tangential_vel = self.initial_normal_vel * tan(degrees * pi / 180.0) # This is the tangential velocity of the contact point only
initial_angular_vel = -self.initial_tangential_vel / self.radius
for node in modelpart.Nodes:
if node.Id == 1:
node.SetSolutionStepValue(VELOCITY_X, self.initial_normal_vel)
node.SetSolutionStepValue(ANGULAR_VELOCITY_Y, initial_angular_vel)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
mu = 0.4
restitution_coeff = 0.5
for node in modelpart.Nodes:
if node.Id == 1:
special_quantity = 3.5 * mu * (1.0 + restitution_coeff) * self.initial_normal_vel / self.initial_tangential_vel
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_Y)
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_Z)
final_normal_center_velocity = node.GetSolutionStepValue(VELOCITY_X)
beta = -(final_tangential_center_velocity - final_angular_vel * self.radius)/ self.initial_tangential_vel
Vst_div_Vcn = self.initial_tangential_vel / self.initial_normal_vel
Vst_prima_div_Vcn_prima = -(final_tangential_center_velocity - final_angular_vel * self.radius) / final_normal_center_velocity
self.special_quantity_list.append(special_quantity)
self.beta_list.append(beta)
self.Vst_div_Vcn_list.append(Vst_div_Vcn)
self.Vst_prima_div_Vcn_prima_list.append(Vst_prima_div_Vcn_prima)
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.beta_list_outfile_name = 'benchmark8_dt_' + str(dt) + 's_beta_list_data.dat'
self.Vst_prima_div_Vcn_prima_list_outfile_name = 'benchmark8_dt_' + str(dt) + 's_Vst_prima_div_Vcn_prima_list_data.dat'
self.beta_list_outfile = open(self.beta_list_outfile_name, 'w')
self.Vst_prima_div_Vcn_prima_list_outfile = open(self.Vst_prima_div_Vcn_prima_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.beta_list_outfile.write("%14.8f %14.8f" % (self.special_quantity_list[i], self.beta_list[i]) + '\n')
self.Vst_prima_div_Vcn_prima_list_outfile.write("%14.8f %14.8f" % (self.Vst_div_Vcn_list[i], self.Vst_prima_div_Vcn_prima_list[i]) + '\n')
self.beta_list_outfile.close()
self.Vst_prima_div_Vcn_prima_list_outfile.close()
self.create_gnuplot_scripts(self.beta_list_outfile_name, self.Vst_prima_div_Vcn_prima_list_outfile_name, dt)
error1, error2, error3 = self.compute_errors(self.beta_list_outfile_name, self.Vst_prima_div_Vcn_prima_list_outfile_name)
it_is_success = error1 < 3.0 and error2 < 3.0 and error3 < 3.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, beta_list_outfile_name, Vst_prima_div_Vcn_prima_list_outfile_name, dt):
gnuplot_script_name_1 = 'benchmark8_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:25][-1:.6] '" + beta_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark8_graph1.dat' index 0 w lp ls 1 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark8_graph1.dat' index 1 w lp ls 2 t 'Nylon'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark8_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Tangent of incident angle'\nset ylabel 'Tangent of recoil angle'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:8][-2:8] '" + Vst_prima_div_Vcn_prima_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark8_graph2.dat' index 0 w lp ls 1 t 'Al. alloy',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark8_graph2.dat' index 1 w lp ls 2 t 'Nylon'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)'''
def compute_errors(self, beta_list_outfile_name, Vst_prima_div_Vcn_prima_list_outfile_name):
lines_Chung = []; lines_DEM = []; lines_Chung = list(range(1, 7)); lines_DEM = list(range(0, 6)) # Sliding regime for the time being
#lines_Chung = []; lines_DEM = []; lines_Chung = list(range(1, 18)); lines_DEM = list(range(0, 17)) # Whole diagram
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark8_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(beta_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_beta_list_outfile_name_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
final_beta_list_outfile_name_error+=abs(i-j)
final_beta_list_outfile_name_error/=summation_of_Chung_data
print("Error in final beta =", 100*final_beta_list_outfile_name_error,"%")
lines_Chung = []; lines_DEM = []; lines_DEM = list(range(4, 0, -1)); lines_Chung = list(range(13, 17)) # Sliding regime for the time being
#lines_Chung = list(range(1, 17)); lines_DEM = list(range(0, 16)) # Whole diagram
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark8_graph2.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split(',')
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(Vst_prima_div_Vcn_prima_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_Vst_prima_div_Vcn_prima_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
DEM_data.reverse()
for i, j in zip(DEM_data, Chung_data):
final_Vst_prima_div_Vcn_prima_error+=abs(i-j)
final_Vst_prima_div_Vcn_prima_error/=summation_of_Chung_data
print("Error in final Vst prima div Vcn =", 100*final_Vst_prima_div_Vcn_prima_error,"%")
error1 = 100*final_beta_list_outfile_name_error
error2 = 100*final_Vst_prima_div_Vcn_prima_error
error3 = 0
return error1, error2, error3
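# Benchmark 9: normal impact with a sweep of the contact COEFFICIENT_OF_RESTITUTION
# from 0 to 1. The measured restitution coefficient (-final_velocity / initial_velocity
# of node 1) is written to a .dat file and compared against the reference curves
# ('Al. oxide' and 'Cast iron') in 'paper_data/benchmark9_graph1.dat'.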
class Benchmark9:
def __init__(self):
self.number = 9
self.initial_normal_vel = 200.0
self.restitution_numbers_list = []
self.generated_data = None
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
if number_of_points_in_the_graphic == 1:
number = 0
else:
number = 1.0/(number_of_points_in_the_graphic-1) * (iteration - 1)
for node in modelpart.Nodes:
if node.Id == 1:
node.SetSolutionStepValue(VELOCITY_X, self.initial_normal_vel)
node.SetSolutionStepValue(VELOCITY_Z, 0.0)
modelpart.GetProperties()[1][COEFFICIENT_OF_RESTITUTION] = number
else:
node.SetSolutionStepValue(VELOCITY_X, -self.initial_normal_vel)
node.SetSolutionStepValue(VELOCITY_Z, 0.0)
modelpart.GetProperties()[1][COEFFICIENT_OF_RESTITUTION] = number
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
for node in modelpart.Nodes:
if node.Id == 1:
final_vel = node.GetSolutionStepValue(VELOCITY_X)
restitution_coefficient = -final_vel / self.initial_normal_vel
self.restitution_numbers_list.append(restitution_coefficient)
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.output_filename = "benchmark9_dt_" + str(dt) + '_restitution_numbers_vector_list_data.dat'
self.generated_data = open(self.output_filename, 'w')
for i in range(0, number_of_points_in_the_graphic):
if number_of_points_in_the_graphic == 1:
first_col = 0
else:
first_col = 1/(number_of_points_in_the_graphic-1) * i
self.generated_data.write("%6.4f %11.8f" % (first_col, self.restitution_numbers_list[i]) + '\n')
self.generated_data.close()
gnuplot_script_name = 'benchmark9_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name, 'w')
self.gnuplot_outfile.write("set grid; plot '" + self.output_filename + "' u 1:2 w lp lt 3 lw 1.5 ps 2 pt 4, '"\
+ self.output_filename + "' u 1:3 w lp lt 2 lw 1.5 ps 2 pt 6")
self.gnuplot_outfile.close()
self.create_gnuplot_scripts(self.output_filename, dt)
error1, error2, error3 = self.compute_errors(self.output_filename)
it_is_success = error1 < 1.0 and error2 < 1.0 and error3 < 1.0
error_measure = error1 + error2 + error3
PrintResultsMessage(self.number, it_is_success, error_measure, elapsed_time)
def create_gnuplot_scripts(self, output_filename, dt):
gnuplot_script_name_1 = 'benchmark9_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Coefficient of restitution'\nset ylabel 'Damping ratio'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:1][0:1] '" + output_filename + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark9_graph1.dat' w lp ls 1 t 'Al. oxide',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark9_graph1.dat' w lp ls 2 t 'Cast iron'\n")
self.gnuplot_outfile.close()
#print_gnuplot_files_on_screen(gnuplot_script_name_1)
def compute_errors(self, output_filename):
lines_Chung = lines_DEM = list(range(0, 6));
Chung_data = []; DEM_data = []; summation_of_Chung_data = 0
i = 0
with open('paper_data/benchmark9_graph1.dat') as inf:
for line in inf:
if i in lines_Chung:
parts = line.split()
Chung_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
generated_data_error = 0
for j in Chung_data:
summation_of_Chung_data+=abs(j)
for i, j in zip(DEM_data, Chung_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_Chung_data
print("Error in restitution numbers =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
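# Benchmarks 10 and 11 reproduce the Thornton oblique-impact tests: the impact angle is
# swept and, per call, the coefficient of restitution is set to 0.25, 0.50, 0.75 or 0.90.
# Three normalized rebound quantities are compared against the digitized curves in
# 'paper_data/bench_10_*_e_<coeff>.dat'. Benchmark 10 uses the linear contact law and
# Benchmark 11 the Hertzian one.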
class Benchmark10: ########## LINEAR THORNTON
def __init__(self):
self.number = 10
self.initial_normal_vel = -5.0
self.initial_tangential_vel = 0
self.radius = 0.025
self.normalized_impact_angle_list = []
self.normalized_rebound_tangential_surface_vel_list = []
self.normalized_rebound_angular_velocity_list = []
self.tangential_coefficient_of_restitution_list = []
self.normalized_rebound_tangential_surface_vel_list_outfile = None
self.normalized_rebound_angular_velocity_list_outfile = None
self.tangential_coefficient_of_restitution_list_outfile = None
self.coeff_of_restitution = -1.0
self.coeff_of_rest_string = None
self.lines_Thornton = []
self.lines_DEM = []
self.degrees = 0
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
if iteration == 1:
self.degrees = 1
else:
self.degrees = 50 * (iteration - 1)/number_of_points_in_the_graphic
if coeff_of_restitution_iteration==1:
self.coeff_of_restitution=0.25
self.coeff_of_rest_string='025'
self.lines_Thornton = [12, 13, 15, 16, 18, 19]
self.lines_DEM = [0, 1, 3, 4, 5, 6]
elif coeff_of_restitution_iteration==2:
self.coeff_of_restitution=0.50
self.coeff_of_rest_string='050'
self.lines_Thornton = [14, 15, 17, 18, 20, 22, 23]
self.lines_DEM = [0, 1, 3, 4, 5, 6, 7]
elif coeff_of_restitution_iteration==3:
self.coeff_of_restitution=0.75
self.coeff_of_rest_string='075'
self.lines_Thornton = [14, 15, 17, 18, 19, 22, 23, 24]
self.lines_DEM = [0, 1, 3, 4, 5, 6, 7, 8]
else:
self.coeff_of_restitution=0.90
self.coeff_of_rest_string='090'
self.lines_Thornton = [13, 14, 16, 17, 18, 21, 22, 23]
self.lines_DEM = [0, 1, 3, 4, 5, 6, 7, 8]
self.initial_tangential_vel = -self.initial_normal_vel * tan(self.degrees * pi / 180.0)
for node in modelpart.Nodes:
node.SetSolutionStepValue(VELOCITY_Y, self.initial_tangential_vel)
node.SetSolutionStepValue(VELOCITY_Z, self.initial_normal_vel)
modelpart.GetProperties()[1][COEFFICIENT_OF_RESTITUTION] = self.coeff_of_restitution
print(self.coeff_of_restitution)
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
mu = 0.1
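# Normalization used below (theta: impact angle, e: restitution coefficient, R: particle
# radius, v_n: initial normal velocity, v_t and omega: final tangential and angular velocities):
#   normalized impact angle            = 2*tan(theta) / (mu*(1+e))
#   normalized rebound surface vel.    = -2*(v_t + omega*R) / (v_n*mu*(1+e))
#   normalized rebound angular vel.    = -2*R*omega / (v_n*mu*(1+e))
#   tangential coeff. of restitution   = 5/7 + (2/7)*(rebound surface vel.)/(impact angle)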
for node in modelpart.Nodes:
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_X)
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_Y)
normalized_impact_angle = 2.0 * tan(self.degrees * pi / 180.0) / (mu * (1 + self.coeff_of_restitution))
normalized_rebound_tangential_surface_vel = -2.0 * (final_tangential_center_velocity + final_angular_vel * self.radius) / (self.initial_normal_vel * mu * (1 + self.coeff_of_restitution))
normalized_rebound_angular_velocity = -2.0 * self.radius * final_angular_vel / (self.initial_normal_vel * mu * (1 + self.coeff_of_restitution))
tangential_coefficient_of_restitution = 5.0/7.0 + 2.0 * normalized_rebound_tangential_surface_vel / (7.0 * normalized_impact_angle)
self.normalized_impact_angle_list.append(normalized_impact_angle)
self.normalized_rebound_tangential_surface_vel_list.append(normalized_rebound_tangential_surface_vel)
self.normalized_rebound_angular_velocity_list.append(normalized_rebound_angular_velocity)
self.tangential_coefficient_of_restitution_list.append(tangential_coefficient_of_restitution)
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.normalized_rebound_tangential_surface_vel_list_outfile_name = "benchmark10_dt_" + str(dt) + '_normalized_rebound_tangential_surface_vel_list_data.dat'
self.normalized_rebound_angular_velocity_list_outfile_name = "benchmark10_dt_" + str(dt) + '_normalized_rebound_angular_velocity_list_data.dat'
self.tangential_coefficient_of_restitution_list_outfile_name = "benchmark10_dt_" + str(dt) + '_tangential_coefficient_of_restitution_list_data.dat'
self.normalized_rebound_tangential_surface_vel_list_outfile = open(self.normalized_rebound_tangential_surface_vel_list_outfile_name, 'w')
self.normalized_rebound_angular_velocity_list_outfile = open(self.normalized_rebound_angular_velocity_list_outfile_name, 'w')
self.tangential_coefficient_of_restitution_list_outfile = open(self.tangential_coefficient_of_restitution_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.normalized_rebound_tangential_surface_vel_list_outfile.write("%14.8f %14.8f" % (self.normalized_impact_angle_list[i], self.normalized_rebound_tangential_surface_vel_list[i]) + '\n')
self.normalized_rebound_angular_velocity_list_outfile.write("%14.8f %14.8f" % (self.normalized_impact_angle_list[i], self.normalized_rebound_angular_velocity_list[i]) + '\n')
self.tangential_coefficient_of_restitution_list_outfile.write("%14.8f %14.8f" % (self.normalized_impact_angle_list[i], self.tangential_coefficient_of_restitution_list[i]) + '\n')
self.normalized_rebound_tangential_surface_vel_list_outfile.close()
self.normalized_rebound_angular_velocity_list_outfile.close()
self.tangential_coefficient_of_restitution_list_outfile.close()
self.create_gnuplot_scripts(self.normalized_rebound_tangential_surface_vel_list_outfile_name,
self.normalized_rebound_angular_velocity_list_outfile_name,
self.tangential_coefficient_of_restitution_list_outfile_name,
self.coeff_of_rest_string, dt)
error1, error2, error3 = self.compute_errors(self.normalized_rebound_tangential_surface_vel_list_outfile_name,
self.normalized_rebound_angular_velocity_list_outfile_name,
self.tangential_coefficient_of_restitution_list_outfile_name)
coeff_of_rest = '%.2f' % self.coeff_of_restitution
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
if (coeff_of_rest=='0.25'):
error_file.write("\n===== THORNTON PAPER TESTS. FULL REGIME. LINEAR LAW =====\n\n")
error_file.write("DEM Benchmark 10:")
if (error1 < 5.0 and error2 < 5.0 and error3 < 5.0):
error_file.write(" OK!........ Test 10 (e=" + coeff_of_rest + ") SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 10 (e=" + coeff_of_rest + ") FAILED\n")
error_file.close()
self.normalized_impact_angle_list = []
self.normalized_rebound_tangential_surface_vel_list = []
self.normalized_rebound_angular_velocity_list = []
self.tangential_coefficient_of_restitution_list = []
def create_gnuplot_scripts(self, normalized_rebound_tangential_surface_vel_list_outfile_name,
normalized_rebound_angular_velocity_list_outfile_name,
tangential_coefficient_of_restitution_list_outfile_name,
coeff_of_rest_string, dt):
gnuplot_script_name_1 = 'benchmark10_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Normalized rebound tangential surface velocity'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:10][-2:3] '" + normalized_rebound_tangential_surface_vel_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/bench_10_norm_reb_tang_e_" + coeff_of_rest_string + ".dat' index 1 w lp ls 1 t 'Paper data'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark10_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Normalized final angular velocity'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:14][-6:0] '" + normalized_rebound_angular_velocity_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/bench_10_norm_reb_ang_vel_e_" + coeff_of_rest_string + ".dat' index 1 w lp ls 1 t 'Paper data'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_3 = 'benchmark10_comparison_3_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_3, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Tangential coefficient of restitution'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:10][0.5:1.0] '" + tangential_coefficient_of_restitution_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/bench_10_tang_coeff_rest_e_" + coeff_of_rest_string + ".dat' index 1 w lp ls 1 t 'Paper data'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)
print_gnuplot_files_on_screen(gnuplot_script_name_3)
'''
def compute_errors(self, normalized_rebound_tangential_surface_vel_list_outfile_name,
normalized_rebound_angular_velocity_list_outfile_name,
tangential_coefficient_of_restitution_list_outfile_name):
#
Thornton_data = []; DEM_data = []; summation_of_Thornton_data = 0
i = 0
path = "paper_data/bench_10_norm_reb_tang_e_" + self.coeff_of_rest_string + ".dat"
with open(path) as inf:
for line in inf:
if i in self.lines_Thornton:
parts = line.split(',')
Thornton_data.append(float(parts[1]))
i+=1
i = 0
with open(normalized_rebound_tangential_surface_vel_list_outfile_name) as inf:
for line in inf:
if i in self.lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_normalized_rebound_tangential_surface_vel_error = 0
for j in Thornton_data:
summation_of_Thornton_data+=abs(j)
for i, j in zip(DEM_data, Thornton_data):
final_normalized_rebound_tangential_surface_vel_error+=abs(i-j)
final_normalized_rebound_tangential_surface_vel_error/=summation_of_Thornton_data
print("Error in normalized rebound tangential surface velocity =", 100*final_normalized_rebound_tangential_surface_vel_error,"%")
#
Thornton_data = []; DEM_data = []; summation_of_Thornton_data = 0
i = 0
path = "paper_data/bench_10_norm_reb_ang_vel_e_" + self.coeff_of_rest_string + ".dat"
with open(path) as inf:
for line in inf:
if i in self.lines_Thornton:
parts = line.split(',')
Thornton_data.append(float(parts[1]))
i+=1
i = 0
with open(normalized_rebound_angular_velocity_list_outfile_name) as inf:
for line in inf:
if i in self.lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_normalized_rebound_angular_velocity_error = 0
for j in Thornton_data:
summation_of_Thornton_data+=abs(j)
for i, j in zip(DEM_data, Thornton_data):
final_normalized_rebound_angular_velocity_error+=abs(i-j)
final_normalized_rebound_angular_velocity_error/=summation_of_Thornton_data
print("Error in normalized rebound angular velocity =", 100*final_normalized_rebound_angular_velocity_error,"%")
#
Thornton_data = []; DEM_data = []; summation_of_Thornton_data = 0
i = 0
path = "paper_data/bench_10_tang_coeff_rest_e_" + self.coeff_of_rest_string + ".dat"
with open(path) as inf:
for line in inf:
if i in self.lines_Thornton:
parts = line.split(',')
Thornton_data.append(float(parts[1]))
i+=1
i = 0
with open(tangential_coefficient_of_restitution_list_outfile_name) as inf:
for line in inf:
if i in self.lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_tangential_coefficient_of_restitution_error = 0
for j in Thornton_data:
summation_of_Thornton_data+=abs(j)
for i, j in zip(DEM_data, Thornton_data):
final_tangential_coefficient_of_restitution_error+=abs(i-j)
final_tangential_coefficient_of_restitution_error/=summation_of_Thornton_data
print("Error in final tangential coefficient of restitution =", 100*final_tangential_coefficient_of_restitution_error,"%")
#
error1 = 100*final_normalized_rebound_tangential_surface_vel_error
error2 = 100*final_normalized_rebound_angular_velocity_error
error3 = 100*final_tangential_coefficient_of_restitution_error
return error1, error2, error3
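# Note: compute_errors above (like every compute_errors in this file) reports the
# normalized L1 difference, 100 * sum(|DEM_i - ref_i|) / sum(|ref_i|), over the selected
# lines of each file. A hypothetical standalone helper (an illustration only; it is not
# defined or called anywhere in this script) could look like:
#
#     def normalized_l1_error_percent(dem_values, ref_values):
#         """Return 100 * sum(|dem - ref|) / sum(|ref|)."""
#         return 100.0 * sum(abs(d - r) for d, r in zip(dem_values, ref_values)) / sum(abs(r) for r in ref_values)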
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
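# Benchmark 11: same oblique-impact setup and post-processing as Benchmark 10, but run
# with the Hertzian contact law; the gnuplot scripts below plot dataset index 0 of the
# shared 'paper_data/bench_10_*' files instead of index 1.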
class Benchmark11: ########## HERTZIAN THORNTON
def __init__(self):
self.number = 11
self.initial_normal_vel = -5.0
self.initial_tangential_vel = 0
self.radius = 0.025
self.normalized_impact_angle_list = []
self.normalized_rebound_tangential_surface_vel_list = []
self.normalized_rebound_angular_velocity_list = []
self.tangential_coefficient_of_restitution_list = []
self.normalized_rebound_tangential_surface_vel_list_outfile = None
self.normalized_rebound_angular_velocity_list_outfile = None
self.tangential_coefficient_of_restitution_list_outfile = None
self.coeff_of_restitution = -1.0
self.coeff_of_rest_string = None
self.lines_Thornton = []
self.lines_DEM = []
self.degrees = 0
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
if iteration == 1:
self.degrees = 1
else:
self.degrees = 50 * (iteration - 1)/number_of_points_in_the_graphic
if coeff_of_restitution_iteration==1:
self.coeff_of_restitution=0.25
self.coeff_of_rest_string='025'
self.lines_Thornton = [1, 2, 4, 5, 7, 8]
self.lines_DEM = [0, 1, 3, 4, 5, 6]
elif coeff_of_restitution_iteration==2:
self.coeff_of_restitution=0.50
self.coeff_of_rest_string='050'
self.lines_Thornton = [1, 2, 4, 5, 7, 9, 10]
self.lines_DEM = [0, 1, 3, 4, 5, 6, 7]
elif coeff_of_restitution_iteration==3:
self.coeff_of_restitution=0.75
self.coeff_of_rest_string='075'
self.lines_Thornton = [1, 2, 4, 5, 6, 8, 9, 10]
self.lines_DEM = [0, 1, 3, 4, 5, 6, 7, 8]
else:
self.coeff_of_restitution=0.90
self.coeff_of_rest_string='090'
#self.lines_Thornton = [1, 2, 4, 5, 6, 8, 9]
#self.lines_DEM = [0, 1, 3, 4, 5, 7, 8]
self.lines_Thornton = [1, 2, 4, 5, 6, 7, 8, 9]
self.lines_DEM = [0, 1, 3, 4, 5, 6, 7, 8]
self.initial_tangential_vel = -self.initial_normal_vel * tan(self.degrees * pi / 180.0)
for node in modelpart.Nodes:
node.SetSolutionStepValue(VELOCITY_Y, self.initial_tangential_vel)
node.SetSolutionStepValue(VELOCITY_Z, self.initial_normal_vel)
modelpart.GetProperties()[1][COEFFICIENT_OF_RESTITUTION] = self.coeff_of_restitution
print(self.coeff_of_restitution)
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
mu = 0.1
for node in modelpart.Nodes:
final_angular_vel = node.GetSolutionStepValue(ANGULAR_VELOCITY_X)
final_tangential_center_velocity = node.GetSolutionStepValue(VELOCITY_Y)
normalized_impact_angle = 2.0 * tan(self.degrees * pi / 180.0) / (mu * (1 + self.coeff_of_restitution))
normalized_rebound_tangential_surface_vel = -2.0 * (final_tangential_center_velocity + final_angular_vel * self.radius) / (self.initial_normal_vel * mu * (1 + self.coeff_of_restitution))
normalized_rebound_angular_velocity = -2.0 * self.radius * final_angular_vel / (self.initial_normal_vel * mu * (1 + self.coeff_of_restitution))
tangential_coefficient_of_restitution = 5.0/7.0 + 2.0 * normalized_rebound_tangential_surface_vel / (7.0 * normalized_impact_angle)
self.normalized_impact_angle_list.append(normalized_impact_angle)
self.normalized_rebound_tangential_surface_vel_list.append(normalized_rebound_tangential_surface_vel)
self.normalized_rebound_angular_velocity_list.append(normalized_rebound_angular_velocity)
self.tangential_coefficient_of_restitution_list.append(tangential_coefficient_of_restitution)
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
self.normalized_rebound_tangential_surface_vel_list_outfile_name = "benchmark11_dt_" + str(dt) + '_normalized_rebound_tangential_surface_vel_list_data.dat'
self.normalized_rebound_angular_velocity_list_outfile_name = "benchmark11_dt_" + str(dt) + '_normalized_rebound_angular_velocity_list_data.dat'
self.tangential_coefficient_of_restitution_list_outfile_name = "benchmark11_dt_" + str(dt) + '_tangential_coefficient_of_restitution_list_data.dat'
self.normalized_rebound_tangential_surface_vel_list_outfile = open(self.normalized_rebound_tangential_surface_vel_list_outfile_name, 'w')
self.normalized_rebound_angular_velocity_list_outfile = open(self.normalized_rebound_angular_velocity_list_outfile_name, 'w')
self.tangential_coefficient_of_restitution_list_outfile = open(self.tangential_coefficient_of_restitution_list_outfile_name, 'w')
for i in range(0, number_of_points_in_the_graphic):
self.normalized_rebound_tangential_surface_vel_list_outfile.write("%14.8f %14.8f" % (self.normalized_impact_angle_list[i], self.normalized_rebound_tangential_surface_vel_list[i]) + '\n')
self.normalized_rebound_angular_velocity_list_outfile.write("%14.8f %14.8f" % (self.normalized_impact_angle_list[i], self.normalized_rebound_angular_velocity_list[i]) + '\n')
self.tangential_coefficient_of_restitution_list_outfile.write("%14.8f %14.8f" % (self.normalized_impact_angle_list[i], self.tangential_coefficient_of_restitution_list[i]) + '\n')
self.normalized_rebound_tangential_surface_vel_list_outfile.close()
self.normalized_rebound_angular_velocity_list_outfile.close()
self.tangential_coefficient_of_restitution_list_outfile.close()
self.create_gnuplot_scripts(self.normalized_rebound_tangential_surface_vel_list_outfile_name,
self.normalized_rebound_angular_velocity_list_outfile_name,
self.tangential_coefficient_of_restitution_list_outfile_name,
self.coeff_of_rest_string, dt)
error1, error2, error3 = self.compute_errors(self.normalized_rebound_tangential_surface_vel_list_outfile_name,
self.normalized_rebound_angular_velocity_list_outfile_name,
self.tangential_coefficient_of_restitution_list_outfile_name)
coeff_of_rest = '%.2f' % self.coeff_of_restitution
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
if (coeff_of_rest=='0.25'):
error_file.write("\n==== THORNTON PAPER TESTS. FULL REGIME. HERTZIAN LAW ====\n\n")
error_file.write("DEM Benchmark 11:")
if (error1 < 6.0 and error2 < 6.0 and error3 < 6.0):
error_file.write(" OK!........ Test 11 (e=" + coeff_of_rest + ") SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 11 (e=" + coeff_of_rest + ") FAILED\n")
error_file.close()
self.normalized_impact_angle_list = []
self.normalized_rebound_tangential_surface_vel_list = []
self.normalized_rebound_angular_velocity_list = []
self.tangential_coefficient_of_restitution_list = []
def create_gnuplot_scripts(self, normalized_rebound_tangential_surface_vel_list_outfile_name,
normalized_rebound_angular_velocity_list_outfile_name,
tangential_coefficient_of_restitution_list_outfile_name,
coeff_of_rest_string, dt):
gnuplot_script_name_1 = 'benchmark11_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Normalized rebound tangential surface velocity'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:10][-2:3] '" + normalized_rebound_tangential_surface_vel_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/bench_10_norm_reb_tang_e_" + coeff_of_rest_string + ".dat' index 0 w lp ls 1 t 'Paper data'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_2 = 'benchmark11_comparison_2_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_2, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Normalized final angular velocity'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:14][-6:0] '" + normalized_rebound_angular_velocity_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/bench_10_norm_reb_ang_vel_e_" + coeff_of_rest_string + ".dat' index 0 w lp ls 1 t 'Paper data'\n")
self.gnuplot_outfile.close()
gnuplot_script_name_3 = 'benchmark11_comparison_3_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_3, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Normalized incident angle'\nset ylabel 'Tangential coefficient of restitution'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:10][0.5:1.0] '" + tangential_coefficient_of_restitution_list_outfile_name + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/bench_10_tang_coeff_rest_e_" + coeff_of_rest_string + ".dat' index 0 w lp ls 1 t 'Paper data'\n")
self.gnuplot_outfile.close()
'''
print_gnuplot_files_on_screen(gnuplot_script_name_1)
print_gnuplot_files_on_screen(gnuplot_script_name_2)
print_gnuplot_files_on_screen(gnuplot_script_name_3)
'''
def compute_errors(self, normalized_rebound_tangential_surface_vel_list_outfile_name,
normalized_rebound_angular_velocity_list_outfile_name,
tangential_coefficient_of_restitution_list_outfile_name):
#
Thornton_data = []; DEM_data = []; summation_of_Thornton_data = 0
i = 0
path = "paper_data/bench_10_norm_reb_tang_e_" + self.coeff_of_rest_string + ".dat"
with open(path) as inf:
for line in inf:
if i in self.lines_Thornton:
parts = line.split(',')
Thornton_data.append(float(parts[1]))
i+=1
i = 0
with open(normalized_rebound_tangential_surface_vel_list_outfile_name) as inf:
for line in inf:
if i in self.lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_normalized_rebound_tangential_surface_vel_error = 0
for j in Thornton_data:
summation_of_Thornton_data+=abs(j)
for i, j in zip(DEM_data, Thornton_data):
final_normalized_rebound_tangential_surface_vel_error+=abs(i-j)
final_normalized_rebound_tangential_surface_vel_error/=summation_of_Thornton_data
print("Error in normalized rebound tangential surface velocity =", 100*final_normalized_rebound_tangential_surface_vel_error,"%")
#
Thornton_data = []; DEM_data = []; summation_of_Thornton_data = 0
i = 0
path = "paper_data/bench_10_norm_reb_ang_vel_e_" + self.coeff_of_rest_string + ".dat"
with open(path) as inf:
for line in inf:
if i in self.lines_Thornton:
parts = line.split(',')
Thornton_data.append(float(parts[1]))
i+=1
i = 0
with open(normalized_rebound_angular_velocity_list_outfile_name) as inf:
for line in inf:
if i in self.lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_normalized_rebound_angular_velocity_error = 0
for j in Thornton_data:
summation_of_Thornton_data+=abs(j)
for i, j in zip(DEM_data, Thornton_data):
final_normalized_rebound_angular_velocity_error+=abs(i-j)
final_normalized_rebound_angular_velocity_error/=summation_of_Thornton_data
print("Error in normalized rebound angular velocity =", 100*final_normalized_rebound_angular_velocity_error,"%")
#
Thornton_data = []; DEM_data = []; summation_of_Thornton_data = 0
i = 0
path = "paper_data/bench_10_tang_coeff_rest_e_" + self.coeff_of_rest_string + ".dat"
with open(path) as inf:
for line in inf:
if i in self.lines_Thornton:
parts = line.split(',')
Thornton_data.append(float(parts[1]))
i+=1
i = 0
with open(tangential_coefficient_of_restitution_list_outfile_name) as inf:
for line in inf:
if i in self.lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
final_tangential_coefficient_of_restitution_error = 0
for j in Thornton_data:
summation_of_Thornton_data+=abs(j)
for i, j in zip(DEM_data, Thornton_data):
final_tangential_coefficient_of_restitution_error+=abs(i-j)
final_tangential_coefficient_of_restitution_error/=summation_of_Thornton_data
print("Error in final tangential coefficient of restitution =", 100*final_tangential_coefficient_of_restitution_error,"%")
#
error1 = 100*final_normalized_rebound_tangential_surface_vel_error
error2 = 100*final_normalized_rebound_angular_velocity_error
error3 = 100*final_tangential_coefficient_of_restitution_error
return error1, error2, error3
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
pass
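# Benchmark 12 (Wensrich paper test, rolling friction): during the run it logs the
# Z angular velocity of particle 1 versus time, and at the end compares that curve
# against 'paper_data/benchmark<benchmark number>_graph.dat' with the normalized L1 metric.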
class Benchmark12: ########## ROLLING FRICTION
def __init__(self):
self.number = 12
self.balls_graph_counter = 1
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
total_angular_velocity_z = 0.0
for node in modelpart.Nodes:
if node.Id == 1:
angular_velocity_z = node.GetSolutionStepValue(ANGULAR_VELOCITY_Z)
total_angular_velocity_z += angular_velocity_z
del node
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_angular_velocity_z).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("\n\n")
error_file.write("==== WENSRICH PAPER TEST. ROLLING FRICTION ====\n\n")
error_file.write("DEM Benchmark 12:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 12 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 12 FAILED\n")
error_file.close()
def compute_errors(self, output_filename):
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/benchmark' + str(sys.argv[1]) + '_graph.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1]))
i+=1
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
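# Benchmarks 13, 14 and 15 (DE/FE contact against a facet, an edge and a vertex): they log
# VELOCITY_X and VELOCITY_Z of particle 1 over time and check that both accumulated
# components stay at zero, i.e. that the contact does not introduce spurious velocity in
# those directions.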
class Benchmark13: ########## DEM-FEM Facet
def __init__(self):
self.number = 13
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.velocity_list_outfile_name, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
total_velocity_x = 0.0
total_velocity_z = 0.0
for node in modelpart.Nodes:
if node.Id == 1:
velocity_x = node.GetSolutionStepValue(VELOCITY_X)
velocity_z = node.GetSolutionStepValue(VELOCITY_Z)
total_velocity_x += velocity_x
total_velocity_z += velocity_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_x).rjust(13)+" "+str("%.6g"%total_velocity_z).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("\n\n")
error_file.write("======== DE/FE CONTACT BENCHMARKS ==========\n\n")
error_file.write("DEM Benchmark 13:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 13 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 13 FAILED\n")
error_file.close()
def compute_errors(self, velocity_list_outfile_name): #FINALIZATION STEP
lines_DEM = list(range(0, 200));
total_velocity_x = 0.0; total_velocity_z = 0.0
i = 0
with open(velocity_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
total_velocity_x += float(parts[1])
total_velocity_z += float(parts[2])
i+=1
if total_velocity_x > 0.0: #VELOCITY_X should be 0 always
error1 = 100
else:
error1 = 0
if total_velocity_z > 0.0: #VELOCITY_Z should be 0 always
error2 = 100
else:
error2 = 0
error3 = 0
print("Error in velocity X =", error1,"%")
print("Error in velocity Z =", error2,"%")
return error1, error2, error3
class Benchmark14: ########## DEM-FEM Edge
def __init__(self):
self.number = 14
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.velocity_list_outfile_name, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
total_velocity_x = 0.0
total_velocity_z = 0.0
for node in modelpart.Nodes:
if node.Id == 1:
velocity_x = node.GetSolutionStepValue(VELOCITY_X)
velocity_z = node.GetSolutionStepValue(VELOCITY_Z)
total_velocity_x += velocity_x
total_velocity_z += velocity_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_x).rjust(13)+" "+str("%.6g"%total_velocity_z).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 14:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 14 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 14 FAILED\n")
error_file.close()
def compute_errors(self, velocity_list_outfile_name): #FINALIZATION STEP
lines_DEM = list(range(0, 200));
total_velocity_x = 0.0; total_velocity_z = 0.0
i = 0
with open(velocity_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
total_velocity_x += float(parts[1])
total_velocity_z += float(parts[2])
i+=1
if total_velocity_x > 0.0: #VELOCITY_X should be 0 always
error1 = 100
else:
error1 = 0
if total_velocity_z > 0.0: #VELOCITY_Z should be 0 always
error2 = 100
else:
error2 = 0
error3 = 0
print("Error in velocity X =", error1,"%")
print("Error in velocity Z =", error2,"%")
return error1, error2, error3
class Benchmark15: ########## DEM-FEM Vertex
def __init__(self):
self.number = 15
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.velocity_list_outfile_name, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
total_velocity_x = 0.0
total_velocity_z = 0.0
for node in modelpart.Nodes:
if node.Id == 1:
velocity_x = node.GetSolutionStepValue(VELOCITY_X)
velocity_z = node.GetSolutionStepValue(VELOCITY_Z)
total_velocity_x += velocity_x
total_velocity_z += velocity_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_x).rjust(13)+" "+str("%.6g"%total_velocity_z).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 15:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 15 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 15 FAILED\n")
error_file.close()
def compute_errors(self, velocity_list_outfile_name): #FINALIZATION STEP
lines_DEM = list(range(0, 200));
total_velocity_x = 0.0; total_velocity_z = 0.0
i = 0
with open(velocity_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
total_velocity_x += float(parts[1])
total_velocity_z += float(parts[2])
i+=1
if total_velocity_x > 0.0: #VELOCITY_X should be 0 always
error1 = 100
else:
error1 = 0
if total_velocity_z > 0.0: #VELOCITY_Z should be 0 always
error2 = 100
else:
error2 = 0
error3 = 0
print("Error in velocity X =", error1,"%")
print("Error in velocity Z =", error2,"%")
return error1, error2, error3
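# Benchmark 16 (DE/FE grid): logs VELOCITY_Y of particles 1, 2 and 3 versus time and
# compares each curve against the corresponding column of
# 'paper_data/benchmark<benchmark number>_graph.dat' with the normalized L1 metric.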
class Benchmark16: ########## DEM-FEM Grid
def __init__(self):
self.number = 16
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.velocity_list_outfile_name, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
total_velocity_1 = 0.0
total_velocity_2 = 0.0
total_velocity_3 = 0.0
for node in modelpart.Nodes:
if node.Id == 1:
velocity_1 = node.GetSolutionStepValue(VELOCITY_Y)
total_velocity_1 += velocity_1
if node.Id == 2:
velocity_2 = node.GetSolutionStepValue(VELOCITY_Y)
total_velocity_2 += velocity_2
if node.Id == 3:
velocity_3 = node.GetSolutionStepValue(VELOCITY_Y)
total_velocity_3 += velocity_3
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_1).rjust(13)+" "+str("%.6g"%total_velocity_2).rjust(13)+" "+str("%.6g"%total_velocity_3).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 16:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 16 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 16 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 250));
ref_data1 = []; ref_data2 = []; ref_data3 = []; DEM_data1 = []; DEM_data2 = []; DEM_data3 = []; summation_of_ref_data1 = 0; summation_of_ref_data2 = 0; summation_of_ref_data3 = 0
times = []
i = 0
with open('paper_data/benchmark' + str(sys.argv[1]) + '_graph.dat') as inf: #with open('paper_data/reference_graph_benchmark12.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
times.append(float(parts[0]))
ref_data1.append(float(parts[1]))
ref_data2.append(float(parts[2]))
ref_data3.append(float(parts[3]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data1.append(float(parts[1]))
DEM_data2.append(float(parts[2]))
DEM_data3.append(float(parts[3]))
i+=1
final_velocity_1_error = 0
final_velocity_2_error = 0
final_velocity_3_error = 0
for j in ref_data1:
summation_of_ref_data1+=abs(j)
for k in ref_data2:
summation_of_ref_data2+=abs(k)
for l in ref_data3:
summation_of_ref_data3+=abs(l)
for i, j in zip(DEM_data1, ref_data1):
final_velocity_1_error+=abs(i-j)
final_velocity_1_error/=summation_of_ref_data1
for k, l in zip(DEM_data2, ref_data2):
final_velocity_2_error+=abs(k-l)
final_velocity_2_error/=summation_of_ref_data2
for m, n in zip(DEM_data3, ref_data3):
final_velocity_3_error+=abs(m-n)
final_velocity_3_error/=summation_of_ref_data3
#for t, v1,v2,v3 in zip(times, DEM_data1, DEM_data2, DEM_data3):
# print(t, v1, v2, v3)
print("Error in velocity sphere 1 =", 100*final_velocity_1_error,"%")
print("Error in velocity sphere 2 =", 100*final_velocity_2_error,"%")
print("Error in velocity sphere 3 =", 100*final_velocity_3_error,"%")
error1 = 100*final_velocity_1_error
error2 = 100*final_velocity_2_error
error3 = 100*final_velocity_3_error
return error1, error2, error3
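# Benchmark 17 (DE/FE rolling): particles 1 and 2 roll on two different meshes; at every
# sampling step the relative differences of their velocities and angular velocities are
# logged, and the accumulated differences must stay below 1e-2.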
class Benchmark17: ########## DEM-FEM Rolling
def __init__(self):
self.number = 17
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.error_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.error_list_outfile_name, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
total_velocity_err = 0.0
total_angular_velocity_err = 0.0
for node in modelpart.Nodes:
if node.Id == 1:
velocity_1 = node.GetSolutionStepValue(VELOCITY_X)
angular_velocity_1 = node.GetSolutionStepValue(ANGULAR_VELOCITY_Z)
if node.Id == 2:
velocity_2 = node.GetSolutionStepValue(VELOCITY_X)
angular_velocity_2 = node.GetSolutionStepValue(ANGULAR_VELOCITY_Z)
total_velocity_err = (abs(velocity_1 - velocity_2))/(abs(velocity_2))
total_angular_velocity_err = (abs(angular_velocity_1 - angular_velocity_2))/(abs(velocity_2))
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_err).rjust(13)+" "+str("%.6g"%total_angular_velocity_err).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.error_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 17:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 17 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 17 FAILED\n")
error_file.close()
def compute_errors(self, error_list_outfile_name): #FINALIZATION STEP
lines_DEM = list(range(0, 100));
total_velocity_err = 0.0; total_angular_velocity_err = 0.0
i = 0
with open(error_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
total_velocity_err += float(parts[1])
total_angular_velocity_err += float(parts[2])
i+=1
if total_velocity_err > 1e-2: # the translational velocities of the two meshes should coincide
error1 = 100*total_velocity_err
else:
error1 = 0
if total_angular_velocity_err > 1e-2: # the angular velocities of the two meshes should coincide
error2 = 100*total_angular_velocity_err
else:
error2 = 0
error3 = 0
print("Error in velocity between meshes =", 100*total_velocity_err,"%")
print("Error in angular velocity between meshes =", 100*total_angular_velocity_err,"%")
return error1, error2, error3
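# Benchmarks 20, 21 and 22 (basic continuum tests): they log the X and Y ELASTIC_FORCES on
# node 141 versus time and compare the second column of the generated graph against
# 'paper_data/reference_graph_benchmark<benchmark number>.dat' with the normalized L1
# metric; on success the corresponding *_Post_Files folder is removed.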
class Benchmark20:
def __init__(self):
self.number = 20
self.generated_data = None
#self.graph_frequency = int(graph_print_interval/dt) # def __init__(self, graph_print_interval, dt):
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration): #INITIALIZATION STEP
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
'''
gnuplot_script_name = 'benchmark3_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name, 'w')
self.gnuplot_outfile.write("set grid; plot '" + self.output_filename + "' u 1:2 w lp lt 3 lw 1.5 ps 2 pt 4, '"\
+ self.output_filename + "' u 1:3 w lp lt 2 lw 1.5 ps 2 pt 6")
self.gnuplot_outfile.close()
self.create_gnuplot_scripts(self.output_filename, dt)
'''
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("\n\n")
error_file.write("== BASIC CONTINUUM TESTS ==\n\n")
error_file.write("DEM Benchmark 20:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 20 SUCCESSFUL\n")
shutil.rmtree('benchmark20_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 20 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + str(sys.argv[1]) + '.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # second component of the vector
i+=1
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
'''
gnuplot_script_name_1 = 'benchmark20_comparison_1_dt_' + str(dt) + 's.gp'
self.gnuplot_outfile = open(gnuplot_script_name_1, 'w')
self.gnuplot_outfile.write("set grid\nset key left bottom\nset xlabel 'Data'\nset ylabel 'Damping ratio'\nset style line 1 pt 8 lt -1 ps 3\nset style line 2 pt 9 lt 3 ps 3\n")
self.gnuplot_outfile.write("plot [0:1][0:1] '" + output_filename + "' w lp lt 1 lw 1.5 ps 2 pt 5,\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark20_graph1.dat' w lp ls 1 t 'Al. oxide',\\\n")
self.gnuplot_outfile.write("'paper_data/benchmark20_graph1.dat' w lp ls 2 t 'Cast iron'\n")
self.gnuplot_outfile.close()
print_gnuplot_files_on_screen(gnuplot_script_name_1)
'''
class Benchmark21:
def __init__(self):
self.number = 21
self.generated_data = None
#self.graph_frequency = int(graph_print_interval/dt) # def __init__(self, graph_print_interval, dt):
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration): #INITIALIZATION STEP
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # i.e. results cannot be printed more often than once per computational time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 21:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 21 SUCCESSFUL\n")
shutil.rmtree('benchmark21_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 21 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + str(sys.argv[1]) + '.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # second column of the line
i+=1
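# Error metric: relative L1 discrepancy between the DEM results and the reference data,
# i.e. sum(|DEM_i - ref_i|) / sum(|ref_i|), reported below as a percentage.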
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark22:
def __init__(self):
self.number = 22
self.generated_data = None
#self.graph_frequency = int(graph_print_interval/dt) # def __init__(self, graph_print_interval, dt):
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration): #INITIALIZATION STEP
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # results cannot be printed more often than the computation's time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 22:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 22 SUCCESSFUL\n")
shutil.rmtree('benchmark22_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 22 FAILED\n")
error_file.close()
def compute_errors(self, output_filename):
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + str(sys.argv[1]) + '.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # second column of the line
i+=1
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark23:
def __init__(self):
self.number = 23
self.generated_data = None
#self.graph_frequency = int(graph_print_interval/dt) # def __init__(self, graph_print_interval, dt):
self.balls_graph_counter = 1 # should be self.balls_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration): #INITIALIZATION STEP
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
#print("generate_graph_points bench23, graph_print_interval, dt - ", graph_print_interval, dt )
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # results cannot be printed more often than the computation's time step
if(self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 23:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 23 SUCCESSFUL\n")
shutil.rmtree('benchmark23_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 23 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + '23' + '.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # second column of the line
i+=1
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark24:
def __init__(self):
self.number = 24
self.generated_data = None
self.balls_graph_counter = 1
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
self.simulation_graph.close()
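# Plain 3D cross product a x b, used in ApplyNodalRotation to obtain the
# tangential velocity v = omega x r of the rotated node.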
def cross_product(self, a, b):
c = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
return c
def ApplyNodalRotation(self, time, dt, modelpart):
ang_vel = 20 * pi
angular_velocity = [0, 0, ang_vel]
rotation_matrix = [[cos(ang_vel * time), -1.0 * sin(ang_vel * time), 0], [sin(ang_vel * time), cos(ang_vel * time), 0], [0,0,1]]
time_dt = time - dt
rotation_matrix_minus_dt = [[cos(ang_vel * time_dt), -1.0 * sin(ang_vel * time_dt), 0], [sin(ang_vel * time_dt), cos(ang_vel * time_dt), 0], [0,0,1]] #
centroid = [-1.0, 0.0, 0.0]
relative_initial_node_coords, relative_node_coords, relative_node_coords_dt = [0]*3, [0]*3, [0]*3
sum, sum_dt = 0, 0
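# Rigid rotation prescribed on node 141: the node rotates about the z-axis through the
# centroid (-1, 0, 0) at ang_vel = 20*pi rad/s. The new position comes from applying the
# rotation matrix to the initial relative coordinates, and the translational velocity is
# set to omega x r.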
for node in modelpart.Nodes:
if node.Id == 141:
for j in range(3):
rot_mat = rotation_matrix[j]
rot_mat_dt = rotation_matrix_minus_dt[j]
relative_initial_node_coords[0] = node.X0 - centroid[0]
relative_initial_node_coords[1] = node.Y0 - centroid[1]
relative_initial_node_coords[2] = node.Z0 - centroid[2]
for i in range(3):
sum += rot_mat[i] * relative_initial_node_coords[i]
sum_dt += rot_mat_dt[i] * relative_initial_node_coords[i]
relative_node_coords[j], sum, relative_node_coords_dt[j], sum_dt = sum, 0, sum_dt, 0
node.X = relative_node_coords[0] + centroid[0]
node.Y = relative_node_coords[1] + centroid[1]
node.Z = relative_node_coords[2] + centroid[2]
displacement = GetDisplacement(node)
node.SetSolutionStepValue(DISPLACEMENT, displacement)
velocity = self.cross_product(angular_velocity, relative_node_coords)
node.SetSolutionStepValue(VELOCITY, velocity)
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
delta_displacement = [0]*3
delta_displacement[0] = relative_node_coords[0] - relative_node_coords_dt[0]
delta_displacement[1] = relative_node_coords[1] - relative_node_coords_dt[1]
delta_displacement[2] = relative_node_coords[2] - relative_node_coords_dt[2]
node.SetSolutionStepValue(DELTA_DISPLACEMENT, delta_displacement)
particle_rotation_angle = [0]*3
particle_rotation_angle[0] = angular_velocity[0] * time
particle_rotation_angle[1] = angular_velocity[1] * time
particle_rotation_angle[2] = angular_velocity[2] * time
node.SetSolutionStepValue(PARTICLE_ROTATION_ANGLE, particle_rotation_angle)
delta_rotation = [0]*3
delta_rotation[0] = angular_velocity[0] * dt
delta_rotation[1] = angular_velocity[1] * dt
delta_rotation[2] = angular_velocity[2] * dt
node.SetSolutionStepValue(DELTA_ROTATION, delta_rotation)
if node.Id == 140:
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
#print("generate_graph_points bench24, graph_print_interval, dt - ", graph_print_interval, dt )
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1
if (self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 24:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 24 SUCCESSFUL\n")
shutil.rmtree('benchmark24_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 24 FAILED\n")
error_file.close()
def compute_errors(self, output_filename):
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + '24' + '.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[2]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[2])) # third column of the line
i+=1
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark25:
def __init__(self):
self.number = 25
self.generated_data = None
self.balls_graph_counter = 1
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
self.simulation_graph.close()
def cross_product(self, a, b):
c = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
return c
def ApplyNodalRotation(self, time, dt, modelpart):
ang_vel = 20 * pi
angular_velocity = [0, 0, ang_vel]
rotation_matrix = [[cos(ang_vel * time), -1.0 * sin(ang_vel * time), 0], [sin(ang_vel * time), cos(ang_vel * time), 0], [0,0,1]]
time_dt = time - dt
rotation_matrix_minus_dt = [[cos(ang_vel * time_dt), -1.0 * sin(ang_vel * time_dt), 0], [sin(ang_vel * time_dt), cos(ang_vel * time_dt), 0], [0,0,1]] #
centroid = [-1.0, 0.0, 0.0]
relative_initial_node_coords, relative_node_coords, relative_node_coords_dt = [0]*3, [0]*3, [0]*3
sum, sum_dt = 0, 0
for node in modelpart.Nodes:
if node.Id == 141:
for j in range(3):
rot_mat = rotation_matrix[j]
rot_mat_dt = rotation_matrix_minus_dt[j]
relative_initial_node_coords[0] = node.X0 - centroid[0]
relative_initial_node_coords[1] = node.Y0 - centroid[1]
relative_initial_node_coords[2] = node.Z0 - centroid[2]
for i in range(3):
sum += rot_mat[i] * relative_initial_node_coords[i]
sum_dt += rot_mat_dt[i] * relative_initial_node_coords[i]
relative_node_coords[j], sum, relative_node_coords_dt[j], sum_dt = sum, 0, sum_dt, 0
node.X = relative_node_coords[0] + centroid[0]
node.Y = relative_node_coords[1] + centroid[1]
node.Z = relative_node_coords[2] + centroid[2]
displacement = GetDisplacement(node)
node.SetSolutionStepValue(DISPLACEMENT, displacement)
velocity = self.cross_product(angular_velocity, relative_node_coords)
node.SetSolutionStepValue(VELOCITY, velocity)
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
delta_displacement = [0]*3
delta_displacement[0] = relative_node_coords[0] - relative_node_coords_dt[0]
delta_displacement[1] = relative_node_coords[1] - relative_node_coords_dt[1]
delta_displacement[2] = relative_node_coords[2] - relative_node_coords_dt[2]
node.SetSolutionStepValue(DELTA_DISPLACEMENT, delta_displacement)
particle_rotation_angle = [0]*3
particle_rotation_angle[0] = angular_velocity[0] * time
particle_rotation_angle[1] = angular_velocity[1] * time
particle_rotation_angle[2] = angular_velocity[2] * time
node.SetSolutionStepValue(PARTICLE_ROTATION_ANGLE, particle_rotation_angle)
delta_rotation = [0]*3
delta_rotation[0] = angular_velocity[0] * dt
delta_rotation[1] = angular_velocity[1] * dt
delta_rotation[2] = angular_velocity[2] * dt
node.SetSolutionStepValue(DELTA_ROTATION, delta_rotation)
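# From t > 3.8e-5 s onwards the prescribed RADIUS of the rotated particle is set to 1.0001.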
if time > 3.8e-5:
radius = 1.0001
node.SetSolutionStepValue(RADIUS, radius)
if node.Id == 140:
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1
if (self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 25:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 25 SUCCESSFUL\n")
shutil.rmtree('benchmark25_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 25 FAILED\n")
error_file.close()
def compute_errors(self, output_filename):
lines_analytics = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + '25' + '.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
analytics_data.append(float(parts[2]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[2])) # third column of the line
i+=1
generated_data_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
generated_data_error+=abs(i-j)
generated_data_error/=summation_of_analytics_data
print("Error in simulation =", 100*generated_data_error,"%")
error1 = 100*generated_data_error
error2 = error3 = 0
return error1, error2, error3
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark26:
def __init__(self):
self.number = 26
self.generated_data = None
self.balls_graph_counter = 1
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1
if (self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
for node in modelpart.Nodes:
if node.Id == 141:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_x).rjust(13)+" "+str("%.6g"%self.total_force_y).rjust(13)+"\n")
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt = 0):
pass
def compute_errors(self, output_filename):
pass
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark27:
def __init__(self):
self.number = 27
self.generated_data = None
self.balls_graph_counter = 1
self.rigid_graph_counter = 1
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.rigid_face_file = "benchmark" + str(sys.argv[1]) + '_rigid_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
self.rigid_graph = open(self.rigid_face_file, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
self.simulation_graph.close()
self.rigid_graph.close()
def cross_product(self, a, b):
c = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
return c
def ApplyNodalRotation(self, time, dt, modelpart):
ang_vel = 20 * pi
angular_velocity = [0, 0, ang_vel]
rotation_matrix = [[cos(ang_vel * time), -1.0 * sin(ang_vel * time), 0], [sin(ang_vel * time), cos(ang_vel * time), 0], [0,0,1]]
time_dt = time - dt
rotation_matrix_minus_dt = [[cos(ang_vel * time_dt), -1.0 * sin(ang_vel * time_dt), 0], [sin(ang_vel * time_dt), cos(ang_vel * time_dt), 0], [0,0,1]] #
centroid = [-1.0, 0.0, 0.0]
relative_initial_node_coords, relative_node_coords, relative_node_coords_dt = [0]*3, [0]*3, [0]*3
sum, sum_dt = 0, 0
for node in modelpart.Nodes:
if node.Id == 999999:
for j in range(3):
rot_mat = rotation_matrix[j]
rot_mat_dt = rotation_matrix_minus_dt[j]
relative_initial_node_coords[0] = node.X0 - centroid[0]
relative_initial_node_coords[1] = node.Y0 - centroid[1]
relative_initial_node_coords[2] = node.Z0 - centroid[2]
for i in range(3):
sum += rot_mat[i] * relative_initial_node_coords[i]
sum_dt += rot_mat_dt[i] * relative_initial_node_coords[i]
relative_node_coords[j], sum, relative_node_coords_dt[j], sum_dt = sum, 0, sum_dt, 0
node.X = relative_node_coords[0] + centroid[0]
node.Y = relative_node_coords[1] + centroid[1]
node.Z = relative_node_coords[2] + centroid[2]
displacement = GetDisplacement(node)
node.SetSolutionStepValue(DISPLACEMENT, displacement)
velocity = self.cross_product(angular_velocity, relative_node_coords)
node.SetSolutionStepValue(VELOCITY, velocity)
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
delta_displacement = [0]*3
delta_displacement[0] = relative_node_coords[0] - relative_node_coords_dt[0]
delta_displacement[1] = relative_node_coords[1] - relative_node_coords_dt[1]
delta_displacement[2] = relative_node_coords[2] - relative_node_coords_dt[2]
node.SetSolutionStepValue(DELTA_DISPLACEMENT, delta_displacement)
particle_rotation_angle = [0]*3
particle_rotation_angle[0] = angular_velocity[0] * time
particle_rotation_angle[1] = angular_velocity[1] * time
particle_rotation_angle[2] = angular_velocity[2] * time
node.SetSolutionStepValue(PARTICLE_ROTATION_ANGLE, particle_rotation_angle)
delta_rotation = [0]*3
delta_rotation[0] = angular_velocity[0] * dt
delta_rotation[1] = angular_velocity[1] * dt
delta_rotation[2] = angular_velocity[2] * dt
node.SetSolutionStepValue(DELTA_ROTATION, delta_rotation)
if time > 3.8e-5:
radius = 1.0001
node.SetSolutionStepValue(RADIUS, radius)
if node.Id == 99999:
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
#self.graph_frequency = int(5e-7/dt) #graph_print_interval/dt
self.graph_frequency = int(graph_print_interval/1/dt) # write graph output 1x as often as the binary output
#print (self.graph_frequency)
#print (self.balls_graph_counter)
if self.graph_frequency < 1:
self.graph_frequency = 1
if (self.balls_graph_counter == self.graph_frequency):
#print (self.balls_graph_counter)
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
self.total_force_z = 0.0
self.total_force_sum = 0.0
self.total_angular_x = 0.0
self.total_angular_y = 0.0
self.total_angular_z = 0.0
self.total_angular_sum = 0.0
self.total_delta_x = 0.0
self.total_delta_y = 0.0
self.total_delta_z = 0.0
self.total_delta_sum = 0.0
for node in modelpart.Nodes:
if node.Id == 29:
force_node_x = node.GetSolutionStepValue(ELASTIC_FORCES)[0]
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
force_node_z = node.GetSolutionStepValue(ELASTIC_FORCES)[2]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.total_force_z += force_node_z
angular_node_x = node.GetSolutionStepValue(ANGULAR_VELOCITY)[0]
angular_node_y = node.GetSolutionStepValue(ANGULAR_VELOCITY)[1]
angular_node_z = node.GetSolutionStepValue(ANGULAR_VELOCITY)[2]
self.total_angular_x += angular_node_x
self.total_angular_y += angular_node_y
self.total_angular_z += angular_node_z
delta_node_x = node.GetSolutionStepValue(DELTA_DISPLACEMENT)[0]
delta_node_y = node.GetSolutionStepValue(DELTA_DISPLACEMENT)[1]
delta_node_z = node.GetSolutionStepValue(DELTA_DISPLACEMENT)[2]
self.total_delta_x += delta_node_x
self.total_delta_y += delta_node_y
self.total_delta_z += delta_node_z
self.total_force_sum = self.total_force_x + self.total_force_y + self.total_force_z
self.total_angular_sum = self.total_angular_x + self.total_angular_y + self.total_angular_z
self.total_delta_sum = self.total_delta_x + self.total_delta_y + self.total_delta_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_sum).rjust(13)+" "+str("%.6g"%self.total_angular_sum).rjust(13)+" "+str("%.6g"%self.total_delta_sum).rjust(13)+"\n")
self.simulation_graph.flush()
self.balls_graph_counter += 1
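# A second history file tracks the rigid-face response: the total Y elastic force summed
# over the nodes of the mesh flagged TOP, sampled with its own counter at the same frequency.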
for mesh_number in range(1, rigid_face_model_part.NumberOfMeshes()):
if(rigid_face_model_part.GetMesh(mesh_number)[TOP]):
self.top_mesh_nodes = rigid_face_model_part.GetMesh(mesh_number).Nodes
if (self.rigid_graph_counter == self.graph_frequency):
self.rigid_graph_counter = 0
self.total_force_top = 0.0
for node in self.top_mesh_nodes:
force_node_y = node.GetSolutionStepValue(ELASTIC_FORCES)[1]
self.total_force_top += force_node_y
self.rigid_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_top).rjust(13)+"\n")
self.rigid_graph.flush()
self.rigid_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors(self.output_filename)
error4, error5, error6 = self.compute_rigid_errors(self.rigid_face_file)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 27:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 27 SUCCESSFUL (spheres)\n")
shutil.rmtree('benchmark27_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 27 FAILED (spheres)\n")
error_file.write("DEM Benchmark 27:")
if (error4 < 10.0 and error5 < 10.0 and error6 < 10.0):
error_file.write(" OK!........ Test 27 SUCCESSFUL (finite elements)\n")
else:
error_file.write(" KO!........ Test 27 FAILED (finite elements)\n")
error_file.close()
def compute_errors(self, output_filename):
reference_data = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + '27' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # second column of the line
i+=1
dem_error1 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error1+=abs(i-j)
dem_error1/=summation_of_analytics_data
print("Error in total force at the reference particle =", 100*dem_error1,"%")
i = 0
with open('paper_data/reference_graph_benchmark' + '27' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[2]))
i+=1
i = 0
with open(output_filename) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[2])) # third column of the line
i+=1
dem_error2 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error2+=abs(i-j)
dem_error2/=summation_of_analytics_data
print("Error in angular velocity at the reference particle =", 100*dem_error2,"%")
i = 0
with open('paper_data/reference_graph_benchmark' + '27' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[3]))
i+=1
i = 0
with open(output_filename) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[3])) # fourth column of the line
i+=1
dem_error3 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error3+=abs(i-j)
dem_error3/=summation_of_analytics_data
print("Error in delta displacement at the reference particle =", 100*dem_error3,"%")
error1 = 100*dem_error1
error2 = 100*dem_error2
error3 = 100*dem_error3
return error1, error2, error3
def compute_rigid_errors(self, rigid_face_file):
reference_data = lines_FEM = list(range(0, 1000));
analytics_data = []; FEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_rigid_graph_benchmark' + '27' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(rigid_face_file) as current_data:
for line in current_data:
if i in lines_FEM:
parts = line.split()
FEM_data.append(float(parts[1])) # second column of the line
i+=1
final_error = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(FEM_data, analytics_data):
final_error+=abs(i-j)
final_error/=summation_of_analytics_data
print("Error in FEM axial force =", 100*final_error,"%")
error4 = 100*final_error
error5 = error6 = 0
return error4, error5, error6
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark28: # 3D pendulum
def __init__(self):
self.number = 28
self.generated_data = None
self.balls_graph_counter = 1
self.rigid_graph_counter = 1
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
self.output_filename = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.rigid_face_file = "benchmark" + str(sys.argv[1]) + '_rigid_graph.dat'
self.simulation_graph = open(self.output_filename, 'w')
self.rigid_graph = open(self.rigid_face_file, 'w')
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
self.simulation_graph.close()
self.rigid_graph.close()
def cross_product(self, a, b):
c = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
return c
def ApplyNodalRotation(self, time, dt, modelpart):
pass
ang_vel = 20 * pi
angular_velocity = [0, 0, ang_vel]
rotation_matrix = [[cos(ang_vel * time), -1.0 * sin(ang_vel * time), 0], [sin(ang_vel * time), cos(ang_vel * time), 0], [0,0,1]]
time_dt = time - dt
rotation_matrix_minus_dt = [[cos(ang_vel * time_dt), -1.0 * sin(ang_vel * time_dt), 0], [sin(ang_vel * time_dt), cos(ang_vel * time_dt), 0], [0,0,1]] #
centroid = [-1.0, 0.0, 0.0]
relative_initial_node_coords, relative_node_coords, relative_node_coords_dt = [0]*3, [0]*3, [0]*3
sum, sum_dt = 0, 0
for node in modelpart.Nodes:
if node.Id == 999999:
for j in range(3):
rot_mat = rotation_matrix[j]
rot_mat_dt = rotation_matrix_minus_dt[j]
relative_initial_node_coords[0] = node.X0 - centroid[0]
relative_initial_node_coords[1] = node.Y0 - centroid[1]
relative_initial_node_coords[2] = node.Z0 - centroid[2]
for i in range(3):
sum += rot_mat[i] * relative_initial_node_coords[i]
sum_dt += rot_mat_dt[i] * relative_initial_node_coords[i]
relative_node_coords[j], sum, relative_node_coords_dt[j], sum_dt = sum, 0, sum_dt, 0
node.X = relative_node_coords[0] + centroid[0]
node.Y = relative_node_coords[1] + centroid[1]
node.Z = relative_node_coords[2] + centroid[2]
displacement = GetDisplacement(node)
node.SetSolutionStepValue(DISPLACEMENT, displacement)
velocity = self.cross_product(angular_velocity, relative_node_coords)
node.SetSolutionStepValue(VELOCITY, velocity)
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
delta_displacement = [0]*3
delta_displacement[0] = relative_node_coords[0] - relative_node_coords_dt[0]
delta_displacement[1] = relative_node_coords[1] - relative_node_coords_dt[1]
delta_displacement[2] = relative_node_coords[2] - relative_node_coords_dt[2]
node.SetSolutionStepValue(DELTA_DISPLACEMENT, delta_displacement)
particle_rotation_angle = [0]*3
particle_rotation_angle[0] = angular_velocity[0] * time
particle_rotation_angle[1] = angular_velocity[1] * time
particle_rotation_angle[2] = angular_velocity[2] * time
node.SetSolutionStepValue(PARTICLE_ROTATION_ANGLE, particle_rotation_angle)
delta_rotation = [0]*3
delta_rotation[0] = angular_velocity[0] * dt
delta_rotation[1] = angular_velocity[1] * dt
delta_rotation[2] = angular_velocity[2] * dt
node.SetSolutionStepValue(DELTA_ROTATION, delta_rotation)
if time > 3.8e-5:
radius = 1.0001
node.SetSolutionStepValue(RADIUS, radius)
if node.Id == 99999:
angular_velocity = [0]*3
node.SetSolutionStepValue(ANGULAR_VELOCITY, angular_velocity)
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
#self.graph_frequency = int(5e-7/dt) #graph_print_interval/dt
self.graph_frequency = int(graph_print_interval/1/dt) # write graph output 1x as often as the binary output
if self.graph_frequency < 1:
self.graph_frequency = 1
if (self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
self.total_force_x = 0.0
self.total_force_y = 0.0
self.total_force_z = 0.0
self.total_force_sum = 0.0
self.total_angular_x = 0.0
self.total_angular_y = 0.0
self.total_angular_z = 0.0
self.total_angular_sum = 0.0
self.total_delta_x = 0.0
self.total_delta_y = 0.0
self.total_delta_z = 0.0
self.total_delta_sum = 0.0
for node in modelpart.Nodes:
if node.Id == 107:
force_node_x = node.GetSolutionStepValue(LOCAL_CONTACT_FORCE)[0]
force_node_y = node.GetSolutionStepValue(LOCAL_CONTACT_FORCE)[1]
force_node_z = node.GetSolutionStepValue(LOCAL_CONTACT_FORCE)[2]
self.total_force_x += force_node_x
self.total_force_y += force_node_y
self.total_force_z += force_node_z
angular_node_x = node.GetSolutionStepValue(ANGULAR_VELOCITY)[0]
angular_node_y = node.GetSolutionStepValue(ANGULAR_VELOCITY)[1]
angular_node_z = node.GetSolutionStepValue(ANGULAR_VELOCITY)[2]
self.total_angular_x += angular_node_x
self.total_angular_y += angular_node_y
self.total_angular_z += angular_node_z
delta_node_x = node.GetSolutionStepValue(DELTA_DISPLACEMENT)[0]
delta_node_y = node.GetSolutionStepValue(DELTA_DISPLACEMENT)[1]
delta_node_z = node.GetSolutionStepValue(DELTA_DISPLACEMENT)[2]
self.total_delta_x += delta_node_x
self.total_delta_y += delta_node_y
self.total_delta_z += delta_node_z
self.total_force_sum = self.total_force_x + self.total_force_y + self.total_force_z
self.total_angular_sum = self.total_angular_x + self.total_angular_y + self.total_angular_z
self.total_delta_sum = self.total_delta_x + self.total_delta_y + self.total_delta_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%self.total_force_sum).rjust(13)+" "+str("%.6g"%self.total_angular_sum).rjust(13)+" "+str("%.6g"%self.total_delta_sum).rjust(13)+"\n")
self.simulation_graph.flush()
self.balls_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors(self.output_filename)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 28:")
if (error1 < 10.0 and error2 < 10.0 and error3 < 10.0):
error_file.write(" OK!........ Test 28 SUCCESSFUL (spheres)\n")
shutil.rmtree('benchmark28_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 28 FAILED (spheres)\n")
error_file.write("DEM Benchmark 28:")
def compute_errors(self, output_filename):
reference_data = lines_DEM = list(range(0, 1000));
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + '28' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[1]))
i+=1
i = 0
with open(output_filename) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # second column of the line
i+=1
dem_error1 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error1+=abs(i-j)
dem_error1/=summation_of_analytics_data
print("Error in total force at the reference particle =", 100*dem_error1,"%")
i = 0
with open('paper_data/reference_graph_benchmark' + '28' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[2]))
i+=1
i = 0
with open(output_filename) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[2])) # third column of the line
i+=1
dem_error2 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error2+=abs(i-j)
dem_error2/=summation_of_analytics_data
print("Error in angular velocity at the reference particle =", 100*dem_error2,"%")
i = 0
with open('paper_data/reference_graph_benchmark' + '28' + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[3]))
i+=1
i = 0
with open(output_filename) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[3])) # fourth column of the line
i+=1
dem_error3 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error3+=abs(i-j)
dem_error3/=summation_of_analytics_data
print("Error in delta displacement at the reference particle =", 100*dem_error3,"%")
error1 = 100*dem_error1
error2 = 100*dem_error2
error3 = 100*dem_error3
return error1, error2, error3
def compute_rigid_errors(self, rigid_face_file):
pass
def create_gnuplot_scripts(self, output_filename, dt):
pass
class Benchmark30: ########## Cylinder with imposed angular velocity (Velocity Verlet + Zhao)
def __init__(self):
self.number = 30
self.cluster_graph_counter = 1 # should be self.cluster_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.local_angular_velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.local_angular_velocity_list_outfile_name, 'w')
def get_final_data(self, spheres_model_part, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, spheres_model_part, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # results cannot be printed more often than the computation's time step
if(self.cluster_graph_counter == self.graph_frequency):
self.cluster_graph_counter = 0
total_local_angular_velocity_x = 0.0
total_local_angular_velocity_y = 0.0
total_local_angular_velocity_z = 0.0
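# Sum the local angular velocity components over all nodes of the cluster model part
# and write them as a single sample per output instant.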
for node in cluster_model_part.Nodes:
current_local_angular_velocity_x = node.GetSolutionStepValue(LOCAL_ANGULAR_VELOCITY_X)
total_local_angular_velocity_x += current_local_angular_velocity_x
current_local_angular_velocity_y = node.GetSolutionStepValue(LOCAL_ANGULAR_VELOCITY_Y)
total_local_angular_velocity_y += current_local_angular_velocity_y
current_local_angular_velocity_z = node.GetSolutionStepValue(LOCAL_ANGULAR_VELOCITY_Z)
total_local_angular_velocity_z += current_local_angular_velocity_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_local_angular_velocity_x).rjust(13)+" "+str("%.6g"%total_local_angular_velocity_y).rjust(13)+" "+str("%.6g"%total_local_angular_velocity_z).rjust(13)+"\n")
self.cluster_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.local_angular_velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("\n\n")
error_file.write("===== DISCONTINUUM CLUSTERS TESTS =====\n\n")
error_file.write("DEM Benchmark 30:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 30 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 30 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 50));
ref_data1 = []; ref_data2 = []; ref_data3 = []; DEM_data1 = []; DEM_data2 = []; DEM_data3 = []; summation_of_ref_data1 = 0; summation_of_ref_data2 = 0; summation_of_ref_data3 = 0
i = 0
with open('paper_data/benchmark' + str(sys.argv[1]) + '_graph.dat') as inf: #with open('paper_data/reference_graph_benchmark30.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
ref_data1.append(float(parts[1]))
ref_data2.append(float(parts[2]))
ref_data3.append(float(parts[3]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data1.append(float(parts[1]))
DEM_data2.append(float(parts[2]))
DEM_data3.append(float(parts[3]))
i+=1
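# Per-axis relative L1 errors of the local angular velocities with respect to the reference curves.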
final_local_angular_velocity_x_error = 0
final_local_angular_velocity_y_error = 0
final_local_angular_velocity_z_error = 0
for j in ref_data1:
summation_of_ref_data1+=abs(j)
for k in ref_data2:
summation_of_ref_data2+=abs(k)
for l in ref_data3:
summation_of_ref_data3+=abs(l)
for i, j in zip(DEM_data1, ref_data1):
final_local_angular_velocity_x_error+=abs(i-j)
final_local_angular_velocity_x_error/=summation_of_ref_data1
for k, l in zip(DEM_data2, ref_data2):
final_local_angular_velocity_y_error+=abs(k-l)
final_local_angular_velocity_y_error/=summation_of_ref_data2
for m, n in zip(DEM_data3, ref_data3):
final_local_angular_velocity_z_error+=abs(m-n)
final_local_angular_velocity_z_error/=summation_of_ref_data3
print("Error in local angular velocity X =", 100*final_local_angular_velocity_x_error,"%")
print("Error in local angular velocity Y =", 100*final_local_angular_velocity_y_error,"%")
print("Error in local angular velocity Z =", 100*final_local_angular_velocity_z_error,"%")
error1 = 100*final_local_angular_velocity_x_error
error2 = 100*final_local_angular_velocity_y_error
error3 = 100*final_local_angular_velocity_z_error
return error1, error2, error3
class Benchmark31: ########## Cylinder with imposed angular velocity (Symplectic Euler + Runge-Kutta)
def __init__(self):
self.number = 31
self.cluster_graph_counter = 1 # should be self.cluster_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.local_angular_velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.local_angular_velocity_list_outfile_name, 'w')
def get_final_data(self, spheres_model_part, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, spheres_model_part, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # results cannot be printed more often than the computation's time step
if(self.cluster_graph_counter == self.graph_frequency):
self.cluster_graph_counter = 0
total_local_angular_velocity_x = 0.0
total_local_angular_velocity_y = 0.0
total_local_angular_velocity_z = 0.0
for node in cluster_model_part.Nodes:
current_local_angular_velocity_x = node.GetSolutionStepValue(LOCAL_ANGULAR_VELOCITY_X)
total_local_angular_velocity_x += current_local_angular_velocity_x
current_local_angular_velocity_y = node.GetSolutionStepValue(LOCAL_ANGULAR_VELOCITY_Y)
total_local_angular_velocity_y += current_local_angular_velocity_y
current_local_angular_velocity_z = node.GetSolutionStepValue(LOCAL_ANGULAR_VELOCITY_Z)
total_local_angular_velocity_z += current_local_angular_velocity_z
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_local_angular_velocity_x).rjust(13)+" "+str("%.6g"%total_local_angular_velocity_y).rjust(13)+" "+str("%.6g"%total_local_angular_velocity_z).rjust(13)+"\n")
self.cluster_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2, error3 = self.compute_errors(self.local_angular_velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 31:")
if (error1 < 0.1 and error2 < 0.1 and error3 < 0.1):
error_file.write(" OK!........ Test 31 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 31 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 50));
ref_data1 = []; ref_data2 = []; ref_data3 = []; DEM_data1 = []; DEM_data2 = []; DEM_data3 = []; summation_of_ref_data1 = 0; summation_of_ref_data2 = 0; summation_of_ref_data3 = 0
i = 0
with open('paper_data/benchmark' + str(sys.argv[1]) + '_graph.dat') as inf: #with open('paper_data/reference_graph_benchmark31.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
ref_data1.append(float(parts[1]))
ref_data2.append(float(parts[2]))
ref_data3.append(float(parts[3]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data1.append(float(parts[1]))
DEM_data2.append(float(parts[2]))
DEM_data3.append(float(parts[3]))
i+=1
final_local_angular_velocity_x_error = 0
final_local_angular_velocity_y_error = 0
final_local_angular_velocity_z_error = 0
for j in ref_data1:
summation_of_ref_data1+=abs(j)
for k in ref_data2:
summation_of_ref_data2+=abs(k)
for l in ref_data3:
summation_of_ref_data3+=abs(l)
for i, j in zip(DEM_data1, ref_data1):
final_local_angular_velocity_x_error+=abs(i-j)
final_local_angular_velocity_x_error/=summation_of_ref_data1
for k, l in zip(DEM_data2, ref_data2):
final_local_angular_velocity_y_error+=abs(k-l)
final_local_angular_velocity_y_error/=summation_of_ref_data2
for m, n in zip(DEM_data3, ref_data3):
final_local_angular_velocity_z_error+=abs(m-n)
final_local_angular_velocity_z_error/=summation_of_ref_data3
print("Error in local angular velocity X =", 100*final_local_angular_velocity_x_error,"%")
print("Error in local angular velocity Y =", 100*final_local_angular_velocity_y_error,"%")
print("Error in local angular velocity Z =", 100*final_local_angular_velocity_z_error,"%")
error1 = 100*final_local_angular_velocity_x_error
error2 = 100*final_local_angular_velocity_y_error
error3 = 100*final_local_angular_velocity_z_error
return error1, error2, error3
class Benchmark32: ########## Fiber cluster bouncing without any damping (Velocity Verlet + Zhao scheme)
def __init__(self):
self.number = 32
self.cluster_graph_counter = 1 # should be self.cluster_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.velocity_list_outfile_name, 'w')
def get_final_data(self, spheres_model_part, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, spheres_model_part, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # results cannot be printed more often than the computation's time step
if(self.cluster_graph_counter == self.graph_frequency):
self.cluster_graph_counter = 0
total_velocity_z = 0.0
total_angular_velocity_y = 0.0
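# The cluster response is characterised by the summed VELOCITY_Z and ANGULAR_VELOCITY_Y
# over all cluster nodes.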
for node in cluster_model_part.Nodes:
current_velocity_z = node.GetSolutionStepValue(VELOCITY_Z)
total_velocity_z += current_velocity_z
current_angular_velocity_y = node.GetSolutionStepValue(ANGULAR_VELOCITY_Y)
total_angular_velocity_y += current_angular_velocity_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_z).rjust(13)+" "+str("%.6g"%total_angular_velocity_y).rjust(13)+"\n")
self.cluster_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2 = self.compute_errors(self.velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 32:")
if (error1 < 0.1 and error2 < 0.1):
error_file.write(" OK!........ Test 32 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 32 FAILED\n")
error_file.close()
def compute_errors(self, output_filename): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 100))
ref_data1 = []; ref_data2 = []; DEM_data1 = []; DEM_data2 = []; summation_of_ref_data1 = 0; summation_of_ref_data2 = 0
i = 0
with open('paper_data/benchmark' + str(sys.argv[1]) + '_graph.dat') as inf: #with open('paper_data/reference_graph_benchmark32.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
ref_data1.append(float(parts[1]))
ref_data2.append(float(parts[2]))
i+=1
i = 0
with open(output_filename) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data1.append(float(parts[1]))
DEM_data2.append(float(parts[2]))
i+=1
final_velocity_z_error = 0
final_angular_velocity_y_error = 0
for j in ref_data1:
summation_of_ref_data1+=abs(j)
for k in ref_data2:
summation_of_ref_data2+=abs(k)
for i, j in zip(DEM_data1, ref_data1):
final_velocity_z_error+=abs(i-j)
final_velocity_z_error/=summation_of_ref_data1
for k, l in zip(DEM_data2, ref_data2):
final_angular_velocity_y_error+=abs(k-l)
final_angular_velocity_y_error/=summation_of_ref_data2
print("Error in velocity Z =", 100*final_velocity_z_error,"%")
print("Error in angular velocity Y =", 100*final_angular_velocity_y_error,"%")
error1 = 100*final_velocity_z_error
error2 = 100*final_angular_velocity_y_error
return error1, error2
class Benchmark33: ########## Fiber cluster bouncing without any damping (Velocity Verlet + Runge-Kutta scheme)
def __init__(self):
self.number = 33
self.cluster_graph_counter = 1 # should be self.cluster_graph_counter = self.graph_frequency
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration=0):
self.velocity_list_outfile_name = "benchmark" + str(sys.argv[1]) + '_graph.dat'
self.simulation_graph = open(self.velocity_list_outfile_name, 'w')
def get_final_data(self, spheres_model_part, rigid_face_model_part, cluster_model_part): #FINALIZATION STEP
self.simulation_graph.close()
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def generate_graph_points(self, spheres_model_part, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt): #MAIN LOOP STEP
self.graph_frequency = int(graph_print_interval/dt)
if self.graph_frequency < 1:
self.graph_frequency = 1 # results cannot be printed more often than the computation's time step
if(self.cluster_graph_counter == self.graph_frequency):
self.cluster_graph_counter = 0
total_velocity_z = 0.0
total_angular_velocity_y = 0.0
for node in cluster_model_part.Nodes:
current_velocity_z = node.GetSolutionStepValue(VELOCITY_Z)
total_velocity_z += current_velocity_z
current_angular_velocity_y = node.GetSolutionStepValue(ANGULAR_VELOCITY_Y)
total_angular_velocity_y += current_angular_velocity_y
self.simulation_graph.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%total_velocity_z).rjust(13)+" "+str("%.6g"%total_angular_velocity_y).rjust(13)+"\n")
self.cluster_graph_counter += 1
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0): #FINALIZATION STEP
error1, error2 = self.compute_errors(self.velocity_list_outfile_name)
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
error_file.write("DEM Benchmark 33:")
if (error1 < 0.1 and error2 < 0.1):
error_file.write(" OK!........ Test 33 SUCCESSFUL\n")
else:
error_file.write(" KO!........ Test 33 FAILED\n")
error_file.close()
def compute_errors(self, velocity_list_outfile_name): #FINALIZATION STEP
lines_analytics = lines_DEM = list(range(0, 100));
ref_data1 = []; ref_data2 = []; DEM_data1 = []; DEM_data2 = []; summation_of_ref_data1 = 0; summation_of_ref_data2 = 0
i = 0
with open('paper_data/benchmark' + str(sys.argv[1]) + '_graph.dat') as inf: #with open('paper_data/reference_graph_benchmark33.dat') as inf:
for line in inf:
if i in lines_analytics:
parts = line.split()
ref_data1.append(float(parts[1]))
ref_data2.append(float(parts[2]))
i+=1
i = 0
with open(velocity_list_outfile_name) as inf:
for line in inf:
if i in lines_DEM:
parts = line.split()
DEM_data1.append(float(parts[1]))
DEM_data2.append(float(parts[2]))
i+=1
final_velocity_z_error = 0
final_angular_velocity_y_error = 0
for j in ref_data1:
summation_of_ref_data1+=abs(j)
for k in ref_data2:
summation_of_ref_data2+=abs(k)
for i, j in zip(DEM_data1, ref_data1):
final_velocity_z_error+=abs(i-j)
final_velocity_z_error/=summation_of_ref_data1
for k, l in zip(DEM_data2, ref_data2):
final_angular_velocity_y_error+=abs(k-l)
final_angular_velocity_y_error/=summation_of_ref_data2
print("Error in velocity Z =", 100*final_velocity_z_error,"%")
print("Error in angular velocity Y =", 100*final_angular_velocity_y_error,"%")
error1 = 100*final_velocity_z_error
error2 = 100*final_angular_velocity_y_error
return error1, error2
class Benchmark40: # multiple benchmarks for general code verification.
def __init__(self):
self.generated_data = None
self.balls_graph_counter = 1
self.rigid_graph_counter = 1
self.number_of_DEM_benchmarks = 15
self.number_of_FEM_benchmarks = 8
def ApplyNodalRotation(self, time, dt, modelpart):
pass
def set_initial_data(self, modelpart, rigid_face_model_part, iteration, number_of_points_in_the_graphic, coeff_of_restitution_iteration):
pass
def get_final_data(self, modelpart, rigid_face_model_part, cluster_model_part):
pass
def generate_graph_points(self, modelpart, rigid_face_model_part, cluster_model_part, time, graph_print_interval, dt):
#self.graph_frequency = int(5e-7/dt) #graph_print_interval/dt
self.graph_frequency = int(graph_print_interval/dt) # write graph output 1x as often as the binary output
if self.graph_frequency < 1:
self.graph_frequency = 1
if (self.balls_graph_counter == self.graph_frequency):
self.balls_graph_counter = 0
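# Each block below corresponds to one verification stage, identified by a specific node Id.
# At every sampled instant the node's TOTAL_FORCES, ANGULAR_VELOCITY and displacement
# measures (via the MeasureError and GetNodeDisplacement helpers, presumably defined earlier
# in this script) are appended to a per-stage file benchmark<N>_graph<i>.dat.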
for node in modelpart.Nodes:
if node.Id == 10: ### stage 0 - simple dem
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=0
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 42: ### stage 1
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=1
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 71: ### stage 2
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=2
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 1354: ### stage 3
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=3
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 1534: ### stage 4 - particle injected by inlet
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=4
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 1416: ### stage 5 - inlet movement
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=5
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 1337: ### stage 6 - dem with initial velocity
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=6
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 663: ### stage 8 - gravity on sphere of spheres
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=7
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 758: ### stage 9 - dem with reduced degrees of freedom
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=8
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 789: ### stage 10 - dem falling pink
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=9
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 913: ### stage 13 - dem falling green fem
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=10
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 974: ### stage 14 - dem falling orange
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=11
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 1061: ### stage 15 - dem imposed period
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=12
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
if node.Id == 1180: ### stage 16 - dem initial
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=13
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
for node in modelpart.Nodes:
                if node.Id == 1290: ### stage 17 - dem against fem rotatory force
force_node = MeasureError(node, TOTAL_FORCES)
angular_node = MeasureError(node, ANGULAR_VELOCITY)
displacement_node = GetNodeDisplacement(node)
i=14
data = open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%angular_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
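        # Summary of the particle probes written above (node Id -> graph file index):
        #   10->0, 42->1, 71->2, 1354->3, 1534->4, 1416->5, 1337->6, 663->7,
        #   758->8, 789->9, 913->10, 974->11, 1061->12, 1180->13, 1290->14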
self.balls_graph_counter += 1
if (self.rigid_graph_counter == self.graph_frequency):
self.rigid_graph_counter = 0
for sub_part in rigid_face_model_part.SubModelParts:
if sub_part.Name == '0':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
                    i = name # beware: the sub-part name doubles as the rigid-graph file index
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '1':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '2':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '3':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '4':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '5':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '6':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
if sub_part.Name == '7':
name = int(sub_part.Name)
mesh_nodes = sub_part.GetMesh(0).Nodes
force_node = 0.0
for node in mesh_nodes:
force_node += MeasureError(node, ELASTIC_FORCES)
displacement_node += GetNodeDisplacement(node)
i=name
data = open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % i, 'a')
data.write(str("%.8g"%time).rjust(12)+" "+str("%.6g"%force_node).rjust(13)+" "+str("%.6g"%displacement_node).rjust(13)+"\n")
data.flush()
self.rigid_graph_counter += 1
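    # Output layout produced by generate_graph_points:
    #   benchmark<N>_graph<i>.dat       -> time, measured TOTAL_FORCES, measured
    #                                      ANGULAR_VELOCITY, node displacement
    #   benchmark<N>_rigid_graph<i>.dat -> time, accumulated ELASTIC_FORCES over the
    #                                      sub-part nodes, accumulated displacement
    # where <N> is sys.argv[1] and <i> is the stage / sub-part index.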
def print_results(self, number_of_points_in_the_graphic, dt=0, elapsed_time=0.0):
error1, error2, error3 = self.compute_errors() # TOTAL_FORCES, ANGULAR_VELOCITY, NODE DISPLACEMENT FROM INITIAL POS
error4, error5 = self.compute_rigid_errors() # TOTAL_FORCES, AVG DISPLACEMENT FROM INITIAL POS
error_filename = 'errors.err'
error_file = open(error_filename, 'a')
for index in range(self.number_of_DEM_benchmarks):
error_file.write("DEM Benchmark 40:")
if (error1[index] < 10.0 and error2[index] < 10.0 and error3[index] < 10.0):
error_file.write(" OK!........ Test 40_%s SUCCESSFUL (spheres)\n" % index)
#shutil.rmtree('benchmark40_Post_Files', ignore_errors = True)
else:
error_file.write(" KO!........ Test 40_%s FAILED (spheres)\n" % index)
for index in range(self.number_of_FEM_benchmarks):
error_file.write("DEM Benchmark 40:")
if (error4[index] < 10.0 and error5[index] < 10.0):
error_file.write(" OK!........ Test 40_%s SUCCESSFUL (finite elements)\n" % index)
else:
error_file.write(" KO!........ Test 40_%s FAILED (finite elements)\n" % index)
error_file.close()
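    # Acceptance criterion used above: a stage passes only if every error
    # component reported for it stays below 10 (the values returned by
    # compute_errors / compute_rigid_errors are already percentages).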
def compute_errors(self):
error1 = []
error2 = []
error3 = []
for index in range(self.number_of_DEM_benchmarks):
reference_data = lines_DEM = list(range(0, 1000))
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_graph_benchmark' + '40_%s' % index + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[1])) # ref TOTAL_FORCES
i+=1
i = 0
with open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % index) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # TOTAL_FORCES
i+=1
dem_error1 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error1+=abs(i-j) # (test_data[0]-reference_data[0]) + ...
dem_error1/=summation_of_analytics_data # relative error of the above against sum of reference data
print("Error in total force at the reference particle =", 100*dem_error1,"%")
i = 0
with open('paper_data/reference_graph_benchmark' + '40_%s' % index + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[2])) # ref ANGULAR_VELOCITY
i+=1
i = 0
with open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % index) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[2])) # ANGULAR_VELOCITY
i+=1
dem_error2 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error2+=abs(i-j) # (test_data[0]-reference_data[0]) + ...
dem_error2/=summation_of_analytics_data # relative error of the above against sum of reference data
print("Error in angular velocity at the reference particle =", 100*dem_error2,"%")
i = 0
with open('paper_data/reference_graph_benchmark' + '40_%s' % index + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[3])) # ref displacement from initial pos
i+=1
i = 0
with open("benchmark" + str(sys.argv[1]) + "_graph%s.dat" % index) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[3])) # displacement from initial pos
i+=1
dem_error3 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error3+=abs(i-j)
dem_error3/=summation_of_analytics_data
print("Error in delta displacement at the reference particle =", 100*dem_error3,"%")
error1.append(100*dem_error1)
error2.append(100*dem_error2)
error3.append(100*dem_error3)
return error1, error2, error3
def compute_rigid_errors(self):
error4 = []
error5 = []
for index in range(self.number_of_FEM_benchmarks):
reference_data = lines_DEM = list(range(0, 1000))
analytics_data = []; DEM_data = []; summation_of_analytics_data = 0
i = 0
with open('paper_data/reference_rigid_graph_benchmark' + '40_%s' % index + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[1])) # REFERENCE TOTAL_FORCES
i+=1
i = 0
with open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % index) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[1])) # TOTAL_FORCES
i+=1
dem_error1 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error1+=abs(i-j)
if summation_of_analytics_data!=0.0: # (test_data[0]-reference_data[0]) + ...
dem_error1/=summation_of_analytics_data # relative error of the above against sum of reference data
print("Error in total force at the reference FEM subpart =", 100*dem_error1,"%")
i = 0
with open('paper_data/reference_rigid_graph_benchmark' + '40_%s' % index + '.dat') as reference:
for line in reference:
if i in reference_data:
parts = line.split()
analytics_data.append(float(parts[2])) # displacement from initial pos
i+=1
i = 0
with open("benchmark" + str(sys.argv[1]) + "_rigid_graph%s.dat" % index) as current_data:
for line in current_data:
if i in lines_DEM:
parts = line.split()
DEM_data.append(float(parts[2])) # ref displacement from initial pos
i+=1
dem_error2 = 0
for j in analytics_data:
summation_of_analytics_data+=abs(j)
for i, j in zip(DEM_data, analytics_data):
dem_error2+=abs(i-j)
dem_error2/=summation_of_analytics_data
print("Error in delta displacement at the reference FEM subpart =", 100*dem_error2,"%")
error4.append(100*dem_error1)
error5.append(100*dem_error2)
return error4, error5
def create_gnuplot_scripts(self, output_filename, dt):
pass
def delete_archives():
#.......................Removing extra files
files_to_delete_list = glob('*.time')
files_to_delete_list.extend(glob('*.dat'))
files_to_delete_list.extend(glob('*.gp'))
files_to_delete_list.extend(glob('*.txt'))
files_to_delete_list.extend(glob('*.lst'))
files_to_delete_list.extend(glob('*.info'))
files_to_delete_list.extend(glob('*.err'))
files_to_delete_list.extend(glob('*.hdf5'))
for to_erase_file in files_to_delete_list:
os.remove(to_erase_file)
    #............Getting rid of useless folders
folders_to_delete_list = glob('*Data')
folders_to_delete_list.extend(glob('*ists'))
folders_to_delete_list.extend(glob('*ults'))
folders_to_delete_list.extend(glob('*he__'))
folders_to_delete_list.extend(glob('*aphs'))
folders_to_delete_list.extend(glob('*iles'))
for to_erase_folder in folders_to_delete_list:
shutil.rmtree(to_erase_folder)
def print_gnuplot_files_on_screen(gnuplot_script_name):
system('gnuplot -persist ' + gnuplot_script_name)
def create_pdf_document(pdf_script_name):
system('gnuplot -persist ' + pdf_script_name)
| 46.031289 | 243 | 0.624596 | 29,041 | 220,674 | 4.417238 | 0.020006 | 0.032156 | 0.021632 | 0.014842 | 0.953914 | 0.931822 | 0.916426 | 0.907289 | 0.893398 | 0.882009 | 0 | 0.034678 | 0.277355 | 220,674 | 4,793 | 244 | 46.040893 | 0.769748 | 0.044613 | 0 | 0.81243 | 0 | 0.006689 | 0.086391 | 0.020282 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070234 | false | 0.015886 | 0.00223 | 0.000557 | 0.092252 | 0.050167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4074a0b13da89e250ba2310384d5eacd5560e389 | 1,356 | py | Python | diventi/tooltips/migrations/0015_auto_20200901_1950.py | flavoi/diven | 3173ca3ca3fbedc191b8eab3639a6bceb3c442c4 | [
"Apache-2.0"
] | 2 | 2019-06-27T16:00:17.000Z | 2020-08-14T07:46:05.000Z | diventi/tooltips/migrations/0015_auto_20200901_1950.py | flavoi/diven | 3173ca3ca3fbedc191b8eab3639a6bceb3c442c4 | [
"Apache-2.0"
] | 26 | 2020-02-15T22:39:35.000Z | 2022-02-19T21:09:01.000Z | diventi/tooltips/migrations/0015_auto_20200901_1950.py | flavoi/diven | 3173ca3ca3fbedc191b8eab3639a6bceb3c442c4 | [
"Apache-2.0"
] | 1 | 2021-11-12T22:30:15.000Z | 2021-11-12T22:30:15.000Z | # Generated by Django 2.2.13 on 2020-09-01 17:50
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('tooltips', '0014_auto_20200830_1804'),
]
operations = [
migrations.AlterField(
model_name='tooltip',
name='title',
field=models.CharField(max_length=80, verbose_name='title'),
),
migrations.AlterField(
model_name='tooltip',
name='title_en',
field=models.CharField(max_length=80, null=True, verbose_name='title'),
),
migrations.AlterField(
model_name='tooltip',
name='title_it',
field=models.CharField(max_length=80, null=True, verbose_name='title'),
),
migrations.AlterField(
model_name='tooltipgroup',
name='title',
field=models.CharField(max_length=80, verbose_name='title'),
),
migrations.AlterField(
model_name='tooltipgroup',
name='title_en',
field=models.CharField(max_length=80, null=True, verbose_name='title'),
),
migrations.AlterField(
model_name='tooltipgroup',
name='title_it',
field=models.CharField(max_length=80, null=True, verbose_name='title'),
),
]
| 30.818182 | 83 | 0.578171 | 137 | 1,356 | 5.540146 | 0.29927 | 0.142292 | 0.197628 | 0.229249 | 0.799737 | 0.799737 | 0.799737 | 0.752306 | 0.752306 | 0.752306 | 0 | 0.046316 | 0.29941 | 1,356 | 43 | 84 | 31.534884 | 0.752632 | 0.033923 | 0 | 0.810811 | 1 | 0 | 0.122324 | 0.017584 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
90681b31e1662f4f8ed10dcfc71a183b60796d93 | 418,809 | py | Python | ProgettoLube/WebInspector/venv/Lib/site-packages/tensorflow/python/ops/gen_experimental_dataset_ops.py | Lube-Project/ProgettoLube | cbf33971e2c2e865783ec1a2302625539186a338 | [
"MIT"
] | null | null | null | ProgettoLube/WebInspector/venv/Lib/site-packages/tensorflow/python/ops/gen_experimental_dataset_ops.py | Lube-Project/ProgettoLube | cbf33971e2c2e865783ec1a2302625539186a338 | [
"MIT"
] | null | null | null | ProgettoLube/WebInspector/venv/Lib/site-packages/tensorflow/python/ops/gen_experimental_dataset_ops.py | Lube-Project/ProgettoLube | cbf33971e2c2e865783ec1a2302625539186a338 | [
"MIT"
] | 1 | 2021-01-28T01:57:41.000Z | 2021-01-28T01:57:41.000Z | """Python wrappers around TensorFlow ops.
This file is MACHINE GENERATED! Do not edit.
Original C++ source file: experimental_dataset_ops.cc
"""
import collections
from tensorflow.python import pywrap_tfe as pywrap_tfe
from tensorflow.python.eager import context as _context
from tensorflow.python.eager import core as _core
from tensorflow.python.eager import execute as _execute
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import op_def_registry as _op_def_registry
from tensorflow.python.framework import ops as _ops
from tensorflow.python.framework import op_def_library as _op_def_library
from tensorflow.python.util.deprecation import deprecated_endpoints
from tensorflow.python.util import dispatch as _dispatch
from tensorflow.python.util.tf_export import tf_export
def assert_cardinality_dataset(input_dataset, cardinality, output_types, output_shapes, name=None):
  r"""Asserts the cardinality of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
cardinality: A `Tensor` of type `int64`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "AssertCardinalityDataset",
name, tld.op_callbacks, input_dataset, cardinality, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return assert_cardinality_dataset_eager_fallback(
input_dataset, cardinality, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'assert_cardinality_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'assert_cardinality_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"AssertCardinalityDataset", input_dataset=input_dataset,
cardinality=cardinality,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"AssertCardinalityDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
AssertCardinalityDataset = tf_export("raw_ops.AssertCardinalityDataset")(_ops.to_raw_op(assert_cardinality_dataset))
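# Illustrative usage sketch (not part of the generated wrappers): this raw op
# backs the public helper tf.data.experimental.assert_cardinality, e.g.
#
#   ds = tf.data.Dataset.range(10)
#   ds = ds.apply(tf.data.experimental.assert_cardinality(10))
#
# Iterating the transformed dataset raises an error if the actual number of
# elements differs from the asserted cardinality.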
def assert_cardinality_dataset_eager_fallback(input_dataset, cardinality, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'assert_cardinality_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'assert_cardinality_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
cardinality = _ops.convert_to_tensor(cardinality, _dtypes.int64)
_inputs_flat = [input_dataset, cardinality]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"AssertCardinalityDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"AssertCardinalityDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def assert_next_dataset(input_dataset, transformations, output_types, output_shapes, name=None):
r"""A transformation that asserts which transformations happen next.
This transformation checks whether the camel-case names (i.e. "FlatMap", not
"flat_map") of the transformations following this transformation match the list
of names in the `transformations` argument. If there is a mismatch, the
transformation raises an exception.
The check occurs when iterating over the contents of the dataset, which
means that the check happens *after* any static optimizations are applied
to the dataset graph.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
`AssertNextDataset` passes through the outputs of its input dataset.
transformations: A `Tensor` of type `string`.
A `tf.string` vector `tf.Tensor` identifying the transformations that are
expected to happen next.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "AssertNextDataset", name,
tld.op_callbacks, input_dataset, transformations, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return assert_next_dataset_eager_fallback(
input_dataset, transformations, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'assert_next_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'assert_next_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"AssertNextDataset", input_dataset=input_dataset,
transformations=transformations,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"AssertNextDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
AssertNextDataset = tf_export("raw_ops.AssertNextDataset")(_ops.to_raw_op(assert_next_dataset))
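# Descriptive note (an assumption about intended use, not part of the generated
# wrappers): this op is mainly inserted by tf.data's own optimization tests to
# check that a transformation with the given camel-case name (e.g. "Map")
# appears next in the optimized pipeline; it is rarely called directly.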
def assert_next_dataset_eager_fallback(input_dataset, transformations, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'assert_next_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'assert_next_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
transformations = _ops.convert_to_tensor(transformations, _dtypes.string)
_inputs_flat = [input_dataset, transformations]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"AssertNextDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"AssertNextDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def auto_shard_dataset(input_dataset, num_workers, index, output_types, output_shapes, auto_shard_policy=0, name=None):
r"""Creates a dataset that shards the input dataset.
Creates a dataset that shards the input dataset by num_workers, returning a
sharded dataset for the index-th worker. This attempts to automatically shard
a dataset by examining the Dataset graph and inserting a shard op before the
inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset).
This dataset will throw a NotFound error if we cannot shard the dataset
automatically.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
num_workers: A `Tensor` of type `int64`.
A scalar representing the number of workers to distribute this dataset across.
index: A `Tensor` of type `int64`.
A scalar representing the index of the current worker out of num_workers.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
auto_shard_policy: An optional `int`. Defaults to `0`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "AutoShardDataset", name,
tld.op_callbacks, input_dataset, num_workers, index,
"auto_shard_policy", auto_shard_policy, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return auto_shard_dataset_eager_fallback(
input_dataset, num_workers, index,
auto_shard_policy=auto_shard_policy, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'auto_shard_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'auto_shard_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if auto_shard_policy is None:
auto_shard_policy = 0
auto_shard_policy = _execute.make_int(auto_shard_policy, "auto_shard_policy")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"AutoShardDataset", input_dataset=input_dataset,
num_workers=num_workers, index=index,
output_types=output_types,
output_shapes=output_shapes,
auto_shard_policy=auto_shard_policy, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("auto_shard_policy", _op._get_attr_int("auto_shard_policy"),
"output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"AutoShardDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
AutoShardDataset = tf_export("raw_ops.AutoShardDataset")(_ops.to_raw_op(auto_shard_dataset))
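# Illustrative usage sketch (not part of the generated wrappers): automatic
# sharding is normally configured through dataset options rather than by
# calling this op directly; `ds` below is an assumed pre-existing dataset.
#
#   options = tf.data.Options()
#   options.experimental_distribute.auto_shard_policy = (
#       tf.data.experimental.AutoShardPolicy.DATA)
#   ds = ds.with_options(options)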
def auto_shard_dataset_eager_fallback(input_dataset, num_workers, index, output_types, output_shapes, auto_shard_policy, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'auto_shard_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'auto_shard_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if auto_shard_policy is None:
auto_shard_policy = 0
auto_shard_policy = _execute.make_int(auto_shard_policy, "auto_shard_policy")
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_workers = _ops.convert_to_tensor(num_workers, _dtypes.int64)
index = _ops.convert_to_tensor(index, _dtypes.int64)
_inputs_flat = [input_dataset, num_workers, index]
_attrs = ("auto_shard_policy", auto_shard_policy, "output_types",
output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"AutoShardDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"AutoShardDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def bytes_produced_stats_dataset(input_dataset, tag, output_types, output_shapes, name=None):
r"""Records the bytes size of each element of `input_dataset` in a StatsAggregator.
Args:
input_dataset: A `Tensor` of type `variant`.
tag: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "BytesProducedStatsDataset",
name, tld.op_callbacks, input_dataset, tag, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return bytes_produced_stats_dataset_eager_fallback(
input_dataset, tag, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'bytes_produced_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'bytes_produced_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"BytesProducedStatsDataset", input_dataset=input_dataset, tag=tag,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"BytesProducedStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
BytesProducedStatsDataset = tf_export("raw_ops.BytesProducedStatsDataset")(_ops.to_raw_op(bytes_produced_stats_dataset))
def bytes_produced_stats_dataset_eager_fallback(input_dataset, tag, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'bytes_produced_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'bytes_produced_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
tag = _ops.convert_to_tensor(tag, _dtypes.string)
_inputs_flat = [input_dataset, tag]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"BytesProducedStatsDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"BytesProducedStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def csv_dataset(filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols, record_defaults, output_shapes, name=None):
  r"""Creates a dataset that reads CSV records from the given files.
Args:
filenames: A `Tensor` of type `string`.
compression_type: A `Tensor` of type `string`.
buffer_size: A `Tensor` of type `int64`.
header: A `Tensor` of type `bool`.
field_delim: A `Tensor` of type `string`.
use_quote_delim: A `Tensor` of type `bool`.
na_value: A `Tensor` of type `string`.
select_cols: A `Tensor` of type `int64`.
record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "CSVDataset", name,
tld.op_callbacks, filenames, compression_type, buffer_size, header,
field_delim, use_quote_delim, na_value, select_cols, record_defaults,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return csv_dataset_eager_fallback(
filenames, compression_type, buffer_size, header, field_delim,
use_quote_delim, na_value, select_cols, record_defaults,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'csv_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"CSVDataset", filenames=filenames, compression_type=compression_type,
buffer_size=buffer_size, header=header,
field_delim=field_delim,
use_quote_delim=use_quote_delim, na_value=na_value,
select_cols=select_cols,
record_defaults=record_defaults,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"CSVDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
CSVDataset = tf_export("raw_ops.CSVDataset")(_ops.to_raw_op(csv_dataset))
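# Illustrative usage sketch (not part of the generated wrappers): this raw op
# backs tf.data.experimental.CsvDataset; the file name below is hypothetical.
#
#   ds = tf.data.experimental.CsvDataset(
#       "records.csv", record_defaults=[tf.float32, tf.string], header=True)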
def csv_dataset_eager_fallback(filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols, record_defaults, output_shapes, name, ctx):
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'csv_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_output_types, record_defaults = _execute.convert_to_mixed_eager_tensors(record_defaults, ctx)
filenames = _ops.convert_to_tensor(filenames, _dtypes.string)
compression_type = _ops.convert_to_tensor(compression_type, _dtypes.string)
buffer_size = _ops.convert_to_tensor(buffer_size, _dtypes.int64)
header = _ops.convert_to_tensor(header, _dtypes.bool)
field_delim = _ops.convert_to_tensor(field_delim, _dtypes.string)
use_quote_delim = _ops.convert_to_tensor(use_quote_delim, _dtypes.bool)
na_value = _ops.convert_to_tensor(na_value, _dtypes.string)
select_cols = _ops.convert_to_tensor(select_cols, _dtypes.int64)
_inputs_flat = [filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols] + list(record_defaults)
_attrs = ("output_types", _attr_output_types, "output_shapes",
output_shapes)
_result = _execute.execute(b"CSVDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"CSVDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def choose_fastest_branch_dataset(input_dataset, ratio_numerator, ratio_denominator, other_arguments, num_elements_per_branch, branches, other_arguments_lengths, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
ratio_numerator: A `Tensor` of type `int64`.
ratio_denominator: A `Tensor` of type `int64`.
other_arguments: A list of `Tensor` objects.
num_elements_per_branch: An `int` that is `>= 1`.
branches: A list of functions decorated with @Defun that has length `>= 1`.
other_arguments_lengths: A list of `ints` that has length `>= 1`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ChooseFastestBranchDataset",
name, tld.op_callbacks, input_dataset, ratio_numerator,
ratio_denominator, other_arguments, "num_elements_per_branch",
num_elements_per_branch, "branches", branches,
"other_arguments_lengths", other_arguments_lengths, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return choose_fastest_branch_dataset_eager_fallback(
input_dataset, ratio_numerator, ratio_denominator, other_arguments,
num_elements_per_branch=num_elements_per_branch, branches=branches,
other_arguments_lengths=other_arguments_lengths,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
num_elements_per_branch = _execute.make_int(num_elements_per_branch, "num_elements_per_branch")
if not isinstance(branches, (list, tuple)):
raise TypeError(
"Expected list for 'branches' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % branches)
if not isinstance(other_arguments_lengths, (list, tuple)):
raise TypeError(
"Expected list for 'other_arguments_lengths' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % other_arguments_lengths)
other_arguments_lengths = [_execute.make_int(_i, "other_arguments_lengths") for _i in other_arguments_lengths]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ChooseFastestBranchDataset", input_dataset=input_dataset,
ratio_numerator=ratio_numerator,
ratio_denominator=ratio_denominator,
other_arguments=other_arguments,
num_elements_per_branch=num_elements_per_branch,
branches=branches,
other_arguments_lengths=other_arguments_lengths,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("Targuments", _op.get_attr("Targuments"),
"num_elements_per_branch",
_op._get_attr_int("num_elements_per_branch"), "branches",
_op.get_attr("branches"), "other_arguments_lengths",
_op.get_attr("other_arguments_lengths"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ChooseFastestBranchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ChooseFastestBranchDataset = tf_export("raw_ops.ChooseFastestBranchDataset")(_ops.to_raw_op(choose_fastest_branch_dataset))
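# Descriptive note (a reading of the signature, since the docstring above is
# still a TODO): every function in `branches` is expected to build an
# equivalent pipeline over the same inputs; the op times each branch on
# `num_elements_per_branch` elements and then keeps producing elements from
# the branch that ran fastest.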
def choose_fastest_branch_dataset_eager_fallback(input_dataset, ratio_numerator, ratio_denominator, other_arguments, num_elements_per_branch, branches, other_arguments_lengths, output_types, output_shapes, name, ctx):
num_elements_per_branch = _execute.make_int(num_elements_per_branch, "num_elements_per_branch")
if not isinstance(branches, (list, tuple)):
raise TypeError(
"Expected list for 'branches' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % branches)
if not isinstance(other_arguments_lengths, (list, tuple)):
raise TypeError(
"Expected list for 'other_arguments_lengths' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % other_arguments_lengths)
other_arguments_lengths = [_execute.make_int(_i, "other_arguments_lengths") for _i in other_arguments_lengths]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'choose_fastest_branch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
ratio_numerator = _ops.convert_to_tensor(ratio_numerator, _dtypes.int64)
ratio_denominator = _ops.convert_to_tensor(ratio_denominator, _dtypes.int64)
_inputs_flat = [input_dataset, ratio_numerator, ratio_denominator] + list(other_arguments)
_attrs = ("Targuments", _attr_Targuments, "num_elements_per_branch",
num_elements_per_branch, "branches", branches, "other_arguments_lengths",
other_arguments_lengths, "output_types", output_types, "output_shapes",
output_shapes)
_result = _execute.execute(b"ChooseFastestBranchDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ChooseFastestBranchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def choose_fastest_dataset(input_datasets, num_experiments, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_datasets: A list of at least 2 `Tensor` objects with type `variant`.
num_experiments: An `int`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ChooseFastestDataset", name,
tld.op_callbacks, input_datasets, "num_experiments", num_experiments,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return choose_fastest_dataset_eager_fallback(
input_datasets, num_experiments=num_experiments,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'input_datasets' argument to "
"'choose_fastest_dataset' Op, not %r." % input_datasets)
_attr_N = len(input_datasets)
num_experiments = _execute.make_int(num_experiments, "num_experiments")
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'choose_fastest_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'choose_fastest_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ChooseFastestDataset", input_datasets=input_datasets,
num_experiments=num_experiments,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("N", _op._get_attr_int("N"), "num_experiments",
_op._get_attr_int("num_experiments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ChooseFastestDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ChooseFastestDataset = tf_export("raw_ops.ChooseFastestDataset")(_ops.to_raw_op(choose_fastest_dataset))
def choose_fastest_dataset_eager_fallback(input_datasets, num_experiments, output_types, output_shapes, name, ctx):
if not isinstance(input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'input_datasets' argument to "
"'choose_fastest_dataset' Op, not %r." % input_datasets)
_attr_N = len(input_datasets)
num_experiments = _execute.make_int(num_experiments, "num_experiments")
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'choose_fastest_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'choose_fastest_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_datasets = _ops.convert_n_to_tensor(input_datasets, _dtypes.variant)
_inputs_flat = list(input_datasets)
_attrs = ("N", _attr_N, "num_experiments", num_experiments, "output_types",
output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ChooseFastestDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ChooseFastestDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def compress_element(components, name=None):
r"""Compresses a dataset element.
Args:
components: A list of `Tensor` objects.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "CompressElement", name,
tld.op_callbacks, components)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return compress_element_eager_fallback(
components, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"CompressElement", components=components, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("input_types", _op.get_attr("input_types"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"CompressElement", _inputs_flat, _attrs, _result)
_result, = _result
return _result
CompressElement = tf_export("raw_ops.CompressElement")(_ops.to_raw_op(compress_element))
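# Descriptive note (an assumption about intended use): CompressElement packs a
# single dataset element into one compressed scalar variant and is paired with
# an UncompressElement op; inside TensorFlow it is used internally, for example
# when the tf.data service moves elements between processes.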
def compress_element_eager_fallback(components, name, ctx):
_attr_input_types, components = _execute.convert_to_mixed_eager_tensors(components, ctx)
_inputs_flat = list(components)
_attrs = ("input_types", _attr_input_types)
_result = _execute.execute(b"CompressElement", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"CompressElement", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def data_service_dataset(dataset_id, processing_mode, address, protocol, job_name, max_outstanding_requests, iteration_counter, output_types, output_shapes, task_refresh_interval_hint_ms=-1, name=None):
r"""Creates a dataset that reads data from the tf.data service.
Args:
dataset_id: A `Tensor` of type `int64`.
processing_mode: A `Tensor` of type `string`.
address: A `Tensor` of type `string`.
protocol: A `Tensor` of type `string`.
job_name: A `Tensor` of type `string`.
max_outstanding_requests: A `Tensor` of type `int64`.
iteration_counter: A `Tensor` of type `resource`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
task_refresh_interval_hint_ms: An optional `int`. Defaults to `-1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DataServiceDataset", name,
tld.op_callbacks, dataset_id, processing_mode, address, protocol,
job_name, max_outstanding_requests, iteration_counter,
"task_refresh_interval_hint_ms", task_refresh_interval_hint_ms,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return data_service_dataset_eager_fallback(
dataset_id, processing_mode, address, protocol, job_name,
max_outstanding_requests, iteration_counter,
task_refresh_interval_hint_ms=task_refresh_interval_hint_ms,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'data_service_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'data_service_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if task_refresh_interval_hint_ms is None:
task_refresh_interval_hint_ms = -1
task_refresh_interval_hint_ms = _execute.make_int(task_refresh_interval_hint_ms, "task_refresh_interval_hint_ms")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DataServiceDataset", dataset_id=dataset_id,
processing_mode=processing_mode,
address=address, protocol=protocol,
job_name=job_name,
max_outstanding_requests=max_outstanding_requests,
iteration_counter=iteration_counter,
output_types=output_types,
output_shapes=output_shapes,
task_refresh_interval_hint_ms=task_refresh_interval_hint_ms,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("task_refresh_interval_hint_ms",
_op._get_attr_int("task_refresh_interval_hint_ms"),
"output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DataServiceDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DataServiceDataset = tf_export("raw_ops.DataServiceDataset")(_ops.to_raw_op(data_service_dataset))
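# Illustrative usage sketch (not part of the generated wrappers): user code
# normally reaches this op through tf.data.experimental.service.distribute; the
# dispatcher address below is hypothetical.
#
#   ds = ds.apply(tf.data.experimental.service.distribute(
#       processing_mode="parallel_epochs",
#       service="grpc://dispatcher.example:5050"))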
def data_service_dataset_eager_fallback(dataset_id, processing_mode, address, protocol, job_name, max_outstanding_requests, iteration_counter, output_types, output_shapes, task_refresh_interval_hint_ms, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'data_service_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'data_service_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if task_refresh_interval_hint_ms is None:
task_refresh_interval_hint_ms = -1
task_refresh_interval_hint_ms = _execute.make_int(task_refresh_interval_hint_ms, "task_refresh_interval_hint_ms")
dataset_id = _ops.convert_to_tensor(dataset_id, _dtypes.int64)
processing_mode = _ops.convert_to_tensor(processing_mode, _dtypes.string)
address = _ops.convert_to_tensor(address, _dtypes.string)
protocol = _ops.convert_to_tensor(protocol, _dtypes.string)
job_name = _ops.convert_to_tensor(job_name, _dtypes.string)
max_outstanding_requests = _ops.convert_to_tensor(max_outstanding_requests, _dtypes.int64)
iteration_counter = _ops.convert_to_tensor(iteration_counter, _dtypes.resource)
_inputs_flat = [dataset_id, processing_mode, address, protocol, job_name, max_outstanding_requests, iteration_counter]
_attrs = ("task_refresh_interval_hint_ms", task_refresh_interval_hint_ms,
"output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"DataServiceDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DataServiceDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def dataset_from_graph(graph_def, name=None):
r"""Creates a dataset from the given `graph_def`.
Creates a dataset from the provided `graph_def`.
Args:
graph_def: A `Tensor` of type `string`.
The graph representation of the dataset (as serialized GraphDef).
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DatasetFromGraph", name,
tld.op_callbacks, graph_def)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return dataset_from_graph_eager_fallback(
graph_def, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DatasetFromGraph", graph_def=graph_def, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"DatasetFromGraph", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DatasetFromGraph = tf_export("raw_ops.DatasetFromGraph")(_ops.to_raw_op(dataset_from_graph))
def dataset_from_graph_eager_fallback(graph_def, name, ctx):
graph_def = _ops.convert_to_tensor(graph_def, _dtypes.string)
_inputs_flat = [graph_def]
_attrs = None
_result = _execute.execute(b"DatasetFromGraph", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DatasetFromGraph", _inputs_flat, _attrs, _result)
_result, = _result
return _result
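# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Rehydrates a dataset from a serialized GraphDef string, e.g. one produced
# elsewhere via tf.raw_ops.DatasetToGraphV2. Assumes eager mode and the
# module-level `_ops`/`_dtypes` helpers imported at the top of this file.
def _example_dataset_from_graph(serialized_graph_def):
  graph_def = _ops.convert_to_tensor(serialized_graph_def, _dtypes.string)
  # Returns a scalar variant tensor representing the rehydrated dataset.
  return dataset_from_graph(graph_def)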
def dataset_to_tf_record(input_dataset, filename, compression_type, name=None):
r"""Writes the given dataset to the given file using the TFRecord format.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the dataset to write.
filename: A `Tensor` of type `string`.
A scalar string tensor representing the filename to use.
compression_type: A `Tensor` of type `string`.
A scalar string tensor containing either (i) the empty string (no
compression), (ii) "ZLIB", or (iii) "GZIP".
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DatasetToTFRecord", name,
tld.op_callbacks, input_dataset, filename, compression_type)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return dataset_to_tf_record_eager_fallback(
input_dataset, filename, compression_type, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DatasetToTFRecord", input_dataset=input_dataset, filename=filename,
compression_type=compression_type, name=name)
return _op
DatasetToTFRecord = tf_export("raw_ops.DatasetToTFRecord")(_ops.to_raw_op(dataset_to_tf_record))
def dataset_to_tf_record_eager_fallback(input_dataset, filename, compression_type, name, ctx):
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
filename = _ops.convert_to_tensor(filename, _dtypes.string)
compression_type = _ops.convert_to_tensor(compression_type, _dtypes.string)
_inputs_flat = [input_dataset, filename, compression_type]
_attrs = None
_result = _execute.execute(b"DatasetToTFRecord", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
_result = None
return _result
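# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Writes every element of a dataset of scalar strings to a GZIP-compressed
# TFRecord file. `dataset_variant` is assumed to be the variant tensor of a
# tf.data.Dataset of serialized records; the path below is only illustrative.
def _example_dataset_to_tf_record(dataset_variant,
                                  path="/tmp/example.tfrecord.gz"):
  filename = _ops.convert_to_tensor(path, _dtypes.string)
  compression_type = _ops.convert_to_tensor("GZIP", _dtypes.string)
  return dataset_to_tf_record(dataset_variant, filename, compression_type)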
def dense_to_sparse_batch_dataset(input_dataset, batch_size, row_shape, output_types, output_shapes, name=None):
r"""Creates a dataset that batches input elements into a SparseTensor.
Args:
input_dataset: A `Tensor` of type `variant`.
A handle to an input dataset. Must have a single component.
batch_size: A `Tensor` of type `int64`.
A scalar representing the number of elements to accumulate in a
batch.
row_shape: A `Tensor` of type `int64`.
A vector representing the dense shape of each row in the produced
SparseTensor. The shape may be partially specified, using `-1` to indicate
that a particular dimension should use the maximum size of all batch elements.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DenseToSparseBatchDataset",
name, tld.op_callbacks, input_dataset, batch_size, row_shape,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return dense_to_sparse_batch_dataset_eager_fallback(
input_dataset, batch_size, row_shape, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'dense_to_sparse_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'dense_to_sparse_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DenseToSparseBatchDataset", input_dataset=input_dataset,
batch_size=batch_size,
row_shape=row_shape,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DenseToSparseBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DenseToSparseBatchDataset = tf_export("raw_ops.DenseToSparseBatchDataset")(_ops.to_raw_op(dense_to_sparse_batch_dataset))
def dense_to_sparse_batch_dataset_eager_fallback(input_dataset, batch_size, row_shape, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'dense_to_sparse_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'dense_to_sparse_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
batch_size = _ops.convert_to_tensor(batch_size, _dtypes.int64)
row_shape = _ops.convert_to_tensor(row_shape, _dtypes.int64)
_inputs_flat = [input_dataset, batch_size, row_shape]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"DenseToSparseBatchDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DenseToSparseBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
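# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Batches a single-component dense dataset into SparseTensor batches of four
# elements. The caller supplies `output_types`/`output_shapes` describing the
# flattened structure of the resulting sparse batches (an assumption here,
# since that structure depends on the input element type); `row_shape=[-1]`
# lets each batch use the longest row it contains, as documented above.
def _example_dense_to_sparse_batch(input_variant, output_types, output_shapes):
  batch_size = _ops.convert_to_tensor(4, _dtypes.int64)
  row_shape = _ops.convert_to_tensor([-1], _dtypes.int64)
  return dense_to_sparse_batch_dataset(input_variant, batch_size, row_shape,
                                       output_types=output_types,
                                       output_shapes=output_shapes)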
def directed_interleave_dataset(selector_input_dataset, data_input_datasets, output_types, output_shapes, name=None):
r"""A substitute for `InterleaveDataset` on a fixed list of `N` datasets.
Args:
selector_input_dataset: A `Tensor` of type `variant`.
A dataset of scalar `DT_INT64` elements that determines which of the
`N` data inputs should produce the next output element.
data_input_datasets: A list of at least 1 `Tensor` objects with type `variant`.
`N` datasets with the same type that will be interleaved according to
the values of `selector_input_dataset`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DirectedInterleaveDataset",
name, tld.op_callbacks, selector_input_dataset, data_input_datasets,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return directed_interleave_dataset_eager_fallback(
selector_input_dataset, data_input_datasets,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(data_input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'data_input_datasets' argument to "
"'directed_interleave_dataset' Op, not %r." % data_input_datasets)
_attr_N = len(data_input_datasets)
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'directed_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'directed_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DirectedInterleaveDataset", selector_input_dataset=selector_input_dataset,
data_input_datasets=data_input_datasets,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "N", _op._get_attr_int("N"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"DirectedInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DirectedInterleaveDataset = tf_export("raw_ops.DirectedInterleaveDataset")(_ops.to_raw_op(directed_interleave_dataset))
def directed_interleave_dataset_eager_fallback(selector_input_dataset, data_input_datasets, output_types, output_shapes, name, ctx):
if not isinstance(data_input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'data_input_datasets' argument to "
"'directed_interleave_dataset' Op, not %r." % data_input_datasets)
_attr_N = len(data_input_datasets)
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'directed_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'directed_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
selector_input_dataset = _ops.convert_to_tensor(selector_input_dataset, _dtypes.variant)
data_input_datasets = _ops.convert_n_to_tensor(data_input_datasets, _dtypes.variant)
_inputs_flat = [selector_input_dataset] + list(data_input_datasets)
_attrs = ("output_types", output_types, "output_shapes", output_shapes, "N",
_attr_N)
_result = _execute.execute(b"DirectedInterleaveDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DirectedInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
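# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Interleaves two datasets of scalar int64 elements according to a selector
# dataset of values in {0, 1}. Assumes eager mode; `selector`, `ds_a` and
# `ds_b` are tf.data.Dataset objects, and reading their `_variant_tensor`
# attribute (an internal detail) is an assumption of this sketch.
def _example_directed_interleave(selector, ds_a, ds_b):
  return directed_interleave_dataset(
      selector._variant_tensor,
      [ds_a._variant_tensor, ds_b._variant_tensor],
      output_types=[_dtypes.int64],
      output_shapes=[[]])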
def dummy_iteration_counter(name=None):
r"""TODO: add doc.
Args:
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "DummyIterationCounter", name,
tld.op_callbacks)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return dummy_iteration_counter_eager_fallback(
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"DummyIterationCounter", name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"DummyIterationCounter", _inputs_flat, _attrs, _result)
_result, = _result
return _result
DummyIterationCounter = tf_export("raw_ops.DummyIterationCounter")(_ops.to_raw_op(dummy_iteration_counter))
def dummy_iteration_counter_eager_fallback(name, ctx):
_inputs_flat = []
_attrs = None
_result = _execute.execute(b"DummyIterationCounter", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"DummyIterationCounter", _inputs_flat, _attrs, _result)
_result, = _result
return _result
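# --- Illustrative usage sketch (not part of the generated op definitions) ---
# The resource handle returned here is the kind of value the
# `iteration_counter` input of `data_service_dataset` above expects; a fresh
# counter is typically created per tf.data service iterator. Assumes eager mode.
def _example_dummy_iteration_counter():
  return dummy_iteration_counter()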
def experimental_assert_next_dataset(input_dataset, transformations, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
transformations: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalAssertNextDataset", name, tld.op_callbacks,
input_dataset, transformations, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_assert_next_dataset_eager_fallback(
input_dataset, transformations, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_assert_next_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_assert_next_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalAssertNextDataset", input_dataset=input_dataset,
transformations=transformations,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalAssertNextDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalAssertNextDataset = tf_export("raw_ops.ExperimentalAssertNextDataset")(_ops.to_raw_op(experimental_assert_next_dataset))
def experimental_assert_next_dataset_eager_fallback(input_dataset, transformations, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_assert_next_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_assert_next_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
transformations = _ops.convert_to_tensor(transformations, _dtypes.string)
_inputs_flat = [input_dataset, transformations]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalAssertNextDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalAssertNextDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
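# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Wraps a dataset of scalar int64 elements and asserts that the named
# transformations come next in the (optimized) pipeline; this op is mostly
# used in tf.data optimization tests. The transformation name "MapAndBatch"
# and the `_variant_tensor` access are assumptions of this sketch.
def _example_assert_next(ds):
  transformations = _ops.convert_to_tensor(["MapAndBatch"], _dtypes.string)
  return experimental_assert_next_dataset(
      ds._variant_tensor, transformations,
      output_types=[_dtypes.int64], output_shapes=[[]])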
def experimental_auto_shard_dataset(input_dataset, num_workers, index, output_types, output_shapes, auto_shard_policy=0, name=None):
r"""Creates a dataset that shards the input dataset.
Creates a dataset that shards the input dataset by num_workers, returning a
sharded dataset for the index-th worker. This attempts to automatically shard
a dataset by examining the Dataset graph and inserting a shard op before the
inputs to a reader Dataset (e.g. CSVDataset, TFRecordDataset).
This dataset will throw a NotFound error if we cannot shard the dataset
automatically.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
num_workers: A `Tensor` of type `int64`.
A scalar representing the number of workers to distribute this dataset across.
index: A `Tensor` of type `int64`.
A scalar representing the index of the current worker out of num_workers.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
auto_shard_policy: An optional `int`. Defaults to `0`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalAutoShardDataset",
name, tld.op_callbacks, input_dataset, num_workers, index,
"auto_shard_policy", auto_shard_policy, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_auto_shard_dataset_eager_fallback(
input_dataset, num_workers, index,
auto_shard_policy=auto_shard_policy, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_auto_shard_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_auto_shard_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if auto_shard_policy is None:
auto_shard_policy = 0
auto_shard_policy = _execute.make_int(auto_shard_policy, "auto_shard_policy")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalAutoShardDataset", input_dataset=input_dataset,
num_workers=num_workers, index=index,
output_types=output_types,
output_shapes=output_shapes,
auto_shard_policy=auto_shard_policy,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("auto_shard_policy", _op._get_attr_int("auto_shard_policy"),
"output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalAutoShardDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalAutoShardDataset = tf_export("raw_ops.ExperimentalAutoShardDataset")(_ops.to_raw_op(experimental_auto_shard_dataset))
def experimental_auto_shard_dataset_eager_fallback(input_dataset, num_workers, index, output_types, output_shapes, auto_shard_policy, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_auto_shard_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_auto_shard_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if auto_shard_policy is None:
auto_shard_policy = 0
auto_shard_policy = _execute.make_int(auto_shard_policy, "auto_shard_policy")
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_workers = _ops.convert_to_tensor(num_workers, _dtypes.int64)
index = _ops.convert_to_tensor(index, _dtypes.int64)
_inputs_flat = [input_dataset, num_workers, index]
_attrs = ("auto_shard_policy", auto_shard_policy, "output_types",
output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalAutoShardDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalAutoShardDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
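# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Shards a reader-backed dataset (e.g. a TFRecordDataset of scalar string
# elements) across four workers and keeps the slice for worker 0. Assumes
# eager mode; `ds._variant_tensor` is an internal attribute used here only
# for illustration.
def _example_auto_shard(ds, num_workers=4, worker_index=0):
  return experimental_auto_shard_dataset(
      ds._variant_tensor,
      _ops.convert_to_tensor(num_workers, _dtypes.int64),
      _ops.convert_to_tensor(worker_index, _dtypes.int64),
      output_types=[_dtypes.string], output_shapes=[[]])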
def experimental_bytes_produced_stats_dataset(input_dataset, tag, output_types, output_shapes, name=None):
r"""Records the bytes size of each element of `input_dataset` in a StatsAggregator.
Args:
input_dataset: A `Tensor` of type `variant`.
tag: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalBytesProducedStatsDataset", name, tld.op_callbacks,
input_dataset, tag, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_bytes_produced_stats_dataset_eager_fallback(
input_dataset, tag, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_bytes_produced_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_bytes_produced_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalBytesProducedStatsDataset", input_dataset=input_dataset,
tag=tag,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalBytesProducedStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalBytesProducedStatsDataset = tf_export("raw_ops.ExperimentalBytesProducedStatsDataset")(_ops.to_raw_op(experimental_bytes_produced_stats_dataset))
def experimental_bytes_produced_stats_dataset_eager_fallback(input_dataset, tag, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_bytes_produced_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_bytes_produced_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
tag = _ops.convert_to_tensor(tag, _dtypes.string)
_inputs_flat = [input_dataset, tag]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalBytesProducedStatsDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalBytesProducedStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
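# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Records each element's byte size under the tag "bytes_produced". The
# statistics only become visible when a StatsAggregator is attached to the
# dataset's options, which is outside the scope of this sketch; a dataset of
# scalar string elements is assumed.
def _example_bytes_produced_stats(ds):
  tag = _ops.convert_to_tensor("bytes_produced", _dtypes.string)
  return experimental_bytes_produced_stats_dataset(
      ds._variant_tensor, tag,
      output_types=[_dtypes.string], output_shapes=[[]])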
def experimental_csv_dataset(filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols, record_defaults, output_shapes, name=None):
r"""TODO: add doc.
Args:
filenames: A `Tensor` of type `string`.
compression_type: A `Tensor` of type `string`.
buffer_size: A `Tensor` of type `int64`.
header: A `Tensor` of type `bool`.
field_delim: A `Tensor` of type `string`.
use_quote_delim: A `Tensor` of type `bool`.
na_value: A `Tensor` of type `string`.
select_cols: A `Tensor` of type `int64`.
record_defaults: A list of `Tensor` objects with types from: `float32`, `float64`, `int32`, `int64`, `string`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalCSVDataset", name,
tld.op_callbacks, filenames, compression_type, buffer_size, header,
field_delim, use_quote_delim, na_value, select_cols, record_defaults,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_csv_dataset_eager_fallback(
filenames, compression_type, buffer_size, header, field_delim,
use_quote_delim, na_value, select_cols, record_defaults,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_csv_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalCSVDataset", filenames=filenames,
compression_type=compression_type,
buffer_size=buffer_size, header=header,
field_delim=field_delim,
use_quote_delim=use_quote_delim,
na_value=na_value, select_cols=select_cols,
record_defaults=record_defaults,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalCSVDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalCSVDataset = tf_export("raw_ops.ExperimentalCSVDataset")(_ops.to_raw_op(experimental_csv_dataset))
def experimental_csv_dataset_eager_fallback(filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols, record_defaults, output_shapes, name, ctx):
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_csv_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_output_types, record_defaults = _execute.convert_to_mixed_eager_tensors(record_defaults, ctx)
filenames = _ops.convert_to_tensor(filenames, _dtypes.string)
compression_type = _ops.convert_to_tensor(compression_type, _dtypes.string)
buffer_size = _ops.convert_to_tensor(buffer_size, _dtypes.int64)
header = _ops.convert_to_tensor(header, _dtypes.bool)
field_delim = _ops.convert_to_tensor(field_delim, _dtypes.string)
use_quote_delim = _ops.convert_to_tensor(use_quote_delim, _dtypes.bool)
na_value = _ops.convert_to_tensor(na_value, _dtypes.string)
select_cols = _ops.convert_to_tensor(select_cols, _dtypes.int64)
_inputs_flat = [filenames, compression_type, buffer_size, header, field_delim, use_quote_delim, na_value, select_cols] + list(record_defaults)
_attrs = ("output_types", _attr_output_types, "output_shapes",
output_shapes)
_result = _execute.execute(b"ExperimentalCSVDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalCSVDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
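# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Reads a two-column CSV (an int64 id and a string label) with a header row
# and no compression. The shape-[1] defaults below make both columns optional
# with the given fallback values; an empty `select_cols` means "use every
# column". File paths, column layout and buffer size are assumptions of this
# sketch.
def _example_csv_dataset(paths):
  record_defaults = [
      _ops.convert_to_tensor([0], _dtypes.int64),
      _ops.convert_to_tensor([""], _dtypes.string),
  ]
  return experimental_csv_dataset(
      filenames=_ops.convert_to_tensor(paths, _dtypes.string),
      compression_type=_ops.convert_to_tensor("", _dtypes.string),
      buffer_size=_ops.convert_to_tensor(4 * 1024 * 1024, _dtypes.int64),
      header=_ops.convert_to_tensor(True, _dtypes.bool),
      field_delim=_ops.convert_to_tensor(",", _dtypes.string),
      use_quote_delim=_ops.convert_to_tensor(True, _dtypes.bool),
      na_value=_ops.convert_to_tensor("", _dtypes.string),
      select_cols=_ops.convert_to_tensor([], _dtypes.int64),
      record_defaults=record_defaults,
      output_shapes=[[], []])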
def experimental_choose_fastest_dataset(input_datasets, num_experiments, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_datasets: A list of at least 2 `Tensor` objects with type `variant`.
num_experiments: An `int`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalChooseFastestDataset", name, tld.op_callbacks,
input_datasets, "num_experiments", num_experiments, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_choose_fastest_dataset_eager_fallback(
input_datasets, num_experiments=num_experiments,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'input_datasets' argument to "
"'experimental_choose_fastest_dataset' Op, not %r." % input_datasets)
_attr_N = len(input_datasets)
num_experiments = _execute.make_int(num_experiments, "num_experiments")
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_choose_fastest_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_choose_fastest_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalChooseFastestDataset", input_datasets=input_datasets,
num_experiments=num_experiments,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("N", _op._get_attr_int("N"), "num_experiments",
_op._get_attr_int("num_experiments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalChooseFastestDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalChooseFastestDataset = tf_export("raw_ops.ExperimentalChooseFastestDataset")(_ops.to_raw_op(experimental_choose_fastest_dataset))
def experimental_choose_fastest_dataset_eager_fallback(input_datasets, num_experiments, output_types, output_shapes, name, ctx):
if not isinstance(input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'input_datasets' argument to "
"'experimental_choose_fastest_dataset' Op, not %r." % input_datasets)
_attr_N = len(input_datasets)
num_experiments = _execute.make_int(num_experiments, "num_experiments")
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_choose_fastest_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_choose_fastest_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_datasets = _ops.convert_n_to_tensor(input_datasets, _dtypes.variant)
_inputs_flat = list(input_datasets)
_attrs = ("N", _attr_N, "num_experiments", num_experiments, "output_types",
output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalChooseFastestDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalChooseFastestDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
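# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Runs ten timing experiments over two pipelines that must produce identical
# elements (e.g. the same preprocessing expressed two different ways) and then
# keeps reading from whichever was faster. Scalar int64 elements and the
# `_variant_tensor` access are assumptions of this sketch.
def _example_choose_fastest(ds_a, ds_b, num_experiments=10):
  return experimental_choose_fastest_dataset(
      [ds_a._variant_tensor, ds_b._variant_tensor],
      num_experiments=num_experiments,
      output_types=[_dtypes.int64], output_shapes=[[]])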
def experimental_dataset_cardinality(input_dataset, name=None):
r"""Returns the cardinality of `input_dataset`.
Returns the cardinality of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the dataset to return cardinality for.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalDatasetCardinality", name, tld.op_callbacks,
input_dataset)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_dataset_cardinality_eager_fallback(
input_dataset, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalDatasetCardinality", input_dataset=input_dataset,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalDatasetCardinality", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalDatasetCardinality = tf_export("raw_ops.ExperimentalDatasetCardinality")(_ops.to_raw_op(experimental_dataset_cardinality))
def experimental_dataset_cardinality_eager_fallback(input_dataset, name, ctx):
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = None
_result = _execute.execute(b"ExperimentalDatasetCardinality", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalDatasetCardinality", _inputs_flat, _attrs, _result)
_result, = _result
return _result
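# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Returns the number of elements in `ds` as a scalar int64 tensor. Negative
# sentinel values are used for infinite or statically unknown cardinalities
# (matching tf.data.experimental.INFINITE_CARDINALITY and
# UNKNOWN_CARDINALITY); treat the exact constants as an assumption here.
def _example_dataset_cardinality(ds):
  return experimental_dataset_cardinality(ds._variant_tensor)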
def experimental_dataset_to_tf_record(input_dataset, filename, compression_type, name=None):
r"""Writes the given dataset to the given file using the TFRecord format.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the dataset to write.
filename: A `Tensor` of type `string`.
A scalar string tensor representing the filename to use.
compression_type: A `Tensor` of type `string`.
A scalar string tensor containing either (i) the empty string (no
compression), (ii) "ZLIB", or (iii) "GZIP".
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalDatasetToTFRecord", name, tld.op_callbacks,
input_dataset, filename, compression_type)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_dataset_to_tf_record_eager_fallback(
input_dataset, filename, compression_type, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalDatasetToTFRecord", input_dataset=input_dataset,
filename=filename,
compression_type=compression_type,
name=name)
return _op
ExperimentalDatasetToTFRecord = tf_export("raw_ops.ExperimentalDatasetToTFRecord")(_ops.to_raw_op(experimental_dataset_to_tf_record))
def experimental_dataset_to_tf_record_eager_fallback(input_dataset, filename, compression_type, name, ctx):
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
filename = _ops.convert_to_tensor(filename, _dtypes.string)
compression_type = _ops.convert_to_tensor(compression_type, _dtypes.string)
_inputs_flat = [input_dataset, filename, compression_type]
_attrs = None
_result = _execute.execute(b"ExperimentalDatasetToTFRecord", 0,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
_result = None
return _result
def experimental_dense_to_sparse_batch_dataset(input_dataset, batch_size, row_shape, output_types, output_shapes, name=None):
r"""Creates a dataset that batches input elements into a SparseTensor.
Args:
input_dataset: A `Tensor` of type `variant`.
A handle to an input dataset. Must have a single component.
batch_size: A `Tensor` of type `int64`.
A scalar representing the number of elements to accumulate in a
batch.
row_shape: A `Tensor` of type `int64`.
A vector representing the dense shape of each row in the produced
SparseTensor. The shape may be partially specified, using `-1` to indicate
that a particular dimension should use the maximum size of all batch elements.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalDenseToSparseBatchDataset", name, tld.op_callbacks,
input_dataset, batch_size, row_shape, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_dense_to_sparse_batch_dataset_eager_fallback(
input_dataset, batch_size, row_shape, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_dense_to_sparse_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_dense_to_sparse_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalDenseToSparseBatchDataset", input_dataset=input_dataset,
batch_size=batch_size,
row_shape=row_shape,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalDenseToSparseBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalDenseToSparseBatchDataset = tf_export("raw_ops.ExperimentalDenseToSparseBatchDataset")(_ops.to_raw_op(experimental_dense_to_sparse_batch_dataset))
def experimental_dense_to_sparse_batch_dataset_eager_fallback(input_dataset, batch_size, row_shape, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_dense_to_sparse_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_dense_to_sparse_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
batch_size = _ops.convert_to_tensor(batch_size, _dtypes.int64)
row_shape = _ops.convert_to_tensor(row_shape, _dtypes.int64)
_inputs_flat = [input_dataset, batch_size, row_shape]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalDenseToSparseBatchDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalDenseToSparseBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_directed_interleave_dataset(selector_input_dataset, data_input_datasets, output_types, output_shapes, name=None):
r"""A substitute for `InterleaveDataset` on a fixed list of `N` datasets.
Args:
selector_input_dataset: A `Tensor` of type `variant`.
A dataset of scalar `DT_INT64` elements that determines which of the
`N` data inputs should produce the next output element.
data_input_datasets: A list of at least 1 `Tensor` objects with type `variant`.
`N` datasets with the same type that will be interleaved according to
the values of `selector_input_dataset`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalDirectedInterleaveDataset", name, tld.op_callbacks,
selector_input_dataset, data_input_datasets, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_directed_interleave_dataset_eager_fallback(
selector_input_dataset, data_input_datasets,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(data_input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'data_input_datasets' argument to "
"'experimental_directed_interleave_dataset' Op, not %r." % data_input_datasets)
_attr_N = len(data_input_datasets)
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_directed_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_directed_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalDirectedInterleaveDataset", selector_input_dataset=selector_input_dataset,
data_input_datasets=data_input_datasets,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "N", _op._get_attr_int("N"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalDirectedInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalDirectedInterleaveDataset = tf_export("raw_ops.ExperimentalDirectedInterleaveDataset")(_ops.to_raw_op(experimental_directed_interleave_dataset))
def experimental_directed_interleave_dataset_eager_fallback(selector_input_dataset, data_input_datasets, output_types, output_shapes, name, ctx):
if not isinstance(data_input_datasets, (list, tuple)):
raise TypeError(
"Expected list for 'data_input_datasets' argument to "
"'experimental_directed_interleave_dataset' Op, not %r." % data_input_datasets)
_attr_N = len(data_input_datasets)
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_directed_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_directed_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
selector_input_dataset = _ops.convert_to_tensor(selector_input_dataset, _dtypes.variant)
data_input_datasets = _ops.convert_n_to_tensor(data_input_datasets, _dtypes.variant)
_inputs_flat = [selector_input_dataset] + list(data_input_datasets)
_attrs = ("output_types", output_types, "output_shapes", output_shapes, "N",
_attr_N)
_result = _execute.execute(b"ExperimentalDirectedInterleaveDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalDirectedInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_group_by_reducer_dataset(input_dataset, key_func_other_arguments, init_func_other_arguments, reduce_func_other_arguments, finalize_func_other_arguments, key_func, init_func, reduce_func, finalize_func, output_types, output_shapes, name=None):
r"""Creates a dataset that computes a group-by on `input_dataset`.
Creates a dataset that computes a group-by on `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
key_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `key_func`.
init_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `init_func`.
reduce_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `reduce_func`.
finalize_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `finalize_func`.
key_func: A function decorated with @Defun.
A function mapping an element of `input_dataset`, concatenated
with `key_func_other_arguments` to a scalar value of type DT_INT64.
init_func: A function decorated with @Defun.
A function mapping a key of type DT_INT64, concatenated with
`init_func_other_arguments` to the initial reducer state.
reduce_func: A function decorated with @Defun.
A function mapping the current reducer state and an element of `input_dataset`,
concatenated with `reduce_func_other_arguments` to a new reducer state.
finalize_func: A function decorated with @Defun.
A function mapping the final reducer state to an output element.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalGroupByReducerDataset", name, tld.op_callbacks,
input_dataset, key_func_other_arguments, init_func_other_arguments,
reduce_func_other_arguments, finalize_func_other_arguments,
"key_func", key_func, "init_func", init_func, "reduce_func",
reduce_func, "finalize_func", finalize_func, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_group_by_reducer_dataset_eager_fallback(
input_dataset, key_func_other_arguments, init_func_other_arguments,
reduce_func_other_arguments, finalize_func_other_arguments,
key_func=key_func, init_func=init_func, reduce_func=reduce_func,
finalize_func=finalize_func, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_group_by_reducer_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_group_by_reducer_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalGroupByReducerDataset", input_dataset=input_dataset,
key_func_other_arguments=key_func_other_arguments,
init_func_other_arguments=init_func_other_arguments,
reduce_func_other_arguments=reduce_func_other_arguments,
finalize_func_other_arguments=finalize_func_other_arguments,
key_func=key_func,
init_func=init_func,
reduce_func=reduce_func,
finalize_func=finalize_func,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("key_func", _op.get_attr("key_func"), "init_func",
_op.get_attr("init_func"), "reduce_func",
_op.get_attr("reduce_func"), "finalize_func",
_op.get_attr("finalize_func"), "Tkey_func_other_arguments",
_op.get_attr("Tkey_func_other_arguments"),
"Tinit_func_other_arguments",
_op.get_attr("Tinit_func_other_arguments"),
"Treduce_func_other_arguments",
_op.get_attr("Treduce_func_other_arguments"),
"Tfinalize_func_other_arguments",
_op.get_attr("Tfinalize_func_other_arguments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalGroupByReducerDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalGroupByReducerDataset = tf_export("raw_ops.ExperimentalGroupByReducerDataset")(_ops.to_raw_op(experimental_group_by_reducer_dataset))
def experimental_group_by_reducer_dataset_eager_fallback(input_dataset, key_func_other_arguments, init_func_other_arguments, reduce_func_other_arguments, finalize_func_other_arguments, key_func, init_func, reduce_func, finalize_func, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_group_by_reducer_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_group_by_reducer_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Tkey_func_other_arguments, key_func_other_arguments = _execute.convert_to_mixed_eager_tensors(key_func_other_arguments, ctx)
_attr_Tinit_func_other_arguments, init_func_other_arguments = _execute.convert_to_mixed_eager_tensors(init_func_other_arguments, ctx)
_attr_Treduce_func_other_arguments, reduce_func_other_arguments = _execute.convert_to_mixed_eager_tensors(reduce_func_other_arguments, ctx)
_attr_Tfinalize_func_other_arguments, finalize_func_other_arguments = _execute.convert_to_mixed_eager_tensors(finalize_func_other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(key_func_other_arguments) + list(init_func_other_arguments) + list(reduce_func_other_arguments) + list(finalize_func_other_arguments)
_attrs = ("key_func", key_func, "init_func", init_func, "reduce_func",
reduce_func, "finalize_func", finalize_func, "Tkey_func_other_arguments",
_attr_Tkey_func_other_arguments, "Tinit_func_other_arguments",
_attr_Tinit_func_other_arguments, "Treduce_func_other_arguments",
_attr_Treduce_func_other_arguments, "Tfinalize_func_other_arguments",
_attr_Tfinalize_func_other_arguments, "output_types", output_types,
"output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalGroupByReducerDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalGroupByReducerDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
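# --- Illustrative usage sketch (not part of the generated op definitions) ---
# Groups a dataset of scalar int64 elements by parity and sums each group,
# following the key/init/reduce/finalize contract documented above.
# Assumptions of this sketch: TF1-style `Defun` objects are accepted for the
# `func` attrs (as the docstring indicates) and the *_other_arguments lists
# may be empty.
def _example_group_by_reducer(input_variant):
  from tensorflow.python.framework import function

  @function.Defun(_dtypes.int64)
  def _key_func(x):
    return x % 2  # group key: element parity

  @function.Defun(_dtypes.int64)
  def _init_func(key):
    return _ops.convert_to_tensor(0, _dtypes.int64)  # per-group running sum

  @function.Defun(_dtypes.int64, _dtypes.int64)
  def _reduce_func(state, x):
    return state + x

  @function.Defun(_dtypes.int64)
  def _finalize_func(state):
    return state

  return experimental_group_by_reducer_dataset(
      input_variant, [], [], [], [],
      key_func=_key_func, init_func=_init_func, reduce_func=_reduce_func,
      finalize_func=_finalize_func,
      output_types=[_dtypes.int64], output_shapes=[[]])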
def experimental_group_by_window_dataset(input_dataset, key_func_other_arguments, reduce_func_other_arguments, window_size_func_other_arguments, key_func, reduce_func, window_size_func, output_types, output_shapes, name=None):
r"""Creates a dataset that computes a windowed group-by on `input_dataset`.
// TODO(mrry): Support non-int64 keys.
Args:
input_dataset: A `Tensor` of type `variant`.
key_func_other_arguments: A list of `Tensor` objects.
reduce_func_other_arguments: A list of `Tensor` objects.
window_size_func_other_arguments: A list of `Tensor` objects.
key_func: A function decorated with @Defun.
A function mapping an element of `input_dataset`, concatenated
with `key_func_other_arguments` to a scalar value of type DT_INT64.
reduce_func: A function decorated with @Defun.
window_size_func: A function decorated with @Defun.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalGroupByWindowDataset", name, tld.op_callbacks,
input_dataset, key_func_other_arguments, reduce_func_other_arguments,
window_size_func_other_arguments, "key_func", key_func, "reduce_func",
reduce_func, "window_size_func", window_size_func, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_group_by_window_dataset_eager_fallback(
input_dataset, key_func_other_arguments,
reduce_func_other_arguments, window_size_func_other_arguments,
key_func=key_func, reduce_func=reduce_func,
window_size_func=window_size_func, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_group_by_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_group_by_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalGroupByWindowDataset", input_dataset=input_dataset,
key_func_other_arguments=key_func_other_arguments,
reduce_func_other_arguments=reduce_func_other_arguments,
window_size_func_other_arguments=window_size_func_other_arguments,
key_func=key_func,
reduce_func=reduce_func,
window_size_func=window_size_func,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("key_func", _op.get_attr("key_func"), "reduce_func",
_op.get_attr("reduce_func"), "window_size_func",
_op.get_attr("window_size_func"), "Tkey_func_other_arguments",
_op.get_attr("Tkey_func_other_arguments"),
"Treduce_func_other_arguments",
_op.get_attr("Treduce_func_other_arguments"),
"Twindow_size_func_other_arguments",
_op.get_attr("Twindow_size_func_other_arguments"),
"output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalGroupByWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalGroupByWindowDataset = tf_export("raw_ops.ExperimentalGroupByWindowDataset")(_ops.to_raw_op(experimental_group_by_window_dataset))
def experimental_group_by_window_dataset_eager_fallback(input_dataset, key_func_other_arguments, reduce_func_other_arguments, window_size_func_other_arguments, key_func, reduce_func, window_size_func, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_group_by_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_group_by_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Tkey_func_other_arguments, key_func_other_arguments = _execute.convert_to_mixed_eager_tensors(key_func_other_arguments, ctx)
_attr_Treduce_func_other_arguments, reduce_func_other_arguments = _execute.convert_to_mixed_eager_tensors(reduce_func_other_arguments, ctx)
_attr_Twindow_size_func_other_arguments, window_size_func_other_arguments = _execute.convert_to_mixed_eager_tensors(window_size_func_other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(key_func_other_arguments) + list(reduce_func_other_arguments) + list(window_size_func_other_arguments)
_attrs = ("key_func", key_func, "reduce_func", reduce_func,
"window_size_func", window_size_func, "Tkey_func_other_arguments",
_attr_Tkey_func_other_arguments, "Treduce_func_other_arguments",
_attr_Treduce_func_other_arguments, "Twindow_size_func_other_arguments",
_attr_Twindow_size_func_other_arguments, "output_types", output_types,
"output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalGroupByWindowDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalGroupByWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
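# Illustrative usage sketch, not part of the generated bindings: the same
# grouping is normally expressed through tf.data.experimental.group_by_window.
# A minimal example, assuming TF 2.x with eager execution:
#
#   import tensorflow as tf
#   ds = tf.data.Dataset.range(8)
#   ds = ds.apply(tf.data.experimental.group_by_window(
#       key_func=lambda x: x % 2,                        # int64 key, as the op requires
#       reduce_func=lambda key, window: window.batch(4),
#       window_size=4))
#   # Yields the batches [0 2 4 6] and [1 3 5 7].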
def experimental_ignore_errors_dataset(input_dataset, output_types, output_shapes, name=None):
r"""Creates a dataset that contains the elements of `input_dataset` ignoring errors.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalIgnoreErrorsDataset", name, tld.op_callbacks,
input_dataset, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_ignore_errors_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_ignore_errors_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_ignore_errors_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalIgnoreErrorsDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalIgnoreErrorsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalIgnoreErrorsDataset = tf_export("raw_ops.ExperimentalIgnoreErrorsDataset")(_ops.to_raw_op(experimental_ignore_errors_dataset))
def experimental_ignore_errors_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_ignore_errors_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_ignore_errors_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalIgnoreErrorsDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalIgnoreErrorsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
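# Illustrative usage sketch, not part of the generated bindings: the same
# behaviour is normally obtained through tf.data.experimental.ignore_errors().
# A minimal example, assuming TF 2.x with eager execution:
#
#   import tensorflow as tf
#   ds = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
#   ds = ds.map(lambda x: tf.debugging.check_numerics(1. / x, "error"))  # 1/0 -> Inf
#   ds = ds.apply(tf.data.experimental.ignore_errors())  # silently drops the bad element
#   print(list(ds.as_numpy_iterator()))                  # [1.0, 0.5, 0.25]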
def experimental_iterator_get_device(resource, name=None):
r"""Returns the name of the device on which `resource` has been placed.
Args:
resource: A `Tensor` of type `resource`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalIteratorGetDevice", name, tld.op_callbacks, resource)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_iterator_get_device_eager_fallback(
resource, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalIteratorGetDevice", resource=resource, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalIteratorGetDevice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalIteratorGetDevice = tf_export("raw_ops.ExperimentalIteratorGetDevice")(_ops.to_raw_op(experimental_iterator_get_device))
def experimental_iterator_get_device_eager_fallback(resource, name, ctx):
resource = _ops.convert_to_tensor(resource, _dtypes.resource)
_inputs_flat = [resource]
_attrs = None
_result = _execute.execute(b"ExperimentalIteratorGetDevice", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalIteratorGetDevice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_lmdb_dataset(filenames, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
filenames: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalLMDBDataset",
name, tld.op_callbacks, filenames, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_lmdb_dataset_eager_fallback(
filenames, output_types=output_types, output_shapes=output_shapes,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_lmdb_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_lmdb_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalLMDBDataset", filenames=filenames,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalLMDBDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalLMDBDataset = tf_export("raw_ops.ExperimentalLMDBDataset")(_ops.to_raw_op(experimental_lmdb_dataset))
def experimental_lmdb_dataset_eager_fallback(filenames, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_lmdb_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_lmdb_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
filenames = _ops.convert_to_tensor(filenames, _dtypes.string)
_inputs_flat = [filenames]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalLMDBDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalLMDBDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
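# Illustrative usage sketch, not part of the generated bindings: the raw op can
# be reached via tf.raw_ops. The file path below is hypothetical, and the
# (key, value) string-pair element structure is an assumption about LMDB data.
#
#   import tensorflow as tf
#   variant = tf.raw_ops.ExperimentalLMDBDataset(
#       filenames=["/path/to/data.mdb"],
#       output_types=[tf.string, tf.string],
#       output_shapes=[tf.TensorShape([]), tf.TensorShape([])])
#   ds = tf.data.experimental.from_variant(
#       variant, structure=(tf.TensorSpec([], tf.string),
#                           tf.TensorSpec([], tf.string)))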
def experimental_latency_stats_dataset(input_dataset, tag, output_types, output_shapes, name=None):
r"""Records the latency of producing `input_dataset` elements in a StatsAggregator.
Args:
input_dataset: A `Tensor` of type `variant`.
tag: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalLatencyStatsDataset", name, tld.op_callbacks,
input_dataset, tag, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_latency_stats_dataset_eager_fallback(
input_dataset, tag, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_latency_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_latency_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalLatencyStatsDataset", input_dataset=input_dataset,
tag=tag, output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalLatencyStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalLatencyStatsDataset = tf_export("raw_ops.ExperimentalLatencyStatsDataset")(_ops.to_raw_op(experimental_latency_stats_dataset))
def experimental_latency_stats_dataset_eager_fallback(input_dataset, tag, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_latency_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_latency_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
tag = _ops.convert_to_tensor(tag, _dtypes.string)
_inputs_flat = [input_dataset, tag]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalLatencyStatsDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalLatencyStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
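# Illustrative usage sketch, not part of the generated bindings: latency stats
# are normally attached through tf.data.experimental.latency_stats together with
# a StatsAggregator; the option names below reflect the TF 2.x releases this
# file was generated for and may have moved since.
#
#   import tensorflow as tf
#   ds = tf.data.Dataset.range(100)
#   ds = ds.apply(tf.data.experimental.latency_stats("range_latency"))
#   aggregator = tf.data.experimental.StatsAggregator()
#   options = tf.data.Options()
#   options.experimental_stats.aggregator = aggregator   # route stats to the aggregator
#   ds = ds.with_options(options)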
def experimental_map_and_batch_dataset(input_dataset, other_arguments, batch_size, num_parallel_calls, drop_remainder, f, output_types, output_shapes, preserve_cardinality=False, name=None):
r"""Creates a dataset that fuses mapping with batching.
Creates a dataset that applies `f` to the outputs of `input_dataset` and then
batches `batch_size` of them.
Unlike a "MapDataset", which applies `f` sequentially, this dataset invokes up
to `batch_size * num_parallel_batches` copies of `f` in parallel.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when building a closure
for `f`.
batch_size: A `Tensor` of type `int64`.
A scalar representing the number of elements to accumulate in a
batch. It determines the number of concurrent invocations of `f` that process
elements from `input_dataset` in parallel.
num_parallel_calls: A `Tensor` of type `int64`.
A scalar representing the maximum number of parallel invocations of the `map_fn`
function. Applying the `map_fn` on consecutive input elements in parallel has
the potential to improve input pipeline throughput.
drop_remainder: A `Tensor` of type `bool`.
A scalar representing whether the last batch should be dropped in case its size
is smaller than desired.
f: A function decorated with @Defun.
A function to apply to the outputs of `input_dataset`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
preserve_cardinality: An optional `bool`. Defaults to `False`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalMapAndBatchDataset", name, tld.op_callbacks,
input_dataset, other_arguments, batch_size, num_parallel_calls,
drop_remainder, "f", f, "output_types", output_types, "output_shapes",
output_shapes, "preserve_cardinality", preserve_cardinality)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_map_and_batch_dataset_eager_fallback(
input_dataset, other_arguments, batch_size, num_parallel_calls,
drop_remainder, f=f, output_types=output_types,
output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_map_and_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_map_and_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalMapAndBatchDataset", input_dataset=input_dataset,
other_arguments=other_arguments,
batch_size=batch_size,
num_parallel_calls=num_parallel_calls,
drop_remainder=drop_remainder, f=f,
output_types=output_types,
output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "preserve_cardinality",
_op._get_attr_bool("preserve_cardinality"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalMapAndBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalMapAndBatchDataset = tf_export("raw_ops.ExperimentalMapAndBatchDataset")(_ops.to_raw_op(experimental_map_and_batch_dataset))
def experimental_map_and_batch_dataset_eager_fallback(input_dataset, other_arguments, batch_size, num_parallel_calls, drop_remainder, f, output_types, output_shapes, preserve_cardinality, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_map_and_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_map_and_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
batch_size = _ops.convert_to_tensor(batch_size, _dtypes.int64)
num_parallel_calls = _ops.convert_to_tensor(num_parallel_calls, _dtypes.int64)
drop_remainder = _ops.convert_to_tensor(drop_remainder, _dtypes.bool)
_inputs_flat = [input_dataset] + list(other_arguments) + [batch_size, num_parallel_calls, drop_remainder]
_attrs = ("f", f, "Targuments", _attr_Targuments, "output_types",
output_types, "output_shapes", output_shapes, "preserve_cardinality",
preserve_cardinality)
_result = _execute.execute(b"ExperimentalMapAndBatchDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalMapAndBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
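# Illustrative usage sketch, not part of the generated bindings: the fused
# transformation is normally applied via tf.data.experimental.map_and_batch.
# A minimal example, assuming TF 2.x with eager execution:
#
#   import tensorflow as tf
#   ds = tf.data.Dataset.range(8)
#   ds = ds.apply(tf.data.experimental.map_and_batch(
#       lambda x: x * 2, batch_size=4,
#       num_parallel_calls=tf.data.experimental.AUTOTUNE))
#   # Yields the batches [0 2 4 6] and [8 10 12 14].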
def experimental_map_dataset(input_dataset, other_arguments, f, output_types, output_shapes, use_inter_op_parallelism=True, preserve_cardinality=False, name=None):
r"""Creates a dataset that applies `f` to the outputs of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
other_arguments: A list of `Tensor` objects.
f: A function decorated with @Defun.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
use_inter_op_parallelism: An optional `bool`. Defaults to `True`.
preserve_cardinality: An optional `bool`. Defaults to `False`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalMapDataset", name,
tld.op_callbacks, input_dataset, other_arguments, "f", f,
"output_types", output_types, "output_shapes", output_shapes,
"use_inter_op_parallelism", use_inter_op_parallelism,
"preserve_cardinality", preserve_cardinality)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_map_dataset_eager_fallback(
input_dataset, other_arguments, f=f, output_types=output_types,
output_shapes=output_shapes,
use_inter_op_parallelism=use_inter_op_parallelism,
preserve_cardinality=preserve_cardinality, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_map_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_map_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if use_inter_op_parallelism is None:
use_inter_op_parallelism = True
use_inter_op_parallelism = _execute.make_bool(use_inter_op_parallelism, "use_inter_op_parallelism")
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalMapDataset", input_dataset=input_dataset,
other_arguments=other_arguments, f=f,
output_types=output_types,
output_shapes=output_shapes,
use_inter_op_parallelism=use_inter_op_parallelism,
preserve_cardinality=preserve_cardinality,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "use_inter_op_parallelism",
_op._get_attr_bool("use_inter_op_parallelism"),
"preserve_cardinality",
_op._get_attr_bool("preserve_cardinality"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalMapDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalMapDataset = tf_export("raw_ops.ExperimentalMapDataset")(_ops.to_raw_op(experimental_map_dataset))
def experimental_map_dataset_eager_fallback(input_dataset, other_arguments, f, output_types, output_shapes, use_inter_op_parallelism, preserve_cardinality, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_map_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_map_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if use_inter_op_parallelism is None:
use_inter_op_parallelism = True
use_inter_op_parallelism = _execute.make_bool(use_inter_op_parallelism, "use_inter_op_parallelism")
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(other_arguments)
_attrs = ("f", f, "Targuments", _attr_Targuments, "output_types",
output_types, "output_shapes", output_shapes, "use_inter_op_parallelism",
use_inter_op_parallelism, "preserve_cardinality", preserve_cardinality)
_result = _execute.execute(b"ExperimentalMapDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalMapDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_matching_files_dataset(patterns, name=None):
r"""TODO: add doc.
Args:
patterns: A `Tensor` of type `string`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalMatchingFilesDataset", name, tld.op_callbacks, patterns)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_matching_files_dataset_eager_fallback(
patterns, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalMatchingFilesDataset", patterns=patterns, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalMatchingFilesDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalMatchingFilesDataset = tf_export("raw_ops.ExperimentalMatchingFilesDataset")(_ops.to_raw_op(experimental_matching_files_dataset))
def experimental_matching_files_dataset_eager_fallback(patterns, name, ctx):
patterns = _ops.convert_to_tensor(patterns, _dtypes.string)
_inputs_flat = [patterns]
_attrs = None
_result = _execute.execute(b"ExperimentalMatchingFilesDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalMatchingFilesDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
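# Illustrative usage sketch, not part of the generated bindings: the raw op can
# be reached via tf.raw_ops; the glob pattern below is hypothetical. The result
# is a variant tensor holding a dataset of matched filename strings.
#
#   import tensorflow as tf
#   variant = tf.raw_ops.ExperimentalMatchingFilesDataset(
#       patterns=["/data/*.tfrecord"])
#   files = tf.data.experimental.from_variant(
#       variant, structure=tf.TensorSpec([], tf.string))
#   # Comparable in spirit to tf.data.Dataset.list_files("/data/*.tfrecord").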
def experimental_max_intra_op_parallelism_dataset(input_dataset, max_intra_op_parallelism, output_types, output_shapes, name=None):
r"""Creates a dataset that overrides the maximum intra-op parallelism.
Args:
input_dataset: A `Tensor` of type `variant`.
max_intra_op_parallelism: A `Tensor` of type `int64`.
Identifies the maximum intra-op parallelism to use.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalMaxIntraOpParallelismDataset", name, tld.op_callbacks,
input_dataset, max_intra_op_parallelism, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_max_intra_op_parallelism_dataset_eager_fallback(
input_dataset, max_intra_op_parallelism, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_max_intra_op_parallelism_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_max_intra_op_parallelism_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalMaxIntraOpParallelismDataset", input_dataset=input_dataset,
max_intra_op_parallelism=max_intra_op_parallelism,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalMaxIntraOpParallelismDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalMaxIntraOpParallelismDataset = tf_export("raw_ops.ExperimentalMaxIntraOpParallelismDataset")(_ops.to_raw_op(experimental_max_intra_op_parallelism_dataset))
def experimental_max_intra_op_parallelism_dataset_eager_fallback(input_dataset, max_intra_op_parallelism, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_max_intra_op_parallelism_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_max_intra_op_parallelism_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
max_intra_op_parallelism = _ops.convert_to_tensor(max_intra_op_parallelism, _dtypes.int64)
_inputs_flat = [input_dataset, max_intra_op_parallelism]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalMaxIntraOpParallelismDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalMaxIntraOpParallelismDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
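# Illustrative usage sketch, not part of the generated bindings: this setting is
# normally applied through tf.data.Options rather than by calling the op; the
# option name reflects the TF 2.x releases this file was generated for.
#
#   import tensorflow as tf
#   ds = tf.data.Dataset.range(10).map(lambda x: x + 1)
#   options = tf.data.Options()
#   options.experimental_threading.max_intra_op_parallelism = 1  # cap per-op threads
#   ds = ds.with_options(options)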
def experimental_non_serializable_dataset(input_dataset, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalNonSerializableDataset", name, tld.op_callbacks,
input_dataset, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_non_serializable_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_non_serializable_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_non_serializable_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalNonSerializableDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalNonSerializableDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalNonSerializableDataset = tf_export("raw_ops.ExperimentalNonSerializableDataset")(_ops.to_raw_op(experimental_non_serializable_dataset))
def experimental_non_serializable_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_non_serializable_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_non_serializable_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalNonSerializableDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalNonSerializableDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_parallel_interleave_dataset(input_dataset, other_arguments, cycle_length, block_length, sloppy, buffer_output_elements, prefetch_input_elements, f, output_types, output_shapes, name=None):
r"""Creates a dataset that applies `f` to the outputs of `input_dataset`.
The resulting dataset is similar to the `InterleaveDataset`, with the exception
that if retrieving the next value from a dataset would cause the requester to
block, it will skip that input dataset. This dataset is especially useful
when loading data from variable-latency datastores (e.g. HDFS, GCS), as it
allows the training step to proceed so long as some data is available.
!! WARNING !! This dataset is not deterministic!
Args:
input_dataset: A `Tensor` of type `variant`.
other_arguments: A list of `Tensor` objects.
cycle_length: A `Tensor` of type `int64`.
block_length: A `Tensor` of type `int64`.
sloppy: A `Tensor` of type `bool`.
buffer_output_elements: A `Tensor` of type `int64`.
prefetch_input_elements: A `Tensor` of type `int64`.
f: A function decorated with @Defun.
A function mapping elements of `input_dataset`, concatenated with
`other_arguments`, to a Dataset variant that contains elements matching
`output_types` and `output_shapes`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalParallelInterleaveDataset", name, tld.op_callbacks,
input_dataset, other_arguments, cycle_length, block_length, sloppy,
buffer_output_elements, prefetch_input_elements, "f", f,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_parallel_interleave_dataset_eager_fallback(
input_dataset, other_arguments, cycle_length, block_length, sloppy,
buffer_output_elements, prefetch_input_elements, f=f,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_parallel_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_parallel_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalParallelInterleaveDataset", input_dataset=input_dataset,
other_arguments=other_arguments,
cycle_length=cycle_length,
block_length=block_length,
sloppy=sloppy,
buffer_output_elements=buffer_output_elements,
prefetch_input_elements=prefetch_input_elements,
f=f,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalParallelInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalParallelInterleaveDataset = tf_export("raw_ops.ExperimentalParallelInterleaveDataset")(_ops.to_raw_op(experimental_parallel_interleave_dataset))
def experimental_parallel_interleave_dataset_eager_fallback(input_dataset, other_arguments, cycle_length, block_length, sloppy, buffer_output_elements, prefetch_input_elements, f, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_parallel_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_parallel_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
cycle_length = _ops.convert_to_tensor(cycle_length, _dtypes.int64)
block_length = _ops.convert_to_tensor(block_length, _dtypes.int64)
sloppy = _ops.convert_to_tensor(sloppy, _dtypes.bool)
buffer_output_elements = _ops.convert_to_tensor(buffer_output_elements, _dtypes.int64)
prefetch_input_elements = _ops.convert_to_tensor(prefetch_input_elements, _dtypes.int64)
_inputs_flat = [input_dataset] + list(other_arguments) + [cycle_length, block_length, sloppy, buffer_output_elements, prefetch_input_elements]
_attrs = ("f", f, "Targuments", _attr_Targuments, "output_types",
output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalParallelInterleaveDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalParallelInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
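# Illustrative usage sketch, not part of the generated bindings: the sloppy
# interleave is normally applied via tf.data.experimental.parallel_interleave.
# The filenames below are hypothetical; assumes TF 2.x with eager execution.
#
#   import tensorflow as tf
#   filenames = tf.data.Dataset.from_tensor_slices(["a.tfrecord", "b.tfrecord"])
#   ds = filenames.apply(tf.data.experimental.parallel_interleave(
#       lambda f: tf.data.TFRecordDataset(f),
#       cycle_length=2, sloppy=True))   # sloppy=True trades determinism for throughput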
def experimental_parse_example_dataset(input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, sloppy=False, name=None):
r"""Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.
Args:
input_dataset: A `Tensor` of type `variant`.
num_parallel_calls: A `Tensor` of type `int64`.
dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A dict mapping string keys to `Tensor`s.
The keys of the dict must match the dense_keys of the feature.
sparse_keys: A list of `strings`.
A list of string keys in the examples features.
The results for these keys will be returned as `SparseTensor` objects.
dense_keys: A list of `strings`.
A list of Ndense string Tensors (scalars).
The keys expected in the Examples features associated with dense values.
sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
A list of `DTypes` of the same length as `sparse_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).
List of tuples with the same length as `dense_keys`.
The shape of the data for each dense feature referenced by `dense_keys`.
Required for any input tensors identified by `dense_keys`. Must be
either fully defined, or may contain an unknown first dimension.
An unknown first dimension means the feature is treated as having
a variable number of blocks, and the output shape along this dimension
is considered unknown at graph build time. Padding is applied for
minibatch elements smaller than the maximum number of blocks for the
given feature along this dimension.
output_types: A list of `tf.DTypes` that has length `>= 1`.
The type list for the return values.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
The list of shapes being produced.
sloppy: An optional `bool`. Defaults to `False`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalParseExampleDataset", name, tld.op_callbacks,
input_dataset, num_parallel_calls, dense_defaults, "sparse_keys",
sparse_keys, "dense_keys", dense_keys, "sparse_types", sparse_types,
"dense_shapes", dense_shapes, "output_types", output_types,
"output_shapes", output_shapes, "sloppy", sloppy)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_parse_example_dataset_eager_fallback(
input_dataset, num_parallel_calls, dense_defaults,
sparse_keys=sparse_keys, dense_keys=dense_keys,
sparse_types=sparse_types, dense_shapes=dense_shapes,
output_types=output_types, output_shapes=output_shapes,
sloppy=sloppy, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'experimental_parse_example_dataset' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'experimental_parse_example_dataset' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'experimental_parse_example_dataset' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'experimental_parse_example_dataset' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_parse_example_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_parse_example_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if sloppy is None:
sloppy = False
sloppy = _execute.make_bool(sloppy, "sloppy")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalParseExampleDataset", input_dataset=input_dataset,
num_parallel_calls=num_parallel_calls,
dense_defaults=dense_defaults,
sparse_keys=sparse_keys,
dense_keys=dense_keys,
sparse_types=sparse_types,
dense_shapes=dense_shapes,
output_types=output_types,
output_shapes=output_shapes,
sloppy=sloppy, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("sparse_keys", _op.get_attr("sparse_keys"), "dense_keys",
_op.get_attr("dense_keys"), "sparse_types",
_op.get_attr("sparse_types"), "Tdense", _op.get_attr("Tdense"),
"dense_shapes", _op.get_attr("dense_shapes"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "sloppy",
_op._get_attr_bool("sloppy"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalParseExampleDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalParseExampleDataset = tf_export("raw_ops.ExperimentalParseExampleDataset")(_ops.to_raw_op(experimental_parse_example_dataset))
def experimental_parse_example_dataset_eager_fallback(input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, sloppy, name, ctx):
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'experimental_parse_example_dataset' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'experimental_parse_example_dataset' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'experimental_parse_example_dataset' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'experimental_parse_example_dataset' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_parse_example_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_parse_example_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if sloppy is None:
sloppy = False
sloppy = _execute.make_bool(sloppy, "sloppy")
_attr_Tdense, dense_defaults = _execute.convert_to_mixed_eager_tensors(dense_defaults, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_parallel_calls = _ops.convert_to_tensor(num_parallel_calls, _dtypes.int64)
_inputs_flat = [input_dataset, num_parallel_calls] + list(dense_defaults)
_attrs = ("sparse_keys", sparse_keys, "dense_keys", dense_keys,
"sparse_types", sparse_types, "Tdense", _attr_Tdense, "dense_shapes",
dense_shapes, "output_types", output_types, "output_shapes", output_shapes,
"sloppy", sloppy)
_result = _execute.execute(b"ExperimentalParseExampleDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalParseExampleDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
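# Illustrative usage sketch, not part of the generated bindings: parsing is
# normally applied via tf.data.experimental.parse_example_dataset. The file and
# feature names below are hypothetical; note the op consumes *vectors* of
# serialized protos, hence the batch() before the transformation.
#
#   import tensorflow as tf
#   features = {
#       "label": tf.io.FixedLenFeature([], tf.int64, default_value=0),
#       "tokens": tf.io.VarLenFeature(tf.string),   # parsed into a SparseTensor
#   }
#   ds = tf.data.TFRecordDataset(["examples.tfrecord"])
#   ds = ds.batch(32)
#   ds = ds.apply(tf.data.experimental.parse_example_dataset(
#       features, num_parallel_calls=4))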
def experimental_private_thread_pool_dataset(input_dataset, num_threads, output_types, output_shapes, name=None):
r"""Creates a dataset that uses a custom thread pool to compute `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
num_threads: A `Tensor` of type `int64`.
Identifies the number of threads to use for the private threadpool.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalPrivateThreadPoolDataset", name, tld.op_callbacks,
input_dataset, num_threads, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_private_thread_pool_dataset_eager_fallback(
input_dataset, num_threads, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_private_thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_private_thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalPrivateThreadPoolDataset", input_dataset=input_dataset,
num_threads=num_threads,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalPrivateThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalPrivateThreadPoolDataset = tf_export("raw_ops.ExperimentalPrivateThreadPoolDataset")(_ops.to_raw_op(experimental_private_thread_pool_dataset))
def experimental_private_thread_pool_dataset_eager_fallback(input_dataset, num_threads, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_private_thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_private_thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_threads = _ops.convert_to_tensor(num_threads, _dtypes.int64)
_inputs_flat = [input_dataset, num_threads]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalPrivateThreadPoolDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalPrivateThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
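# Illustrative usage sketch, not part of the generated bindings: a private
# threadpool is normally requested through tf.data.Options; the option name
# reflects the TF 2.x releases this file was generated for.
#
#   import tensorflow as tf
#   ds = tf.data.Dataset.range(10).map(lambda x: x * x)
#   options = tf.data.Options()
#   options.experimental_threading.private_threadpool_size = 4  # dedicated threads
#   ds = ds.with_options(options)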
def experimental_random_dataset(seed, seed2, output_types, output_shapes, name=None):
r"""Creates a Dataset that returns pseudorandom numbers.
Args:
seed: A `Tensor` of type `int64`.
A scalar seed for the random number generator. If either seed or
seed2 is set to be non-zero, the random number generator is seeded
by the given seed. Otherwise, a random seed is used.
seed2: A `Tensor` of type `int64`.
A second scalar seed to avoid seed collision.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalRandomDataset",
name, tld.op_callbacks, seed, seed2, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_random_dataset_eager_fallback(
seed, seed2, output_types=output_types, output_shapes=output_shapes,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_random_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_random_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalRandomDataset", seed=seed, seed2=seed2,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalRandomDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalRandomDataset = tf_export("raw_ops.ExperimentalRandomDataset")(_ops.to_raw_op(experimental_random_dataset))
def experimental_random_dataset_eager_fallback(seed, seed2, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_random_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_random_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
seed = _ops.convert_to_tensor(seed, _dtypes.int64)
seed2 = _ops.convert_to_tensor(seed2, _dtypes.int64)
_inputs_flat = [seed, seed2]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalRandomDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalRandomDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
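# Illustrative sketch (hand-written, not generated code): the raw op exported
# above as `tf.raw_ops.ExperimentalRandomDataset` yields an unbounded stream of
# pseudorandom int64 scalars seeded by (seed, seed2). Hypothetical helper,
# assuming an eager TF 2.x runtime.
def _example_experimental_random_dataset():
  """Builds a variant tensor for a seeded pseudorandom dataset."""
  import tensorflow as tf
  return tf.raw_ops.ExperimentalRandomDataset(
      seed=1, seed2=2,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])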
def experimental_rebatch_dataset(input_dataset, num_replicas, output_types, output_shapes, use_fallback=True, name=None):
r"""Creates a dataset that changes the batch size.
Creates a dataset that changes the batch size of the dataset to current batch
size // num_replicas.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
num_replicas: A `Tensor` of type `int64`.
A scalar representing the number of replicas to distribute this batch across. As
a result of this transformation the current batch size would end up being
divided by this parameter.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
use_fallback: An optional `bool`. Defaults to `True`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalRebatchDataset",
name, tld.op_callbacks, input_dataset, num_replicas, "output_types",
output_types, "output_shapes", output_shapes, "use_fallback",
use_fallback)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_rebatch_dataset_eager_fallback(
input_dataset, num_replicas, output_types=output_types,
output_shapes=output_shapes, use_fallback=use_fallback, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_rebatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_rebatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if use_fallback is None:
use_fallback = True
use_fallback = _execute.make_bool(use_fallback, "use_fallback")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalRebatchDataset", input_dataset=input_dataset,
num_replicas=num_replicas,
output_types=output_types,
output_shapes=output_shapes,
use_fallback=use_fallback, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "use_fallback",
_op._get_attr_bool("use_fallback"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalRebatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalRebatchDataset = tf_export("raw_ops.ExperimentalRebatchDataset")(_ops.to_raw_op(experimental_rebatch_dataset))
def experimental_rebatch_dataset_eager_fallback(input_dataset, num_replicas, output_types, output_shapes, use_fallback, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_rebatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_rebatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if use_fallback is None:
use_fallback = True
use_fallback = _execute.make_bool(use_fallback, "use_fallback")
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_replicas = _ops.convert_to_tensor(num_replicas, _dtypes.int64)
_inputs_flat = [input_dataset, num_replicas]
_attrs = ("output_types", output_types, "output_shapes", output_shapes,
"use_fallback", use_fallback)
_result = _execute.execute(b"ExperimentalRebatchDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalRebatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
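# Illustrative sketch (hand-written, not generated code):
# `tf.raw_ops.ExperimentalRebatchDataset` divides the incoming batch size by
# `num_replicas`. The helper below is hypothetical and assumes an eager TF 2.x
# runtime; it feeds a batched range dataset built from other raw ops.
def _example_experimental_rebatch_dataset():
  """Rebatches batches of 4 elements into per-replica batches of 2."""
  import tensorflow as tf
  base = tf.raw_ops.RangeDataset(
      start=0, stop=8, step=1,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])
  batched = tf.raw_ops.BatchDataset(
      input_dataset=base, batch_size=4,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([None])])
  return tf.raw_ops.ExperimentalRebatchDataset(
      input_dataset=batched, num_replicas=2,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([None])])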
def experimental_scan_dataset(input_dataset, initial_state, other_arguments, f, output_types, output_shapes, preserve_cardinality=False, name=None):
r"""Creates a dataset successively reduces `f` over the elements of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
initial_state: A list of `Tensor` objects.
other_arguments: A list of `Tensor` objects.
f: A function decorated with @Defun.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
preserve_cardinality: An optional `bool`. Defaults to `False`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalScanDataset",
name, tld.op_callbacks, input_dataset, initial_state, other_arguments,
"f", f, "output_types", output_types, "output_shapes", output_shapes,
"preserve_cardinality", preserve_cardinality)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_scan_dataset_eager_fallback(
input_dataset, initial_state, other_arguments, f=f,
output_types=output_types, output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_scan_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_scan_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalScanDataset", input_dataset=input_dataset,
initial_state=initial_state,
other_arguments=other_arguments, f=f,
output_types=output_types,
output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Tstate", _op.get_attr("Tstate"),
"Targuments", _op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "preserve_cardinality",
_op._get_attr_bool("preserve_cardinality"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalScanDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalScanDataset = tf_export("raw_ops.ExperimentalScanDataset")(_ops.to_raw_op(experimental_scan_dataset))
def experimental_scan_dataset_eager_fallback(input_dataset, initial_state, other_arguments, f, output_types, output_shapes, preserve_cardinality, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_scan_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_scan_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_attr_Tstate, initial_state = _execute.convert_to_mixed_eager_tensors(initial_state, ctx)
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(initial_state) + list(other_arguments)
_attrs = ("f", f, "Tstate", _attr_Tstate, "Targuments", _attr_Targuments,
"output_types", output_types, "output_shapes", output_shapes,
"preserve_cardinality", preserve_cardinality)
_result = _execute.execute(b"ExperimentalScanDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalScanDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
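# Illustrative sketch (hand-written, not generated code): ExperimentalScanDataset
# takes an `f` attr built from a defun-style function, which is awkward to hand
# construct, so in practice the higher-level `tf.data.experimental.scan`
# transformation (which lowers to a scan dataset op) is used. Hypothetical
# helper, assuming TF 2.x.
def _example_scan_running_sum():
  """Running sum over a range, carrying the sum as the scan state."""
  import tensorflow as tf
  ds = tf.data.Dataset.range(5)
  return ds.apply(tf.data.experimental.scan(
      initial_state=tf.constant(0, tf.int64),
      scan_func=lambda state, x: (state + x, state + x)))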
def experimental_set_stats_aggregator_dataset(input_dataset, stats_aggregator, tag, counter_prefix, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
stats_aggregator: A `Tensor` of type `resource`.
tag: A `Tensor` of type `string`.
counter_prefix: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalSetStatsAggregatorDataset", name, tld.op_callbacks,
input_dataset, stats_aggregator, tag, counter_prefix, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_set_stats_aggregator_dataset_eager_fallback(
input_dataset, stats_aggregator, tag, counter_prefix,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_set_stats_aggregator_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_set_stats_aggregator_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalSetStatsAggregatorDataset", input_dataset=input_dataset,
stats_aggregator=stats_aggregator,
tag=tag,
counter_prefix=counter_prefix,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalSetStatsAggregatorDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalSetStatsAggregatorDataset = tf_export("raw_ops.ExperimentalSetStatsAggregatorDataset")(_ops.to_raw_op(experimental_set_stats_aggregator_dataset))
def experimental_set_stats_aggregator_dataset_eager_fallback(input_dataset, stats_aggregator, tag, counter_prefix, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_set_stats_aggregator_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_set_stats_aggregator_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
stats_aggregator = _ops.convert_to_tensor(stats_aggregator, _dtypes.resource)
tag = _ops.convert_to_tensor(tag, _dtypes.string)
counter_prefix = _ops.convert_to_tensor(counter_prefix, _dtypes.string)
_inputs_flat = [input_dataset, stats_aggregator, tag, counter_prefix]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalSetStatsAggregatorDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalSetStatsAggregatorDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_sleep_dataset(input_dataset, sleep_microseconds, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
sleep_microseconds: A `Tensor` of type `int64`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalSleepDataset",
name, tld.op_callbacks, input_dataset, sleep_microseconds,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_sleep_dataset_eager_fallback(
input_dataset, sleep_microseconds, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_sleep_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_sleep_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalSleepDataset", input_dataset=input_dataset,
sleep_microseconds=sleep_microseconds,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalSleepDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalSleepDataset = tf_export("raw_ops.ExperimentalSleepDataset")(_ops.to_raw_op(experimental_sleep_dataset))
def experimental_sleep_dataset_eager_fallback(input_dataset, sleep_microseconds, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_sleep_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_sleep_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
sleep_microseconds = _ops.convert_to_tensor(sleep_microseconds, _dtypes.int64)
_inputs_flat = [input_dataset, sleep_microseconds]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalSleepDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalSleepDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_sliding_window_dataset(input_dataset, window_size, window_shift, window_stride, output_types, output_shapes, name=None):
r"""Creates a dataset that passes a sliding window over `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
window_size: A `Tensor` of type `int64`.
A scalar representing the number of elements in the
sliding window.
window_shift: A `Tensor` of type `int64`.
A scalar representing the steps moving the sliding window
forward in one iteration. It must be positive.
window_stride: A `Tensor` of type `int64`.
A scalar representing the stride of the input elements of the sliding window.
It must be positive.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalSlidingWindowDataset", name, tld.op_callbacks,
input_dataset, window_size, window_shift, window_stride,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_sliding_window_dataset_eager_fallback(
input_dataset, window_size, window_shift, window_stride,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_sliding_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_sliding_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalSlidingWindowDataset", input_dataset=input_dataset,
window_size=window_size,
window_shift=window_shift,
window_stride=window_stride,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalSlidingWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalSlidingWindowDataset = tf_export("raw_ops.ExperimentalSlidingWindowDataset")(_ops.to_raw_op(experimental_sliding_window_dataset))
def experimental_sliding_window_dataset_eager_fallback(input_dataset, window_size, window_shift, window_stride, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_sliding_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_sliding_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
window_size = _ops.convert_to_tensor(window_size, _dtypes.int64)
window_shift = _ops.convert_to_tensor(window_shift, _dtypes.int64)
window_stride = _ops.convert_to_tensor(window_stride, _dtypes.int64)
_inputs_flat = [input_dataset, window_size, window_shift, window_stride]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalSlidingWindowDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalSlidingWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
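# Illustrative sketch (hand-written, not generated code):
# `tf.raw_ops.ExperimentalSlidingWindowDataset` emits overlapping windows of
# `window_size` elements, advancing by `window_shift` and sampling every
# `window_stride`-th input element. Hypothetical helper, assuming an eager
# TF 2.x runtime.
def _example_sliding_window_dataset():
  """Windows of 3 consecutive elements, shifted by 1 each step."""
  import tensorflow as tf
  base = tf.raw_ops.RangeDataset(
      start=0, stop=10, step=1,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])
  return tf.raw_ops.ExperimentalSlidingWindowDataset(
      input_dataset=base, window_size=3, window_shift=1, window_stride=1,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([None])])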
def experimental_sql_dataset(driver_name, data_source_name, query, output_types, output_shapes, name=None):
r"""Creates a dataset that executes a SQL query and emits rows of the result set.
Args:
driver_name: A `Tensor` of type `string`.
The database type. Currently, the only supported type is 'sqlite'.
data_source_name: A `Tensor` of type `string`.
A connection string to connect to the database.
query: A `Tensor` of type `string`. A SQL query to execute.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalSqlDataset", name,
tld.op_callbacks, driver_name, data_source_name, query,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_sql_dataset_eager_fallback(
driver_name, data_source_name, query, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_sql_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_sql_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalSqlDataset", driver_name=driver_name,
data_source_name=data_source_name,
query=query, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalSqlDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalSqlDataset = tf_export("raw_ops.ExperimentalSqlDataset")(_ops.to_raw_op(experimental_sql_dataset))
def experimental_sql_dataset_eager_fallback(driver_name, data_source_name, query, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_sql_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_sql_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
driver_name = _ops.convert_to_tensor(driver_name, _dtypes.string)
data_source_name = _ops.convert_to_tensor(data_source_name, _dtypes.string)
query = _ops.convert_to_tensor(query, _dtypes.string)
_inputs_flat = [driver_name, data_source_name, query]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalSqlDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalSqlDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
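# Illustrative sketch (hand-written, not generated code):
# `tf.raw_ops.ExperimentalSqlDataset` streams result rows from a SQLite
# database. The path and schema below are hypothetical; the sketch assumes an
# eager TF 2.x runtime and an existing SQLite file containing a
# `users(name TEXT, age INTEGER)` table.
def _example_sql_dataset():
  """Reads (name, age) rows from a SQLite database into a dataset variant."""
  import tensorflow as tf
  return tf.raw_ops.ExperimentalSqlDataset(
      driver_name="sqlite",
      data_source_name="/tmp/example.db",  # hypothetical database file
      query="SELECT name, age FROM users",
      output_types=[tf.string, tf.int32],
      output_shapes=[tf.TensorShape([]), tf.TensorShape([])])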
def experimental_stats_aggregator_handle(container="", shared_name="", name=None):
r"""Creates a statistics manager resource.
Args:
container: An optional `string`. Defaults to `""`.
shared_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalStatsAggregatorHandle", name, tld.op_callbacks,
"container", container, "shared_name", shared_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_stats_aggregator_handle_eager_fallback(
container=container, shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalStatsAggregatorHandle", container=container,
shared_name=shared_name,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalStatsAggregatorHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalStatsAggregatorHandle = tf_export("raw_ops.ExperimentalStatsAggregatorHandle")(_ops.to_raw_op(experimental_stats_aggregator_handle))
def experimental_stats_aggregator_handle_eager_fallback(container, shared_name, name, ctx):
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name)
_result = _execute.execute(b"ExperimentalStatsAggregatorHandle", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalStatsAggregatorHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
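# Illustrative sketch (hand-written, not generated code): the handle produced by
# `tf.raw_ops.ExperimentalStatsAggregatorHandle` is a resource tensor that other
# stats ops (e.g. ExperimentalStatsAggregatorSummary below) consume.
# Hypothetical helper, assuming an eager TF 2.x runtime.
def _example_stats_aggregator_handle():
  """Creates a shared statistics aggregator resource handle."""
  import tensorflow as tf
  return tf.raw_ops.ExperimentalStatsAggregatorHandle(
      container="", shared_name="example_stats_aggregator")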
def experimental_stats_aggregator_summary(iterator, name=None):
r"""Produces a summary of any statistics recorded by the given statistics manager.
Args:
iterator: A `Tensor` of type `resource`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalStatsAggregatorSummary", name, tld.op_callbacks,
iterator)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_stats_aggregator_summary_eager_fallback(
iterator, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalStatsAggregatorSummary", iterator=iterator, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalStatsAggregatorSummary", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalStatsAggregatorSummary = tf_export("raw_ops.ExperimentalStatsAggregatorSummary")(_ops.to_raw_op(experimental_stats_aggregator_summary))
def experimental_stats_aggregator_summary_eager_fallback(iterator, name, ctx):
iterator = _ops.convert_to_tensor(iterator, _dtypes.resource)
_inputs_flat = [iterator]
_attrs = None
_result = _execute.execute(b"ExperimentalStatsAggregatorSummary", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalStatsAggregatorSummary", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_take_while_dataset(input_dataset, other_arguments, predicate, output_types, output_shapes, name=None):
r"""Creates a dataset that stops iteration when predicate` is false.
The `predicate` function must return a scalar boolean and accept the
following arguments:
* One tensor for each component of an element of `input_dataset`.
* One tensor for each value in `other_arguments`.
Args:
input_dataset: A `Tensor` of type `variant`.
other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `predicate`.
predicate: A function decorated with @Defun.
A function returning a scalar boolean.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalTakeWhileDataset",
name, tld.op_callbacks, input_dataset, other_arguments, "predicate",
predicate, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_take_while_dataset_eager_fallback(
input_dataset, other_arguments, predicate=predicate,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_take_while_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_take_while_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalTakeWhileDataset", input_dataset=input_dataset,
other_arguments=other_arguments,
predicate=predicate,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("predicate", _op.get_attr("predicate"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalTakeWhileDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalTakeWhileDataset = tf_export("raw_ops.ExperimentalTakeWhileDataset")(_ops.to_raw_op(experimental_take_while_dataset))
def experimental_take_while_dataset_eager_fallback(input_dataset, other_arguments, predicate, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_take_while_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_take_while_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(other_arguments)
_attrs = ("predicate", predicate, "Targuments", _attr_Targuments,
"output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalTakeWhileDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalTakeWhileDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
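# Illustrative sketch (hand-written, not generated code):
# ExperimentalTakeWhileDataset needs a defun-style `predicate` attr, so the
# higher-level `tf.data.experimental.take_while` transformation (which lowers to
# a take-while dataset op) is the practical way to reach it. Hypothetical
# helper, assuming TF 2.x.
def _example_take_while():
  """Keeps range elements only while they stay below 5."""
  import tensorflow as tf
  ds = tf.data.Dataset.range(10)
  return ds.apply(tf.data.experimental.take_while(lambda x: x < 5))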
def experimental_thread_pool_dataset(input_dataset, thread_pool, output_types, output_shapes, name=None):
r"""Creates a dataset that uses a custom thread pool to compute `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
thread_pool: A `Tensor` of type `resource`.
A resource produced by the ThreadPoolHandle op.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"ExperimentalThreadPoolDataset", name, tld.op_callbacks,
input_dataset, thread_pool, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_thread_pool_dataset_eager_fallback(
input_dataset, thread_pool, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalThreadPoolDataset", input_dataset=input_dataset,
thread_pool=thread_pool,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalThreadPoolDataset = tf_export("raw_ops.ExperimentalThreadPoolDataset")(_ops.to_raw_op(experimental_thread_pool_dataset))
def experimental_thread_pool_dataset_eager_fallback(input_dataset, thread_pool, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
thread_pool = _ops.convert_to_tensor(thread_pool, _dtypes.resource)
_inputs_flat = [input_dataset, thread_pool]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalThreadPoolDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def experimental_thread_pool_handle(num_threads, display_name, max_intra_op_parallelism=1, container="", shared_name="", name=None):
r"""Creates a dataset that uses a custom thread pool to compute `input_dataset`.
Args:
num_threads: An `int`. The number of threads in the thread pool.
display_name: A `string`.
A human-readable name for the threads that may be visible in some
visualizations.
max_intra_op_parallelism: An optional `int`. Defaults to `1`.
The maximum degree of parallelism to use within operations that execute on this
threadpool.
container: An optional `string`. Defaults to `""`.
shared_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalThreadPoolHandle",
name, tld.op_callbacks, "num_threads", num_threads,
"max_intra_op_parallelism", max_intra_op_parallelism, "display_name",
display_name, "container", container, "shared_name", shared_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_thread_pool_handle_eager_fallback(
num_threads=num_threads,
max_intra_op_parallelism=max_intra_op_parallelism,
display_name=display_name, container=container,
shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
num_threads = _execute.make_int(num_threads, "num_threads")
display_name = _execute.make_str(display_name, "display_name")
if max_intra_op_parallelism is None:
max_intra_op_parallelism = 1
max_intra_op_parallelism = _execute.make_int(max_intra_op_parallelism, "max_intra_op_parallelism")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalThreadPoolHandle", num_threads=num_threads,
display_name=display_name,
max_intra_op_parallelism=max_intra_op_parallelism,
container=container,
shared_name=shared_name, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("num_threads", _op._get_attr_int("num_threads"),
"max_intra_op_parallelism",
_op._get_attr_int("max_intra_op_parallelism"), "display_name",
_op.get_attr("display_name"), "container",
_op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalThreadPoolHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalThreadPoolHandle = tf_export("raw_ops.ExperimentalThreadPoolHandle")(_ops.to_raw_op(experimental_thread_pool_handle))
def experimental_thread_pool_handle_eager_fallback(num_threads, display_name, max_intra_op_parallelism, container, shared_name, name, ctx):
num_threads = _execute.make_int(num_threads, "num_threads")
display_name = _execute.make_str(display_name, "display_name")
if max_intra_op_parallelism is None:
max_intra_op_parallelism = 1
max_intra_op_parallelism = _execute.make_int(max_intra_op_parallelism, "max_intra_op_parallelism")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("num_threads", num_threads, "max_intra_op_parallelism",
max_intra_op_parallelism, "display_name", display_name, "container",
container, "shared_name", shared_name)
_result = _execute.execute(b"ExperimentalThreadPoolHandle", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalThreadPoolHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
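# Illustrative sketch (hand-written, not generated code):
# `tf.raw_ops.ExperimentalThreadPoolHandle` creates the thread pool resource
# that `tf.raw_ops.ExperimentalThreadPoolDataset` (above) consumes.
# Hypothetical helper, assuming an eager TF 2.x runtime.
def _example_thread_pool_dataset():
  """Runs a range dataset on a dedicated four-thread pool."""
  import tensorflow as tf
  pool = tf.raw_ops.ExperimentalThreadPoolHandle(
      num_threads=4, display_name="example_pool")
  base = tf.raw_ops.RangeDataset(
      start=0, stop=10, step=1,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])
  return tf.raw_ops.ExperimentalThreadPoolDataset(
      input_dataset=base, thread_pool=pool,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])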
def experimental_unbatch_dataset(input_dataset, output_types, output_shapes, name=None):
r"""A dataset that splits the elements of its input into multiple elements.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalUnbatchDataset",
name, tld.op_callbacks, input_dataset, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_unbatch_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_unbatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_unbatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalUnbatchDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalUnbatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalUnbatchDataset = tf_export("raw_ops.ExperimentalUnbatchDataset")(_ops.to_raw_op(experimental_unbatch_dataset))
def experimental_unbatch_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_unbatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_unbatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalUnbatchDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalUnbatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
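# Illustrative sketch (hand-written, not generated code):
# `tf.raw_ops.ExperimentalUnbatchDataset` splits each batched element back into
# individual elements. Hypothetical helper, assuming an eager TF 2.x runtime.
def _example_unbatch_dataset():
  """Batches a range into pairs and then unbatches it back to scalars."""
  import tensorflow as tf
  base = tf.raw_ops.RangeDataset(
      start=0, stop=6, step=1,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])
  batched = tf.raw_ops.BatchDataset(
      input_dataset=base, batch_size=2,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([None])])
  return tf.raw_ops.ExperimentalUnbatchDataset(
      input_dataset=batched,
      output_types=[tf.int64], output_shapes=[tf.TensorShape([])])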
def experimental_unique_dataset(input_dataset, output_types, output_shapes, name=None):
r"""Creates a dataset that contains the unique elements of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ExperimentalUniqueDataset",
name, tld.op_callbacks, input_dataset, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return experimental_unique_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_unique_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_unique_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ExperimentalUniqueDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ExperimentalUniqueDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ExperimentalUniqueDataset = tf_export("raw_ops.ExperimentalUniqueDataset")(_ops.to_raw_op(experimental_unique_dataset))
def experimental_unique_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'experimental_unique_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'experimental_unique_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ExperimentalUniqueDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ExperimentalUniqueDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
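# Illustrative sketch (hand-written, not generated code): the unique dataset op
# above is what the higher-level `tf.data.experimental.unique()` transformation
# lowers to, so that transformation is the simplest way to exercise it.
# Hypothetical helper, assuming TF 2.x.
def _example_unique_dataset():
  """Removes duplicate int64 elements from a small in-memory dataset."""
  import tensorflow as tf
  ds = tf.data.Dataset.from_tensor_slices(
      tf.constant([1, 1, 2, 3, 3, 3], tf.int64))
  return ds.apply(tf.data.experimental.unique())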
def group_by_reducer_dataset(input_dataset, key_func_other_arguments, init_func_other_arguments, reduce_func_other_arguments, finalize_func_other_arguments, key_func, init_func, reduce_func, finalize_func, output_types, output_shapes, name=None):
r"""Creates a dataset that computes a group-by on `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
key_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `key_func`.
init_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `init_func`.
reduce_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `reduce_func`.
finalize_func_other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `finalize_func`.
key_func: A function decorated with @Defun.
A function mapping an element of `input_dataset`, concatenated
with `key_func_other_arguments` to a scalar value of type DT_INT64.
init_func: A function decorated with @Defun.
A function mapping a key of type DT_INT64, concatenated with
`init_func_other_arguments` to the initial reducer state.
reduce_func: A function decorated with @Defun.
A function mapping the current reducer state and an element of `input_dataset`,
concatenated with `reduce_func_other_arguments` to a new reducer state.
finalize_func: A function decorated with @Defun.
A function mapping the final reducer state to an output element.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "GroupByReducerDataset", name,
tld.op_callbacks, input_dataset, key_func_other_arguments,
init_func_other_arguments, reduce_func_other_arguments,
finalize_func_other_arguments, "key_func", key_func, "init_func",
init_func, "reduce_func", reduce_func, "finalize_func", finalize_func,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return group_by_reducer_dataset_eager_fallback(
input_dataset, key_func_other_arguments, init_func_other_arguments,
reduce_func_other_arguments, finalize_func_other_arguments,
key_func=key_func, init_func=init_func, reduce_func=reduce_func,
finalize_func=finalize_func, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'group_by_reducer_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'group_by_reducer_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"GroupByReducerDataset", input_dataset=input_dataset,
key_func_other_arguments=key_func_other_arguments,
init_func_other_arguments=init_func_other_arguments,
reduce_func_other_arguments=reduce_func_other_arguments,
finalize_func_other_arguments=finalize_func_other_arguments,
key_func=key_func, init_func=init_func,
reduce_func=reduce_func,
finalize_func=finalize_func,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("key_func", _op.get_attr("key_func"), "init_func",
_op.get_attr("init_func"), "reduce_func",
_op.get_attr("reduce_func"), "finalize_func",
_op.get_attr("finalize_func"), "Tkey_func_other_arguments",
_op.get_attr("Tkey_func_other_arguments"),
"Tinit_func_other_arguments",
_op.get_attr("Tinit_func_other_arguments"),
"Treduce_func_other_arguments",
_op.get_attr("Treduce_func_other_arguments"),
"Tfinalize_func_other_arguments",
_op.get_attr("Tfinalize_func_other_arguments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"GroupByReducerDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
GroupByReducerDataset = tf_export("raw_ops.GroupByReducerDataset")(_ops.to_raw_op(group_by_reducer_dataset))
def group_by_reducer_dataset_eager_fallback(input_dataset, key_func_other_arguments, init_func_other_arguments, reduce_func_other_arguments, finalize_func_other_arguments, key_func, init_func, reduce_func, finalize_func, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'group_by_reducer_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'group_by_reducer_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Tkey_func_other_arguments, key_func_other_arguments = _execute.convert_to_mixed_eager_tensors(key_func_other_arguments, ctx)
_attr_Tinit_func_other_arguments, init_func_other_arguments = _execute.convert_to_mixed_eager_tensors(init_func_other_arguments, ctx)
_attr_Treduce_func_other_arguments, reduce_func_other_arguments = _execute.convert_to_mixed_eager_tensors(reduce_func_other_arguments, ctx)
_attr_Tfinalize_func_other_arguments, finalize_func_other_arguments = _execute.convert_to_mixed_eager_tensors(finalize_func_other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(key_func_other_arguments) + list(init_func_other_arguments) + list(reduce_func_other_arguments) + list(finalize_func_other_arguments)
_attrs = ("key_func", key_func, "init_func", init_func, "reduce_func",
reduce_func, "finalize_func", finalize_func, "Tkey_func_other_arguments",
_attr_Tkey_func_other_arguments, "Tinit_func_other_arguments",
_attr_Tinit_func_other_arguments, "Treduce_func_other_arguments",
_attr_Treduce_func_other_arguments, "Tfinalize_func_other_arguments",
_attr_Tfinalize_func_other_arguments, "output_types", output_types,
"output_shapes", output_shapes)
_result = _execute.execute(b"GroupByReducerDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"GroupByReducerDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
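# Usage sketch for the GroupByReducerDataset op above. The raw op expects
# @Defun-decorated functions and a variant-typed input dataset, so it is
# normally reached through the public tf.data.experimental.group_by_reducer
# transformation, which is what this sketch assumes. Wrapped in a helper so
# importing this module stays side-effect free.
def _example_group_by_reducer_usage():
  import tensorflow as tf  # assumed available in the caller's environment
  ds = tf.data.Dataset.range(10)
  # Sum the elements that share the same parity key (0 or 1).
  reducer = tf.data.experimental.Reducer(
      init_func=lambda _: tf.constant(0, dtype=tf.int64),
      reduce_func=lambda state, value: state + value,
      finalize_func=lambda state: state)
  ds = ds.apply(tf.data.experimental.group_by_reducer(
      key_func=lambda x: x % 2, reducer=reducer))
  return list(ds.as_numpy_iterator())  # e.g. [20, 25] (sum of evens, sum of odds)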
def group_by_window_dataset(input_dataset, key_func_other_arguments, reduce_func_other_arguments, window_size_func_other_arguments, key_func, reduce_func, window_size_func, output_types, output_shapes, name=None):
r"""Creates a dataset that computes a windowed group-by on `input_dataset`.
Note: only `int64` keys are currently supported (TODO(mrry): support non-int64 keys).
Args:
input_dataset: A `Tensor` of type `variant`.
key_func_other_arguments: A list of `Tensor` objects.
reduce_func_other_arguments: A list of `Tensor` objects.
window_size_func_other_arguments: A list of `Tensor` objects.
key_func: A function decorated with @Defun.
A function mapping an element of `input_dataset`, concatenated with
`key_func_other_arguments`, to a scalar value of type DT_INT64.
reduce_func: A function decorated with @Defun.
window_size_func: A function decorated with @Defun.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "GroupByWindowDataset", name,
tld.op_callbacks, input_dataset, key_func_other_arguments,
reduce_func_other_arguments, window_size_func_other_arguments,
"key_func", key_func, "reduce_func", reduce_func, "window_size_func",
window_size_func, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return group_by_window_dataset_eager_fallback(
input_dataset, key_func_other_arguments,
reduce_func_other_arguments, window_size_func_other_arguments,
key_func=key_func, reduce_func=reduce_func,
window_size_func=window_size_func, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'group_by_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'group_by_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"GroupByWindowDataset", input_dataset=input_dataset,
key_func_other_arguments=key_func_other_arguments,
reduce_func_other_arguments=reduce_func_other_arguments,
window_size_func_other_arguments=window_size_func_other_arguments,
key_func=key_func, reduce_func=reduce_func,
window_size_func=window_size_func,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("key_func", _op.get_attr("key_func"), "reduce_func",
_op.get_attr("reduce_func"), "window_size_func",
_op.get_attr("window_size_func"), "Tkey_func_other_arguments",
_op.get_attr("Tkey_func_other_arguments"),
"Treduce_func_other_arguments",
_op.get_attr("Treduce_func_other_arguments"),
"Twindow_size_func_other_arguments",
_op.get_attr("Twindow_size_func_other_arguments"),
"output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"GroupByWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
GroupByWindowDataset = tf_export("raw_ops.GroupByWindowDataset")(_ops.to_raw_op(group_by_window_dataset))
def group_by_window_dataset_eager_fallback(input_dataset, key_func_other_arguments, reduce_func_other_arguments, window_size_func_other_arguments, key_func, reduce_func, window_size_func, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'group_by_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'group_by_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Tkey_func_other_arguments, key_func_other_arguments = _execute.convert_to_mixed_eager_tensors(key_func_other_arguments, ctx)
_attr_Treduce_func_other_arguments, reduce_func_other_arguments = _execute.convert_to_mixed_eager_tensors(reduce_func_other_arguments, ctx)
_attr_Twindow_size_func_other_arguments, window_size_func_other_arguments = _execute.convert_to_mixed_eager_tensors(window_size_func_other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(key_func_other_arguments) + list(reduce_func_other_arguments) + list(window_size_func_other_arguments)
_attrs = ("key_func", key_func, "reduce_func", reduce_func,
"window_size_func", window_size_func, "Tkey_func_other_arguments",
_attr_Tkey_func_other_arguments, "Treduce_func_other_arguments",
_attr_Treduce_func_other_arguments, "Twindow_size_func_other_arguments",
_attr_Twindow_size_func_other_arguments, "output_types", output_types,
"output_shapes", output_shapes)
_result = _execute.execute(b"GroupByWindowDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"GroupByWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
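# Usage sketch for the GroupByWindowDataset op above, assuming the public
# tf.data.experimental.group_by_window wrapper rather than the raw op (which
# requires @Defun-decorated functions). Wrapped in a helper so nothing runs at
# import time.
def _example_group_by_window_usage():
  import tensorflow as tf
  ds = tf.data.Dataset.range(10)
  # Group elements by parity and emit each group as a batch of 5.
  ds = ds.apply(tf.data.experimental.group_by_window(
      key_func=lambda x: x % 2,
      reduce_func=lambda key, window: window.batch(5),
      window_size=5))
  return [batch.tolist() for batch in ds.as_numpy_iterator()]
  # e.g. [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]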
def ignore_errors_dataset(input_dataset, output_types, output_shapes, name=None):
r"""Creates a dataset that contains the elements of `input_dataset` ignoring errors.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "IgnoreErrorsDataset", name,
tld.op_callbacks, input_dataset, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return ignore_errors_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'ignore_errors_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'ignore_errors_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"IgnoreErrorsDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"IgnoreErrorsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
IgnoreErrorsDataset = tf_export("raw_ops.IgnoreErrorsDataset")(_ops.to_raw_op(ignore_errors_dataset))
def ignore_errors_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'ignore_errors_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'ignore_errors_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"IgnoreErrorsDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"IgnoreErrorsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
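# Usage sketch for the IgnoreErrorsDataset op above, via the public
# tf.data.experimental.ignore_errors() transformation that this op is assumed
# to back. Elements whose production raises an error are silently dropped.
def _example_ignore_errors_usage():
  import tensorflow as tf
  ds = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
  # 1/0 produces inf, which check_numerics turns into an InvalidArgumentError;
  # ignore_errors() drops that element instead of failing iteration.
  ds = ds.map(lambda x: tf.debugging.check_numerics(1. / x, "division error"))
  ds = ds.apply(tf.data.experimental.ignore_errors())
  return list(ds.as_numpy_iterator())  # e.g. [1.0, 0.5, 0.25]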
def iterator_get_device(resource, name=None):
r"""Returns the name of the device on which `resource` has been placed.
Args:
resource: A `Tensor` of type `resource`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "IteratorGetDevice", name,
tld.op_callbacks, resource)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return iterator_get_device_eager_fallback(
resource, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"IteratorGetDevice", resource=resource, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"IteratorGetDevice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
IteratorGetDevice = tf_export("raw_ops.IteratorGetDevice")(_ops.to_raw_op(iterator_get_device))
def iterator_get_device_eager_fallback(resource, name, ctx):
resource = _ops.convert_to_tensor(resource, _dtypes.resource)
_inputs_flat = [resource]
_attrs = None
_result = _execute.execute(b"IteratorGetDevice", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"IteratorGetDevice", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def lmdb_dataset(filenames, output_types, output_shapes, name=None):
r"""Creates a dataset that emits the key-value pairs in one or more LMDB files.
The Lightning Memory-Mapped Database Manager, or LMDB, is an embedded binary
key-value database. This dataset can read the contents of LMDB database files,
the names of which generally have the `.mdb` suffix.
Each output element consists of a key-value pair represented as a pair of
scalar string `Tensor`s, where the first `Tensor` contains the key and the
second `Tensor` contains the value.
LMDB uses different file formats on big- and little-endian machines.
`LMDBDataset` can only read files in the format of the host machine.
Args:
filenames: A `Tensor` of type `string`.
A scalar or a vector containing the name(s) of the binary file(s) to be
read.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "LMDBDataset", name,
tld.op_callbacks, filenames, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return lmdb_dataset_eager_fallback(
filenames, output_types=output_types, output_shapes=output_shapes,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'lmdb_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'lmdb_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"LMDBDataset", filenames=filenames, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"LMDBDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
LMDBDataset = tf_export("raw_ops.LMDBDataset")(_ops.to_raw_op(lmdb_dataset))
def lmdb_dataset_eager_fallback(filenames, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'lmdb_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'lmdb_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
filenames = _ops.convert_to_tensor(filenames, _dtypes.string)
_inputs_flat = [filenames]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"LMDBDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"LMDBDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
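# Usage sketch for the LMDBDataset op above, called through tf.raw_ops as
# exported in this module. The element structure ((key, value) scalar strings)
# comes from the docstring; wrapping the resulting variant with
# tf.data.experimental.from_variant is an assumption about how to make it
# iterable, and the path below is purely hypothetical.
def _example_lmdb_dataset_usage(lmdb_path="/tmp/data.mdb"):  # hypothetical path
  import tensorflow as tf
  variant = tf.raw_ops.LMDBDataset(
      filenames=tf.constant([lmdb_path]),
      output_types=[tf.string, tf.string],
      output_shapes=[tf.TensorShape([]), tf.TensorShape([])])
  # Wrap the dataset variant so it can be iterated like any tf.data.Dataset.
  ds = tf.data.experimental.from_variant(
      variant,
      structure=(tf.TensorSpec([], tf.string), tf.TensorSpec([], tf.string)))
  return ds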
def latency_stats_dataset(input_dataset, tag, output_types, output_shapes, name=None):
r"""Records the latency of producing `input_dataset` elements in a StatsAggregator.
Args:
input_dataset: A `Tensor` of type `variant`.
tag: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "LatencyStatsDataset", name,
tld.op_callbacks, input_dataset, tag, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return latency_stats_dataset_eager_fallback(
input_dataset, tag, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'latency_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'latency_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"LatencyStatsDataset", input_dataset=input_dataset, tag=tag,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"LatencyStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
LatencyStatsDataset = tf_export("raw_ops.LatencyStatsDataset")(_ops.to_raw_op(latency_stats_dataset))
def latency_stats_dataset_eager_fallback(input_dataset, tag, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'latency_stats_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'latency_stats_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
tag = _ops.convert_to_tensor(tag, _dtypes.string)
_inputs_flat = [input_dataset, tag]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"LatencyStatsDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"LatencyStatsDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
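# Usage sketch for the LatencyStatsDataset op above, assuming the public
# tf.data.experimental.latency_stats(tag) wrapper. The recorded statistics only
# become visible when a StatsAggregator is attached through dataset options,
# which is part of the surrounding setup and not shown here.
def _example_latency_stats_usage():
  import tensorflow as tf
  ds = tf.data.Dataset.range(100).batch(10)
  # Record how long each element takes to produce, under the given tag.
  ds = ds.apply(tf.data.experimental.latency_stats("record_latency"))
  return ds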
def legacy_parallel_interleave_dataset_v2(input_dataset, other_arguments, cycle_length, block_length, buffer_output_elements, prefetch_input_elements, f, output_types, output_shapes, deterministic="default", name=None):
r"""Creates a dataset that applies `f` to the outputs of `input_dataset`.
The resulting dataset is similar to the `InterleaveDataset`, with the exception
that if retrieving the next value from a dataset would cause the requester to
block, it will skip that input dataset. This dataset is especially useful
when loading data from variable-latency datastores (e.g. HDFS, GCS), as it
allows the training step to proceed so long as some data is available.
!! WARNING !! This dataset is not deterministic!
Args:
input_dataset: A `Tensor` of type `variant`.
other_arguments: A list of `Tensor` objects.
cycle_length: A `Tensor` of type `int64`.
block_length: A `Tensor` of type `int64`.
buffer_output_elements: A `Tensor` of type `int64`.
prefetch_input_elements: A `Tensor` of type `int64`.
f: A function decorated with @Defun.
A function mapping elements of `input_dataset`, concatenated with
`other_arguments`, to a Dataset variant that contains elements matching
`output_types` and `output_shapes`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
deterministic: An optional `string`. Defaults to `"default"`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"LegacyParallelInterleaveDatasetV2", name, tld.op_callbacks,
input_dataset, other_arguments, cycle_length, block_length,
buffer_output_elements, prefetch_input_elements, "f", f,
"deterministic", deterministic, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return legacy_parallel_interleave_dataset_v2_eager_fallback(
input_dataset, other_arguments, cycle_length, block_length,
buffer_output_elements, prefetch_input_elements, f=f,
deterministic=deterministic, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'legacy_parallel_interleave_dataset_v2' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'legacy_parallel_interleave_dataset_v2' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if deterministic is None:
deterministic = "default"
deterministic = _execute.make_str(deterministic, "deterministic")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"LegacyParallelInterleaveDatasetV2", input_dataset=input_dataset,
other_arguments=other_arguments,
cycle_length=cycle_length,
block_length=block_length,
buffer_output_elements=buffer_output_elements,
prefetch_input_elements=prefetch_input_elements,
f=f, output_types=output_types,
output_shapes=output_shapes,
deterministic=deterministic,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "deterministic",
_op.get_attr("deterministic"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"LegacyParallelInterleaveDatasetV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
LegacyParallelInterleaveDatasetV2 = tf_export("raw_ops.LegacyParallelInterleaveDatasetV2")(_ops.to_raw_op(legacy_parallel_interleave_dataset_v2))
def legacy_parallel_interleave_dataset_v2_eager_fallback(input_dataset, other_arguments, cycle_length, block_length, buffer_output_elements, prefetch_input_elements, f, output_types, output_shapes, deterministic, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'legacy_parallel_interleave_dataset_v2' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'legacy_parallel_interleave_dataset_v2' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if deterministic is None:
deterministic = "default"
deterministic = _execute.make_str(deterministic, "deterministic")
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
cycle_length = _ops.convert_to_tensor(cycle_length, _dtypes.int64)
block_length = _ops.convert_to_tensor(block_length, _dtypes.int64)
buffer_output_elements = _ops.convert_to_tensor(buffer_output_elements, _dtypes.int64)
prefetch_input_elements = _ops.convert_to_tensor(prefetch_input_elements, _dtypes.int64)
_inputs_flat = [input_dataset] + list(other_arguments) + [cycle_length, block_length, buffer_output_elements, prefetch_input_elements]
_attrs = ("f", f, "deterministic", deterministic, "Targuments",
_attr_Targuments, "output_types", output_types, "output_shapes",
output_shapes)
_result = _execute.execute(b"LegacyParallelInterleaveDatasetV2", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"LegacyParallelInterleaveDatasetV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def load_dataset(path, reader_func_other_args, output_types, output_shapes, reader_func, compression="", name=None):
r"""TODO: add doc.
Args:
path: A `Tensor` of type `string`.
reader_func_other_args: A list of `Tensor` objects.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
reader_func: A function decorated with @Defun.
compression: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "LoadDataset", name,
tld.op_callbacks, path, reader_func_other_args, "output_types",
output_types, "output_shapes", output_shapes, "compression",
compression, "reader_func", reader_func)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return load_dataset_eager_fallback(
path, reader_func_other_args, output_types=output_types,
output_shapes=output_shapes, compression=compression,
reader_func=reader_func, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'load_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'load_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"LoadDataset", path=path,
reader_func_other_args=reader_func_other_args,
output_types=output_types, output_shapes=output_shapes,
reader_func=reader_func, compression=compression,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "compression",
_op.get_attr("compression"), "reader_func",
_op.get_attr("reader_func"), "Treader_func_args",
_op.get_attr("Treader_func_args"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"LoadDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
LoadDataset = tf_export("raw_ops.LoadDataset")(_ops.to_raw_op(load_dataset))
def load_dataset_eager_fallback(path, reader_func_other_args, output_types, output_shapes, reader_func, compression, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'load_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'load_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
_attr_Treader_func_args, reader_func_other_args = _execute.convert_to_mixed_eager_tensors(reader_func_other_args, ctx)
path = _ops.convert_to_tensor(path, _dtypes.string)
_inputs_flat = [path] + list(reader_func_other_args)
_attrs = ("output_types", output_types, "output_shapes", output_shapes,
"compression", compression, "reader_func", reader_func, "Treader_func_args",
_attr_Treader_func_args)
_result = _execute.execute(b"LoadDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"LoadDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
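# Usage sketch for the LoadDataset op above, assuming the public
# tf.data.experimental.save/load pair is the entry point that sits on top of
# it. The path is hypothetical; element_spec must describe the saved dataset.
def _example_load_dataset_usage(path="/tmp/saved_ds"):  # hypothetical path
  import tensorflow as tf
  ds = tf.data.Dataset.range(5)
  # Round-trip the dataset through disk.
  tf.data.experimental.save(ds, path)
  restored = tf.data.experimental.load(path, element_spec=ds.element_spec)
  return list(restored.as_numpy_iterator())  # e.g. [0, 1, 2, 3, 4]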
def map_and_batch_dataset(input_dataset, other_arguments, batch_size, num_parallel_calls, drop_remainder, f, output_types, output_shapes, preserve_cardinality=False, name=None):
r"""Creates a dataset that fuses mapping with batching.
Creates a dataset that applies `f` to the outputs of `input_dataset` and then
batches `batch_size` of them.
Unlike a "MapDataset", which applies `f` sequentially, this dataset invokes up
to `batch_size * num_parallel_batches` copies of `f` in parallel.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when building a closure
for `f`.
batch_size: A `Tensor` of type `int64`.
A scalar representing the number of elements to accumulate in a
batch. It determines the number of concurrent invocations of `f` that process
elements from `input_dataset` in parallel.
num_parallel_calls: A `Tensor` of type `int64`.
A scalar representing the maximum number of parallel invocations of the `map_fn`
function. Applying the `map_fn` on consecutive input elements in parallel has
the potential to improve input pipeline throughput.
drop_remainder: A `Tensor` of type `bool`.
A scalar representing whether the last batch should be dropped in case its size
is smaller than desired.
f: A function decorated with @Defun.
A function to apply to the outputs of `input_dataset`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
preserve_cardinality: An optional `bool`. Defaults to `False`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "MapAndBatchDataset", name,
tld.op_callbacks, input_dataset, other_arguments, batch_size,
num_parallel_calls, drop_remainder, "f", f, "output_types",
output_types, "output_shapes", output_shapes, "preserve_cardinality",
preserve_cardinality)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return map_and_batch_dataset_eager_fallback(
input_dataset, other_arguments, batch_size, num_parallel_calls,
drop_remainder, f=f, output_types=output_types,
output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'map_and_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'map_and_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"MapAndBatchDataset", input_dataset=input_dataset,
other_arguments=other_arguments,
batch_size=batch_size,
num_parallel_calls=num_parallel_calls,
drop_remainder=drop_remainder, f=f,
output_types=output_types,
output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "preserve_cardinality",
_op._get_attr_bool("preserve_cardinality"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"MapAndBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
MapAndBatchDataset = tf_export("raw_ops.MapAndBatchDataset")(_ops.to_raw_op(map_and_batch_dataset))
def map_and_batch_dataset_eager_fallback(input_dataset, other_arguments, batch_size, num_parallel_calls, drop_remainder, f, output_types, output_shapes, preserve_cardinality, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'map_and_batch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'map_and_batch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
batch_size = _ops.convert_to_tensor(batch_size, _dtypes.int64)
num_parallel_calls = _ops.convert_to_tensor(num_parallel_calls, _dtypes.int64)
drop_remainder = _ops.convert_to_tensor(drop_remainder, _dtypes.bool)
_inputs_flat = [input_dataset] + list(other_arguments) + [batch_size, num_parallel_calls, drop_remainder]
_attrs = ("f", f, "Targuments", _attr_Targuments, "output_types",
output_types, "output_shapes", output_shapes, "preserve_cardinality",
preserve_cardinality)
_result = _execute.execute(b"MapAndBatchDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"MapAndBatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
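# Usage sketch for the MapAndBatchDataset op above, via the (deprecated) public
# tf.data.experimental.map_and_batch wrapper that is assumed to target it. New
# code generally prefers ds.map(...).batch(...), which the runtime can fuse.
def _example_map_and_batch_usage():
  import tensorflow as tf
  ds = tf.data.Dataset.range(8)
  # Fused map+batch: double each element, then batch 4 at a time.
  ds = ds.apply(tf.data.experimental.map_and_batch(
      map_func=lambda x: x * 2,
      batch_size=4,
      num_parallel_calls=tf.data.experimental.AUTOTUNE))
  return [batch.tolist() for batch in ds.as_numpy_iterator()]
  # e.g. [[0, 2, 4, 6], [8, 10, 12, 14]]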
def matching_files_dataset(patterns, name=None):
r"""TODO: add doc.
Args:
patterns: A `Tensor` of type `string`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "MatchingFilesDataset", name,
tld.op_callbacks, patterns)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return matching_files_dataset_eager_fallback(
patterns, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"MatchingFilesDataset", patterns=patterns, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"MatchingFilesDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
MatchingFilesDataset = tf_export("raw_ops.MatchingFilesDataset")(_ops.to_raw_op(matching_files_dataset))
def matching_files_dataset_eager_fallback(patterns, name, ctx):
patterns = _ops.convert_to_tensor(patterns, _dtypes.string)
_inputs_flat = [patterns]
_attrs = None
_result = _execute.execute(b"MatchingFilesDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"MatchingFilesDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
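# Usage sketch for the MatchingFilesDataset op above, assuming the public
# tf.data.experimental.MatchingFilesDataset wrapper. One scalar string filename
# is produced per glob match; the pattern below is hypothetical.
def _example_matching_files_dataset_usage(pattern="/tmp/data/*.txt"):
  import tensorflow as tf
  ds = tf.data.experimental.MatchingFilesDataset(pattern)
  return ds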
def max_intra_op_parallelism_dataset(input_dataset, max_intra_op_parallelism, output_types, output_shapes, name=None):
r"""Creates a dataset that overrides the maximum intra-op parallelism.
Args:
input_dataset: A `Tensor` of type `variant`.
max_intra_op_parallelism: A `Tensor` of type `int64`.
Identifies the maximum intra-op parallelism to use.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "MaxIntraOpParallelismDataset",
name, tld.op_callbacks, input_dataset, max_intra_op_parallelism,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return max_intra_op_parallelism_dataset_eager_fallback(
input_dataset, max_intra_op_parallelism, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'max_intra_op_parallelism_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'max_intra_op_parallelism_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"MaxIntraOpParallelismDataset", input_dataset=input_dataset,
max_intra_op_parallelism=max_intra_op_parallelism,
output_types=output_types,
output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"MaxIntraOpParallelismDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
MaxIntraOpParallelismDataset = tf_export("raw_ops.MaxIntraOpParallelismDataset")(_ops.to_raw_op(max_intra_op_parallelism_dataset))
def max_intra_op_parallelism_dataset_eager_fallback(input_dataset, max_intra_op_parallelism, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'max_intra_op_parallelism_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'max_intra_op_parallelism_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
max_intra_op_parallelism = _ops.convert_to_tensor(max_intra_op_parallelism, _dtypes.int64)
_inputs_flat = [input_dataset, max_intra_op_parallelism]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"MaxIntraOpParallelismDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"MaxIntraOpParallelismDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
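# Usage sketch for the MaxIntraOpParallelismDataset op above. The op is
# normally reached through dataset options rather than called directly; the
# attribute name below assumes the experimental_threading option group (newer
# releases expose it as options.threading instead).
def _example_max_intra_op_parallelism_usage():
  import tensorflow as tf
  ds = tf.data.Dataset.range(1000).map(lambda x: x * x)
  options = tf.data.Options()
  # Cap intra-op parallelism for ops executed inside this input pipeline.
  options.experimental_threading.max_intra_op_parallelism = 1
  return ds.with_options(options)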
def non_serializable_dataset(input_dataset, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "NonSerializableDataset", name,
tld.op_callbacks, input_dataset, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return non_serializable_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'non_serializable_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'non_serializable_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"NonSerializableDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"NonSerializableDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
NonSerializableDataset = tf_export("raw_ops.NonSerializableDataset")(_ops.to_raw_op(non_serializable_dataset))
def non_serializable_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'non_serializable_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'non_serializable_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"NonSerializableDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"NonSerializableDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def parallel_interleave_dataset(input_dataset, other_arguments, cycle_length, block_length, sloppy, buffer_output_elements, prefetch_input_elements, f, output_types, output_shapes, name=None):
r"""Creates a dataset that applies `f` to the outputs of `input_dataset`.
The resulting dataset is similar to the `InterleaveDataset`, with the exception
that if retrieving the next value from a dataset would cause the requester to
block, it will skip that input dataset. This dataset is especially useful
when loading data from variable-latency datastores (e.g. HDFS, GCS), as it
allows the training step to proceed so long as some data is available.
!! WARNING !! If the `sloppy` parameter is set to `True`, the operation of this
dataset will not be deterministic!
This dataset has been superseded by `ParallelInterleaveDatasetV2`. New code
should use `ParallelInterleaveDatasetV2`.
The Python API `tf.data.experimental.parallel_interleave` creates instances of
this op. `tf.data.experimental.parallel_interleave` is a deprecated API.
Args:
input_dataset: A `Tensor` of type `variant`.
Dataset that produces a stream of arguments for the function `f`.
other_arguments: A list of `Tensor` objects.
Additional arguments to pass to `f` beyond those produced by `input_dataset`.
Evaluated once when the dataset is instantiated.
cycle_length: A `Tensor` of type `int64`.
Number of datasets (each created by applying `f` to the elements of
`input_dataset`) among which the `ParallelInterleaveDataset` will cycle in a
round-robin fashion.
block_length: A `Tensor` of type `int64`.
Number of elements at a time to produce from each interleaved invocation of a
dataset returned by `f`.
sloppy: A `Tensor` of type `bool`.
If `True`, return elements as they become available, even if that means returning
these elements in a non-deterministic order. Sloppy operation may result in better
performance in the presence of stragglers, but the dataset will still block if
all of its open streams are blocked.
If `False`, always return elements in a deterministic order.
buffer_output_elements: A `Tensor` of type `int64`.
The number of elements each iterator being interleaved should buffer (similar
to the `.prefetch()` transformation for each interleaved iterator).
prefetch_input_elements: A `Tensor` of type `int64`.
Determines the number of iterators to prefetch, allowing buffers to warm up and
data to be pre-fetched without blocking the main thread.
f: A function decorated with @Defun.
A function mapping elements of `input_dataset`, concatenated with
`other_arguments`, to a Dataset variant that contains elements matching
`output_types` and `output_shapes`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParallelInterleaveDataset",
name, tld.op_callbacks, input_dataset, other_arguments, cycle_length,
block_length, sloppy, buffer_output_elements, prefetch_input_elements,
"f", f, "output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return parallel_interleave_dataset_eager_fallback(
input_dataset, other_arguments, cycle_length, block_length, sloppy,
buffer_output_elements, prefetch_input_elements, f=f,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'parallel_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'parallel_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParallelInterleaveDataset", input_dataset=input_dataset,
other_arguments=other_arguments,
cycle_length=cycle_length,
block_length=block_length, sloppy=sloppy,
buffer_output_elements=buffer_output_elements,
prefetch_input_elements=prefetch_input_elements,
f=f, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParallelInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ParallelInterleaveDataset = tf_export("raw_ops.ParallelInterleaveDataset")(_ops.to_raw_op(parallel_interleave_dataset))
def parallel_interleave_dataset_eager_fallback(input_dataset, other_arguments, cycle_length, block_length, sloppy, buffer_output_elements, prefetch_input_elements, f, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'parallel_interleave_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'parallel_interleave_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
cycle_length = _ops.convert_to_tensor(cycle_length, _dtypes.int64)
block_length = _ops.convert_to_tensor(block_length, _dtypes.int64)
sloppy = _ops.convert_to_tensor(sloppy, _dtypes.bool)
buffer_output_elements = _ops.convert_to_tensor(buffer_output_elements, _dtypes.int64)
prefetch_input_elements = _ops.convert_to_tensor(prefetch_input_elements, _dtypes.int64)
_inputs_flat = [input_dataset] + list(other_arguments) + [cycle_length, block_length, sloppy, buffer_output_elements, prefetch_input_elements]
_attrs = ("f", f, "Targuments", _attr_Targuments, "output_types",
output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ParallelInterleaveDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParallelInterleaveDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
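# Usage sketch for the ParallelInterleaveDataset op above, assuming the
# deprecated public tf.data.experimental.parallel_interleave wrapper. The
# sloppy=True flag mirrors the op's `sloppy` input: better throughput, no
# determinism guarantee. The file glob is hypothetical; new code should prefer
# Dataset.interleave(..., num_parallel_calls=...).
def _example_parallel_interleave_usage(file_pattern="/tmp/logs/*.txt"):
  import tensorflow as tf
  filenames = tf.data.Dataset.list_files(file_pattern)
  ds = filenames.apply(tf.data.experimental.parallel_interleave(
      lambda path: tf.data.TextLineDataset(path),
      cycle_length=4,
      block_length=16,
      sloppy=True))
  return ds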
def parse_example_dataset(input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, sloppy=False, ragged_keys=[], ragged_value_types=[], ragged_split_types=[], name=None):
r"""Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.
Args:
input_dataset: A `Tensor` of type `variant`.
num_parallel_calls: A `Tensor` of type `int64`.
dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A dict mapping string keys to `Tensor`s.
The keys of the dict must match the dense_keys of the feature.
sparse_keys: A list of `strings`.
A list of string keys in the examples features.
The results for these keys will be returned as `SparseTensor` objects.
dense_keys: A list of `strings`.
A list of Ndense string Tensors (scalars).
The keys expected in the Examples features associated with dense values.
sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
A list of `DTypes` of the same length as `sparse_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).
List of tuples with the same length as `dense_keys`.
The shape of the data for each dense feature referenced by `dense_keys`.
Required for any input tensors identified by `dense_keys`. Must be
either fully defined, or may contain an unknown first dimension.
An unknown first dimension means the feature is treated as having
a variable number of blocks, and the output shape along this dimension
is considered unknown at graph build time. Padding is applied for
minibatch elements smaller than the maximum number of blocks for the
given feature along this dimension.
output_types: A list of `tf.DTypes` that has length `>= 1`.
The type list for the return values.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
The list of shapes being produced.
sloppy: An optional `bool`. Defaults to `False`.
ragged_keys: An optional list of `strings`. Defaults to `[]`.
ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParseExampleDataset", name,
tld.op_callbacks, input_dataset, num_parallel_calls, dense_defaults,
"sparse_keys", sparse_keys, "dense_keys", dense_keys, "sparse_types",
sparse_types, "dense_shapes", dense_shapes, "output_types",
output_types, "output_shapes", output_shapes, "sloppy", sloppy,
"ragged_keys", ragged_keys, "ragged_value_types", ragged_value_types,
"ragged_split_types", ragged_split_types)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return parse_example_dataset_eager_fallback(
input_dataset, num_parallel_calls, dense_defaults,
sparse_keys=sparse_keys, dense_keys=dense_keys,
sparse_types=sparse_types, dense_shapes=dense_shapes,
output_types=output_types, output_shapes=output_shapes,
sloppy=sloppy, ragged_keys=ragged_keys,
ragged_value_types=ragged_value_types,
ragged_split_types=ragged_split_types, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'parse_example_dataset' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'parse_example_dataset' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'parse_example_dataset' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'parse_example_dataset' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'parse_example_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'parse_example_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if sloppy is None:
sloppy = False
sloppy = _execute.make_bool(sloppy, "sloppy")
if ragged_keys is None:
ragged_keys = []
if not isinstance(ragged_keys, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_keys' argument to "
"'parse_example_dataset' Op, not %r." % ragged_keys)
ragged_keys = [_execute.make_str(_s, "ragged_keys") for _s in ragged_keys]
if ragged_value_types is None:
ragged_value_types = []
if not isinstance(ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_value_types' argument to "
"'parse_example_dataset' Op, not %r." % ragged_value_types)
ragged_value_types = [_execute.make_type(_t, "ragged_value_types") for _t in ragged_value_types]
if ragged_split_types is None:
ragged_split_types = []
if not isinstance(ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_split_types' argument to "
"'parse_example_dataset' Op, not %r." % ragged_split_types)
ragged_split_types = [_execute.make_type(_t, "ragged_split_types") for _t in ragged_split_types]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParseExampleDataset", input_dataset=input_dataset,
num_parallel_calls=num_parallel_calls,
dense_defaults=dense_defaults,
sparse_keys=sparse_keys, dense_keys=dense_keys,
sparse_types=sparse_types,
dense_shapes=dense_shapes,
output_types=output_types,
output_shapes=output_shapes, sloppy=sloppy,
ragged_keys=ragged_keys,
ragged_value_types=ragged_value_types,
ragged_split_types=ragged_split_types,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("sparse_keys", _op.get_attr("sparse_keys"), "dense_keys",
_op.get_attr("dense_keys"), "sparse_types",
_op.get_attr("sparse_types"), "Tdense", _op.get_attr("Tdense"),
"dense_shapes", _op.get_attr("dense_shapes"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "sloppy",
_op._get_attr_bool("sloppy"), "ragged_keys",
_op.get_attr("ragged_keys"), "ragged_value_types",
_op.get_attr("ragged_value_types"), "ragged_split_types",
_op.get_attr("ragged_split_types"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParseExampleDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ParseExampleDataset = tf_export("raw_ops.ParseExampleDataset")(_ops.to_raw_op(parse_example_dataset))
def parse_example_dataset_eager_fallback(input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, sloppy, ragged_keys, ragged_value_types, ragged_split_types, name, ctx):
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'parse_example_dataset' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'parse_example_dataset' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'parse_example_dataset' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'parse_example_dataset' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'parse_example_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'parse_example_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if sloppy is None:
sloppy = False
sloppy = _execute.make_bool(sloppy, "sloppy")
if ragged_keys is None:
ragged_keys = []
if not isinstance(ragged_keys, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_keys' argument to "
"'parse_example_dataset' Op, not %r." % ragged_keys)
ragged_keys = [_execute.make_str(_s, "ragged_keys") for _s in ragged_keys]
if ragged_value_types is None:
ragged_value_types = []
if not isinstance(ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_value_types' argument to "
"'parse_example_dataset' Op, not %r." % ragged_value_types)
ragged_value_types = [_execute.make_type(_t, "ragged_value_types") for _t in ragged_value_types]
if ragged_split_types is None:
ragged_split_types = []
if not isinstance(ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_split_types' argument to "
"'parse_example_dataset' Op, not %r." % ragged_split_types)
ragged_split_types = [_execute.make_type(_t, "ragged_split_types") for _t in ragged_split_types]
_attr_Tdense, dense_defaults = _execute.convert_to_mixed_eager_tensors(dense_defaults, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_parallel_calls = _ops.convert_to_tensor(num_parallel_calls, _dtypes.int64)
_inputs_flat = [input_dataset, num_parallel_calls] + list(dense_defaults)
_attrs = ("sparse_keys", sparse_keys, "dense_keys", dense_keys,
"sparse_types", sparse_types, "Tdense", _attr_Tdense, "dense_shapes",
dense_shapes, "output_types", output_types, "output_shapes", output_shapes,
"sloppy", sloppy, "ragged_keys", ragged_keys, "ragged_value_types",
ragged_value_types, "ragged_split_types", ragged_split_types)
_result = _execute.execute(b"ParseExampleDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseExampleDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
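# Illustrative sketch (not part of the generated API): one way to exercise the
# ParseExampleDataset lowering from user code is the public
# tf.data.experimental.parse_example_dataset transformation. The feature name
# "x" and the toy serialized protos below are assumptions made only for this
# example.
def _example_parse_example_dataset():
  import tensorflow as tf
  def _serialize(v):
    # Build a serialized tf.train.Example with a single int64 feature "x".
    return tf.train.Example(features=tf.train.Features(feature={
        "x": tf.train.Feature(int64_list=tf.train.Int64List(value=[v])),
    })).SerializeToString()
  # The op consumes *batched* vectors of serialized protos, hence batch(2).
  ds = tf.data.Dataset.from_tensor_slices([_serialize(1), _serialize(2)]).batch(2)
  features = {"x": tf.io.FixedLenFeature([1], tf.int64)}
  return ds.apply(tf.data.experimental.parse_example_dataset(features))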
def parse_example_dataset_v2(input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, deterministic="default", ragged_keys=[], ragged_value_types=[], ragged_split_types=[], name=None):
r"""Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.
Args:
input_dataset: A `Tensor` of type `variant`.
num_parallel_calls: A `Tensor` of type `int64`.
dense_defaults: A list of `Tensor` objects with types from: `float32`, `int64`, `string`.
A dict mapping string keys to `Tensor`s.
The keys of the dict must match the dense_keys of the feature.
sparse_keys: A list of `strings`.
A list of string keys in the examples features.
The results for these keys will be returned as `SparseTensor` objects.
dense_keys: A list of `strings`.
A list of Ndense string Tensors (scalars).
The keys expected in the Examples features associated with dense values.
sparse_types: A list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`.
A list of `DTypes` of the same length as `sparse_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
dense_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`).
List of tuples with the same length as `dense_keys`.
The shape of the data for each dense feature referenced by `dense_keys`.
Required for any input tensors identified by `dense_keys`. Must be
either fully defined, or may contain an unknown first dimension.
An unknown first dimension means the feature is treated as having
a variable number of blocks, and the output shape along this dimension
is considered unknown at graph build time. Padding is applied for
minibatch elements smaller than the maximum number of blocks for the
given feature along this dimension.
output_types: A list of `tf.DTypes` that has length `>= 1`.
The type list for the return values.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
The list of shapes being produced.
deterministic: An optional `string`. Defaults to `"default"`.
A string indicating the op-level determinism to use. Determinism controls
whether the dataset is allowed to return elements out of order if the next
element to be returned isn't available, but a later element is. Options are
"true", "false", and "default". "default" indicates that determinism should be
decided by the `experimental_deterministic` parameter of `tf.data.Options`.
ragged_keys: An optional list of `strings`. Defaults to `[]`.
ragged_value_types: An optional list of `tf.DTypes` from: `tf.float32, tf.int64, tf.string`. Defaults to `[]`.
ragged_split_types: An optional list of `tf.DTypes` from: `tf.int32, tf.int64`. Defaults to `[]`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ParseExampleDatasetV2", name,
tld.op_callbacks, input_dataset, num_parallel_calls, dense_defaults,
"sparse_keys", sparse_keys, "dense_keys", dense_keys, "sparse_types",
sparse_types, "dense_shapes", dense_shapes, "output_types",
output_types, "output_shapes", output_shapes, "deterministic",
deterministic, "ragged_keys", ragged_keys, "ragged_value_types",
ragged_value_types, "ragged_split_types", ragged_split_types)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return parse_example_dataset_v2_eager_fallback(
input_dataset, num_parallel_calls, dense_defaults,
sparse_keys=sparse_keys, dense_keys=dense_keys,
sparse_types=sparse_types, dense_shapes=dense_shapes,
output_types=output_types, output_shapes=output_shapes,
deterministic=deterministic, ragged_keys=ragged_keys,
ragged_value_types=ragged_value_types,
ragged_split_types=ragged_split_types, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'parse_example_dataset_v2' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'parse_example_dataset_v2' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'parse_example_dataset_v2' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'parse_example_dataset_v2' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if deterministic is None:
deterministic = "default"
deterministic = _execute.make_str(deterministic, "deterministic")
if ragged_keys is None:
ragged_keys = []
if not isinstance(ragged_keys, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_keys' argument to "
"'parse_example_dataset_v2' Op, not %r." % ragged_keys)
ragged_keys = [_execute.make_str(_s, "ragged_keys") for _s in ragged_keys]
if ragged_value_types is None:
ragged_value_types = []
if not isinstance(ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_value_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % ragged_value_types)
ragged_value_types = [_execute.make_type(_t, "ragged_value_types") for _t in ragged_value_types]
if ragged_split_types is None:
ragged_split_types = []
if not isinstance(ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_split_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % ragged_split_types)
ragged_split_types = [_execute.make_type(_t, "ragged_split_types") for _t in ragged_split_types]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ParseExampleDatasetV2", input_dataset=input_dataset,
num_parallel_calls=num_parallel_calls,
dense_defaults=dense_defaults,
sparse_keys=sparse_keys,
dense_keys=dense_keys,
sparse_types=sparse_types,
dense_shapes=dense_shapes,
output_types=output_types,
output_shapes=output_shapes,
deterministic=deterministic,
ragged_keys=ragged_keys,
ragged_value_types=ragged_value_types,
ragged_split_types=ragged_split_types,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("sparse_keys", _op.get_attr("sparse_keys"), "dense_keys",
_op.get_attr("dense_keys"), "sparse_types",
_op.get_attr("sparse_types"), "Tdense", _op.get_attr("Tdense"),
"dense_shapes", _op.get_attr("dense_shapes"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "deterministic",
_op.get_attr("deterministic"), "ragged_keys",
_op.get_attr("ragged_keys"), "ragged_value_types",
_op.get_attr("ragged_value_types"), "ragged_split_types",
_op.get_attr("ragged_split_types"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ParseExampleDatasetV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ParseExampleDatasetV2 = tf_export("raw_ops.ParseExampleDatasetV2")(_ops.to_raw_op(parse_example_dataset_v2))
def parse_example_dataset_v2_eager_fallback(input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, deterministic, ragged_keys, ragged_value_types, ragged_split_types, name, ctx):
if not isinstance(sparse_keys, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_keys' argument to "
"'parse_example_dataset_v2' Op, not %r." % sparse_keys)
sparse_keys = [_execute.make_str(_s, "sparse_keys") for _s in sparse_keys]
if not isinstance(dense_keys, (list, tuple)):
raise TypeError(
"Expected list for 'dense_keys' argument to "
"'parse_example_dataset_v2' Op, not %r." % dense_keys)
dense_keys = [_execute.make_str(_s, "dense_keys") for _s in dense_keys]
if not isinstance(sparse_types, (list, tuple)):
raise TypeError(
"Expected list for 'sparse_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % sparse_types)
sparse_types = [_execute.make_type(_t, "sparse_types") for _t in sparse_types]
if not isinstance(dense_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'dense_shapes' argument to "
"'parse_example_dataset_v2' Op, not %r." % dense_shapes)
dense_shapes = [_execute.make_shape(_s, "dense_shapes") for _s in dense_shapes]
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'parse_example_dataset_v2' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if deterministic is None:
deterministic = "default"
deterministic = _execute.make_str(deterministic, "deterministic")
if ragged_keys is None:
ragged_keys = []
if not isinstance(ragged_keys, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_keys' argument to "
"'parse_example_dataset_v2' Op, not %r." % ragged_keys)
ragged_keys = [_execute.make_str(_s, "ragged_keys") for _s in ragged_keys]
if ragged_value_types is None:
ragged_value_types = []
if not isinstance(ragged_value_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_value_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % ragged_value_types)
ragged_value_types = [_execute.make_type(_t, "ragged_value_types") for _t in ragged_value_types]
if ragged_split_types is None:
ragged_split_types = []
if not isinstance(ragged_split_types, (list, tuple)):
raise TypeError(
"Expected list for 'ragged_split_types' argument to "
"'parse_example_dataset_v2' Op, not %r." % ragged_split_types)
ragged_split_types = [_execute.make_type(_t, "ragged_split_types") for _t in ragged_split_types]
_attr_Tdense, dense_defaults = _execute.convert_to_mixed_eager_tensors(dense_defaults, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_parallel_calls = _ops.convert_to_tensor(num_parallel_calls, _dtypes.int64)
_inputs_flat = [input_dataset, num_parallel_calls] + list(dense_defaults)
_attrs = ("sparse_keys", sparse_keys, "dense_keys", dense_keys,
"sparse_types", sparse_types, "Tdense", _attr_Tdense, "dense_shapes",
dense_shapes, "output_types", output_types, "output_shapes", output_shapes,
"deterministic", deterministic, "ragged_keys", ragged_keys,
"ragged_value_types", ragged_value_types, "ragged_split_types",
ragged_split_types)
_result = _execute.execute(b"ParseExampleDatasetV2", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ParseExampleDatasetV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
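# Illustrative sketch (assumption-labeled, not part of the generated API): the
# "deterministic" attr of ParseExampleDatasetV2 defaults to "default", which
# defers to the dataset options. A minimal way to request the out-of-order
# ("false") behaviour from user code is via tf.data.Options, as below; the toy
# map pipeline is only a stand-in for a real parsing pipeline.
def _example_deterministic_option():
  import tensorflow as tf
  options = tf.data.Options()
  # False corresponds to deterministic="false" on ops that honour the option.
  options.experimental_deterministic = False
  ds = tf.data.Dataset.range(8).map(
      lambda x: x * 2, num_parallel_calls=tf.data.experimental.AUTOTUNE)
  return ds.with_options(options)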
def private_thread_pool_dataset(input_dataset, num_threads, output_types, output_shapes, name=None):
r"""Creates a dataset that uses a custom thread pool to compute `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
num_threads: A `Tensor` of type `int64`.
Identifies the number of threads to use for the private threadpool.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "PrivateThreadPoolDataset",
name, tld.op_callbacks, input_dataset, num_threads, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return private_thread_pool_dataset_eager_fallback(
input_dataset, num_threads, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'private_thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'private_thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"PrivateThreadPoolDataset", input_dataset=input_dataset,
num_threads=num_threads,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"PrivateThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
PrivateThreadPoolDataset = tf_export("raw_ops.PrivateThreadPoolDataset")(_ops.to_raw_op(private_thread_pool_dataset))
def private_thread_pool_dataset_eager_fallback(input_dataset, num_threads, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'private_thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'private_thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_threads = _ops.convert_to_tensor(num_threads, _dtypes.int64)
_inputs_flat = [input_dataset, num_threads]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"PrivateThreadPoolDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"PrivateThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
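# Illustrative sketch (hypothetical helper): driving the raw op directly by
# round-tripping a dataset through its variant-tensor form. The 2-thread pool
# size and the range(4) input are arbitrary choices for the example.
def _example_private_thread_pool_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(4)
  variant = tf.data.experimental.to_variant(ds)
  out = tf.raw_ops.PrivateThreadPoolDataset(
      input_dataset=variant,
      num_threads=2,                # size of the private threadpool
      output_types=[tf.int64],      # scalar int64 elements
      output_shapes=[[]])
  return tf.data.experimental.from_variant(out, ds.element_spec)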
def random_dataset(seed, seed2, output_types, output_shapes, name=None):
r"""Creates a Dataset that returns pseudorandom numbers.
Creates a Dataset that returns a stream of uniformly distributed
pseudorandom 64-bit signed integers.
In the TensorFlow Python API, you can instantiate this dataset via the
class `tf.data.experimental.RandomDataset`.
Instances of this dataset are also created as a result of the
`hoist_random_uniform` static optimization. Whether this optimization is
performed is determined by the `experimental_optimization.hoist_random_uniform`
option of `tf.data.Options`.
Args:
seed: A `Tensor` of type `int64`.
A scalar seed for the random number generator. If either seed or
seed2 is set to be non-zero, the random number generator is seeded
by the given seed. Otherwise, a random seed is used.
seed2: A `Tensor` of type `int64`.
A second scalar seed to avoid seed collision.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "RandomDataset", name,
tld.op_callbacks, seed, seed2, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return random_dataset_eager_fallback(
seed, seed2, output_types=output_types, output_shapes=output_shapes,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'random_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'random_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"RandomDataset", seed=seed, seed2=seed2, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"RandomDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
RandomDataset = tf_export("raw_ops.RandomDataset")(_ops.to_raw_op(random_dataset))
def random_dataset_eager_fallback(seed, seed2, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'random_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'random_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
seed = _ops.convert_to_tensor(seed, _dtypes.int64)
seed2 = _ops.convert_to_tensor(seed2, _dtypes.int64)
_inputs_flat = [seed, seed2]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"RandomDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"RandomDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
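# Illustrative sketch (hypothetical helper): the docstring above points at the
# public tf.data.experimental.RandomDataset wrapper. The dataset is infinite,
# so it is bounded with take(); seed=42 is an arbitrary example value.
def _example_random_dataset():
  import tensorflow as tf
  # Yields uniformly distributed pseudorandom int64 scalars.
  return tf.data.experimental.RandomDataset(seed=42).take(3)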
def rebatch_dataset(input_dataset, num_replicas, output_types, output_shapes, use_fallback=True, name=None):
r"""Creates a dataset that changes the batch size.
Creates a dataset that changes the batch size of the dataset to current batch
size // num_replicas.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
num_replicas: A `Tensor` of type `int64`.
A scalar representing the number of replicas to distribute this batch across. As
a result of this transformation the current batch size would end up being
divided by this parameter.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
use_fallback: An optional `bool`. Defaults to `True`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "RebatchDataset", name,
tld.op_callbacks, input_dataset, num_replicas, "output_types",
output_types, "output_shapes", output_shapes, "use_fallback",
use_fallback)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return rebatch_dataset_eager_fallback(
input_dataset, num_replicas, output_types=output_types,
output_shapes=output_shapes, use_fallback=use_fallback, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'rebatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'rebatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if use_fallback is None:
use_fallback = True
use_fallback = _execute.make_bool(use_fallback, "use_fallback")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"RebatchDataset", input_dataset=input_dataset,
num_replicas=num_replicas,
output_types=output_types,
output_shapes=output_shapes,
use_fallback=use_fallback, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "use_fallback",
_op._get_attr_bool("use_fallback"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"RebatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
RebatchDataset = tf_export("raw_ops.RebatchDataset")(_ops.to_raw_op(rebatch_dataset))
def rebatch_dataset_eager_fallback(input_dataset, num_replicas, output_types, output_shapes, use_fallback, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'rebatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'rebatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if use_fallback is None:
use_fallback = True
use_fallback = _execute.make_bool(use_fallback, "use_fallback")
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
num_replicas = _ops.convert_to_tensor(num_replicas, _dtypes.int64)
_inputs_flat = [input_dataset, num_replicas]
_attrs = ("output_types", output_types, "output_shapes", output_shapes,
"use_fallback", use_fallback)
_result = _execute.execute(b"RebatchDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"RebatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
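# Illustrative sketch (hypothetical helper, assuming tf.raw_ops.RebatchDataset
# is available in this build): rebatching batches of 4 across 2 replicas so
# each replica sees batches of 2. The None leading dimension reflects the
# batch dimension becoming data-dependent after rebatching.
def _example_rebatch_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(8).batch(4)
  variant = tf.data.experimental.to_variant(ds)
  out = tf.raw_ops.RebatchDataset(
      input_dataset=variant,
      num_replicas=2,
      output_types=[tf.int64],
      output_shapes=[tf.TensorShape([None])])
  return tf.data.experimental.from_variant(
      out, tf.TensorSpec(shape=[None], dtype=tf.int64))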
def register_dataset(dataset, address, protocol, external_state_policy, name=None):
r"""Registers a dataset with the tf.data service.
Args:
dataset: A `Tensor` of type `variant`.
address: A `Tensor` of type `string`.
protocol: A `Tensor` of type `string`.
external_state_policy: An `int`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "RegisterDataset", name,
tld.op_callbacks, dataset, address, protocol, "external_state_policy",
external_state_policy)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return register_dataset_eager_fallback(
dataset, address, protocol,
external_state_policy=external_state_policy, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
external_state_policy = _execute.make_int(external_state_policy, "external_state_policy")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"RegisterDataset", dataset=dataset, address=address,
protocol=protocol,
external_state_policy=external_state_policy,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("external_state_policy",
_op._get_attr_int("external_state_policy"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"RegisterDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
RegisterDataset = tf_export("raw_ops.RegisterDataset")(_ops.to_raw_op(register_dataset))
def register_dataset_eager_fallback(dataset, address, protocol, external_state_policy, name, ctx):
external_state_policy = _execute.make_int(external_state_policy, "external_state_policy")
dataset = _ops.convert_to_tensor(dataset, _dtypes.variant)
address = _ops.convert_to_tensor(address, _dtypes.string)
protocol = _ops.convert_to_tensor(protocol, _dtypes.string)
_inputs_flat = [dataset, address, protocol]
_attrs = ("external_state_policy", external_state_policy)
_result = _execute.execute(b"RegisterDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"RegisterDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
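# Illustrative sketch (hypothetical helper): RegisterDataset normally runs
# underneath the tf.data service Python API, in builds where that API is
# available. The dispatcher address below is a placeholder; a dispatcher must
# actually be running for the call to succeed.
def _example_register_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(10)
  dataset_id = tf.data.experimental.service.register_dataset(
      service="grpc://dispatcher-placeholder:5000", dataset=ds)
  return dataset_id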
def sampling_dataset(input_dataset, rate, seed, seed2, output_types, output_shapes, name=None):
r"""Creates a dataset that takes a Bernoulli sample of the contents of another dataset.
There is no transformation in the `tf.data` Python API for creating this dataset.
Instead, it is created as a result of the `filter_with_random_uniform_fusion`
static optimization. Whether this optimization is performed is determined by the
`experimental_optimization.filter_with_random_uniform_fusion` option of
`tf.data.Options`.
Args:
input_dataset: A `Tensor` of type `variant`.
rate: A `Tensor` of type `float32`.
A scalar representing the sample rate. Each element of `input_dataset` is
retained with this probability, independent of all other elements.
seed: A `Tensor` of type `int64`.
A scalar representing seed of random number generator.
seed2: A `Tensor` of type `int64`.
A scalar representing seed2 of random number generator.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SamplingDataset", name,
tld.op_callbacks, input_dataset, rate, seed, seed2, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return sampling_dataset_eager_fallback(
input_dataset, rate, seed, seed2, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sampling_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sampling_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SamplingDataset", input_dataset=input_dataset, rate=rate, seed=seed,
seed2=seed2, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SamplingDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SamplingDataset = tf_export("raw_ops.SamplingDataset")(_ops.to_raw_op(sampling_dataset))
def sampling_dataset_eager_fallback(input_dataset, rate, seed, seed2, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sampling_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sampling_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
rate = _ops.convert_to_tensor(rate, _dtypes.float32)
seed = _ops.convert_to_tensor(seed, _dtypes.int64)
seed2 = _ops.convert_to_tensor(seed2, _dtypes.int64)
_inputs_flat = [input_dataset, rate, seed, seed2]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"SamplingDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SamplingDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
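# Illustrative sketch (hypothetical helper): per the docstring above there is
# no public transformation for SamplingDataset, so this drives the raw op
# directly; rate=0.5 and the fixed seeds are arbitrary example values.
def _example_sampling_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(10)
  variant = tf.data.experimental.to_variant(ds)
  out = tf.raw_ops.SamplingDataset(
      input_dataset=variant,
      rate=0.5,                 # keep each element with probability 0.5
      seed=1,
      seed2=2,
      output_types=[tf.int64],
      output_shapes=[[]])
  return tf.data.experimental.from_variant(out, ds.element_spec)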
def save_dataset(input_dataset, path, shard_func_other_args, shard_func, compression="", use_shard_func=True, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
path: A `Tensor` of type `string`.
shard_func_other_args: A list of `Tensor` objects.
shard_func: A function decorated with @Defun.
compression: An optional `string`. Defaults to `""`.
use_shard_func: An optional `bool`. Defaults to `True`.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SaveDataset", name,
tld.op_callbacks, input_dataset, path, shard_func_other_args,
"compression", compression, "shard_func", shard_func,
"use_shard_func", use_shard_func)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return save_dataset_eager_fallback(
input_dataset, path, shard_func_other_args, compression=compression,
shard_func=shard_func, use_shard_func=use_shard_func, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
if use_shard_func is None:
use_shard_func = True
use_shard_func = _execute.make_bool(use_shard_func, "use_shard_func")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SaveDataset", input_dataset=input_dataset, path=path,
shard_func_other_args=shard_func_other_args,
shard_func=shard_func, compression=compression,
use_shard_func=use_shard_func, name=name)
return _op
SaveDataset = tf_export("raw_ops.SaveDataset")(_ops.to_raw_op(save_dataset))
def save_dataset_eager_fallback(input_dataset, path, shard_func_other_args, shard_func, compression, use_shard_func, name, ctx):
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
if use_shard_func is None:
use_shard_func = True
use_shard_func = _execute.make_bool(use_shard_func, "use_shard_func")
_attr_Tshard_func_args, shard_func_other_args = _execute.convert_to_mixed_eager_tensors(shard_func_other_args, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
path = _ops.convert_to_tensor(path, _dtypes.string)
_inputs_flat = [input_dataset, path] + list(shard_func_other_args)
_attrs = ("compression", compression, "shard_func", shard_func,
"use_shard_func", use_shard_func, "Tshard_func_args",
_attr_Tshard_func_args)
_result = _execute.execute(b"SaveDataset", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
_result = None
return _result
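# Illustrative sketch (hypothetical helper): the public wrapper that lowers to
# SaveDataset. The destination path is a placeholder and must be writable.
def _example_save_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(4)
  tf.data.experimental.save(ds, "/tmp/example_saved_dataset")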
def scan_dataset(input_dataset, initial_state, other_arguments, f, output_types, output_shapes, preserve_cardinality=False, use_default_device=True, name=None):
r"""Creates a dataset successively reduces `f` over the elements of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
initial_state: A list of `Tensor` objects.
other_arguments: A list of `Tensor` objects.
f: A function decorated with @Defun.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
preserve_cardinality: An optional `bool`. Defaults to `False`.
use_default_device: An optional `bool`. Defaults to `True`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ScanDataset", name,
tld.op_callbacks, input_dataset, initial_state, other_arguments, "f",
f, "output_types", output_types, "output_shapes", output_shapes,
"preserve_cardinality", preserve_cardinality, "use_default_device",
use_default_device)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return scan_dataset_eager_fallback(
input_dataset, initial_state, other_arguments, f=f,
output_types=output_types, output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality,
use_default_device=use_default_device, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'scan_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'scan_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
if use_default_device is None:
use_default_device = True
use_default_device = _execute.make_bool(use_default_device, "use_default_device")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ScanDataset", input_dataset=input_dataset,
initial_state=initial_state,
other_arguments=other_arguments, f=f,
output_types=output_types, output_shapes=output_shapes,
preserve_cardinality=preserve_cardinality,
use_default_device=use_default_device, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("f", _op.get_attr("f"), "Tstate", _op.get_attr("Tstate"),
"Targuments", _op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "preserve_cardinality",
_op._get_attr_bool("preserve_cardinality"),
"use_default_device", _op._get_attr_bool("use_default_device"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ScanDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ScanDataset = tf_export("raw_ops.ScanDataset")(_ops.to_raw_op(scan_dataset))
def scan_dataset_eager_fallback(input_dataset, initial_state, other_arguments, f, output_types, output_shapes, preserve_cardinality, use_default_device, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'scan_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'scan_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if preserve_cardinality is None:
preserve_cardinality = False
preserve_cardinality = _execute.make_bool(preserve_cardinality, "preserve_cardinality")
if use_default_device is None:
use_default_device = True
use_default_device = _execute.make_bool(use_default_device, "use_default_device")
_attr_Tstate, initial_state = _execute.convert_to_mixed_eager_tensors(initial_state, ctx)
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(initial_state) + list(other_arguments)
_attrs = ("f", f, "Tstate", _attr_Tstate, "Targuments", _attr_Targuments,
"output_types", output_types, "output_shapes", output_shapes,
"preserve_cardinality", preserve_cardinality, "use_default_device",
use_default_device)
_result = _execute.execute(b"ScanDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ScanDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
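# Illustrative sketch (hypothetical helper): a running sum expressed with the
# public tf.data.experimental.scan wrapper, which lowers to ScanDataset. The
# scan function returns (new_state, output_element) pairs.
def _example_scan_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(5)
  return ds.apply(tf.data.experimental.scan(
      tf.constant(0, tf.int64),
      lambda state, x: (state + x, state + x)))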
def set_stats_aggregator_dataset(input_dataset, stats_aggregator, tag, counter_prefix, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
stats_aggregator: A `Tensor` of type `resource`.
tag: A `Tensor` of type `string`.
counter_prefix: A `Tensor` of type `string`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SetStatsAggregatorDataset",
name, tld.op_callbacks, input_dataset, stats_aggregator, tag,
counter_prefix, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return set_stats_aggregator_dataset_eager_fallback(
input_dataset, stats_aggregator, tag, counter_prefix,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'set_stats_aggregator_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'set_stats_aggregator_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SetStatsAggregatorDataset", input_dataset=input_dataset,
stats_aggregator=stats_aggregator,
tag=tag, counter_prefix=counter_prefix,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SetStatsAggregatorDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SetStatsAggregatorDataset = tf_export("raw_ops.SetStatsAggregatorDataset")(_ops.to_raw_op(set_stats_aggregator_dataset))
def set_stats_aggregator_dataset_eager_fallback(input_dataset, stats_aggregator, tag, counter_prefix, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'set_stats_aggregator_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'set_stats_aggregator_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
stats_aggregator = _ops.convert_to_tensor(stats_aggregator, _dtypes.resource)
tag = _ops.convert_to_tensor(tag, _dtypes.string)
counter_prefix = _ops.convert_to_tensor(counter_prefix, _dtypes.string)
_inputs_flat = [input_dataset, stats_aggregator, tag, counter_prefix]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"SetStatsAggregatorDataset", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SetStatsAggregatorDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def sleep_dataset(input_dataset, sleep_microseconds, output_types, output_shapes, name=None):
r"""TODO: add doc.
Args:
input_dataset: A `Tensor` of type `variant`.
sleep_microseconds: A `Tensor` of type `int64`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SleepDataset", name,
tld.op_callbacks, input_dataset, sleep_microseconds, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return sleep_dataset_eager_fallback(
input_dataset, sleep_microseconds, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sleep_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sleep_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SleepDataset", input_dataset=input_dataset,
sleep_microseconds=sleep_microseconds,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SleepDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SleepDataset = tf_export("raw_ops.SleepDataset")(_ops.to_raw_op(sleep_dataset))
def sleep_dataset_eager_fallback(input_dataset, sleep_microseconds, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sleep_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sleep_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
sleep_microseconds = _ops.convert_to_tensor(sleep_microseconds, _dtypes.int64)
_inputs_flat = [input_dataset, sleep_microseconds]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"SleepDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SleepDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
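# Illustrative sketch (hypothetical helper): SleepDataset is mainly useful for
# benchmarking input pipelines; this inserts a 1 ms pause before each element.
def _example_sleep_dataset():
  import tensorflow as tf
  ds = tf.data.Dataset.range(3)
  variant = tf.data.experimental.to_variant(ds)
  out = tf.raw_ops.SleepDataset(
      input_dataset=variant,
      sleep_microseconds=1000,
      output_types=[tf.int64],
      output_shapes=[[]])
  return tf.data.experimental.from_variant(out, ds.element_spec)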
def sliding_window_dataset(input_dataset, window_size, window_shift, window_stride, output_types, output_shapes, name=None):
r"""Creates a dataset that passes a sliding window over `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
window_size: A `Tensor` of type `int64`.
A scalar representing the number of elements in the
sliding window.
window_shift: A `Tensor` of type `int64`.
A scalar representing the steps moving the sliding window
forward in one iteration. It must be positive.
window_stride: A `Tensor` of type `int64`.
A scalar representing the stride of the input elements of the sliding window.
It must be positive.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SlidingWindowDataset", name,
tld.op_callbacks, input_dataset, window_size, window_shift,
window_stride, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return sliding_window_dataset_eager_fallback(
input_dataset, window_size, window_shift, window_stride,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sliding_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sliding_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SlidingWindowDataset", input_dataset=input_dataset,
window_size=window_size,
window_shift=window_shift,
window_stride=window_stride,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SlidingWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SlidingWindowDataset = tf_export("raw_ops.SlidingWindowDataset")(_ops.to_raw_op(sliding_window_dataset))
def sliding_window_dataset_eager_fallback(input_dataset, window_size, window_shift, window_stride, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sliding_window_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sliding_window_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
window_size = _ops.convert_to_tensor(window_size, _dtypes.int64)
window_shift = _ops.convert_to_tensor(window_shift, _dtypes.int64)
window_stride = _ops.convert_to_tensor(window_stride, _dtypes.int64)
_inputs_flat = [input_dataset, window_size, window_shift, window_stride]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"SlidingWindowDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SlidingWindowDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
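# Illustrative sketch (hypothetical helper): a public-API equivalent of a
# size-3, shift-1, stride-1 sliding window built from Dataset.window(); each
# nested window dataset is stacked back into a dense tensor with batch(3).
def _example_sliding_window():
  import tensorflow as tf
  ds = tf.data.Dataset.range(6)
  return ds.window(size=3, shift=1, stride=1, drop_remainder=True).flat_map(
      lambda w: w.batch(3))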
def snapshot_dataset(input_dataset, path, output_types, output_shapes, compression="", reader_path_prefix="", writer_path_prefix="", shard_size_bytes=10737418240, pending_snapshot_expiry_seconds=86400, num_reader_threads=1, reader_buffer_size=1, num_writer_threads=1, writer_buffer_size=1, shuffle_on_read=False, seed=0, seed2=0, mode="auto", snapshot_name="", name=None):
r"""Creates a dataset that will write to / read from a snapshot.
This dataset attempts to determine whether a valid snapshot exists at
`path`, and reads from the snapshot in lieu of using `input_dataset`.
If not, it will run the preprocessing pipeline as usual, and write out a
snapshot of the data processed for future use.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
path: A `Tensor` of type `string`.
The path we should write snapshots to / read snapshots from.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
compression: An optional `string`. Defaults to `""`.
reader_path_prefix: An optional `string`. Defaults to `""`.
writer_path_prefix: An optional `string`. Defaults to `""`.
shard_size_bytes: An optional `int`. Defaults to `10737418240`.
pending_snapshot_expiry_seconds: An optional `int`. Defaults to `86400`.
num_reader_threads: An optional `int`. Defaults to `1`.
reader_buffer_size: An optional `int`. Defaults to `1`.
num_writer_threads: An optional `int`. Defaults to `1`.
writer_buffer_size: An optional `int`. Defaults to `1`.
shuffle_on_read: An optional `bool`. Defaults to `False`.
seed: An optional `int`. Defaults to `0`.
seed2: An optional `int`. Defaults to `0`.
mode: An optional `string`. Defaults to `"auto"`.
snapshot_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SnapshotDataset", name,
tld.op_callbacks, input_dataset, path, "output_types", output_types,
"output_shapes", output_shapes, "compression", compression,
"reader_path_prefix", reader_path_prefix, "writer_path_prefix",
writer_path_prefix, "shard_size_bytes", shard_size_bytes,
"pending_snapshot_expiry_seconds", pending_snapshot_expiry_seconds,
"num_reader_threads", num_reader_threads, "reader_buffer_size",
reader_buffer_size, "num_writer_threads", num_writer_threads,
"writer_buffer_size", writer_buffer_size, "shuffle_on_read",
shuffle_on_read, "seed", seed, "seed2", seed2, "mode", mode,
"snapshot_name", snapshot_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return snapshot_dataset_eager_fallback(
input_dataset, path, output_types=output_types,
output_shapes=output_shapes, compression=compression,
reader_path_prefix=reader_path_prefix,
writer_path_prefix=writer_path_prefix,
shard_size_bytes=shard_size_bytes,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds,
num_reader_threads=num_reader_threads,
reader_buffer_size=reader_buffer_size,
num_writer_threads=num_writer_threads,
writer_buffer_size=writer_buffer_size,
shuffle_on_read=shuffle_on_read, seed=seed, seed2=seed2, mode=mode,
snapshot_name=snapshot_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'snapshot_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'snapshot_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
if reader_path_prefix is None:
reader_path_prefix = ""
reader_path_prefix = _execute.make_str(reader_path_prefix, "reader_path_prefix")
if writer_path_prefix is None:
writer_path_prefix = ""
writer_path_prefix = _execute.make_str(writer_path_prefix, "writer_path_prefix")
if shard_size_bytes is None:
shard_size_bytes = 10737418240
shard_size_bytes = _execute.make_int(shard_size_bytes, "shard_size_bytes")
if pending_snapshot_expiry_seconds is None:
pending_snapshot_expiry_seconds = 86400
pending_snapshot_expiry_seconds = _execute.make_int(pending_snapshot_expiry_seconds, "pending_snapshot_expiry_seconds")
if num_reader_threads is None:
num_reader_threads = 1
num_reader_threads = _execute.make_int(num_reader_threads, "num_reader_threads")
if reader_buffer_size is None:
reader_buffer_size = 1
reader_buffer_size = _execute.make_int(reader_buffer_size, "reader_buffer_size")
if num_writer_threads is None:
num_writer_threads = 1
num_writer_threads = _execute.make_int(num_writer_threads, "num_writer_threads")
if writer_buffer_size is None:
writer_buffer_size = 1
writer_buffer_size = _execute.make_int(writer_buffer_size, "writer_buffer_size")
if shuffle_on_read is None:
shuffle_on_read = False
shuffle_on_read = _execute.make_bool(shuffle_on_read, "shuffle_on_read")
if seed is None:
seed = 0
seed = _execute.make_int(seed, "seed")
if seed2 is None:
seed2 = 0
seed2 = _execute.make_int(seed2, "seed2")
if mode is None:
mode = "auto"
mode = _execute.make_str(mode, "mode")
if snapshot_name is None:
snapshot_name = ""
snapshot_name = _execute.make_str(snapshot_name, "snapshot_name")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SnapshotDataset", input_dataset=input_dataset, path=path,
output_types=output_types,
output_shapes=output_shapes,
compression=compression,
reader_path_prefix=reader_path_prefix,
writer_path_prefix=writer_path_prefix,
shard_size_bytes=shard_size_bytes,
pending_snapshot_expiry_seconds=pending_snapshot_expiry_seconds,
num_reader_threads=num_reader_threads,
reader_buffer_size=reader_buffer_size,
num_writer_threads=num_writer_threads,
writer_buffer_size=writer_buffer_size,
shuffle_on_read=shuffle_on_read, seed=seed,
seed2=seed2, mode=mode,
snapshot_name=snapshot_name, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "compression",
_op.get_attr("compression"), "reader_path_prefix",
_op.get_attr("reader_path_prefix"), "writer_path_prefix",
_op.get_attr("writer_path_prefix"), "shard_size_bytes",
_op._get_attr_int("shard_size_bytes"),
"pending_snapshot_expiry_seconds",
_op._get_attr_int("pending_snapshot_expiry_seconds"),
"num_reader_threads", _op._get_attr_int("num_reader_threads"),
"reader_buffer_size", _op._get_attr_int("reader_buffer_size"),
"num_writer_threads", _op._get_attr_int("num_writer_threads"),
"writer_buffer_size", _op._get_attr_int("writer_buffer_size"),
"shuffle_on_read", _op._get_attr_bool("shuffle_on_read"),
"seed", _op._get_attr_int("seed"), "seed2",
_op._get_attr_int("seed2"), "mode", _op.get_attr("mode"),
"snapshot_name", _op.get_attr("snapshot_name"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SnapshotDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SnapshotDataset = tf_export("raw_ops.SnapshotDataset")(_ops.to_raw_op(snapshot_dataset))
def snapshot_dataset_eager_fallback(input_dataset, path, output_types, output_shapes, compression, reader_path_prefix, writer_path_prefix, shard_size_bytes, pending_snapshot_expiry_seconds, num_reader_threads, reader_buffer_size, num_writer_threads, writer_buffer_size, shuffle_on_read, seed, seed2, mode, snapshot_name, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'snapshot_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'snapshot_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
if reader_path_prefix is None:
reader_path_prefix = ""
reader_path_prefix = _execute.make_str(reader_path_prefix, "reader_path_prefix")
if writer_path_prefix is None:
writer_path_prefix = ""
writer_path_prefix = _execute.make_str(writer_path_prefix, "writer_path_prefix")
if shard_size_bytes is None:
shard_size_bytes = 10737418240
shard_size_bytes = _execute.make_int(shard_size_bytes, "shard_size_bytes")
if pending_snapshot_expiry_seconds is None:
pending_snapshot_expiry_seconds = 86400
pending_snapshot_expiry_seconds = _execute.make_int(pending_snapshot_expiry_seconds, "pending_snapshot_expiry_seconds")
if num_reader_threads is None:
num_reader_threads = 1
num_reader_threads = _execute.make_int(num_reader_threads, "num_reader_threads")
if reader_buffer_size is None:
reader_buffer_size = 1
reader_buffer_size = _execute.make_int(reader_buffer_size, "reader_buffer_size")
if num_writer_threads is None:
num_writer_threads = 1
num_writer_threads = _execute.make_int(num_writer_threads, "num_writer_threads")
if writer_buffer_size is None:
writer_buffer_size = 1
writer_buffer_size = _execute.make_int(writer_buffer_size, "writer_buffer_size")
if shuffle_on_read is None:
shuffle_on_read = False
shuffle_on_read = _execute.make_bool(shuffle_on_read, "shuffle_on_read")
if seed is None:
seed = 0
seed = _execute.make_int(seed, "seed")
if seed2 is None:
seed2 = 0
seed2 = _execute.make_int(seed2, "seed2")
if mode is None:
mode = "auto"
mode = _execute.make_str(mode, "mode")
if snapshot_name is None:
snapshot_name = ""
snapshot_name = _execute.make_str(snapshot_name, "snapshot_name")
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
path = _ops.convert_to_tensor(path, _dtypes.string)
_inputs_flat = [input_dataset, path]
_attrs = ("output_types", output_types, "output_shapes", output_shapes,
"compression", compression, "reader_path_prefix", reader_path_prefix,
"writer_path_prefix", writer_path_prefix, "shard_size_bytes",
shard_size_bytes, "pending_snapshot_expiry_seconds",
pending_snapshot_expiry_seconds, "num_reader_threads", num_reader_threads,
"reader_buffer_size", reader_buffer_size, "num_writer_threads",
num_writer_threads, "writer_buffer_size", writer_buffer_size,
"shuffle_on_read", shuffle_on_read, "seed", seed, "seed2", seed2, "mode",
mode, "snapshot_name", snapshot_name)
_result = _execute.execute(b"SnapshotDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SnapshotDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
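# A minimal usage sketch for the snapshot op above. In practice the op is
# reached through the public tf.data API rather than by calling the raw
# wrapper directly; this assumes a TF 2.x build where
# `tf.data.experimental.snapshot` is available, and the directory is a
# placeholder.
def _example_snapshot_usage(snapshot_dir="/tmp/example_snapshot"):
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  # First run: the pipeline below executes and its output is written to
  # `snapshot_dir`. Later runs: elements are read back from the snapshot
  # instead of re-running the map.
  ds = tf.data.Dataset.range(100).map(lambda x: x * 2)
  ds = ds.apply(tf.data.experimental.snapshot(snapshot_dir))
  return [int(x) for x in ds.take(3)]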
def snapshot_dataset_v2(input_dataset, path, reader_func_other_args, shard_func_other_args, output_types, output_shapes, reader_func, shard_func, compression="", name=None):
r"""Creates a dataset that will write to / read from a snapshot.
This dataset attempts to determine whether a valid snapshot exists at the
`snapshot_path`, and reads from the snapshot in lieu of using `input_dataset`.
If not, it will run the preprocessing pipeline as usual, and write out a
snapshot of the data processed for future use.
Args:
input_dataset: A `Tensor` of type `variant`.
A variant tensor representing the input dataset.
path: A `Tensor` of type `string`.
The path we should write snapshots to / read snapshots from.
reader_func_other_args: A list of `Tensor` objects.
shard_func_other_args: A list of `Tensor` objects.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
reader_func: A function decorated with @Defun.
Optional. A function to control how to read data from snapshot shards.
shard_func: A function decorated with @Defun.
Optional. A function to control how to shard data when writing a snapshot.
compression: An optional `string`. Defaults to `""`.
The type of compression to be applied to the saved snapshot files.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SnapshotDatasetV2", name,
tld.op_callbacks, input_dataset, path, reader_func_other_args,
shard_func_other_args, "output_types", output_types, "output_shapes",
output_shapes, "compression", compression, "reader_func", reader_func,
"shard_func", shard_func)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return snapshot_dataset_v2_eager_fallback(
input_dataset, path, reader_func_other_args, shard_func_other_args,
output_types=output_types, output_shapes=output_shapes,
compression=compression, reader_func=reader_func,
shard_func=shard_func, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'snapshot_dataset_v2' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'snapshot_dataset_v2' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SnapshotDatasetV2", input_dataset=input_dataset, path=path,
reader_func_other_args=reader_func_other_args,
shard_func_other_args=shard_func_other_args,
output_types=output_types,
output_shapes=output_shapes,
reader_func=reader_func, shard_func=shard_func,
compression=compression, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"), "compression",
_op.get_attr("compression"), "reader_func",
_op.get_attr("reader_func"), "shard_func",
_op.get_attr("shard_func"), "Treader_func_args",
_op.get_attr("Treader_func_args"), "Tshard_func_args",
_op.get_attr("Tshard_func_args"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SnapshotDatasetV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SnapshotDatasetV2 = tf_export("raw_ops.SnapshotDatasetV2")(_ops.to_raw_op(snapshot_dataset_v2))
def snapshot_dataset_v2_eager_fallback(input_dataset, path, reader_func_other_args, shard_func_other_args, output_types, output_shapes, reader_func, shard_func, compression, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'snapshot_dataset_v2' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'snapshot_dataset_v2' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
if compression is None:
compression = ""
compression = _execute.make_str(compression, "compression")
_attr_Treader_func_args, reader_func_other_args = _execute.convert_to_mixed_eager_tensors(reader_func_other_args, ctx)
_attr_Tshard_func_args, shard_func_other_args = _execute.convert_to_mixed_eager_tensors(shard_func_other_args, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
path = _ops.convert_to_tensor(path, _dtypes.string)
_inputs_flat = [input_dataset, path] + list(reader_func_other_args) + list(shard_func_other_args)
_attrs = ("output_types", output_types, "output_shapes", output_shapes,
"compression", compression, "reader_func", reader_func, "shard_func",
shard_func, "Treader_func_args", _attr_Treader_func_args,
"Tshard_func_args", _attr_Tshard_func_args)
_result = _execute.execute(b"SnapshotDatasetV2", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SnapshotDatasetV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
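# Sketch of the V2-specific knobs documented above: `shard_func` controls how
# elements are bucketed into snapshot shards and `reader_func` controls how the
# shard datasets are read back. Assumes the installed TF version exposes both
# arguments on `tf.data.experimental.snapshot`; the directory is a placeholder.
def _example_snapshot_v2_usage(snapshot_dir="/tmp/example_snapshot_v2"):
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  ds = tf.data.Dataset.range(100)
  ds = ds.apply(tf.data.experimental.snapshot(
      snapshot_dir,
      compression="GZIP",
      # Spread elements across four shards when writing.
      shard_func=lambda elem: elem % 4,
      # Interleave the shard datasets when reading back.
      reader_func=lambda datasets: datasets.interleave(
          lambda d: d, num_parallel_calls=tf.data.AUTOTUNE)))
  return ds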
def sql_dataset(driver_name, data_source_name, query, output_types, output_shapes, name=None):
r"""Creates a dataset that executes a SQL query and emits rows of the result set.
Args:
driver_name: A `Tensor` of type `string`.
The database type. Currently, the only supported type is 'sqlite'.
data_source_name: A `Tensor` of type `string`.
A connection string to connect to the database.
query: A `Tensor` of type `string`. A SQL query to execute.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "SqlDataset", name,
tld.op_callbacks, driver_name, data_source_name, query,
"output_types", output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return sql_dataset_eager_fallback(
driver_name, data_source_name, query, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sql_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sql_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"SqlDataset", driver_name=driver_name,
data_source_name=data_source_name, query=query,
output_types=output_types, output_shapes=output_shapes,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"SqlDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
SqlDataset = tf_export("raw_ops.SqlDataset")(_ops.to_raw_op(sql_dataset))
def sql_dataset_eager_fallback(driver_name, data_source_name, query, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'sql_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'sql_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
driver_name = _ops.convert_to_tensor(driver_name, _dtypes.string)
data_source_name = _ops.convert_to_tensor(data_source_name, _dtypes.string)
query = _ops.convert_to_tensor(query, _dtypes.string)
_inputs_flat = [driver_name, data_source_name, query]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"SqlDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"SqlDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
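# Sketch of the public wrapper for the op above: `tf.data.experimental.SqlDataset`
# runs the query and yields one element per result row. Assumes a sqlite file at
# the placeholder path containing a `students` table with matching columns.
def _example_sql_dataset_usage(db_path="/tmp/example.sqlite3"):
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  ds = tf.data.experimental.SqlDataset(
      "sqlite",                                 # driver_name
      db_path,                                  # data_source_name
      "SELECT first_name, age FROM students",   # query
      (tf.string, tf.int32))                    # output_types, one per column
  return ds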
def stats_aggregator_handle(container="", shared_name="", name=None):
r"""Creates a statistics manager resource.
Args:
container: An optional `string`. Defaults to `""`.
shared_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "StatsAggregatorHandle", name,
tld.op_callbacks, "container", container, "shared_name", shared_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return stats_aggregator_handle_eager_fallback(
container=container, shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"StatsAggregatorHandle", container=container, shared_name=shared_name,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"StatsAggregatorHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
StatsAggregatorHandle = tf_export("raw_ops.StatsAggregatorHandle")(_ops.to_raw_op(stats_aggregator_handle))
def stats_aggregator_handle_eager_fallback(container, shared_name, name, ctx):
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name)
_result = _execute.execute(b"StatsAggregatorHandle", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"StatsAggregatorHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def stats_aggregator_handle_v2(container="", shared_name="", name=None):
r"""TODO: add doc.
Args:
container: An optional `string`. Defaults to `""`.
shared_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "StatsAggregatorHandleV2",
name, tld.op_callbacks, "container", container, "shared_name",
shared_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return stats_aggregator_handle_v2_eager_fallback(
container=container, shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"StatsAggregatorHandleV2", container=container,
shared_name=shared_name, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"StatsAggregatorHandleV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
StatsAggregatorHandleV2 = tf_export("raw_ops.StatsAggregatorHandleV2")(_ops.to_raw_op(stats_aggregator_handle_v2))
def stats_aggregator_handle_v2_eager_fallback(container, shared_name, name, ctx):
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name)
_result = _execute.execute(b"StatsAggregatorHandleV2", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"StatsAggregatorHandleV2", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def stats_aggregator_set_summary_writer(stats_aggregator, summary, name=None):
r"""Set a summary_writer_interface to record statistics using given stats_aggregator.
Args:
stats_aggregator: A `Tensor` of type `resource`.
summary: A `Tensor` of type `resource`.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name,
"StatsAggregatorSetSummaryWriter", name, tld.op_callbacks,
stats_aggregator, summary)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return stats_aggregator_set_summary_writer_eager_fallback(
stats_aggregator, summary, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"StatsAggregatorSetSummaryWriter", stats_aggregator=stats_aggregator,
summary=summary, name=name)
return _op
StatsAggregatorSetSummaryWriter = tf_export("raw_ops.StatsAggregatorSetSummaryWriter")(_ops.to_raw_op(stats_aggregator_set_summary_writer))
def stats_aggregator_set_summary_writer_eager_fallback(stats_aggregator, summary, name, ctx):
stats_aggregator = _ops.convert_to_tensor(stats_aggregator, _dtypes.resource)
summary = _ops.convert_to_tensor(summary, _dtypes.resource)
_inputs_flat = [stats_aggregator, summary]
_attrs = None
_result = _execute.execute(b"StatsAggregatorSetSummaryWriter", 0,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
_result = None
return _result
def stats_aggregator_summary(iterator, name=None):
r"""Produces a summary of any statistics recorded by the given statistics manager.
Args:
iterator: A `Tensor` of type `resource`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "StatsAggregatorSummary", name,
tld.op_callbacks, iterator)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return stats_aggregator_summary_eager_fallback(
iterator, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"StatsAggregatorSummary", iterator=iterator, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ()
_inputs_flat = _op.inputs
_execute.record_gradient(
"StatsAggregatorSummary", _inputs_flat, _attrs, _result)
_result, = _result
return _result
StatsAggregatorSummary = tf_export("raw_ops.StatsAggregatorSummary")(_ops.to_raw_op(stats_aggregator_summary))
def stats_aggregator_summary_eager_fallback(iterator, name, ctx):
iterator = _ops.convert_to_tensor(iterator, _dtypes.resource)
_inputs_flat = [iterator]
_attrs = None
_result = _execute.execute(b"StatsAggregatorSummary", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"StatsAggregatorSummary", _inputs_flat, _attrs, _result)
_result, = _result
return _result
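# Sketch tying the StatsAggregator* ops above together through `tf.raw_ops`,
# whose signatures mirror the wrappers in this module. Whether a given tf.data
# pipeline actually routes statistics into the aggregator depends on the
# dataset options of the installed TF version, so this only shows the
# handle/summary plumbing.
def _example_stats_aggregator_usage():
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  handle = tf.raw_ops.StatsAggregatorHandle(shared_name="example_stats")
  # The summary op takes the aggregator resource (parameter name `iterator`)
  # and returns a serialized Summary proto as a string tensor.
  summary_bytes = tf.raw_ops.StatsAggregatorSummary(iterator=handle)
  return summary_bytes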
def take_while_dataset(input_dataset, other_arguments, predicate, output_types, output_shapes, name=None):
r"""Creates a dataset that stops iteration when predicate` is false.
The `predicate` function must return a scalar boolean and accept the
following arguments:
* One tensor for each component of an element of `input_dataset`.
* One tensor for each value in `other_arguments`.
Args:
input_dataset: A `Tensor` of type `variant`.
other_arguments: A list of `Tensor` objects.
A list of tensors, typically values that were captured when
building a closure for `predicate`.
predicate: A function decorated with @Defun.
A function returning a scalar boolean.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "TakeWhileDataset", name,
tld.op_callbacks, input_dataset, other_arguments, "predicate",
predicate, "output_types", output_types, "output_shapes",
output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return take_while_dataset_eager_fallback(
input_dataset, other_arguments, predicate=predicate,
output_types=output_types, output_shapes=output_shapes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'take_while_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'take_while_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"TakeWhileDataset", input_dataset=input_dataset,
other_arguments=other_arguments,
predicate=predicate, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("predicate", _op.get_attr("predicate"), "Targuments",
_op.get_attr("Targuments"), "output_types",
_op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"TakeWhileDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
TakeWhileDataset = tf_export("raw_ops.TakeWhileDataset")(_ops.to_raw_op(take_while_dataset))
def take_while_dataset_eager_fallback(input_dataset, other_arguments, predicate, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'take_while_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'take_while_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_attr_Targuments, other_arguments = _execute.convert_to_mixed_eager_tensors(other_arguments, ctx)
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset] + list(other_arguments)
_attrs = ("predicate", predicate, "Targuments", _attr_Targuments,
"output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"TakeWhileDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"TakeWhileDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
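# Sketch of the public wrapper for the op above: `tf.data.experimental.take_while`
# stops the input pipeline as soon as the predicate returns False. Assumed
# available as shown in TF 2.x.
def _example_take_while_usage():
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  ds = tf.data.Dataset.range(10)
  ds = ds.apply(tf.data.experimental.take_while(lambda x: x < 5))
  return [int(x) for x in ds]  # [0, 1, 2, 3, 4]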
def thread_pool_dataset(input_dataset, thread_pool, output_types, output_shapes, name=None):
r"""Creates a dataset that uses a custom thread pool to compute `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
thread_pool: A `Tensor` of type `resource`.
A resource produced by the ThreadPoolHandle op.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ThreadPoolDataset", name,
tld.op_callbacks, input_dataset, thread_pool, "output_types",
output_types, "output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return thread_pool_dataset_eager_fallback(
input_dataset, thread_pool, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ThreadPoolDataset", input_dataset=input_dataset,
thread_pool=thread_pool,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ThreadPoolDataset = tf_export("raw_ops.ThreadPoolDataset")(_ops.to_raw_op(thread_pool_dataset))
def thread_pool_dataset_eager_fallback(input_dataset, thread_pool, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'thread_pool_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'thread_pool_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
thread_pool = _ops.convert_to_tensor(thread_pool, _dtypes.resource)
_inputs_flat = [input_dataset, thread_pool]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"ThreadPoolDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ThreadPoolDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
def thread_pool_handle(num_threads, display_name, max_intra_op_parallelism=1, container="", shared_name="", name=None):
r"""Creates a dataset that uses a custom thread pool to compute `input_dataset`.
Args:
num_threads: An `int`. The number of threads in the thread pool.
display_name: A `string`.
A human-readable name for the threads that may be visible in some
visualizations.
max_intra_op_parallelism: An optional `int`. Defaults to `1`.
The maximum degree of parallelism to use within operations that execute on this
threadpool.
container: An optional `string`. Defaults to `""`.
shared_name: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "ThreadPoolHandle", name,
tld.op_callbacks, "num_threads", num_threads,
"max_intra_op_parallelism", max_intra_op_parallelism, "display_name",
display_name, "container", container, "shared_name", shared_name)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return thread_pool_handle_eager_fallback(
num_threads=num_threads,
max_intra_op_parallelism=max_intra_op_parallelism,
display_name=display_name, container=container,
shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
num_threads = _execute.make_int(num_threads, "num_threads")
display_name = _execute.make_str(display_name, "display_name")
if max_intra_op_parallelism is None:
max_intra_op_parallelism = 1
max_intra_op_parallelism = _execute.make_int(max_intra_op_parallelism, "max_intra_op_parallelism")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"ThreadPoolHandle", num_threads=num_threads,
display_name=display_name,
max_intra_op_parallelism=max_intra_op_parallelism,
container=container, shared_name=shared_name,
name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("num_threads", _op._get_attr_int("num_threads"),
"max_intra_op_parallelism",
_op._get_attr_int("max_intra_op_parallelism"), "display_name",
_op.get_attr("display_name"), "container",
_op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"ThreadPoolHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
ThreadPoolHandle = tf_export("raw_ops.ThreadPoolHandle")(_ops.to_raw_op(thread_pool_handle))
def thread_pool_handle_eager_fallback(num_threads, display_name, max_intra_op_parallelism, container, shared_name, name, ctx):
num_threads = _execute.make_int(num_threads, "num_threads")
display_name = _execute.make_str(display_name, "display_name")
if max_intra_op_parallelism is None:
max_intra_op_parallelism = 1
max_intra_op_parallelism = _execute.make_int(max_intra_op_parallelism, "max_intra_op_parallelism")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("num_threads", num_threads, "max_intra_op_parallelism",
max_intra_op_parallelism, "display_name", display_name, "container",
container, "shared_name", shared_name)
_result = _execute.execute(b"ThreadPoolHandle", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"ThreadPoolHandle", _inputs_flat, _attrs, _result)
_result, = _result
return _result
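# Sketch of the two ThreadPool* ops above via `tf.raw_ops` (the signatures are
# the ones defined in this module). Wrapping the resulting variant back into a
# high-level `tf.data.Dataset` needs internal API, so this stops at producing
# the variant; `_variant_tensor` is a private attribute and an assumption here.
def _example_thread_pool_usage():
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  pool = tf.raw_ops.ThreadPoolHandle(
      num_threads=4, display_name="example_pool")
  ds = tf.data.Dataset.range(10)
  variant = tf.raw_ops.ThreadPoolDataset(
      input_dataset=ds._variant_tensor,  # private attribute; assumption
      thread_pool=pool,
      output_types=[tf.int64],
      output_shapes=[tf.TensorShape([])])
  return variant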
def unbatch_dataset(input_dataset, output_types, output_shapes, name=None):
r"""A dataset that splits the elements of its input into multiple elements.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "UnbatchDataset", name,
tld.op_callbacks, input_dataset, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return unbatch_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'unbatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'unbatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"UnbatchDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"UnbatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
UnbatchDataset = tf_export("raw_ops.UnbatchDataset")(_ops.to_raw_op(unbatch_dataset))
def unbatch_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'unbatch_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'unbatch_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"UnbatchDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"UnbatchDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
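# Sketch of the public wrapper for the op above: `tf.data.Dataset.unbatch`
# splits each element along its first dimension.
def _example_unbatch_usage():
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  ds = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
  ds = ds.unbatch()
  return [int(x) for x in ds]  # [1, 2, 3, 4, 5, 6]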
def uncompress_element(compressed, output_types, output_shapes, name=None):
r"""Uncompresses a compressed dataset element.
Args:
compressed: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A list of `Tensor` objects of type `output_types`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "UncompressElement", name,
tld.op_callbacks, compressed, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return uncompress_element_eager_fallback(
compressed, output_types=output_types, output_shapes=output_shapes,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'uncompress_element' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'uncompress_element' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"UncompressElement", compressed=compressed, output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"UncompressElement", _inputs_flat, _attrs, _result)
return _result
UncompressElement = tf_export("raw_ops.UncompressElement")(_ops.to_raw_op(uncompress_element))
def uncompress_element_eager_fallback(compressed, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'uncompress_element' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'uncompress_element' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
compressed = _ops.convert_to_tensor(compressed, _dtypes.variant)
_inputs_flat = [compressed]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"UncompressElement", len(output_types),
inputs=_inputs_flat, attrs=_attrs, ctx=ctx,
name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"UncompressElement", _inputs_flat, _attrs, _result)
return _result
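# Round-trip sketch for the op above, assuming the companion `CompressElement`
# op is present in this build (it is generated alongside UncompressElement in
# recent TF versions). Both are normally internal to the tf.data service.
def _example_uncompress_element_usage():
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  value = tf.constant([1, 2, 3], dtype=tf.int32)
  compressed = tf.raw_ops.CompressElement(components=[value])
  restored = tf.raw_ops.UncompressElement(
      compressed=compressed,
      output_types=[tf.int32],
      output_shapes=[tf.TensorShape([3])])
  return restored[0]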
def unique_dataset(input_dataset, output_types, output_shapes, name=None):
r"""Creates a dataset that contains the unique elements of `input_dataset`.
Args:
input_dataset: A `Tensor` of type `variant`.
output_types: A list of `tf.DTypes` that has length `>= 1`.
output_shapes: A list of shapes (each a `tf.TensorShape` or list of `ints`) that has length `>= 1`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `variant`.
"""
_ctx = _context._context or _context.context()
tld = _ctx._thread_local_data
if tld.is_eager:
try:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
_ctx._context_handle, tld.device_name, "UniqueDataset", name,
tld.op_callbacks, input_dataset, "output_types", output_types,
"output_shapes", output_shapes)
return _result
except _core._NotOkStatusException as e:
_ops.raise_from_not_ok_status(e, name)
except _core._FallbackException:
pass
try:
return unique_dataset_eager_fallback(
input_dataset, output_types=output_types,
output_shapes=output_shapes, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
# Add nodes to the TensorFlow graph.
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'unique_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'unique_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
_, _, _op, _outputs = _op_def_library._apply_op_helper(
"UniqueDataset", input_dataset=input_dataset,
output_types=output_types,
output_shapes=output_shapes, name=name)
_result = _outputs[:]
if _execute.must_record_gradient():
_attrs = ("output_types", _op.get_attr("output_types"), "output_shapes",
_op.get_attr("output_shapes"))
_inputs_flat = _op.inputs
_execute.record_gradient(
"UniqueDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
UniqueDataset = tf_export("raw_ops.UniqueDataset")(_ops.to_raw_op(unique_dataset))
def unique_dataset_eager_fallback(input_dataset, output_types, output_shapes, name, ctx):
if not isinstance(output_types, (list, tuple)):
raise TypeError(
"Expected list for 'output_types' argument to "
"'unique_dataset' Op, not %r." % output_types)
output_types = [_execute.make_type(_t, "output_types") for _t in output_types]
if not isinstance(output_shapes, (list, tuple)):
raise TypeError(
"Expected list for 'output_shapes' argument to "
"'unique_dataset' Op, not %r." % output_shapes)
output_shapes = [_execute.make_shape(_s, "output_shapes") for _s in output_shapes]
input_dataset = _ops.convert_to_tensor(input_dataset, _dtypes.variant)
_inputs_flat = [input_dataset]
_attrs = ("output_types", output_types, "output_shapes", output_shapes)
_result = _execute.execute(b"UniqueDataset", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=ctx, name=name)
if _execute.must_record_gradient():
_execute.record_gradient(
"UniqueDataset", _inputs_flat, _attrs, _result)
_result, = _result
return _result
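# Sketch of the public wrapper for the op above: `tf.data.experimental.unique`
# drops repeated elements while preserving first-seen order.
def _example_unique_usage():
  """Illustrative sketch only; not called anywhere in this module."""
  import tensorflow as tf
  ds = tf.data.Dataset.from_tensor_slices([1, 1, 2, 3, 3, 1])
  ds = ds.apply(tf.data.experimental.unique())
  return [int(x) for x in ds]  # [1, 2, 3]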
| 48.155571 | 372 | 0.708344 | 52,353 | 418,809 | 5.262411 | 0.0153 | 0.075266 | 0.053992 | 0.039488 | 0.929623 | 0.921101 | 0.912288 | 0.9075 | 0.903471 | 0.901954 | 0 | 0.002388 | 0.203253 | 418,809 | 8,696 | 373 | 48.161109 | 0.823249 | 0.165546 | 0 | 0.847605 | 1 | 0 | 0.17252 | 0.060308 | 0 | 0 | 0 | 0.00253 | 0.005838 | 1 | 0.026647 | false | 0.026647 | 0.001796 | 0 | 0.081737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
292bb9b384a73fe1162f08ece62915839fe71ee9 | 2,183 | py | Python | koalixcrm/crm/exceptions.py | Cataldir/koalixcrm | 87d125379845d6ab990c19500d63cbed4051040a | [
"BSD-3-Clause"
] | 290 | 2015-01-11T03:01:05.000Z | 2019-12-17T03:56:17.000Z | koalixcrm/crm/exceptions.py | Cataldir/koalixcrm | 87d125379845d6ab990c19500d63cbed4051040a | [
"BSD-3-Clause"
] | 178 | 2016-02-26T14:41:49.000Z | 2019-12-29T08:34:21.000Z | koalixcrm/crm/exceptions.py | Cataldir/koalixcrm | 87d125379845d6ab990c19500d63cbed4051040a | [
"BSD-3-Clause"
] | 124 | 2015-02-28T20:56:37.000Z | 2019-12-13T18:15:35.000Z | # -*- coding: utf-8 -*-
class TemplateSetMissing(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class TemplateMissingInTemplateSet(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class TemplateSetMissingInContract(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class TemplateFOPConfigFileMissing(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class TemplateXSLTFileMissing(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class NoSerializationPatternFound(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class OpenInterestAccountMissing(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class IncompleteInvoice(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class InvoiceAlreadyRegistered(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class UserIsNoHumanResource(Exception):
def __init__(self, value):
self.value = value
self.view = "/koalixcrm/crm/reporting/user_is_not_human_resource"
def __str__(self):
return repr(self.value)
class ReportingPeriodDoneDeleteNotPossible(Exception):
def __init__(self, value=None):
self.value = value
self.view = "/koalixcrm/crm/reporting/reporting_period_done_delete_not_possible"
def __str__(self):
return repr(self.value)
class ReportingPeriodNotFound(Exception):
def __init__(self, value):
self.value = value
self.view = "/koalixcrm/crm/reporting/reporting_period_missing"
def __str__(self):
return repr(self.value)
| 21.613861 | 88 | 0.670179 | 237 | 2,183 | 5.721519 | 0.164557 | 0.238938 | 0.141593 | 0.176991 | 0.728614 | 0.710177 | 0.710177 | 0.688791 | 0.638643 | 0.638643 | 0 | 0.000596 | 0.231791 | 2,183 | 100 | 89 | 21.83 | 0.80799 | 0.00962 | 0 | 0.746032 | 0 | 0 | 0.076852 | 0.076852 | 0 | 0 | 0 | 0 | 0 | 1 | 0.380952 | false | 0 | 0 | 0.190476 | 0.761905 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 9 |
462ea2bf45658be8b30e3a375c2f0edb5fcec79f | 19,591 | py | Python | NERuselocal4/NERuselocal4/NERuselocal/generateTrainDataFromTaggedFile.py | 15629069885/xiao_feixia | 12aa31f4dd3eff78b811d931b331c053977c5cff | [
"MIT"
] | 1 | 2020-02-27T06:35:06.000Z | 2020-02-27T06:35:06.000Z | NERuselocal4/NERuselocal4/NERuselocal/generateTrainDataFromTaggedFile.py | 15629069885/xiao_feixia | 12aa31f4dd3eff78b811d931b331c053977c5cff | [
"MIT"
] | null | null | null | NERuselocal4/NERuselocal4/NERuselocal/generateTrainDataFromTaggedFile.py | 15629069885/xiao_feixia | 12aa31f4dd3eff78b811d931b331c053977c5cff | [
"MIT"
] | null | null | null | #encoding=utf8
import csv

# Convert a tagged CSV export into character-level NER training data in the
# "character TAG" (BIO) format, split across example.dev / example.test /
# example.train.
rows = csv.reader(open("D:\\data\\taggeddata\\G45.901.csv", 'r'))
rowCount = 0
dev = open("example.dev", 'w', encoding='utf8')
train = open("example.train", 'w', encoding='utf8')
test = open("example.test", 'w', encoding='utf8')
# Sentence-ending punctuation that triggers a blank line (sentence break) in
# the output files. The full-width characters were mojibake in the original
# file; they are reconstructed here on a best guess.
huanhang = ['。', '!', '！', '"', '？', '?']
skip = ['-']
flag = 0            # used to skip the header row of each CSV file
lenthTotal = 0
devFlag = True      # which output split is currently being written
trainFlag = False
testFlag = False
# Approximate row counts of the tagged files; roughly 1/15 of the rows go to
# dev, another 1/15 to test, and the remainder to train.
lenthTotal = 29247
lenthTotal2 = 34543
lenthTotal3 = 29906
# NOTE: rows1 and rows2 are iterated over further below but were never defined
# in the original script (this would raise a NameError). They presumably come
# from two further tagged CSV exports; the file names below are placeholders.
rows1 = csv.reader(open("D:\\data\\taggeddata\\G45.902.csv", 'r'))
rows2 = csv.reader(open("D:\\data\\taggeddata\\G45.903.csv", 'r'))
biaoji = ['DIS', 'SYM', 'SGN', 'TES', 'DRU', 'SUR', 'PRE', 'PT', 'Dur', 'TP', 'REG', 'ORG', 'AT', 'PSB', 'DEG', 'FW','CL']
for row in rows:
if flag==0:
flag=1
continue
if len(row)==6 and devFlag :
rowCount+=1
if rowCount<lenthTotal/15-1:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
dev.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "B-"+ row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
dev.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
else:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
dev.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "B-"+row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
devFlag=False
testFlag=True
if row[2].strip() and row[3].strip():
dev.write(row[2].strip() + " " + row[3].strip() + "\n"+"\n")
if len(row)==6 and testFlag :
rowCount+=1
if rowCount > lenthTotal / 15 and rowCount < 2 * lenthTotal / 15-1 :
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
test.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "B-"+ row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
test.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
if rowCount >= 2 * lenthTotal / 15-1 and testFlag:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
test.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "B-"+row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
testFlag = False
trainFlag=True
if row[2].strip() and row[3].strip():
test.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
if len(row)==6 and trainFlag:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip() == 'O':
train.write(a.strip() + " " + row[3].strip() + "\n")
else:
if a == row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split()
assert len(text) == 2
train.write(a.strip() + " " + "B-" + row[3].strip() + "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split()
assert len(text) == 2
train.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split()
if len(text) != 2:
print(string)
if row[2].strip() and row[3].strip():
train.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
flag=0
devFlag = True
trainFlag = False
testFlag = False
rowCount=0
for row in rows1:
if flag==0:
flag=1
continue
if len(row)==6 and devFlag :
rowCount+=1
if rowCount<lenthTotal/15-1:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
dev.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "B-"+ row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
dev.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
else:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
dev.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "B-"+row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
devFlag=False
testFlag=True
if row[2].strip() and row[3].strip():
dev.write(row[2].strip() + " " + row[3].strip() + "\n"+"\n")
if len(row)==6 and testFlag :
rowCount+=1
if rowCount > lenthTotal / 15 and rowCount < 2 * lenthTotal / 15-1 :
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
test.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "B-"+ row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
test.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
if rowCount >= 2 * lenthTotal / 15-1 and testFlag:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
test.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "B-"+row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
testFlag = False
trainFlag=True
if row[2].strip() and row[3].strip():
test.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
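# remaining rows (after the dev and test quotas) go to the training split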
if len(row)==6 and trainFlag:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip() == 'O':
train.write(a.strip() + " " + row[3].strip() + "\n")
else:
if a == row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split()
assert len(text) == 2
train.write(a.strip() + " " + "B-" + row[3].strip() + "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split()
assert len(text) == 2
train.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split()
assert len(text) == 2
if row[2].strip() and row[3].strip():
train.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
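# Same pass over the next input file (rows2): skip its header row, then repeat the dev/test/train split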
flag=0
devFlag = True
trainFlag = False
testFlag = False
rowCount=0
for row in rows2:
if flag==0:
flag=1
continue
if len(row)==6 and devFlag :
rowCount+=1
if rowCount<lenthTotal/15-1:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
dev.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "B-"+ row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
dev.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
else:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
dev.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "B-"+row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
dev.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
devFlag=False
testFlag=True
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
dev.write(row[2].strip() + " " + row[3].strip() + "\n"+"\n")
if len(row)==6 and testFlag :
rowCount+=1
if rowCount > lenthTotal / 15 and rowCount < 2 * lenthTotal / 15-1 :
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
test.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "B-"+ row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split()
assert len(text) == 2
if row[2].strip() and row[3].strip():
test.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
if rowCount >= 2 * lenthTotal / 15-1 and testFlag:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip()=='O':
string = a.strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip()+" "+row[3].strip()+"\n")
else:
if a==row[2].strip()[0]:
string = a.strip() + " " + "B-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "B-"+row[3].strip()+ "\n")
else:
string = a.strip() + " " + "I-" + row[3].strip()
text = string.split( )
assert len(text) == 2
test.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
testFlag = False
trainFlag=True
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
test.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
if len(row)==6 and trainFlag:
if row[2].strip() and row[2].strip() not in huanhang:
for a in row[2].strip():
if row[3].strip() == 'O':
string = a.strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
train.write(a.strip() + " " + row[3].strip() + "\n")
else:
if a == row[2].strip()[0]:
string=a.strip() + " " + "B-" + row[3].strip()
text=string.split( )
assert len(text)==2
train.write(a.strip() + " " + "B-" + row[3].strip() + "\n")
else:
string=a.strip() + " " + "I-" + row[3].strip()
text=string.split( )
assert len(text)==2
train.write(a.strip() + " " + "I-" + row[3].strip() + "\n")
else:
string = row[2].strip() + " " + row[3].strip()
text = string.split( )
assert len(text) == 2
if row[2].strip() and row[3].strip():
train.write(row[2].strip() + " " + row[3].strip() + "\n" + "\n")
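# all input files processed: close the output files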
dev.close()
test.close()
train.close()
| 47.550971 | 122 | 0.354806 | 2,081 | 19,591 | 3.341663 | 0.040365 | 0.078804 | 0.177308 | 0.086281 | 0.940754 | 0.940754 | 0.940754 | 0.935145 | 0.935145 | 0.935145 | 0.000153 | 0.037946 | 0.463274 | 19,591 | 411 | 123 | 47.666667 | 0.62311 | 0.000664 | 0 | 0.950372 | 0 | 0.109181 | 0.027175 | 0.001686 | 0 | 0 | 0 | 0 | 0.114144 | 0 | null | null | 0 | 0.002481 | null | null | 0.002481 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
46a735427b1adc743be5a3c237189c910a122475 | 188 | py | Python | airtrack/src/definitions/__init__.py | ckarageorgkaneen/airtrack-pybpod | 86cad41dbea4f7ba496868d171758c348ed7c1f2 | [
"MIT"
] | 1 | 2021-09-16T17:42:29.000Z | 2021-09-16T17:42:29.000Z | airtrack/src/definitions/__init__.py | ckarageorgkaneen/airtrack-pybpod | 86cad41dbea4f7ba496868d171758c348ed7c1f2 | [
"MIT"
] | 12 | 2021-08-01T17:50:27.000Z | 2021-08-08T17:33:58.000Z | airtrack/src/definitions/__init__.py | ckarageorgkaneen/airtrack | 86cad41dbea4f7ba496868d171758c348ed7c1f2 | [
"MIT"
] | null | null | null | from airtrack.src.definitions.actuator import AirtrackActuatorState
from airtrack.src.definitions.camera import AirtrackCameraObject
from airtrack.src.definitions.sma import AirtrackState
| 47 | 67 | 0.888298 | 21 | 188 | 7.952381 | 0.52381 | 0.215569 | 0.269461 | 0.467066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 188 | 3 | 68 | 62.666667 | 0.948864 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
46a7e515b77f11d5f50c10a6f7eee30e9e5b5e9a | 60 | py | Python | pws/hash/__init__.py | pqlx/pws-crypto | 23dcf59d0b37d811d0a9bda995a3ea7d09051416 | [
"MIT"
] | 1 | 2020-12-10T01:14:29.000Z | 2020-12-10T01:14:29.000Z | pws/hash/__init__.py | pqlx/pws-crypto | 23dcf59d0b37d811d0a9bda995a3ea7d09051416 | [
"MIT"
] | null | null | null | pws/hash/__init__.py | pqlx/pws-crypto | 23dcf59d0b37d811d0a9bda995a3ea7d09051416 | [
"MIT"
] | null | null | null | from pws.hash.md5 import MD5
from pws.hash.sha1 import SHA1
| 20 | 30 | 0.8 | 12 | 60 | 4 | 0.5 | 0.291667 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.133333 | 60 | 2 | 31 | 30 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
46a9b4bcbdcc6547386bf1f91d1ac5549f1db5b6 | 10,878 | py | Python | tests/test_types_uscircuitcasemeta.py | eltrompetero/caseapi-wrapper-1 | d9bd83c530ced3d3576930ef2cdc522f3e2aab68 | [
"MIT"
] | null | null | null | tests/test_types_uscircuitcasemeta.py | eltrompetero/caseapi-wrapper-1 | d9bd83c530ced3d3576930ef2cdc522f3e2aab68 | [
"MIT"
] | null | null | null | tests/test_types_uscircuitcasemeta.py | eltrompetero/caseapi-wrapper-1 | d9bd83c530ced3d3576930ef2cdc522f3e2aab68 | [
"MIT"
] | 1 | 2021-10-03T20:24:35.000Z | 2021-10-03T20:24:35.000Z | from lcsscaseapi.types import CaseMeta, USCircuitCaseMeta
import datetime
import pytest
def test_eq():
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
circuit_name = USCircuitCaseMeta.FIFTH_CIRCUIT
)
obj2 = USCircuitCaseMeta(
case_id = "blah",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
circuit_name = USCircuitCaseMeta.FIFTH_CIRCUIT
)
obj2.case_id = "X44DV3"
assert obj1 == obj2
def test_eq_order():
# check that ordering of the fields doesn't affect equality
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
circuit_name = USCircuitCaseMeta.FIFTH_CIRCUIT
)
obj2 = USCircuitCaseMeta(
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
case_id = "blah",
case_name = "Barker v. United States",
circuit_name = USCircuitCaseMeta.FIFTH_CIRCUIT
)
obj2.case_id = "X44DV3"
assert obj1 == obj2
def test_eq_tag_order():
# check that ordering of the tags doesn't affect equality
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = USCircuitCaseMeta.SIXTH_CIRCUIT
)
obj2 = USCircuitCaseMeta(
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
case_id = "blah",
case_name = "Barker v. United States",
circuit_name = USCircuitCaseMeta.SIXTH_CIRCUIT,
tags = ["HELLO", "WORLD"]
)
obj2.case_id = "X44DV3"
assert obj1 == obj2
def test_neq():
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = USCircuitCaseMeta.SIXTH_CIRCUIT
)
obj2 = USCircuitCaseMeta(
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
case_id = "blah",
case_name = "Barker v. United States",
circuit_name = USCircuitCaseMeta.EIGHTH_CIRCUIT,
tags = ["HELLO", "WORLD"]
)
obj2.case_id = "X44DV3"
assert obj1 != obj2
def test_hash():
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = USCircuitCaseMeta.SIXTH_CIRCUIT
)
obj2 = USCircuitCaseMeta(
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
case_id = "blah",
case_name = "Barker v. United States",
circuit_name = USCircuitCaseMeta.SIXTH_CIRCUIT,
tags = ["HELLO", "WORLD"]
)
assert obj1.__hash__() != obj2.__hash__()
s=set()
s.add(obj1)
assert obj2 not in s
obj2.case_id = "X44DV3"
assert obj1.__hash__() == obj2.__hash__()
assert obj2 in s
def test_invalid_circuit_constructor():
# check that supplying an invalid circuit name triggers an exception with the right message
with pytest.raises(Exception) as e:
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = "12th Circuit"
)
assert str(e.value) == "circuit_name is not a valid circuit name or None. Valid names must be one of the following (or a None): Federal Circuit, 1st Circuit, 2nd Circuit, 3rd Circuit, 4th Circuit, 5th Circuit, 6th Circuit, 7th Circuit, 8th Circuit, 9th Circuit, 10th Circuit, 11th Circuit, DC Circuit"
def test_circuit_setter():
# check that the circuit_name setter only accepts certain valid options
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"]
)
obj1.circuit_name = USCircuitCaseMeta.TENTH_CIRCUIT
assert obj1.circuit_name == USCircuitCaseMeta.TENTH_CIRCUIT
with pytest.raises(Exception) as e:
obj1.circuit_name = "13th Circuit"
assert str(e.value) == "circuit_name is not a valid circuit name or None. Valid names must be one of the following (or a None): Federal Circuit, 1st Circuit, 2nd Circuit, 3rd Circuit, 4th Circuit, 5th Circuit, 6th Circuit, 7th Circuit, 8th Circuit, 9th Circuit, 10th Circuit, 11th Circuit, DC Circuit"
obj1.circuit_name = None
assert obj1.circuit_name == None
def test_circuit_num():
# test that the circuit_num method works as expected
obj1 = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = USCircuitCaseMeta.ELEVENTH_CIRCUIT
)
assert obj1.circuit_num() == 11
obj1.circuit_name = USCircuitCaseMeta.FIRST_CIRCUIT
assert obj1.circuit_num() == 1
obj1.circuit_name = None
assert obj1.circuit_num() == None
def test_from_json_dict():
data = {
'case_id': 'X1111',
'circuit_name': USCircuitCaseMeta.THIRD_CIRCUIT,
'outcome': 'Affirmed (In Part)',
'date': '1972-03-01',
'self_cite': None
}
ucm = USCircuitCaseMeta.from_json_dict(data)
assert ucm == USCircuitCaseMeta(
case_id= "X1111",
circuit_name = USCircuitCaseMeta.THIRD_CIRCUIT,
outcome = "Affirmed (In Part)",
date = datetime.date(1972,3,1),
self_cite = None
)
def test_from_json_dict_invalid_circuit():
data = {
'case_id': 'X1111',
'circuit_name': 'Twelfth Court',
'outcome': 'Affirmed (In Part)'
}
with pytest.raises(Exception) as e:
USCircuitCaseMeta.from_json_dict(data)
assert str(e.value) == "circuit_name is not a valid circuit name or None. Valid names must be one of the following (or a None): Federal Circuit, 1st Circuit, 2nd Circuit, 3rd Circuit, 4th Circuit, 5th Circuit, 6th Circuit, 7th Circuit, 8th Circuit, 9th Circuit, 10th Circuit, 11th Circuit, DC Circuit"
def test_from_json_dict_no_circuit():
data = {
'case_id': 'X1111',
'circuit_name': None,
'outcome': 'Affirmed (In Part)',
'date': '1972-03-01',
'self_cite': None
}
ucm = USCircuitCaseMeta.from_json_dict(data)
assert ucm == USCircuitCaseMeta(
case_id= "X1111",
circuit_name = None,
outcome = "Affirmed (In Part)",
date = datetime.date(1972,3,1),
self_cite = None
)
def test_to_json_dict():
ucm = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = USCircuitCaseMeta.ELEVENTH_CIRCUIT,
date = datetime.date(1984,8,5)
)
# tags should be sorted alphabetically, so the same object always produces the same dictionary
data = {
'case_id' : "X44DV3",
'case_name' : "Barker v. United States",
'title' : "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
'doc_title' : "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
'tags' : ["HELLO", "WORLD"],
'circuit_name' : USCircuitCaseMeta.ELEVENTH_CIRCUIT,
'doc_id': None,
'doc_type': None,
'docket_number': None,
'outcome': None,
'self_cite': None,
'date': '1984-08-05'
}
assert ucm.to_json_dict() == data
def test_to_json_dict_no_date():
ucm = USCircuitCaseMeta(
case_id = "X44DV3",
case_name = "Barker v. United States",
title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
doc_title = "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
tags = ["WORLD", "HELLO"],
circuit_name = USCircuitCaseMeta.ELEVENTH_CIRCUIT,
self_cite = None
)
# tags should be sorted alphabetically, so the same object always produces the same dictionary
data = {
'case_id' : "X44DV3",
'case_name' : "Barker v. United States",
'title' : "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
'doc_title' : "Barker v. United States, 198 F.2d 932 (9th Cir. 1952), Court Opinion",
'tags' : ["HELLO", "WORLD"],
'circuit_name' : USCircuitCaseMeta.ELEVENTH_CIRCUIT,
'doc_id': None,
'doc_type': None,
'docket_number': None,
'outcome': None,
'self_cite': None,
'date': None
}
assert ucm.to_json_dict() == data
| 37.902439 | 305 | 0.619875 | 1,397 | 10,878 | 4.689334 | 0.105941 | 0.054496 | 0.101206 | 0.147916 | 0.914212 | 0.895894 | 0.847962 | 0.821096 | 0.821096 | 0.81957 | 0 | 0.076913 | 0.270914 | 10,878 | 287 | 306 | 37.902439 | 0.749086 | 0.046884 | 0 | 0.708696 | 0 | 0.013043 | 0.410675 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.056522 | false | 0 | 0.013043 | 0 | 0.069565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d3d7a121226cd3ba7fe4c83accb27e8855874ebb | 5,356 | py | Python | dep2label/decoding.py | mstrise/seq2label-crossrep | db55c42ece8ab02af9c170eaba1d503b494032cc | [
"MIT"
] | 13 | 2019-07-02T22:27:17.000Z | 2021-11-20T10:39:20.000Z | dep2label/decoding.py | mstrise/seq2label-crossrep | db55c42ece8ab02af9c170eaba1d503b494032cc | [
"MIT"
] | null | null | null | dep2label/decoding.py | mstrise/seq2label-crossrep | db55c42ece8ab02af9c170eaba1d503b494032cc | [
"MIT"
] | 1 | 2021-02-03T12:36:53.000Z | 2021-02-03T12:36:53.000Z | def decode_3(decoded_sentence, root):
decoded_words = {}
homeless_nodes = {}
# 1 : ['The', 'DT', '+1', 'det', 'NN']
for index_of_word in decoded_sentence:
if not index_of_word == 0:
word_line = decoded_sentence.get(index_of_word)
info_about_word = word_line
found_head = False
if not word_line[2]=="-EOS-" and not word_line[2]=="-BOS-":
position_head = int(word_line[2])
post_of_head = word_line[4]
abs_posit = abs(position_head)
abs_posit_minus = abs_posit - 1
abs_posit_plus = abs_posit + 1
# find head with the relative position -1,-2....
if position_head < 0:
words_full_info = assignHeadL(index_of_word, info_about_word, post_of_head,
decoded_sentence, abs_posit)
if words_full_info:
decoded_words.update({index_of_word: words_full_info})
found_head = True
elif not abs_posit_minus == 0:
words_full_info = assignHeadL(index_of_word, info_about_word, post_of_head,
decoded_sentence, abs_posit_minus)
if words_full_info:
decoded_words.update({index_of_word: words_full_info})
found_head = True
else:
words_full_info = assignHeadL(index_of_word, info_about_word, post_of_head,
decoded_sentence, abs_posit_plus)
if words_full_info:
found_head = True
decoded_words.update({index_of_word: words_full_info})
# find head with the relative position +1,+2....
elif position_head > 0:
found_head = False
words_full_info = assignHeadR(index_of_word, info_about_word, post_of_head,
decoded_sentence, abs_posit)
if words_full_info:
decoded_words.update({index_of_word: words_full_info})
found_head = True
elif not abs_posit_minus == 0:
words_full_info = assignHeadR(index_of_word, info_about_word, post_of_head,
decoded_sentence, abs_posit_minus)
if words_full_info:
decoded_words.update({index_of_word: words_full_info})
found_head = True
else:
words_full_info = assignHeadR(index_of_word, info_about_word, post_of_head,
decoded_sentence, abs_posit_plus)
if words_full_info:
decoded_words.update({index_of_word: words_full_info})
found_head = True
if not found_head:
words_full_info = {1: index_of_word, 2: info_about_word[0],
3: "_", 4: info_about_word[1],
5: -1, 6: info_about_word[3]}
homeless_nodes.update({index_of_word: words_full_info})
decoded_words.update({index_of_word: words_full_info})
else:
words_full_info = {1: index_of_word, 2: info_about_word[0],
3: "_", 4: info_about_word[1],
5: -1, 6: root}
homeless_nodes.update({index_of_word: words_full_info})
decoded_words.update({index_of_word: words_full_info})
return decoded_words, homeless_nodes
def assignHeadL(node_index, info_about_word, head, decoded, abs_posit):
# assign new head to the wrong roots
count_posit = 0
# find head with the relative position -1,-2....
for index in range(node_index - 1, -1, -1):
info_candidate_word = decoded[index]
postag_candidate = info_candidate_word[1]
if postag_candidate == head:
count_posit += 1
if abs_posit == count_posit:
words_full_info = {1: node_index, 2: info_about_word[0],
3: "_", 4: info_about_word[1],
5: index, 6: info_about_word[3]}
return words_full_info
def assignHeadR(node_index, info_about_word, head, decoded, abs_posit):
count_posit = 0
# find head with the relative position +1,+2....
for index in range(node_index + 1, len(decoded)):
info_candidate_word = decoded[index]
postag_candidate = info_candidate_word[1]
if postag_candidate == head:
count_posit += 1
if abs_posit == count_posit:
words_full_info = {1: node_index, 2: info_about_word[0],
3: "_", 4: info_about_word[1],
5: index, 6: info_about_word[3]}
return words_full_info
| 46.982456 | 99 | 0.508402 | 593 | 5,356 | 4.173693 | 0.109612 | 0.101818 | 0.147071 | 0.068687 | 0.809293 | 0.803636 | 0.788687 | 0.788687 | 0.76202 | 0.711919 | 0 | 0.022229 | 0.420463 | 5,356 | 113 | 100 | 47.39823 | 0.775129 | 0.048357 | 0 | 0.738636 | 0 | 0 | 0.00275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034091 | false | 0 | 0 | 0 | 0.068182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
313cd13194f73a14c32dbf9b2d2a00397b6e7525 | 147 | py | Python | test.py | CRImier/pyrtitions | 5be4e72e25b735144a74c5ec76300e71d0a1b445 | [
"MIT"
] | null | null | null | test.py | CRImier/pyrtitions | 5be4e72e25b735144a74c5ec76300e71d0a1b445 | [
"MIT"
] | null | null | null | test.py | CRImier/pyrtitions | 5be4e72e25b735144a74c5ec76300e71d0a1b445 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pyrtitions
assert(pyrtitions.label_filter("привет") == "privet")
assert(pyrtitions.label_filter("#####") == None)
| 21 | 53 | 0.666667 | 16 | 147 | 6 | 0.6875 | 0.333333 | 0.4375 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007576 | 0.102041 | 147 | 6 | 54 | 24.5 | 0.719697 | 0.142857 | 0 | 0 | 0 | 0 | 0.137097 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
31469f8ec0f71a04dbcc2b76daa40b3332e02cfc | 973 | py | Python | tests/unit_tests/test_text_to_concept.py | JobtechSwe/sokannonser-api | 84214c51429fcedffa9a5d7d93afd9fdc080dcbb | [
"Apache-2.0"
] | 14 | 2018-09-12T14:08:54.000Z | 2021-09-20T11:54:20.000Z | tests/unit_tests/test_text_to_concept.py | JobtechSwe/sokannonser-api | 84214c51429fcedffa9a5d7d93afd9fdc080dcbb | [
"Apache-2.0"
] | 43 | 2018-09-25T14:39:02.000Z | 2021-10-01T08:40:23.000Z | tests/unit_tests/test_text_to_concept.py | JobtechSwe/sokannonser-api | 84214c51429fcedffa9a5d7d93afd9fdc080dcbb | [
"Apache-2.0"
] | 8 | 2018-11-21T23:51:47.000Z | 2021-06-04T10:34:16.000Z | import pytest
from sokannonser.repository.helpers import clean_plus_minus
pytestmark = pytest.mark.unit
def test_clean_plus_minus():
"""
+ and - signs at the beginning of words are removed, those at the end or in the middle are not removed
"""
cleaned_text = clean_plus_minus(
'-mållare målare +undersköterska java-utvecklare -key account manager c-sharp -java -noggrann flexibel')
assert cleaned_text == 'mållare målare undersköterska java-utvecklare key account manager c-sharp java noggrann flexibel'
def test_clean_plus_minus2():
"""
+ and - signs at the beginning of words are removed, those at the end or in the middle are not removed
"""
cleaned_text = clean_plus_minus(
'mållare- målare undersköterska+ java-utvecklare key- account manager c-sharp -java- -noggrann- flexibel')
assert cleaned_text == 'mållare- målare undersköterska+ java-utvecklare key- account manager c-sharp java- noggrann- flexibel'
| 42.304348 | 130 | 0.743063 | 132 | 973 | 5.356061 | 0.340909 | 0.063649 | 0.079208 | 0.175389 | 0.81471 | 0.81471 | 0.81471 | 0.81471 | 0.81471 | 0.81471 | 0 | 0.001258 | 0.182939 | 973 | 22 | 131 | 44.227273 | 0.88805 | 0.210689 | 0 | 0.181818 | 0 | 0.090909 | 0.544098 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3156527915583b5bfb4ada6769f3b425874e9f47 | 10,731 | py | Python | tests/test_dec_RuleCheckAnyKw.py | Amourspirit/python-kwargshelper | 4851ad69cf26f0656bc4264c70f956226bf5017e | [
"MIT"
] | null | null | null | tests/test_dec_RuleCheckAnyKw.py | Amourspirit/python-kwargshelper | 4851ad69cf26f0656bc4264c70f956226bf5017e | [
"MIT"
] | 4 | 2021-10-16T20:11:42.000Z | 2021-12-11T09:54:06.000Z | tests/test_dec_RuleCheckAnyKw.py | Amourspirit/python-kwargshelper | 4851ad69cf26f0656bc4264c70f956226bf5017e | [
"MIT"
] | null | null | null | import unittest
if __name__ == '__main__':
import os
import sys
sys.path.append(os.path.realpath('.'))
from kwhelp.exceptions import RuleError
from kwhelp import rules
from kwhelp.decorator import DecFuncEnum, RuleCheckAnyKw
from tests.ex_logger import test_logger, clear_log, get_logged_errors
from tests.ex_log_adapter import LogIndentAdapter
class TestRuleCheckAnyKw(unittest.TestCase):
def test_rule_check_anykw_gen(self):
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs])
def foo(first, last, **kwargs):
d = {**kwargs}
d["first"] = first
d["last"] = last
return d
result = foo(first=1, last=100, hours=12.5, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12.5
assert result["name"] == "test"
result = foo(first=1, last=100, hours=12, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12
assert result["name"] == "test"
with self.assertRaises(RuleError):
foo(first="1", last=100, hours=12.5, name="test")
with self.assertRaises(RuleError):
foo(first=1, last="100", hours=12.5, name="test")
with self.assertRaises(RuleError):
foo(first=1, last=100, hours="12.5", name="test")
with self.assertRaises(RuleError):
foo(first=1, last=100, hours=12.5, name=" ")
with self.assertRaises(RuleError):
foo(first=-1, last=100, hours=12.5, name="test")
with self.assertRaises(RuleError):
foo(first=1, last=100, hours=-12.5, name="test")
def test_rule_check_anykw_opt_return(self):
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs], opt_return=None)
def foo(first, last, **kwargs):
d = {**kwargs}
d["first"] = first
d["last"] = last
return d
result = foo(first=1, last=100, hours=12.5, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12.5
assert result["name"] == "test"
result = False
result = foo(first="1", last=100, hours=12.5, name="test")
assert result == None
result = False
result = foo(first=1, last="100", hours=12.5, name="test")
assert result == None
result = False
result = foo(first=1, last=100, hours="12.5", name="test")
assert result == None
result = False
result = foo(first=1, last=100, hours=12.5, name="")
assert result == None
def test_rule_check_anykw_raise_error_opt_return(self):
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs], opt_return=None, raise_error=False)
def foo(first, last, **kwargs):
d = {**kwargs}
d["first"] = first
d["last"] = last
return d
assert foo.is_rules_any_valid == True
result = foo(first=1, last=100, hours=12.5, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12.5
assert result["name"] == "test"
assert foo.is_rules_any_valid == True
result = False
result = foo(first="1", last=100, hours=12.5, name="test")
assert result == None
assert foo.is_rules_any_valid == False
result = False
result = foo(first=1, last="100", hours=12.5, name="test")
assert result == None
assert foo.is_rules_any_valid == False
result = False
result = foo(first=1, last=100, hours="12.5", name="test")
assert result == None
assert foo.is_rules_any_valid == False
result = False
result = foo(first=1, last=100, hours=12.5, name=" ")
assert result == None
assert foo.is_rules_any_valid == False
class TestRuleCheckAnyKwClass(unittest.TestCase):
def test_rule_check_anykw_gen(self):
class Bar:
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs], ftype=DecFuncEnum.METHOD)
def foo(self, first, last, **kwargs):
d = {**kwargs}
d["first"] = first
d["last"] = last
return d
b = Bar()
result = b.foo(first=1, last=100, hours=12.5, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12.5
assert result["name"] == "test"
with self.assertRaises(RuleError):
b.foo(first="1", last=100, hours=12.5, name="test")
with self.assertRaises(RuleError):
b.foo(first=1, last="100", hours=12.5, name="test")
with self.assertRaises(RuleError):
b.foo(first=1, last=100, hours="12.5", name="test")
with self.assertRaises(RuleError):
b.foo(first=1, last=100, hours=12.5, name=" ")
with self.assertRaises(RuleError):
b.foo(first=-1, last=100, hours=12.5, name="test")
with self.assertRaises(RuleError):
b.foo(first=1, last=100, hours=-12.5, name="test")
def test_rule_check_anykw_opt_return(self):
class Bar:
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs],
opt_return=None, ftype=DecFuncEnum.METHOD)
def foo(self, first, last, **kwargs):
d = {**kwargs}
d["first"] = first
d["last"] = last
return d
b = Bar()
result = b.foo(first=1, last=100, hours=12.5, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12.5
assert result["name"] == "test"
result = False
result = b.foo(first="1", last=100, hours=12.5, name="test")
assert result == None
result = False
result = b.foo(first=1, last="100", hours=12.5, name="test")
assert result == None
result = False
result = b.foo(first=1, last=100, hours="12.5", name="test")
assert result == None
result = False
result = b.foo(first=1, last=100, hours=12.5, name="")
assert result == None
def test_rule_check_anykw_raise_error_opt_return(self):
class Bar:
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs],
opt_return=None, raise_error=False,
ftype=DecFuncEnum.METHOD)
def foo(self, first, last, **kwargs):
d = {**kwargs}
d["first"] = first
d["last"] = last
return d
b = Bar()
assert b.foo.is_rules_any_valid == True
result = b.foo(first=1, last=100, hours=12.5, name="test")
assert result["first"] == 1
assert result["last"] == 100
assert result["hours"] == 12.5
assert result["name"] == "test"
assert b.foo.is_rules_any_valid == True
result = False
result = b.foo(first="1", last=100, hours=12.5, name="test")
assert result == None
assert b.foo.is_rules_any_valid == False
result = False
result = b.foo(first=1, last="100", hours=12.5, name="test")
assert result == None
assert b.foo.is_rules_any_valid == False
result = False
result = b.foo(first=1, last=100, hours="12.5", name="test")
assert result == None
assert b.foo.is_rules_any_valid == False
result = False
result = b.foo(first=1, last=100, hours=12.5, name=" ")
assert result == None
assert b.foo.is_rules_any_valid == False
class TestRuleCheckAnyKwLogger(unittest.TestCase):
# region setup/teardown
@classmethod
def setUpClass(cls):
cls.log_adapt = LogIndentAdapter(test_logger, {})
cls.logger = test_logger
@classmethod
def tearDownClass(cls):
pass
def setUp(self):
pass
def tearDown(self):
pass
# endregion setup/teardown
def test_rule_check_anykw_gen(self):
for i in range(2):
clear_log()
if i == 0:
log = self.logger
else:
log = self.log_adapt
@RuleCheckAnyKw(arg_info={"first": 0, "last": 0, "hours": 1, "name": 2},
rules=[rules.RuleIntPositive,
(rules.RuleIntPositive, rules.RuleFloatPositive),
rules.RuleStrNotNullEmptyWs], opt_logger=log)
def foo(first, last, **kwargs):
pass
with self.assertRaises(RuleError):
foo(first="1", last=100, hours=12.5, name="test")
with self.assertRaises(RuleError):
foo(first=1, last="100", hours=12.5, name="test")
with self.assertRaises(RuleError):
foo(first=1, last=100, hours="12.5", name="test")
with self.assertRaises(RuleError):
foo(first=1, last=100, hours=12.5, name=" ")
with self.assertRaises(RuleError):
foo(first=-1, last=100, hours=12.5, name="test")
with self.assertRaises(RuleError):
foo(first=1, last=100, hours=-12.5, name="test")
errors = get_logged_errors()
assert len(errors) == 6
if __name__ == '__main__':
unittest.main()
| 41.593023 | 89 | 0.546081 | 1,265 | 10,731 | 4.540711 | 0.072727 | 0.050139 | 0.064067 | 0.092792 | 0.881616 | 0.87796 | 0.87796 | 0.871344 | 0.860376 | 0.833565 | 0 | 0.049897 | 0.31833 | 10,731 | 257 | 90 | 41.754864 | 0.735338 | 0.004287 | 0 | 0.821577 | 0 | 0 | 0.051301 | 0 | 0 | 0 | 0 | 0 | 0.311203 | 1 | 0.074689 | false | 0.016598 | 0.033195 | 0 | 0.157676 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
318070b61f4bb744494229b99739acc7c687c072 | 16,934 | py | Python | controllers/calculos_controller.py | SergioCMDev/Busines-Inteligence-applied-to-tourism | 61834a46fce22453e94b7bbdf8d4ecdcf128285a | [
"Apache-2.0"
] | null | null | null | controllers/calculos_controller.py | SergioCMDev/Busines-Inteligence-applied-to-tourism | 61834a46fce22453e94b7bbdf8d4ecdcf128285a | [
"Apache-2.0"
] | null | null | null | controllers/calculos_controller.py | SergioCMDev/Busines-Inteligence-applied-to-tourism | 61834a46fce22453e94b7bbdf8d4ecdcf128285a | [
"Apache-2.0"
] | null | null | null | import connexion
from swagger_server.models.body1 import Body1
from swagger_server.models.body2 import Body2
from swagger_server.models.body3 import Body3
from swagger_server.models.body4 import Body4
from swagger_server.models.body5 import Body5
from swagger_server.models.body6 import Body6
from swagger_server.models.body7 import Body7
from ..Utilidades.UtilidadesTensorFlow import UtilidadesTensorFlow as Tensorflow
tensorflow = Tensorflow()
from ..Utilidades.Conversores import Conversores
conversor = Conversores()
from ..Utilidades.DeteccionOutliers import DeteccionOutliers
outliers = DeteccionOutliers()
from ..Utilidades.Graphics import Graphics as Graphics
graphics = Graphics()
def obtener_outliers_ciudad_cantidad(ciudadInicioIniciales, CiudadFinIniciales, Metodo, body): #OK
"""
Obtener los valores fuera de lo comun dado unos valores iniciales y unos valores a tratar
Este metodo trata los valores de cada mes de cada año y tras pasarle valores a probar decide los inliers y los outliers
:param ciudadInicioIniciales: Ciudad inicial de la matriz de datos
:type ciudadInicioIniciales: str
:param CiudadFinIniciales: Ciudad final de la matriz de datos, las siguientes se probaran mediante los metodos de deteccion de outliers
:type CiudadFinIniciales: str
:param Metodo: Metodo a usar para obtener los outliers
:type Metodo: str
:param body: Datos de entrada obtenidos previamente junto con los datos a testear
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body3.from_dict(d) for d in connexion.request.get_json()]
listaLabels = list()
listaLabels.append('Ciudad')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, CiudadFinIniciales)
# print('Lista VALORES')
# print(listaValores)
# print("\n")
#
# print('Lista VALORES A COMPROBAR')
# print(listaValoresAComprobar)
#
# print('Lista LABELS')
# print(listaLabels)
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
# print('MATRIZ')
# print(matriz)
#
# print("\n")
#
# print('Lista COLUMNAS')
# print(listaColumnas)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, ciudadInicioIniciales, CiudadFinIniciales, listaValoresAComprobar, listaLabels, Metodo)
# print(listaValoresOutliers)
# print("\n")
# print(listaValoresInliers)
# outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, ciudadInicioIniciales, CiudadFinIniciales, listaValoresAComprobar, listaLabels, listaColumnas)
# outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, ciudadInicioIniciales, CiudadFinIniciales, listaValoresAComprobar, listaLabels, listaColumnas)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
#OK
"""
"[{\"Anio\":2009,\"Numero_Vuelos\":209861},{\"Anio\":2010,\"Numero_Vuelos\":208851},{\"Anio\":2011,\"Numero_Vuelos\":205476},{\"Anio\":2012,\"Numero_Vuelos\":130233},{\"Anio\":2013,\"Numero_Vuelos\":121931},{\"Anio\":2014,\"Numero_Vuelos\":126893},{\"Anio\":2015,\"Numero_Vuelos\":139735}]"
"""
def obtener_outliers_inliers_anios_cantidad(AnioInicio, AnioFin, Metodo, body): #OK
"""
Get the outlier values given some initial values and some values to test
Get the outlier values given some initial values and some values to test
:param AnioInicio: First year of the data matrix
:type AnioInicio: str
:param AnioFin: Last year of the data matrix; from this year on we take the data to test
:type AnioFin: str
:param Metodo: Method to use to obtain the outliers
:type Metodo: str
:param body: Previously obtained input data
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body1.from_dict(d) for d in connexion.request.get_json()]
listaLabels = list()
listaLabels.append('Anio')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, AnioFin)
# print(listaValoresAComprobar)
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
# print(matriz)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, Metodo)
outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaColumnas)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
# outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaColumnas)
# outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaColumnas)
#OK
def obtener_outliers_pais_cantidad(PaisInicioIniciales, PaisFinIniciales, Metodo, body): #OK
"""
Get the outlier values given some initial values and some values to test
This method takes the values of each month of each year and, given values to test, decides which are inliers and which are outliers
:param PaisInicioIniciales: First country of the data matrix
:type PaisInicioIniciales: str
:param PaisFinIniciales: Last country of the data matrix; the following data are tested with the outlier-detection methods
:type PaisFinIniciales: str
:param Metodo: Method to use to obtain the outliers
:type Metodo: str
:param body: Previously obtained input data together with the data to test
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body4.from_dict(d) for d in connexion.request.get_json()]
listaLabels = list()
listaLabels.append('Pais')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, PaisFinIniciales)
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, PaisInicioIniciales, PaisFinIniciales, listaValoresAComprobar, listaLabels, Metodo)
# print(listaValoresOutliers)
# print("\n")
# print(listaValoresInliers)
outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, PaisInicioIniciales, PaisFinIniciales, listaValoresAComprobar, listaLabels, listaColumnas)
outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, PaisInicioIniciales, PaisFinIniciales, listaValoresAComprobar, listaLabels, listaColumnas)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
#OK
def obtener_outliers_inliers_mes_cantidad(MesInicioIniciales, MesFinIniciales, Metodo, body): #OK
"""
Get the outlier values given some initial values and some values to test
This method takes the values of each month of each year and, given values to test, decides which are inliers and which are outliers
:param MesInicioIniciales: First month of the data matrix
:type MesInicioIniciales: str
:param MesFinIniciales: Last month of the data matrix; the following ones are tested with the outlier-detection methods
:type MesFinIniciales: str
:param Metodo: Method to use to obtain the outliers
:type Metodo: str
:param body: Previously obtained input data together with the data to test
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body2.from_dict(d) for d in connexion.request.get_json()]
listaLabels = list()
listaLabels.append('Mes')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, MesFinIniciales)
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, MesInicioIniciales, MesFinIniciales, listaValoresAComprobar, listaLabels, Metodo)
# print(listaValoresOutliers)
# print("\n")
# print(listaValoresInliers)
# outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, MesInicioIniciales, MesFinIniciales, listaValoresAComprobar, listaLabels, listaColumnas)
# outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, MesInicioIniciales, MesFinIniciales, listaValoresAComprobar, listaLabels, listaColumnas)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
# OK
def obtener_outliers_inliers_anios_pais_cantidad(AnioInicio, AnioFin, Metodo, body):#OK
"""
Get the outlier values given some initial values and some values to test
This method takes the values of each month of each year and, given values to test, decides which are inliers and which are outliers
:param AnioInicio: First year of the data matrix
:type AnioInicio: int
:param AnioFin: Last year of the data matrix; the test data start after this year
:type AnioFin: int
:param Metodo: Method to use to obtain the outliers
:type Metodo: str
:param body: Previously obtained input data together with the data to test
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body5.from_dict(d) for d in connexion.request.get_json()]
listaLabels = list()
listaLabels.append('Anio')
listaLabels.append('Pais')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, AnioFin)
# print( listaValores)
# print("\n")
# print(listaValoresAComprobar)
# print("\n")
# print(listaValoresCentrales)
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, Metodo)
# print(listaColumnas)
# print(listaValoresOutliers)
# print("\n")
# print(listaValoresInliers)
# outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaValoresCentrales)
# outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaValoresCentrales)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
# OK
def obtener_outliers_inliers_anios_mes_cantidad(AnioInicio, AnioFin, Metodo, body): #OK
"""
Get the outlier values given some initial values and some values to test
This method takes the values of each month of each year and, given values to test, decides which are inliers and which are outliers
:param AnioInicio: First year of the data matrix
:type AnioInicio: int
:param AnioFin: Last year of the data matrix
:type AnioFin: int
:param Metodo: Method to use to obtain the outliers
:type Metodo: str
:param body: Previously obtained input data together with the data to test
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body7.from_dict(d) for d in connexion.request.get_json()]
listaLabels = list()
listaLabels.append('Anio')
listaLabels.append('Mes')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, AnioFin)
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
# print(matriz)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, Metodo)
# print(listaValoresOutliers)
# print("\n")
#
# print(listaValoresInliers)
#
outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, AnioInicio, AnioFin,listaValoresAComprobar, listaLabels, listaColumnas)
outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, AnioInicio, AnioFin,listaValoresAComprobar, listaLabels, listaColumnas)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
#OK
def obtener_outliers_inliers_anios_ciudad_cantidad(AnioInicio, AnioFin, Metodo, body):
"""
Get the outlier values given some initial values and some values to test
This method takes the values of each city in each year and, given values to test, decides which are inliers and which are outliers
:param AnioInicio: First year of the data matrix
:type AnioInicio: int
:param AnioFin: Last year of the data matrix; from this year on we take the data to test
:type AnioFin: int
:param Metodo: Method to use to obtain the outliers
:type Metodo: str
:param body: Previously obtained input data together with the data to test
:type body: list | bytes
:rtype: Dict[str, int]
"""
if connexion.request.is_json:
body = [Body6.from_dict(d) for d in connexion.request.get_json()]
# print(body)
listaLabels = list()
listaLabels.append('Anio')
listaLabels.append('Ciudad')
listaLabels.append('Cantidad')
listaValores, listaValoresAComprobar, listaValoresCentrales = conversor.separarValoresBody(body, AnioFin)
# print( listaValores)
# print("\n")
# print(listaValoresAComprobar)
# print("\n")
# print(listaValoresCentrales)
#OK
matriz, listaColumnas = conversor.ConvertirTuplasToMatriz(listaValores, listaLabels)
# print(listaColumnas)
listaValoresOutliers, listaValoresInliers = outliers.ObtenerOutliersDadaMatrizAniosYTipo(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, Metodo)
# print(listaValoresOutliers)
# print("\n")
# print(listaValoresInliers)
# TODO: produce both plots
# print(listaColumnas)
# outliers.MostrarOutliersMedianteEnvolturaElipticaDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaValoresCentrales)
# outliers.MostrarOutliersMedianteIsolationForestDadosDatos(matriz, AnioInicio, AnioFin, listaValoresAComprobar, listaLabels, listaValoresCentrales)
# print(listaValoresOutliers)
# print(listaValoresCentrales)
return conversor.ObtenerJSONDeListasOutliersInliers(listaValoresInliers, listaValoresOutliers, listaLabels, listaValoresCentrales)
#OK
"""
Test in Postman with the input [{"Anio": 2009,"Cantidad": 46453}, {"Anio": 2010,"Cantidad": 44721}, {"Anio": 2011,"Cantidad": 61420}]
"""
def obtener_progresion_lineal(AnioPrediccion, body):
#def obtener_progresion_lineal_anio_cantidad(AnioPrediccion, body):
"""
Get the prediction for a year given the year to predict and the data from several previous years
Get the prediction for a year given the year to predict and the data from several previous years
:param AnioPrediccion: Year for which to predict the amount
:type AnioPrediccion: str
:param body: Previously obtained input data
:type body: list | bytes
:rtype: int
"""
prediccionCantidad = -1
if connexion.request.is_json:
body = [Body1.from_dict(d) for d in connexion.request.get_json()]
lista = list()
print(body)
for item in body:
if(hasattr(item, 'anio') and hasattr(item, 'cantidad')):
tupla = (item.anio, item.cantidad)
lista.append(tupla)
prediccionCantidad = tensorflow.ObtenerProgresionLineal(lista, int(AnioPrediccion))
return int(prediccionCantidad)
| 48.245014 | 290 | 0.738632 | 1,684 | 16,934 | 7.384204 | 0.10095 | 0.058384 | 0.012063 | 0.014475 | 0.833052 | 0.816968 | 0.771612 | 0.742019 | 0.724005 | 0.712505 | 0 | 0.008725 | 0.187847 | 16,934 | 350 | 291 | 48.382857 | 0.895441 | 0.458604 | 0 | 0.54717 | 0 | 0 | 0.013372 | 0 | 0 | 0 | 0 | 0.062857 | 0 | 1 | 0.075472 | false | 0 | 0.113208 | 0 | 0.264151 | 0.009434 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
31cc18fbd6840d4bbec04c03e1c756dd37e229b6 | 238 | py | Python | tests/test_sonar_project_exists.py | LucaCappelletti94/setup_python_package | 61b5f3cff1ed3181f932293c63c4fcb71cbe0062 | [
"MIT"
] | 5 | 2019-09-17T14:46:35.000Z | 2020-06-06T08:17:02.000Z | tests/test_sonar_project_exists.py | LucaCappelletti94/setup_python_package | 61b5f3cff1ed3181f932293c63c4fcb71cbe0062 | [
"MIT"
] | 2 | 2020-12-18T01:47:55.000Z | 2020-12-25T10:08:30.000Z | tests/test_sonar_project_exists.py | LucaCappelletti94/setup_python_package | 61b5f3cff1ed3181f932293c63c4fcb71cbe0062 | [
"MIT"
] | null | null | null | from setup_python_package.utils import sonar_project_exists
def test_sonar_project_exists():
assert sonar_project_exists("LucaCappelletti94_setup_python_package")
assert not sonar_project_exists("kdjhdgfkwjhfgkjhekjhwegfkjhwe")
| 34 | 73 | 0.861345 | 28 | 238 | 6.821429 | 0.535714 | 0.251309 | 0.376963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009217 | 0.088235 | 238 | 6 | 74 | 39.666667 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0.281513 | 0.281513 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
31dfb17042118802fc30a15d7150ed7f4d445b98 | 11,564 | py | Python | tests/helpers/source_scanner.py | Defense-Cyber-Crime-Center/dfvfs | da2ccbc4c989ced5ad651057bd8f5a4b18af6d37 | [
"Apache-2.0"
] | 2 | 2016-02-18T12:46:26.000Z | 2022-03-13T03:05:05.000Z | tests/helpers/source_scanner.py | Defense-Cyber-Crime-Center/dfvfs | da2ccbc4c989ced5ad651057bd8f5a4b18af6d37 | [
"Apache-2.0"
] | null | null | null | tests/helpers/source_scanner.py | Defense-Cyber-Crime-Center/dfvfs | da2ccbc4c989ced5ad651057bd8f5a4b18af6d37 | [
"Apache-2.0"
] | 5 | 2016-12-18T08:05:39.000Z | 2019-11-19T21:18:00.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""Tests for the source scanner object."""
import os
import unittest
from dfvfs.helpers import source_scanner
from dfvfs.lib import definitions
from dfvfs.lib import errors
from dfvfs.path import os_path_spec
from dfvfs.path import qcow_path_spec
from dfvfs.path import vshadow_path_spec
class SourceScannerTest(unittest.TestCase):
"""The unit test for the source scanner object."""
_BDE_PASSWORD = u'bde-TEST'
maxDiff = None
def setUp(self):
"""Sets up the needed objects used throughout the test."""
self._source_scanner = source_scanner.SourceScanner()
def _GetTestScanNode(self, scan_context):
"""Retrieves the scan node for testing.
Retrieves the first scan node, from the root upwards, with more or less
than 1 sub node.
Args:
scan_context: scan context (instance of dfvfs.ScanContext).
Returns:
A scan node (instance of dfvfs.ScanNode).
"""
scan_node = scan_context.GetRootScanNode()
while len(scan_node.sub_nodes) == 1:
scan_node = scan_node.sub_nodes[0]
return scan_node
def testScan(self):
"""Test the Scan() function."""
test_file = os.path.join(u'test_data', u'tsk_volume_system.raw')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertEqual(
scan_node.type_indicator, definitions.TYPE_INDICATOR_TSK_PARTITION)
self.assertEqual(len(scan_node.sub_nodes), 7)
for scan_node in scan_node.sub_nodes[6].sub_nodes:
if getattr(scan_node.path_spec, u'location', None) == u'/':
break
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_TSK)
test_file = os.path.join(u'test_data', u'vsstest.qcow2')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_QCOW)
self.assertEqual(len(scan_node.sub_nodes), 2)
scan_node = scan_node.sub_nodes[0]
self.assertEqual(
scan_node.type_indicator, definitions.TYPE_INDICATOR_VSHADOW)
self.assertEqual(len(scan_node.sub_nodes), 2)
scan_node = scan_node.sub_nodes[0]
self.assertEqual(
scan_node.type_indicator, definitions.TYPE_INDICATOR_VSHADOW)
# By default the file system inside a VSS volume is not scanned.
self.assertEqual(len(scan_node.sub_nodes), 0)
self._source_scanner.Scan(scan_context, scan_path_spec=scan_node.path_spec)
self.assertEqual(len(scan_node.sub_nodes), 1)
for scan_node in scan_node.sub_nodes:
if getattr(scan_node.path_spec, u'location', None) == u'/':
break
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_TSK)
test_file = os.path.join(u'test_data', u'bdetogo.raw')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_RAW)
for scan_node in scan_node.sub_nodes:
if getattr(scan_node.path_spec, u'location', None) is None:
break
self.assertNotEqual(scan_node, None)
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_BDE)
self.assertEqual(len(scan_node.sub_nodes), 0)
self._source_scanner.Unlock(
scan_context, scan_node.path_spec, u'password', self._BDE_PASSWORD)
self._source_scanner.Scan(scan_context, scan_path_spec=scan_node.path_spec)
self.assertEqual(len(scan_node.sub_nodes), 1)
for scan_node in scan_node.sub_nodes:
if getattr(scan_node.path_spec, u'location', None) == u'/':
break
self.assertNotEqual(scan_node.path_spec, None)
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_TSK)
test_file = os.path.join(u'test_data', u'testdir_os')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_DIRECTORY)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertNotEqual(scan_node.path_spec, None)
self.assertEqual(
scan_node.type_indicator, definitions.TYPE_INDICATOR_OS)
test_file = os.path.join(u'test_data', u'testdir_os', u'file1.txt')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_FILE)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertNotEqual(scan_node.path_spec, None)
self.assertEqual(
scan_node.type_indicator, definitions.TYPE_INDICATOR_OS)
test_file = os.path.join(u'test_data', u'bogus.001')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_FILE)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertNotEqual(scan_node.path_spec, None)
self.assertEqual(
scan_node.type_indicator, definitions.TYPE_INDICATOR_OS)
test_file = os.path.join(u'test_data', u'ímynd.dd')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
self._source_scanner.Scan(scan_context)
self.assertEqual(
scan_context.source_type, definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE)
scan_node = self._GetTestScanNode(scan_context)
self.assertNotEqual(scan_node, None)
self.assertNotEqual(scan_node.path_spec, None)
self.assertEqual(scan_node.type_indicator, definitions.TYPE_INDICATOR_TSK)
self.assertEqual(len(scan_node.sub_nodes), 0)
test_file = os.path.join(u'test_data', u'nosuchfile.raw')
scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(test_file)
with self.assertRaises(errors.BackEndError):
_ = self._source_scanner.Scan(scan_context)
def testScanForFileSystem(self):
"""Test the ScanForFileSystem() function."""
test_file = os.path.join(u'test_data', u'vsstest.qcow2')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
source_path_spec = qcow_path_spec.QcowPathSpec(parent=source_path_spec)
source_path_spec = vshadow_path_spec.VShadowPathSpec(
store_index=1, parent=source_path_spec)
path_spec = self._source_scanner.ScanForFileSystem(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(path_spec.type_indicator, definitions.TYPE_INDICATOR_TSK)
test_file = os.path.join(u'test_data', u'mactime.body')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForFileSystem(source_path_spec)
self.assertEqual(path_spec, None)
def testScanForStorageMediaImage(self):
"""Test the ScanForStorageMediaImage() function."""
test_file = os.path.join(u'test_data', u'ímynd.dd')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(path_spec.type_indicator, definitions.TYPE_INDICATOR_RAW)
test_file = os.path.join(u'test_data', u'image.raw.000')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(path_spec.type_indicator, definitions.TYPE_INDICATOR_RAW)
test_file = os.path.join(u'test_data', u'image.E01')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(path_spec.type_indicator, definitions.TYPE_INDICATOR_EWF)
test_file = os.path.join(u'test_data', u'image.qcow2')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(
path_spec.type_indicator, definitions.TYPE_INDICATOR_QCOW)
test_file = os.path.join(u'test_data', u'image.vhd')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(
path_spec.type_indicator, definitions.TYPE_INDICATOR_VHDI)
test_file = os.path.join(u'test_data', u'image.vmdk')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(
path_spec.type_indicator, definitions.TYPE_INDICATOR_VMDK)
test_file = os.path.join(u'test_data', u'mactime.body')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForStorageMediaImage(source_path_spec)
self.assertEqual(path_spec, None)
def testScanForVolumeSystem(self):
"""Test the ScanForVolumeSystem() function."""
test_file = os.path.join(u'test_data', u'tsk_volume_system.raw')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForVolumeSystem(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(
path_spec.type_indicator, definitions.TYPE_INDICATOR_TSK_PARTITION)
test_file = os.path.join(u'test_data', u'vsstest.qcow2')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
source_path_spec = qcow_path_spec.QcowPathSpec(parent=source_path_spec)
path_spec = self._source_scanner.ScanForVolumeSystem(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(
path_spec.type_indicator, definitions.TYPE_INDICATOR_VSHADOW)
test_file = os.path.join(u'test_data', u'bdetogo.raw')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForVolumeSystem(source_path_spec)
self.assertNotEqual(path_spec, None)
self.assertEqual(path_spec.type_indicator, definitions.TYPE_INDICATOR_BDE)
test_file = os.path.join(u'test_data', u'mactime.body')
source_path_spec = os_path_spec.OSPathSpec(location=test_file)
path_spec = self._source_scanner.ScanForVolumeSystem(source_path_spec)
self.assertEqual(path_spec, None)
if __name__ == '__main__':
unittest.main()
| 37.914754 | 79 | 0.764441 | 1,547 | 11,564 | 5.371041 | 0.09373 | 0.097244 | 0.053917 | 0.077506 | 0.867854 | 0.859189 | 0.843543 | 0.834276 | 0.823806 | 0.81177 | 0 | 0.002919 | 0.140782 | 11,564 | 304 | 80 | 38.039474 | 0.833333 | 0.055171 | 0 | 0.73913 | 0 | 0 | 0.046711 | 0.00387 | 0 | 0 | 0 | 0 | 0.31401 | 1 | 0.028986 | false | 0.009662 | 0.038647 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
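The row above is the tail of dfvfs's source scanner test module. As a rough orientation for readers unfamiliar with the API it exercises, here is a minimal, hypothetical usage sketch built only from calls that appear in the tests (the image path is a placeholder and no unlocking of encrypted volumes is attempted):

from dfvfs.helpers import source_scanner

def print_scan_tree(image_path):
    # Build the scan tree for a storage media image, directory or file.
    scanner = source_scanner.SourceScanner()
    scan_context = source_scanner.SourceScannerContext()
    scan_context.OpenSourcePath(image_path)
    scanner.Scan(scan_context)
    # Walk the tree breadth-first, printing each node's type indicator.
    nodes = [scan_context.GetRootScanNode()]
    while nodes:
        node = nodes.pop(0)
        print(node.type_indicator, getattr(node.path_spec, 'location', None))
        nodes.extend(node.sub_nodes)

print_scan_tree('image.raw')  # placeholder path, for illustration only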
9ee18e2626d203874efeafb7f6f5443a2e6ae15c | 12,569 | py | Python | ops.py | shahryarkhorasani/brats_tumor_segmentation_vnet | 85e6a2c745bc09839aef2815c95c5dedab39aacf | [
"MIT"
] | null | null | null | ops.py | shahryarkhorasani/brats_tumor_segmentation_vnet | 85e6a2c745bc09839aef2815c95c5dedab39aacf | [
"MIT"
] | null | null | null | ops.py | shahryarkhorasani/brats_tumor_segmentation_vnet | 85e6a2c745bc09839aef2815c95c5dedab39aacf | [
"MIT"
] | null | null | null | import keras
from keras.layers import Dropout, Activation, Lambda, PReLU, LeakyReLU, Conv2D, Add, add, Conv2DTranspose, Conv3D, Conv3DTranspose, BatchNormalization
import tensorflow as tf
import keras.backend as K
def residual_block_3D(x, filters, kernel_size=(3, 3, 3), depth=3, kernel_initializer='he_normal'):
x = Conv3D(filters, kernel_size, activation='linear', padding='same')(x)
shortcut = x
for i in range(0, depth):
x = Conv3D(filters, kernel_size, activation='linear', padding='same')(x)
#x = BatchNormalization()(x)
x = LeakyReLU()(x)
x = add([shortcut, x])
x = LeakyReLU()(x)
return x
def down_conv_3D(x, filters, kernel_size=(2, 2, 2), strides=(2, 2, 2), kernel_initializer='he_normal'):
# Please add activation layer when building the model.
x = Conv3D(filters, kernel_size, strides=strides, padding='valid', activation='linear', kernel_initializer=kernel_initializer)(x)
return x
def up_conv_3D(x , filters, kernel_size=(2, 2, 2), strides=None, kernel_initializer='he_normal'):
# Please add activation layer when building the model.
if strides is None:
strides = kernel_size
x = Conv3DTranspose(filters, kernel_size=kernel_size, strides=strides, activation='linear', kernel_initializer=kernel_initializer)(x)
return x
#Remo's
def ActivationOp(layer_in, activation_type, name=None, l=0.1, shared_axes=(1, 2, 3)):
if (activation_type != 'prelu') & (activation_type != 'leakyrelu'):
return Activation(activation_type, name=name)(layer_in)
elif activation_type == 'prelu':
return PReLU(alpha_initializer=keras.initializers.Constant(value=l), shared_axes=shared_axes, name=name)(
layer_in)
else:
# TODO: check if alpha should be 0.01 instead
return LeakyReLU(l)(layer_in)
# 3D
def ResidualBlock3D(layer_in, depth=3, kernel_size=5, filters=None, bn=None, dropout=0., activation='relu', kernel_initializer='he_normal',
name=None, dropout_layer=None, training=False):
# Checking if we use BatchNorm
if bn is not None:
if bn == 'last':
bn = 1
elif bn == 'all':
bn = 2
else:
raise NotImplementedError('"bn" has to be one of [None, "last", "all"]')
else:
bn = 0
droplayer = Dropout(dropout, name='{}_drop'.format(name)) if dropout_layer is None else dropout_layer  # restored: drop() below requires it
def drop(l):
if dropout > 0. or not dropout_layer is None:
return droplayer(l, training=training)
else:
return l
# creates a residual block with a given depth for 3D input
# there is NO non-linearity applied to the output! Has to be added manually
layer_in = drop(layer_in)
l = Conv3D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c0'.format(name))(layer_in)
if bn == 2 or ((bn == 1) and (depth == 1)):  # boolean logic instead of bitwise ops, matching the 2D blocks
l = BatchNormalization(name='{}_bn0'.format(name))(l)
for i in range(1, depth):
a = ActivationOp(l, activation, name='{}_a{}'.format(name, i - 1))
a = drop(a)
l = Conv3D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c{}'.format(name, i))(a)
if bn == 2 or ((bn == 1) and (i == (depth-1))):
l = BatchNormalization(name='{}_bn{}'.format(name, i))(l)
o = Add()([layer_in, l])
# o = Activation_wrap(o, activation, name='{}_a{}'.format(name,depth))
return o
def ConvBlock3D(layer_in, depth=3, kernel_size=5, filters=None, bn=None, dropout=0., activation='relu', kernel_initializer='he_normal',
name=None, dropout_layer=None, training=False):
# Checking if we use BatchNorm
if bn is not None:
if bn == 'last':
bn = 1
elif bn == 'all':
bn = 2
else:
raise NotImplementedError('"bn" has to be one of [None, "last", "all"]')
else:
bn = 0
droplayer = Dropout(dropout, name='{}_drop'.format(name)) if dropout_layer is None else dropout_layer
def drop(l):
if dropout > 0. or not dropout_layer is None:
return droplayer(l, training=training)
else:
return l
# creates a block of subsequent convolutions with a given depth for 3D input
# there is NO non-linearity applied to the output! Has to be added manually
layer_in = drop(layer_in)
l = Conv3D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c0'.format(name))(layer_in)
if bn == 2 or ((bn == 1) and (depth == 1)):
l = BatchNormalization(name='{}_bn0'.format(name))(l)
for i in range(1, depth):
a = ActivationOp(l, activation, name='{}_a{}'.format(name, i - 1))
a = drop(a)
l = Conv3D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c{}'.format(name, i))(a)
if bn == 2 or ((bn == 1) and (i == (depth-1))):
l = BatchNormalization(name='{}_bn{}'.format(name, i))(l)
o = l
# o = Add()([layer_in, l])
# o = Activation_wrap(o, activation, name='{}_a{}'.format(name,depth))
return o
def DownConv3D(layer_in, kernel_size=2, strides=(2, 2, 2), filters=None, dropout=0., activation='relu',
kernel_initializer='he_normal', name=None, dropout_layer=None, training=False):
if isinstance(strides, int):
strides = (strides, strides, strides)
if not dropout_layer is None:
layer_in = dropout_layer(layer_in, training=training)
elif dropout > 0.:
layer_in = Dropout(dropout, name='{}_drop'.format(name))(layer_in, training=training)
dc = Conv3D(filters, kernel_size, strides=strides, padding='valid', activation='linear',
name='{}_dc0'.format(name), kernel_initializer=kernel_initializer)(layer_in)
dc = ActivationOp(dc, activation, name='{}_a0'.format(name))
return dc
def UpConv3D(layer_in, kernel_size=(2, 2, 2), strides=None, filters=None, dropout=0., activation='relu',
kernel_initializer='he_normal', name=None, dropout_layer=None, training=False):
if strides is None:
strides = kernel_size
elif isinstance(strides, int):
strides = (strides, strides, strides)
if not dropout_layer is None:
layer_in = dropout_layer(layer_in, training=training)
elif dropout > 0.:
layer_in = Dropout(dropout, name='{}_drop'.format(name))(layer_in)
uc = Conv3DTranspose(filters, kernel_size=kernel_size, strides=strides, activation='linear',
name='{}_uc0'.format(name), kernel_initializer=kernel_initializer)(layer_in)
uc = ActivationOp(uc, activation, name='{}_a0'.format(name))
return uc
# 2D
def ResidualBlock2D(layer_in, depth=3, kernel_size=5, filters=None, bn=None, dropout=0., activation='relu', kernel_initializer='he_normal',
name=None, dropout_layer=None, training=False):
# creates a residual block with a given depth for 2D input
# there is NO non-linearity applied to the output! Has to be added manually
droplayer = Dropout(dropout, name='{}_drop'.format(name)) if dropout_layer is None else dropout_layer
def drop(l):
if dropout > 0. or not dropout_layer is None:
return droplayer(l, training=training)
else:
return l
if bn is not None:
if bn == 'last':
bn = 1
elif bn == 'all':
bn = 2
else:
raise NotImplementedError('"bn" has to be one of [None, "last", "all"]')
else:
bn = 0
layer_in = drop(layer_in)
l = Conv2D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c0'.format(name))(layer_in)
if bn == 2 or ((bn == 1) and (depth == 1)):
l = BatchNormalization(name='{}_bn0'.format(name))(l)
for i in range(1, depth):
a = ActivationOp(l, activation, name='{}_a{}'.format(name, i - 1))
a = drop(a)
l = Conv2D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c{}'.format(name, i))(a)
if bn == 2 or ((bn == 1) and (i == (depth-1))):
l = BatchNormalization(name='{}_bn{}'.format(name, i))(l)
o = Add(name='{}_add'.format(name))([layer_in, l])
return o
def ConvBlock2D(layer_in, depth=2, kernel_size=3, filters=None, dropout=0., activation='relu', kernel_initializer='he_normal',
name=None, dropout_layer=None, training=False):
# creates a "convolution block" (series of regular convolutions) with a given depth for 2D input
i = 0
droplayer = Dropout(dropout, name='{}_drop'.format(name)) if dropout_layer is None else dropout_layer
def drop(l):
if dropout > 0. or not dropout_layer is None:
return droplayer(l, training=training)
else:
return l
layer_in = drop(layer_in)
l = Conv2D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c0'.format(name))(layer_in)
for i in range(1, depth):
a = ActivationOp(l, activation, name='{}_a{}'.format(name, i - 1))
a = drop(a)
l = Conv2D(filters, kernel_size, padding='same', activation='linear', kernel_initializer=kernel_initializer,
name='{}_c{}'.format(name, i))(a)
o = ActivationOp(l, activation, name='{}_a{}'.format(name, i))
return o
def DownConv2D(layer_in, kernel_size=2, strides=(2, 2), filters=None, dropout=0., activation='relu', kernel_initializer='he_normal',
name=None, dropout_layer=None, training=False):
if isinstance(strides, int):
strides = (strides, strides)
if not dropout_layer is None:
layer_in = dropout_layer(layer_in, training=training)
elif dropout > 0.:
layer_in = Dropout(dropout, name='{}_drop'.format(name))(layer_in)
dc = Conv2D(filters, kernel_size, strides=strides, padding='valid', activation='linear',
kernel_initializer=kernel_initializer,
name='{}_dc0'.format(name))(layer_in)
dc = ActivationOp(dc, activation, name='{}_a0'.format(name))
return dc
def UpConv2D(layer_in, kernel_size=(2, 2), strides=None, filters=None, dropout=0., activation='relu',
kernel_initializer='he_normal', name=None, dropout_layer=None, training=False):
if strides is None:
strides = kernel_size
elif isinstance(strides, int):
strides = (strides, strides)
if not dropout_layer is None:
layer_in = dropout_layer(layer_in, training=training)
elif dropout > 0.:
layer_in = Dropout(dropout, name='{}_drop'.format(name))(layer_in)
uc = Conv2DTranspose(filters, kernel_size=kernel_size, strides=strides, activation='linear',
kernel_initializer=kernel_initializer,
name='{}_uc0'.format(name))(layer_in)
uc = ActivationOp(uc, activation, name='{}_a0'.format(name))
return uc
def UpScaleConv2D(layer_in, scale_factor=(2.,2.), kernel_size=None, strides=(1,1), filters=None, dropout=0., activation='relu', name=None, dropout_layer=None, training=False):
# https://stackoverflow.com/questions/47066635/checkpointing-keras-model-typeerror-cant-pickle-thread-lock-objects
# https://github.com/keras-team/keras/issues/5088#issuecomment-273851273
def wrap_tf_resize_nearest_neighbor(x, size):
import tensorflow as tf
return tf.image.resize_nearest_neighbor(x, size=size)
in_shape= K.int_shape(layer_in)
s1, s2 = int(scale_factor[0]*in_shape[1]), int(scale_factor[1]*in_shape[2])
up = Lambda(lambda x, scale1, scale2 : wrap_tf_resize_nearest_neighbor(x, size=K.constant([scale1, scale2], dtype='int32')), arguments={'scale1':s1,'scale2':s2}, name='{}_us0'.format(name))(layer_in)
if not dropout_layer is None:
layer_in = dropout_layer(layer_in, training=training)
elif dropout > 0.:
layer_in = Dropout(dropout, name='{}_drop'.format(name))(layer_in, training=training)
uc = Conv2D(filters=filters, kernel_size=kernel_size, strides=strides, padding='same', activation='linear',
name='{}_c0'.format(name))(up)
uc = ActivationOp(uc, activation, name='{}_a0'.format(name))
return uc | 43.794425 | 203 | 0.641181 | 1,673 | 12,569 | 4.666467 | 0.104603 | 0.052901 | 0.043551 | 0.060971 | 0.822083 | 0.809786 | 0.799411 | 0.768669 | 0.745869 | 0.738184 | 0 | 0.019859 | 0.22277 | 12,569 | 287 | 204 | 43.794425 | 0.779302 | 0.096109 | 0 | 0.76555 | 0 | 0 | 0.063233 | 0 | 0 | 0 | 0 | 0.003484 | 0 | 1 | 0.086124 | false | 0 | 0.023923 | 0 | 0.22488 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
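The ops module above supplies the V-Net building blocks (residual blocks plus strided down- and up-convolutions). As a hedged illustration of how they compose, here is a one-level encoder-decoder sketch; the module name ops, the input shape and the filter counts are assumptions for illustration, not taken from the repository:

from keras.layers import Input, LeakyReLU, Conv3D
from keras.models import Model
from ops import residual_block_3D, down_conv_3D, up_conv_3D  # assumes ops.py is importable as `ops`

def tiny_vnet_sketch(input_shape=(32, 32, 32, 1)):
    inputs = Input(shape=input_shape)
    enc = residual_block_3D(inputs, filters=16)           # (32, 32, 32, 16)
    down = LeakyReLU()(down_conv_3D(enc, filters=32))     # helpers return linear activations
    bottom = residual_block_3D(down, filters=32)          # (16, 16, 16, 32)
    up = LeakyReLU()(up_conv_3D(bottom, filters=16))      # (32, 32, 32, 16)
    outputs = Conv3D(1, (1, 1, 1), activation='sigmoid', padding='same')(up)
    return Model(inputs=inputs, outputs=outputs)

model = tiny_vnet_sketch()
model.summary()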
732f0694ae4846526f8ed6ea8462b330bcfb72fb | 207 | py | Python | const/__init__.py | gorgeousbubble/Nightmare | b374b48877898b6193081b7a8a6d2fb571816c75 | [
"Apache-2.0"
] | 1 | 2019-10-24T15:47:18.000Z | 2019-10-24T15:47:18.000Z | const/__init__.py | gorgeousbubble/Nightmare | b374b48877898b6193081b7a8a6d2fb571816c75 | [
"Apache-2.0"
] | null | null | null | const/__init__.py | gorgeousbubble/Nightmare | b374b48877898b6193081b7a8a6d2fb571816c75 | [
"Apache-2.0"
] | 3 | 2019-10-24T15:47:25.000Z | 2020-11-01T01:26:41.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import const
from const.const import APPLICATION_NAME
from const.const import LOGS_DIR
from const.const import LOGS_TARGET
from const.const import PROJECT_NAME
| 25.875 | 40 | 0.792271 | 33 | 207 | 4.848485 | 0.484848 | 0.225 | 0.35 | 0.5 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.120773 | 207 | 7 | 41 | 29.571429 | 0.868132 | 0.207729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
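The package initializer above only re-exports four names from const.const, which is not part of this row; its contents are presumably simple module-level constants along these lines (all values below are placeholders, not the project's actual settings):

# const/const.py -- hypothetical contents, for illustration only
APPLICATION_NAME = 'Nightmare'
LOGS_DIR = './logs'
LOGS_TARGET = 'nightmare.log'
PROJECT_NAME = 'Nightmare'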
73362ecea2bf1f65802777a4970a32a5470bc638 | 23,848 | py | Python | tests/test_simple_run.py | such-a-git/neat-spinnaker | 78294750568bb82c79f403986b2e1a948949a2b6 | [
"BSD-3-Clause"
] | 51 | 2019-02-01T19:43:37.000Z | 2022-03-16T09:07:03.000Z | tests/test_simple_run.py | desaianand1/neat-python | e3dbe77c0d776eae41d598e6439e6ac02ab90b18 | [
"BSD-3-Clause"
] | 2 | 2019-02-23T18:54:22.000Z | 2019-11-09T01:30:32.000Z | tests/test_simple_run.py | desaianand1/neat-python | e3dbe77c0d776eae41d598e6439e6ac02ab90b18 | [
"BSD-3-Clause"
] | 35 | 2019-02-08T02:00:31.000Z | 2022-03-01T23:17:00.000Z | from __future__ import print_function
import os
import neat
VERBOSE = True
def eval_dummy_genome_nn(genome, config):
net = neat.nn.FeedForwardNetwork.create(genome, config)
ignored_output = net.activate((0.5, 0.5))
return 0.0
def eval_dummy_genomes_nn(genomes, config):
for genome_id, genome in genomes:
genome.fitness = eval_dummy_genome_nn(genome, config)
def test_serial():
"""Test basic (dummy fitness function) non-parallel run."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(1, 5))
# Run for up to 19 generations.
p.run(eval_dummy_genomes_nn, 19)
stats.save()
# stats.save_genome_fitness(with_cross_validation=True)
assert len(stats.get_fitness_stdev())
# stats.get_average_cross_validation_fitness()
stats.best_unique_genomes(5)
stats.best_genomes(5)
stats.best_genome()
p.remove_reporter(stats)
def eval_dummy_genome_nn_bad(genome, config):
net = neat.nn.FeedForwardNetwork.create(genome, config)
ignored_output = net.activate((0.5, 0.5, 0.5))
return 0.0
def eval_dummy_genomes_nn_bad(genomes, config):
for genome_id, genome in genomes:
genome.fitness = eval_dummy_genome_nn_bad(genome, config)
def test_serial_bad_input():
"""Make sure get error for bad input."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
try:
p.run(eval_dummy_genomes_nn_bad, 45)
except Exception: # may change in nn.feed_forward code to more specific...
pass
else:
raise Exception("Did not get Exception from bad input")
def test_serial_random():
"""Test basic (dummy fitness function) non-parallel run w/random activation, aggregation init."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration2')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
if VERBOSE:
print("config.genome_config.__dict__: {!r}".format(
config.genome_config.__dict__))
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(15, 1))
# Run for up to 45 generations.
p.run(eval_dummy_genomes_nn, 45)
stats.save()
# stats.save_genome_fitness(with_cross_validation=True)
stats.get_fitness_stdev()
# stats.get_average_cross_validation_fitness()
stats.best_unique_genomes(5)
stats.best_genomes(5)
stats.best_genome()
p.remove_reporter(stats)
def test_serial3():
"""Test more configuration variations for simple serial run."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration3')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
if VERBOSE:
print("config.genome_config.__dict__: {!r}".format(
config.genome_config.__dict__))
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(15, 1))
# Run for up to 45 generations.
p.run(eval_dummy_genomes_nn, 45)
stats.save()
# stats.save_genome_fitness(with_cross_validation=True)
stats.get_fitness_stdev()
# stats.get_average_cross_validation_fitness()
stats.best_unique_genomes(5)
stats.best_genomes(5)
stats.best_genome()
p.remove_reporter(stats)
def test_serial4():
"""Test more configuration variations for simple serial run."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration4')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
if VERBOSE:
print("config.genome_config.__dict__: {!r}".format(
config.genome_config.__dict__))
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(15, 1))
# Run for up to 45 generations.
p.run(eval_dummy_genomes_nn, 45)
stats.save()
# stats.save_genome_fitness(with_cross_validation=True)
stats.get_fitness_stdev()
# stats.get_average_cross_validation_fitness()
stats.best_unique_genomes(5)
stats.best_genomes(5)
stats.best_genome()
p.remove_reporter(stats)
def test_serial5():
"""Test more configuration variations for simple serial run."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration5')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
if VERBOSE:
print("config.genome_config.__dict__: {!r}".format(
config.genome_config.__dict__))
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(15, 1))
# Run for up to 45 generations.
p.run(eval_dummy_genomes_nn, 45)
stats.save()
# stats.save_genome_fitness(with_cross_validation=True)
stats.get_fitness_stdev()
# stats.get_average_cross_validation_fitness()
stats.best_unique_genomes(5)
stats.best_genomes(5)
stats.best_genome()
p.remove_reporter(stats)
def test_serial4_bad():
"""Make sure no_fitness_termination and n=None give an error."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration4')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
if VERBOSE:
print("config.genome_config.__dict__: {!r}".format(
config.genome_config.__dict__))
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
try:
p.run(eval_dummy_genomes_nn, None)
except RuntimeError:
pass
else:
raise Exception(
"Should have had a RuntimeError with n=None and no_fitness_termination")
def test_serial_bad_config():
"""Test if bad_configuration1 causes a LookupError or TypeError on trying to run."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'bad_configuration1')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
try:
p.run(eval_dummy_genomes_nn, 19)
except (LookupError,TypeError):
pass
else:
raise Exception(
"Should have had a LookupError/TypeError with bad_configuration1")
def test_serial_bad_configA():
"""Test if bad_configurationA causes a RuntimeError on trying to create the population."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'bad_configurationA')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
try:
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
except RuntimeError:
pass
else:
raise Exception(
"Should have had a RuntimeError with bad_configurationA")
def test_serial_extinction_exception():
"""Test for complete extinction with exception."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
config.stagnation_config.max_stagnation = 1
config.stagnation_config.species_elitism = 0
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
try:
# Run for up to 45 generations.
p.run(eval_dummy_genomes_nn, 45)
except Exception:
pass
else:
raise Exception("Should have had a complete extinction at some point!")
def test_serial_extinction_no_exception():
"""Test for complete extinction without exception."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
config.stagnation_config.max_stagnation = 1
config.stagnation_config.species_elitism = 0
config.reset_on_extinction = True
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
reporter = neat.StdOutReporter(True)
p.add_reporter(reporter)
stats = neat.StatisticsReporter()
p.add_reporter(stats)
# Run for up to 45 generations.
p.run(eval_dummy_genomes_nn, 45)
assert reporter.num_extinctions > 0, "No extinctions happened!"
stats.save()
p.remove_reporter(stats)
def test_parallel():
"""Test parallel run using ParallelEvaluator (subprocesses)."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(1, 5))
# Run for up to 19 generations.
pe = neat.ParallelEvaluator(4, eval_dummy_genome_nn)
p.run(pe.evaluate, 19)
stats.save()
def test_threaded_evaluation():
"""Tests a neat evolution using neat.threaded.ThreadedEvaluator"""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(1, 5))
# Run for up to 19 generations.
pe = neat.ThreadedEvaluator(4, eval_dummy_genome_nn)
p.run(pe.evaluate, 19)
stats.save()
def test_threaded_evaluator():
"""Tests general functionality of neat.threaded.ThreadedEvaluator"""
n_workers = 3
e = neat.ThreadedEvaluator(n_workers, eval_dummy_genome_nn)
try:
# ensure workers are not started
if (len(e.workers) > 0) or (e.working):
raise Exception("ThreadedEvaluator started on __init__()!")
# ensure start() starts the workers
e.start()
if (len(e.workers) != n_workers) or (not e.working):
raise Exception("ThreadedEvaluator did not start on start()!")
w = e.workers[0]
if not w.is_alive():
raise Exception("Workers did not start on start()")
# ensure a second call to start() does nothing when already started
e.start()
if (len(e.workers) != n_workers) or (not e.working):
raise Exception(
"A second ThreadedEvaluator.start() call was not ignored!"
)
w = e.workers[0]
if not w.is_alive():
raise Exception("A worker died or stopped!")
# ensure stop() works
e.stop()
if (len(e.workers) != 0) or (e.working):
raise Exception(
"ThreadedEvaluator.stop() did not work!"
)
if w.is_alive():
raise Exception("A worker is still alive!")
# ensure a second call to stop() does nothing when already stopped
e.stop()
if (len(e.workers) != 0) or (e.working):
raise Exception(
"A second ThreadedEvaluator.stop() call was not ignored!"
)
if w.is_alive():
raise Exception("A worker is still alive or was resurrected!")
# ensure a restart is possible
# ensure start() starts the workers
e.start()
if (len(e.workers) != n_workers) or (not e.working):
raise Exception("ThreadedEvaluator did not restart on start()!")
w = e.workers[0]
if not w.is_alive():
raise Exception("Workers did not start on start()")
finally: # try to close if KeyboardInterrupt or similar
if len(e.workers) or e.working:
e.stop()
# ensure del stops workers
del e
# unfortunately, __del__() may never be called, even when using del
# this means that testing for __del__() to call stop() may not work
# this test had a high random failure rate, so i removed it.
# if w.is_alive():
# raise Exception("__del__() did not stop workers!")
def eval_dummy_genomes_nn_recurrent(genomes, config):
for ignored_genome_id, genome in genomes:
net = neat.nn.RecurrentNetwork.create(genome, config)
ignored_output = net.activate((0.5,0.5))
net.reset()
genome.fitness = 0.0
def test_run_nn_recurrent():
"""Basic test of nn.recurrent function."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
config.feed_forward = False
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(1, 5))
# Run for up to 19 generations.
p.run(eval_dummy_genomes_nn_recurrent, 19)
stats.save()
def eval_dummy_genomes_nn_recurrent_bad(genomes, config):
for ignored_genome_id, genome in genomes:
net = neat.nn.RecurrentNetwork.create(genome, config)
ignored_output = net.activate((0.5,0.5,0.5))
net.reset()
genome.fitness = 0.0
def test_run_nn_recurrent_bad():
"""Make sure nn.recurrent gives error on bad input."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
config.feed_forward = False
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
try:
p.run(eval_dummy_genomes_nn_recurrent_bad, 19)
except Exception: # again, may change to more specific in nn.recurrent
pass
else:
raise Exception("Did not get Exception for bad input to nn.recurrent")
def eval_dummy_genomes_ctrnn(genomes, config):
for genome_id, genome in genomes:
net = neat.ctrnn.CTRNN.create(genome, config, 0.01)
if genome_id <= 150:
genome.fitness = 0.0
else:
net.reset()
genome.fitness = 1.0
def test_run_ctrnn():
"""Basic test of continuous-time recurrent neural network (ctrnn)."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
config.feed_forward = False
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(VERBOSE))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(1, 5))
# Run for up to 19 generations.
p.run(eval_dummy_genomes_ctrnn, 19)
stats.save()
unique_genomes = stats.best_unique_genomes(5)
assert 1 <= len(unique_genomes) <= 5, "Unique genomes: {!r}".format(unique_genomes)
genomes = stats.best_genomes(5)
assert 1 <= len(genomes) <= 5, "Genomes: {!r}".format(genomes)
stats.best_genome()
p.remove_reporter(stats)
def eval_dummy_genomes_ctrnn_bad(genomes, config):
for genome_id, genome in genomes:
net = neat.ctrnn.CTRNN.create(genome, config, 0.01)
net.advance([0.5,0.5,0.5], 0.01, 0.05)
if genome_id <= 150:
genome.fitness = 0.0
else:
net.reset()
genome.fitness = 1.0
def test_run_ctrnn_bad():
"""Make sure ctrnn gives error on bad input."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
config.feed_forward = False
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
try:
p.run(eval_dummy_genomes_ctrnn_bad, 19)
except RuntimeError:
pass
else:
raise Exception("Did not get RuntimeError for bad input to ctrnn")
def eval_dummy_genomes_iznn(genomes, config):
for genome_id, genome in genomes:
net = neat.iznn.IZNN.create(genome, config)
if genome_id < 10:
net.reset()
genome.fitness = 0.0
elif genome_id <= 150:
genome.fitness = 0.5
else:
genome.fitness = 1.0
def test_run_iznn():
"""
Basic test of spiking neural network (iznn).
[TODO: Takes the longest of any of the tests in this file, by far. Why?]
Was because had population of 290 thanks to too much speciation -
too-high compatibility_weight_coefficient relative to range for weights.
"""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration_iznn')
config = neat.Config(neat.iznn.IZGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
# Add a stdout reporter to show progress in the terminal.
p.add_reporter(neat.StdOutReporter(True))
stats = neat.StatisticsReporter()
p.add_reporter(stats)
p.add_reporter(neat.Checkpointer(2, 10))
# Run for up to 20 generations.
p.run(eval_dummy_genomes_iznn, 20)
stats.save()
unique_genomes = stats.best_unique_genomes(5)
assert 1 <= len(unique_genomes) <= 5, "Unique genomes: {!r}".format(unique_genomes)
genomes = stats.best_genomes(5)
assert len(genomes) == 5, "Genomes: {!r}".format(genomes)
stats.best_genome()
p.remove_reporter(stats)
def eval_dummy_genomes_iznn_bad(genomes, config):
for genome_id, genome in genomes:
net = neat.iznn.IZNN.create(genome, config)
net.set_inputs([0.5,0.5,0.5])
if genome_id < 10:
net.reset()
genome.fitness = 0.0
elif genome_id <= 150:
genome.fitness = 0.5
else:
genome.fitness = 1.0
def test_run_iznn_bad():
"""Make sure iznn gives error on bad input."""
# Load configuration.
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'test_configuration_iznn')
config = neat.Config(neat.iznn.IZGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
# Create the population, which is the top-level object for a NEAT run.
p = neat.Population(config)
try:
p.run(eval_dummy_genomes_iznn_bad, 19)
except RuntimeError:
pass
else:
raise Exception("Did not get RuntimeError for bad input to iznn")
if __name__ == '__main__':
VERBOSE = False
test_serial()
test_serial_random()
test_serial3()
test_serial4()
test_serial5()
test_serial_bad_config()
test_serial_bad_configA()
test_serial_extinction_exception()
test_serial_extinction_no_exception()
test_parallel()
test_threaded_evaluation()
test_threaded_evaluator()
test_run_nn_recurrent()
test_run_nn_recurrent_bad()
test_run_ctrnn()
test_run_ctrnn_bad()
test_run_iznn()
test_run_iznn_bad()
| 34.865497 | 101 | 0.667016 | 3,033 | 23,848 | 5.037257 | 0.087372 | 0.019898 | 0.02592 | 0.021992 | 0.852271 | 0.829559 | 0.811428 | 0.801807 | 0.786425 | 0.786229 | 0 | 0.012351 | 0.239517 | 23,848 | 683 | 102 | 34.916545 | 0.830062 | 0.22136 | 0 | 0.758389 | 0 | 0 | 0.080669 | 0.016994 | 0 | 0 | 0 | 0.001464 | 0.013423 | 1 | 0.067114 | false | 0.017897 | 0.006711 | 0 | 0.0783 | 0.013423 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
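The test module above drives neat-python end to end with dummy fitness functions. A real experiment differs only in how genomes are scored; below is a hedged sketch reusing the same API calls (the config path and the toy fitness target are illustrative choices, not part of the tests):

import os
import neat

def eval_genomes(genomes, config):
    # Toy objective: reward networks whose output for (0.5, 0.5) is close to 1.0.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        output = net.activate((0.5, 0.5))
        genome.fitness = 1.0 - abs(1.0 - output[0])

local_dir = os.path.dirname(__file__)
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     os.path.join(local_dir, 'test_configuration'))
winner = neat.Population(config).run(eval_genomes, 30)  # returns the best genome found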
b408599ae22fb3008cb81f43c6e81be9ebe6bd7b | 14,188 | py | Python | migrations/versions/3f0b987d79ff_var_types.py | radioglaciology/radarfilmstudio | 461830bcd459d9e6be3ed50c9e975ad17b799903 | [
"MIT"
] | null | null | null | migrations/versions/3f0b987d79ff_var_types.py | radioglaciology/radarfilmstudio | 461830bcd459d9e6be3ed50c9e975ad17b799903 | [
"MIT"
] | null | null | null | migrations/versions/3f0b987d79ff_var_types.py | radioglaciology/radarfilmstudio | 461830bcd459d9e6be3ed50c9e975ad17b799903 | [
"MIT"
] | null | null | null | """var types
Revision ID: 3f0b987d79ff
Revises: 8415ee5e38b2
Create Date: 2021-02-22 23:01:23.384196
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision = '3f0b987d79ff'
down_revision = '8415ee5e38b2'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column('film_segment', 'dataset',
existing_type=sa.VARCHAR(length=100),
nullable=True)
op.alter_column('film_segment', 'first_cbd',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True)
op.alter_column('film_segment', 'first_frame',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True)
op.alter_column('film_segment', 'flight',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True)
op.alter_column('film_segment', 'id',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=False,
autoincrement=True,
existing_server_default=sa.text("nextval('film_segment_id_seq'::regclass)"))
op.alter_column('film_segment', 'instrument_type',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True)
op.alter_column('film_segment', 'last_cbd',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True)
op.alter_column('film_segment', 'last_changed',
existing_type=postgresql.TIMESTAMP(timezone=True),
type_=sa.DateTime(),
existing_nullable=True)
op.alter_column('film_segment', 'last_frame',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True)
op.alter_column('film_segment', 'notes',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=True)
op.alter_column('film_segment', 'path',
existing_type=sa.TEXT(),
type_=sa.String(length=300),
existing_nullable=True)
op.alter_column('film_segment', 'reel',
existing_type=sa.BIGINT(),
type_=sa.String(length=100),
existing_nullable=True)
op.alter_column('film_segment', 'scope_type',
existing_type=sa.TEXT(),
type_=sa.String(length=100),
existing_nullable=True)
op.alter_column('film_segment', 'updated_by',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=True)
op.alter_column('film_segment_version', 'first_cbd',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'first_frame',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'flight',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'id',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=False,
autoincrement=False)
op.alter_column('film_segment_version', 'instrument_type',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'last_cbd',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'last_changed',
existing_type=postgresql.TIMESTAMP(timezone=True),
type_=sa.DateTime(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'last_frame',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'notes',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'path',
existing_type=sa.TEXT(),
type_=sa.String(length=300),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'reel',
existing_type=sa.BIGINT(),
type_=sa.String(length=100),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'scope_type',
existing_type=sa.TEXT(),
type_=sa.String(length=100),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'updated_by',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=True,
autoincrement=False)
op.alter_column('flasklogin-users', 'created_on',
existing_type=postgresql.TIMESTAMP(timezone=True),
type_=sa.DateTime(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'email',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'first_name',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'id',
existing_type=sa.BIGINT(),
type_=sa.Integer(),
existing_nullable=False,
autoincrement=True)
op.alter_column('flasklogin-users', 'last_login',
existing_type=postgresql.TIMESTAMP(timezone=True),
type_=sa.DateTime(),
existing_nullable=True)
op.alter_column('flasklogin-users', 'last_name',
existing_type=sa.TEXT(),
type_=sa.String(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'password',
existing_type=sa.TEXT(),
type_=sa.String(length=200),
existing_nullable=False)
op.alter_column('transaction', 'issued_at',
existing_type=postgresql.TIMESTAMP(timezone=True),
type_=sa.DateTime(),
existing_nullable=True)
op.alter_column('transaction', 'remote_addr',
existing_type=sa.TEXT(),
type_=sa.String(length=50),
existing_nullable=True)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.alter_column('transaction', 'remote_addr',
existing_type=sa.String(length=50),
type_=sa.TEXT(),
existing_nullable=True)
op.alter_column('transaction', 'issued_at',
existing_type=sa.DateTime(),
type_=postgresql.TIMESTAMP(timezone=True),
existing_nullable=True)
op.alter_column('flasklogin-users', 'password',
existing_type=sa.String(length=200),
type_=sa.TEXT(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'last_name',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'last_login',
existing_type=sa.DateTime(),
type_=postgresql.TIMESTAMP(timezone=True),
existing_nullable=True)
op.alter_column('flasklogin-users', 'id',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=False,
autoincrement=True)
op.alter_column('flasklogin-users', 'first_name',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'email',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=False)
op.alter_column('flasklogin-users', 'created_on',
existing_type=sa.DateTime(),
type_=postgresql.TIMESTAMP(timezone=True),
existing_nullable=False)
op.alter_column('film_segment_version', 'updated_by',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'scope_type',
existing_type=sa.String(length=100),
type_=sa.TEXT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'reel',
existing_type=sa.String(length=100),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'path',
existing_type=sa.String(length=300),
type_=sa.TEXT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'notes',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'last_frame',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'last_changed',
existing_type=sa.DateTime(),
type_=postgresql.TIMESTAMP(timezone=True),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'last_cbd',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'instrument_type',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'id',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=False,
autoincrement=False)
op.alter_column('film_segment_version', 'flight',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'first_frame',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment_version', 'first_cbd',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True,
autoincrement=False)
op.alter_column('film_segment', 'updated_by',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=True)
op.alter_column('film_segment', 'scope_type',
existing_type=sa.String(length=100),
type_=sa.TEXT(),
existing_nullable=True)
op.alter_column('film_segment', 'reel',
existing_type=sa.String(length=100),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'path',
existing_type=sa.String(length=300),
type_=sa.TEXT(),
existing_nullable=True)
op.alter_column('film_segment', 'notes',
existing_type=sa.String(),
type_=sa.TEXT(),
existing_nullable=True)
op.alter_column('film_segment', 'last_frame',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'last_changed',
existing_type=sa.DateTime(),
type_=postgresql.TIMESTAMP(timezone=True),
existing_nullable=True)
op.alter_column('film_segment', 'last_cbd',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'instrument_type',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'id',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=False,
autoincrement=True,
existing_server_default=sa.text("nextval('film_segment_id_seq'::regclass)"))
op.alter_column('film_segment', 'flight',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'first_frame',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'first_cbd',
existing_type=sa.Integer(),
type_=sa.BIGINT(),
existing_nullable=True)
op.alter_column('film_segment', 'dataset',
existing_type=sa.VARCHAR(length=100),
nullable=False)
# ### end Alembic commands ###
| 41.124638 | 91 | 0.570553 | 1,417 | 14,188 | 5.415667 | 0.067749 | 0.103206 | 0.12197 | 0.119625 | 0.955173 | 0.950743 | 0.948006 | 0.948006 | 0.935627 | 0.913344 | 0 | 0.010497 | 0.315125 | 14,188 | 344 | 92 | 41.244186 | 0.779253 | 0.02051 | 0 | 0.954128 | 0 | 0 | 0.13088 | 0.005772 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006116 | false | 0.006116 | 0.009174 | 0 | 0.015291 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
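Auto-generated migrations like the one above are normally created and applied from the command line (alembic revision --autogenerate, alembic upgrade head, or the Flask-Migrate wrappers) rather than edited by hand. For completeness, a hedged sketch of the programmatic equivalent, assuming a standard alembic.ini in the working directory:

from alembic import command
from alembic.config import Config

alembic_cfg = Config('alembic.ini')             # assumption: standard Alembic project layout
command.upgrade(alembic_cfg, 'head')            # apply up to and including 3f0b987d79ff
command.downgrade(alembic_cfg, '8415ee5e38b2')  # roll back to the previous revision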
b409ae22cde415c6bd8450fe544b4baa3687056b | 20,835 | py | Python | spinnaker_swagger_client/api/v_2_canary_config_controller_api.py | coveooss/spinnaker_python_client | 6f5ae436798cb4985ada65cd8169fcc9494d048f | [
"Apache-2.0"
] | null | null | null | spinnaker_swagger_client/api/v_2_canary_config_controller_api.py | coveooss/spinnaker_python_client | 6f5ae436798cb4985ada65cd8169fcc9494d048f | [
"Apache-2.0"
] | null | null | null | spinnaker_swagger_client/api/v_2_canary_config_controller_api.py | coveooss/spinnaker_python_client | 6f5ae436798cb4985ada65cd8169fcc9494d048f | [
"Apache-2.0"
] | 2 | 2019-10-17T07:49:21.000Z | 2021-08-10T23:12:41.000Z | # coding: utf-8
"""
Spinnaker API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from spinnaker_swagger_client.api_client import ApiClient
class V2CanaryConfigControllerApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_canary_config_using_post(self, config, **kwargs): # noqa: E501
"""Create a canary configuration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_canary_config_using_post(config, async_req=True)
>>> result = thread.get()
:param async_req bool
:param object config: config (required)
:param str configuration_account_name: configurationAccountName
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_canary_config_using_post_with_http_info(config, **kwargs) # noqa: E501
else:
(data) = self.create_canary_config_using_post_with_http_info(config, **kwargs) # noqa: E501
return data
def create_canary_config_using_post_with_http_info(self, config, **kwargs): # noqa: E501
"""Create a canary configuration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_canary_config_using_post_with_http_info(config, async_req=True)
>>> result = thread.get()
:param async_req bool
:param object config: config (required)
:param str configuration_account_name: configurationAccountName
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['config', 'configuration_account_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_canary_config_using_post" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'config' is set
if ('config' not in params or
params['config'] is None):
raise ValueError("Missing the required parameter `config` when calling `create_canary_config_using_post`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'configuration_account_name' in params:
query_params.append(('configurationAccountName', params['configuration_account_name'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'config' in params:
body_params = params['config']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v2/canaryConfig', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
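# Note on the generated call pattern (the same shape repeats for every endpoint in
# this class): the public method forces _return_http_data_only=True and delegates to
# its *_with_http_info twin; passing async_req=True to either returns a worker thread
# whose .get() yields the deserialized response instead of blocking.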
def delete_canary_config_using_delete(self, id, **kwargs): # noqa: E501
"""Delete a canary configuration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_canary_config_using_delete(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: id (required)
:param str configuration_account_name: configurationAccountName
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_canary_config_using_delete_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.delete_canary_config_using_delete_with_http_info(id, **kwargs) # noqa: E501
return data
def delete_canary_config_using_delete_with_http_info(self, id, **kwargs): # noqa: E501
"""Delete a canary configuration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_canary_config_using_delete_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: id (required)
:param str configuration_account_name: configurationAccountName
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'configuration_account_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_canary_config_using_delete" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `delete_canary_config_using_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
if 'configuration_account_name' in params:
query_params.append(('configurationAccountName', params['configuration_account_name'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v2/canaryConfig/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_canary_config_using_get(self, id, **kwargs): # noqa: E501
"""Retrieve a canary configuration by id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_canary_config_using_get(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: id (required)
:param str configuration_account_name: configurationAccountName
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_canary_config_using_get_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.get_canary_config_using_get_with_http_info(id, **kwargs) # noqa: E501
return data
def get_canary_config_using_get_with_http_info(self, id, **kwargs): # noqa: E501
"""Retrieve a canary configuration by id # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_canary_config_using_get_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: id (required)
:param str configuration_account_name: configurationAccountName
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'configuration_account_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_canary_config_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_canary_config_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
if 'configuration_account_name' in params:
query_params.append(('configurationAccountName', params['configuration_account_name'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v2/canaryConfig/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_canary_configs_using_get(self, **kwargs): # noqa: E501
"""Retrieve a list of canary configurations # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_canary_configs_using_get(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str application: application
:param str configuration_account_name: configurationAccountName
:return: list[object]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_canary_configs_using_get_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_canary_configs_using_get_with_http_info(**kwargs) # noqa: E501
return data
def get_canary_configs_using_get_with_http_info(self, **kwargs): # noqa: E501
"""Retrieve a list of canary configurations # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_canary_configs_using_get_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str application: application
:param str configuration_account_name: configurationAccountName
:return: list[object]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['application', 'configuration_account_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_canary_configs_using_get" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'application' in params:
query_params.append(('application', params['application'])) # noqa: E501
if 'configuration_account_name' in params:
query_params.append(('configurationAccountName', params['configuration_account_name'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v2/canaryConfig', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[object]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_canary_config_using_put(self, config, id, **kwargs): # noqa: E501
"""Update a canary configuration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_canary_config_using_put(config, id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param object config: config (required)
:param str id: id (required)
:param str configuration_account_name: configurationAccountName
:return: object
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_canary_config_using_put_with_http_info(config, id, **kwargs) # noqa: E501
else:
(data) = self.update_canary_config_using_put_with_http_info(config, id, **kwargs) # noqa: E501
return data
def update_canary_config_using_put_with_http_info(self, config, id, **kwargs): # noqa: E501
"""Update a canary configuration # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_canary_config_using_put_with_http_info(config, id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param object config: config (required)
:param str id: id (required)
:param str configuration_account_name: configurationAccountName
:return: object
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['config', 'id', 'configuration_account_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_canary_config_using_put" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'config' is set
if ('config' not in params or
params['config'] is None):
raise ValueError("Missing the required parameter `config` when calling `update_canary_config_using_put`") # noqa: E501
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `update_canary_config_using_put`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
if 'configuration_account_name' in params:
query_params.append(('configurationAccountName', params['configuration_account_name'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'config' in params:
body_params = params['config']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['*/*']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/v2/canaryConfig/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='object', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
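# --- Hedged usage sketch (not part of the generated file) ---
# A minimal example of driving this controller, assuming the package re-exports
# ApiClient/Configuration the way typical swagger-codegen output does; the Gate host
# and payload below are hypothetical placeholders.
#
# from spinnaker_swagger_client import ApiClient, Configuration
# from spinnaker_swagger_client.api.v_2_canary_config_controller_api import V2CanaryConfigControllerApi
#
# cfg = Configuration()
# cfg.host = "https://gate.example.com"  # hypothetical Gate endpoint
# api = V2CanaryConfigControllerApi(ApiClient(cfg))
#
# created = api.create_canary_config_using_post({"name": "my-canary"})  # synchronous
# thread = api.get_canary_configs_using_get(async_req=True)             # asynchronous
# configs = thread.get()                                                # join and read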
| 39.163534 | 132 | 0.619486 | 2,362 | 20,835 | 5.181626 | 0.068163 | 0.045102 | 0.045837 | 0.029414 | 0.95596 | 0.939701 | 0.928017 | 0.911512 | 0.893374 | 0.883732 | 0 | 0.015038 | 0.291433 | 20,835 | 531 | 133 | 39.237288 | 0.813994 | 0.317975 | 0 | 0.803509 | 1 | 0 | 0.194425 | 0.092377 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038596 | false | 0 | 0.014035 | 0 | 0.108772 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b409fd4792839a18253c3032cf09d53ebebee586 | 59,704 | py | Python | unrolled-lutnet/lutnet/h5py-2-hls/CIFAR_10/h52header_5lut_sparse.py | awai54st/LUTNet | 81b044f31d1131bee1a7fae41fc4d2fb102ea73a | [
"BSD-2-Clause"
] | 38 | 2019-10-28T10:06:33.000Z | 2022-02-21T21:38:39.000Z | unrolled-lutnet/lutnet/h5py-2-hls/CIFAR_10/h52header_5lut_sparse.py | awai54st/LUTNet | 81b044f31d1131bee1a7fae41fc4d2fb102ea73a | [
"BSD-2-Clause"
] | null | null | null | unrolled-lutnet/lutnet/h5py-2-hls/CIFAR_10/h52header_5lut_sparse.py | awai54st/LUTNet | 81b044f31d1131bee1a7fae41fc4d2fb102ea73a | [
"BSD-2-Clause"
] | 13 | 2019-10-28T10:17:48.000Z | 2021-08-10T21:37:11.000Z | import h5py
import numpy as np
def SignNumpy(x):
return np.greater(x,0)
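# e.g. SignNumpy(np.array([-0.3, 0.0, 0.7])) -> array([False, False, True]);
# downstream code casts these booleans to 0/1 bits with int().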
# convert a fully connected binarized layer plus batch normalization into
# the simplified form (binary weight and positive threshold)
# note that the neurons are assumed to be in the columns of the weight
# matrix
def makeBNComplex(after_bn_thres, fanin, beta, gamma, mean, invstd, use_rowmajor=False, usePopCount=True):
outs = fanin.shape[0]
print ("Extracting FCBN complex, outs = %d" % (outs))
# we'll fill in the binarized weights and thresholds iteratively
# w_bin = range(ins*outs)
thresholds = list(range(outs))  # materialise as a list: a range object is not item-assignable under Python 3
for neuron in range(outs):
# compute a preliminary threshold from the batchnorm parameters
thres = mean[neuron] + ((after_bn_thres - beta[neuron]) / (abs(gamma[neuron]*invstd[neuron])+1e-4))
need_flip = 0
# ensure all neurons activate on the "positive" side, so we can use
# greater-than-threshold activation
# if gamma[neuron]*invstd[neuron] < 0:
# need_flip = 1
# thres = -thres
# if thres > 32767:
# thres = 32767
# if thres < -32768:
# thres = -32768
# turn threshold into "number of 1s" (popcount) instead of signed sum
if usePopCount:
#thresholds[neuron] = int((fanin[neuron] + thres) / 2)
thresholds[neuron] = (fanin[neuron] + thres) / 2
else:
thresholds[neuron] = thres
# # binarize the synapses
# for synapse in range(ins):
# # note how we change from col major to row major if requested
# dest_ind = neuron*ins+synapse if use_rowmajor else synapse*outs+neuron
# if need_flip:
# w_bin[dest_ind] = binarize(-weights[synapse][neuron])
# else:
# w_bin[dest_ind] = binarize(weights[synapse][neuron])
# # reshape the output as desired
# if use_rowmajor:
# w_bin = np.asarray(w_bin).reshape((outs, ins))
# else:
# w_bin = np.asarray(w_bin).reshape((ins, outs))
#return (w_bin, thresholds)
return thresholds
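# Hedged worked example of the mapping above (illustrative numbers only): with
# after_bn_thres = 0, beta = 0.5, gamma = 2.0, mean = 1.0, invstd = 0.25 and fanin = 9,
#   thres = 1.0 + (0 - 0.5) / (|2.0 * 0.25| + 1e-4) ~= 0.0002
# and with usePopCount=True the stored threshold is (9 + 0.0002) / 2 ~= 4.5, i.e. the
# neuron activates once more than ~4.5 of its 9 binary inputs are set.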
# binarize and pack convolutional layer weights into a matrix and compute
# thresholds from the conv bias and batchnorm parameters
def makeConvBNComplex(fanin, beta, gamma, mean, invstd, interleaveChannels=False, usePopCount=True):
numOut = fanin.shape[0]
print ("Extracting conv-BN complex, OFM=%d" % (numOut))
# the fanin is used to ensure positive-only threshold
# w_bin = range(numOut * numIn * k * k)
# one threshold per output channel
thresholds = list(range(numOut))  # materialise as a list: a range object is not item-assignable under Python 3
# dest_ind = 0
# we'll fill in the binarized weights and thresholds iteratively
for neuron in range(numOut):
# compute a preliminary threshold from the batchnorm parameters,
# subtracting the conv bias from the batchnorm mean
thres = mean[neuron] - (beta[neuron] / (gamma[neuron]*invstd[neuron]))
# need_flip = 0
# ensure all neurons activate on the "positive" side, so we can use
# greater-than-threshold activation
if gamma[neuron]*invstd[neuron] < 0:
# need_flip = 1
thres = -thres
# turn threshold into "number of 1s" (popcount) instead of signed sum
if usePopCount:
thresholds[neuron] = int((fanin[neuron] + thres) / 2)
else:
thresholds[neuron] = thres
# # go through each weight of each convolutional kernel
# if interleaveChannels:
# for ky in range(k):
# for kx in range(k):
# for ifm in range(numIn):
# f = -1 if need_flip else +1
# w_bin[dest_ind] = binarize(f*weights[neuron][ifm][ky][kx])
# dest_ind += 1
# else:
# for ifm in range(numIn):
# for ky in range(k):
# for kx in range(k):
# f = -1 if need_flip else +1
# w_bin[dest_ind] = binarize(f*weights[neuron][ifm][ky][kx])
# dest_ind += 1
#
# # reshape the output as desired
# w_bin = np.asarray(w_bin).reshape((numOut, fanin))
# return (w_bin, thresholds)
return thresholds
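# Unlike makeBNComplex above, this conv variant has no after_bn_thres offset, folds a
# negative gamma*invstd into the threshold by negating thres, and truncates the
# popcount threshold to an integer with int().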
if __name__ == "__main__":
print("Loading the pretrained parameters...")
bl = h5py.File("pretrained_network_5lut.h5", 'r')
#bl = h5py.File("dummy.h5", 'r')
# init model parameter lists
batch_norm_eps=1e-4
weights = []
gammas = []
means = []
pruning_masks = []
rand_maps = []
bn_betas = []
bn_gammas = []
bn_means = []
bn_inv_stds = []
# conv layer 1
bl_w1 = np.array(bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["Variable_1:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["rand_map_0:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_conv_1"]["binary_conv_1"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_1"]["batch_normalization_1"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_1"]["residual_sign_1"]["means:0"]
##Pruning
#bl_w1 = bl_w1 * np.reshape(bl_pruning_mask, (bl_w1.shape))
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights = [w_lut]
#gammas = [gammas, bl_gamma]
gammas=[bl_gamma]
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks=[bl_pruning_mask]
#rand_maps = [rand_maps, bl_rand_map]
rand_maps=[bl_rand_map]
#means = [means, bl_means]
means=[bl_means]
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas=[bl_bn_beta]
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas=[bl_bn_gamma]
#bn_means = [bn_means, bl_bn_mean]
bn_means=[bl_bn_mean]
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds=[bl_bn_inv_std]
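# The extraction block above is repeated almost verbatim for every remaining layer:
# each layer appends its weight tensor(s), pruning mask, random input map(s),
# residual-sign means and batch-norm statistics to the per-layer lists. Only
# binary_conv_6 (the unrolled 5-LUT layer) loads all 64 LUT weight tensors; the other
# layers keep just Variable_1:0 and leave the remaining LUT slots commented out.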
# conv layer 2
bl_w1 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["rand_map_0:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_conv_2"]["binary_conv_2"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_2"]["batch_normalization_2"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_2"]["residual_sign_2"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# conv layer 3
bl_w1 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["rand_map_0:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_conv_3"]["binary_conv_3"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_3"]["batch_normalization_3"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_3"]["residual_sign_3"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# conv layer 4
bl_w1 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["rand_map_0:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_conv_4"]["binary_conv_4"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_4"]["batch_normalization_4"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_4"]["residual_sign_4"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# conv layer 5
bl_w1 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["rand_map_0:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_conv_5"]["binary_conv_5"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_5"]["batch_normalization_5"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_5"]["residual_sign_5"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# conv layer 6
bl_w1 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_1:0"])
bl_w2 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_2:0"])
bl_w3 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_3:0"])
bl_w4 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_4:0"])
bl_w5 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_5:0"])
bl_w6 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_6:0"])
bl_w7 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_7:0"])
bl_w8 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_8:0"])
bl_w9 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_9:0"])
bl_w10 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_10:0"])
bl_w11 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_11:0"])
bl_w12 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_12:0"])
bl_w13 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_13:0"])
bl_w14 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_14:0"])
bl_w15 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_15:0"])
bl_w16 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_16:0"])
bl_w17 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_17:0"])
bl_w18 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_18:0"])
bl_w19 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_19:0"])
bl_w20 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_20:0"])
bl_w21 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_21:0"])
bl_w22 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_22:0"])
bl_w23 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_23:0"])
bl_w24 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_24:0"])
bl_w25 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_25:0"])
bl_w26 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_26:0"])
bl_w27 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_27:0"])
bl_w28 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_28:0"])
bl_w29 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_29:0"])
bl_w30 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_30:0"])
bl_w31 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_31:0"])
bl_w32 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_32:0"])
bl_w33 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_33:0"])
bl_w34 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_34:0"])
bl_w35 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_35:0"])
bl_w36 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_36:0"])
bl_w37 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_37:0"])
bl_w38 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_38:0"])
bl_w39 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_39:0"])
bl_w40 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_40:0"])
bl_w41 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_41:0"])
bl_w42 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_42:0"])
bl_w43 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_43:0"])
bl_w44 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_44:0"])
bl_w45 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_45:0"])
bl_w46 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_46:0"])
bl_w47 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_47:0"])
bl_w48 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_48:0"])
bl_w49 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_49:0"])
bl_w50 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_50:0"])
bl_w51 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_51:0"])
bl_w52 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_52:0"])
bl_w53 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_53:0"])
bl_w54 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_54:0"])
bl_w55 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_55:0"])
bl_w56 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_56:0"])
bl_w57 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_57:0"])
bl_w58 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_58:0"])
bl_w59 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_59:0"])
bl_w60 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_60:0"])
bl_w61 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_61:0"])
bl_w62 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_62:0"])
bl_w63 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_63:0"])
bl_w64 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable_64:0"])
bl_rand_map_0 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["rand_map_0:0"])
bl_rand_map_1 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["rand_map_1:0"])
bl_rand_map_2 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["rand_map_2:0"])
bl_rand_map_3 = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["rand_map_3:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_conv_6"]["binary_conv_6"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_6"]["batch_normalization_6"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_6"]["residual_sign_6"]["means:0"]
w_lut = [w * bl_pruning_mask for w in (
    bl_w1, bl_w2, bl_w3, bl_w4, bl_w5, bl_w6, bl_w7, bl_w8,
    bl_w9, bl_w10, bl_w11, bl_w12, bl_w13, bl_w14, bl_w15, bl_w16,
    bl_w17, bl_w18, bl_w19, bl_w20, bl_w21, bl_w22, bl_w23, bl_w24,
    bl_w25, bl_w26, bl_w27, bl_w28, bl_w29, bl_w30, bl_w31, bl_w32,
    bl_w33, bl_w34, bl_w35, bl_w36, bl_w37, bl_w38, bl_w39, bl_w40,
    bl_w41, bl_w42, bl_w43, bl_w44, bl_w45, bl_w46, bl_w47, bl_w48,
    bl_w49, bl_w50, bl_w51, bl_w52, bl_w53, bl_w54, bl_w55, bl_w56,
    bl_w57, bl_w58, bl_w59, bl_w60, bl_w61, bl_w62, bl_w63, bl_w64)]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
bl_rand_map = [bl_rand_map_0, bl_rand_map_1, bl_rand_map_2, bl_rand_map_3]
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# dense layer 1
bl_w1 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["rand_map:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_dense_1"]["binary_dense_1"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_7"]["batch_normalization_7"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_7"]["residual_sign_7"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# dense layer 2
bl_w1 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["rand_map:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_dense_2"]["binary_dense_2"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_8"]["batch_normalization_8"]["moving_variance:0"])+batch_norm_eps)
bl_means = bl["model_weights"]["residual_sign_8"]["residual_sign_8"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
means.extend([bl_means])
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
# dense layer 3
bl_w1 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_1:0"])
#bl_w2 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_2:0"])
#bl_w3 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_3:0"])
#bl_w4 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_4:0"])
#bl_w5 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_5:0"])
#bl_w6 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_6:0"])
#bl_w7 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_7:0"])
#bl_w8 = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable_8:0"])
bl_rand_map = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["rand_map:0"])
bl_pruning_mask = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["pruning_mask:0"]).reshape(bl_w1.shape)
bl_gamma = np.array(bl["model_weights"]["binary_dense_3"]["binary_dense_3"]["Variable:0"])
bl_bn_beta = np.array(bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["beta:0"])
bl_bn_gamma = np.array(bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["gamma:0"])
bl_bn_mean = np.array(bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["moving_mean:0"])
bl_bn_inv_std = 1/np.sqrt(np.array(bl["model_weights"]["batch_normalization_9"]["batch_normalization_9"]["moving_variance:0"])+batch_norm_eps)
#bl_means = bl["model_weights"]["residual_sign_9"]["residual_sign_9"]["means:0"]
#w_lut = [bl_w1*bl_pruning_mask, bl_w2*bl_pruning_mask, bl_w3*bl_pruning_mask, bl_w4*bl_pruning_mask, bl_w5*bl_pruning_mask, bl_w6*bl_pruning_mask, bl_w7*bl_pruning_mask, bl_w8*bl_pruning_mask]
w_lut = [bl_w1]
#weights = [weights, w_lut]
weights.extend([w_lut])
#gammas = [gammas, bl_gamma]
gammas.extend([bl_gamma])
#pruning_masks = [pruning_masks, bl_pruning_mask]
pruning_masks.extend([bl_pruning_mask])
#rand_maps = [rand_maps, bl_rand_map]
rand_maps.extend([bl_rand_map])
#means = [means, bl_means]
#means.extend(bl_means)
#bn_betas = [bn_betas, bl_bn_beta]
bn_betas.extend([bl_bn_beta])
#bn_gammas = [bn_gammas, bl_bn_gamma]
bn_gammas.extend([bl_bn_gamma])
#bn_means = [bn_means, bl_bn_mean]
bn_means.extend([bl_bn_mean])
#bn_inv_stds = [bn_inv_stds, bl_bn_inv_std]
bn_inv_stds.extend([bl_bn_inv_std])
print("Binarizing the pretrained parameters...")
# Binarize the weights
weights[0][0] = SignNumpy(weights[0][0])
for i in range(1,9):
if i==5:
for j in range(64):
weights[i][j] = SignNumpy(weights[i][j])
else:
for j in range(1):
weights[i][j] = SignNumpy(weights[i][j])
# write header file
with open('../src/weights.h', 'w') as f:
f.write('#pragma once\n')
with open('../src/weights.h', 'a') as f:
f.write('//Generated weights for CIFAR-10\n')
for layer_id in range(9):
# generate weights
if layer_id!=5: # every layer except the unrolled 5-LUT layer (layer 6) uses a single binary weight tensor (the first layer additionally takes fxp inputs)
weights_per_act = 1
else:
weights_per_act = 64 # weights_per_act = #_of_bits_per_act x 2 ^ #_of_lut_inputs
dims = np.shape(weights[layer_id][0])
if len(dims)==2:
layer_type = "fc"
word_length = dims[0]
nfilters = dims[1]
elif len(dims)==4:
layer_type = "conv"
word_length = dims[0]*dims[1]*dims[2]
nfilters = dims[3]
# for weight_id in range(weights_per_act):
# mat = weights[layer_id][weight_id]
# if layer_type=="fc":
# mat_flat = mat.transpose(1,0).flatten()
# elif layer_type=="conv":
# mat_flat = mat.transpose(3,0,1,2).flatten()
# else:
# print("unknown weight format!")
#
# with open('../src/weights.h', 'a') as f:
# f.write('//Array shape: {}\n'.format(dims))
# fold = (word_length-1)/32 + 1
# f.write("const ap_uint<32> " + "weights_" + layer_type + str(layer_id+1) + "_" + str(weight_id+1) + "["+str(nfilters*fold) + "] = {")
# bin_append = 0
# for i, ele in enumerate(mat_flat):
# #bin_append = (bin_append << 1) | (int(ele) # left-first bit-push
# bin_append = bin_append | (int(ele) << (i % word_length)) # right-first bit-push
# if (i % word_length == (word_length - 1)):
# mask = 0xFFFFFFFF
# for i_32b in range(fold):
# #word = (bin_append>>(32*(fold-i_32b-1))) & mask # Big-endian: left-first word-push
# word = (bin_append>>(32*i_32b)) & mask # Little-endian: right-first word-push
# hex_word = '%X' % word
# if i_32b!=0:
# f.write(', ')
# f.write('0x' + hex_word)
# bin_append = 0
# if i != nfilters*word_length-1:
# f.write(', ')
# f.write('};\n')
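# Small worked example of the (commented-out) reference packer above, assuming
# word_length = 4 and one filter with flattened bits [1, 0, 1, 1]:
#   bin_append = 1<<0 | 0<<1 | 1<<2 | 1<<3 = 0b1101 = 0xD   (right-first bit-push)
# so the emitted word would be "0xD"; for word_length > 32 the fold loop emits
# ceil(word_length/32) 32-bit words, least-significant word first.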
if layer_id==5:
# emit the LUTARRAY Verilog source directly: synthesising this array through Vivado HLS would take forever
with open('../src/LUTARRAY_b0_' + str(layer_id) + '.v', 'w') as v0:
v0.write('`timescale 1 ns / 1 ps\n\n')
v0.write('module LUTARRAY_b0 (\n in_V,\n in_1_V,\n in_2_V,\n in_3_V,\n in_4_V')
for tm in range(nfilters):
v0.write(',\n ap_return_' + str(tm))
v0.write(');\n\n')
with open('../src/LUTARRAY_b1_' + str(layer_id) + '.v', 'w') as v1:
v1.write('`timescale 1 ns / 1 ps\n\n')
v1.write('module LUTARRAY_b1 (\n in_V,\n in_1_V,\n in_2_V,\n in_3_V,\n in_4_V')
for tm in range(nfilters):
v1.write(',\n ap_return_' + str(tm))
v1.write(');\n\n')
mat_flat = []
for weight_id in range(weights_per_act):
mat = weights[layer_id][weight_id]
pm = pruning_masks[layer_id]#.transpose(3,0,1,2).flatten()
if layer_type=="fc":
mat = mat.transpose(1,0)
pm_flat = pm.transpose(1,0)
elif layer_type=="conv":
mat = mat.transpose(3,0,1,2).reshape((nfilters, -1))
pm_flat = pm.transpose(3,0,1,2).reshape((nfilters, -1))
else:
print("unknown weight format!")
mat_flat.extend([mat])
with open('../src/LUTARRAY_b0_' + str(layer_id) + '.v', 'a') as v0:
v0.write('\n\n')
v0.write('input [' + str(word_length-1) + ':0] in_V;\n')
v0.write('input [' + str(word_length-1) + ':0] in_1_V;\n')
v0.write('input [' + str(word_length-1) + ':0] in_2_V;\n')
v0.write('input [' + str(word_length-1) + ':0] in_3_V;\n')
v0.write('input [' + str(word_length-1) + ':0] in_4_V;\n')
for tm in range(nfilters):
v0.write('output [' + str(word_length-1) + ':0] ap_return_' + str(tm) + ';\n')
for tm in range(nfilters):
for ti, ele in enumerate(pm_flat[tm]):
if ele==1:
v0.write('wire tmp_' + str(tm) + '_' + str(ti) + ';\n')
v0.write('assign tmp_' + str(tm) + '_' + str(ti) + ' = ')
v0.write('(' + str(int(mat_flat[32][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[33][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[34][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[35][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[36][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[37][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[38][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[39][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[40][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[41][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[42][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[43][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[44][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[45][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[46][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[47][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[48][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[49][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[50][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[51][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[52][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[53][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[54][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[55][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[56][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[57][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[58][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[59][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[60][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[61][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[62][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v0.write('(' + str(int(mat_flat[63][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']);\n ')
v0.write('assign ap_return_' + str(tm) + ' = {')
for ti, ele in enumerate(pm_flat[tm]):
if ele == 0:
v0.write("1'b0")
elif ele == 1:
v0.write('tmp_' + str(tm) + '_' + str(ti))
else:
print("pruning mask elements must be binary!")
if ti != word_length-1:
v0.write(',')
else:
v0.write('};\n')
v0.write('endmodule')
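# In the emitted netlist each unpruned weight position ti of output channel tm becomes
# a wire tmp_<tm>_<ti> formed as an OR of 32 AND terms, one per truth-table entry:
# LUTARRAY_b0 consumes mat_flat[32..63], LUTARRAY_b1 (appended below) consumes
# mat_flat[0..31], and pruned positions are tied to 1'b0 in the ap_return_<tm>
# concatenation.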
with open('../src/LUTARRAY_b1_' + str(layer_id) + '.v', 'a') as v1:
v1.write('\n\n')
v1.write('input [' + str(word_length-1) + ':0] in_V;\n')
v1.write('input [' + str(word_length-1) + ':0] in_1_V;\n')
v1.write('input [' + str(word_length-1) + ':0] in_2_V;\n')
v1.write('input [' + str(word_length-1) + ':0] in_3_V;\n')
v1.write('input [' + str(word_length-1) + ':0] in_4_V;\n')
for tm in range(nfilters):
v1.write('output [' + str(word_length-1) + ':0] ap_return_' + str(tm) + ';\n')
for tm in range(nfilters):
for ti, ele in enumerate(pm_flat[tm]):
if ele==1:
v1.write('wire tmp_' + str(tm) + '_' + str(ti) + ';\n')
v1.write('assign tmp_' + str(tm) + '_' + str(ti) + ' = ')
v1.write('(' + str(int(mat_flat[0][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[1][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[2][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[3][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[4][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[5][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[6][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[7][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[8][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[9][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[10][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[11][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[12][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[13][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[14][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[15][tm][ti])) + ' & in_V[' + str(ti) + '] & in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[16][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[17][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[18][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[19][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[20][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[21][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[22][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[23][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[24][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[25][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[26][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[27][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[28][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[29][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[30][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']) | ')
v1.write('(' + str(int(mat_flat[31][tm][ti])) + ' & in_V[' + str(ti) + '] & ~in_1_V[' + str(ti) + '] & ~in_2_V[' + str(ti) + '] & ~in_3_V[' + str(ti) + '] & ~in_4_V[' + str(ti) + ']);\n ')
v1.write('assign ap_return_' + str(tm) + ' = {')
for ti, ele in enumerate(pm_flat[tm]):
if ele == 0:
v1.write("1'b0")
elif ele == 1:
v1.write('tmp_' + str(tm) + '_' + str(ti))
else:
print("pruning mask elements must be binary!")
if ti != word_length-1:
v1.write(',')
else:
v1.write('};\n')
v1.write('endmodule')
# generate pruning mask (first layer only)
if layer_id==0:
pruning_mask_flat = pruning_masks[layer_id].transpose(3,0,1,2).flatten()
with open('../src/weights.h', 'a') as f:
fold = (word_length-1)//32 + 1 # number of 32-bit words per filter; integer division keeps fold an int (needed for range(fold))
f.write("const ap_uint<32> " + "pruning_mask_" + layer_type + str(layer_id+1) + "_" + str(1) + "["+str(nfilters*fold) + "] = {")
bin_append = 0
for i, ele in enumerate(pruning_mask_flat):
#bin_append = (bin_append << 1) | int(ele) # left-first bit-push
bin_append = bin_append | (int(ele) << (i % word_length)) # right-first bit-push
if (i % word_length == (word_length - 1)):
mask = 0xFFFFFFFF
for i_32b in range(fold):
#word = (bin_append>>(32*(fold-i_32b-1))) & mask # Big-endian: left-first word-push
word = (bin_append>>(32*i_32b)) & mask # Little-endian: right-first word-push
hex_word = '%X' % word
if i_32b!=0:
f.write(', ')
f.write('0x' + hex_word)
bin_append = 0
if i != nfilters*word_length-1:
f.write(', ')
f.write('};\n')
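# Packing layout produced above (illustration, assuming word_length = 64 and
# hence fold = 2): mask bit i is OR-ed into position (i % word_length) of
# bin_append, so each filter's mask accumulates LSB-first; once a full
# word_length-bit group is collected it is flushed as `fold` 32-bit words in
# little-endian word order, i.e. mask bits 0-31 form the first 0x literal and
# bits 32-63 the second.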
# generate threshold
if layer_id!=8: # the last layer does not need threshold
use_popcount = not(layer_id==0)
next_means_b0 = abs(means[layer_id][0])
print(next_means_b0)
next_means_b1 = abs(means[layer_id][1])
print(next_means_b1)
if layer_type=="conv":
fanin = np.sum(pruning_masks[layer_id].reshape(-1,dims[3]),axis=0)
elif layer_type=="fc":
fanin = np.sum(pruning_masks[layer_id],axis=0)
if layer_id!=0:
fanin = fanin * abs(gammas[layer_id] * means[layer_id-1][0]) + fanin * abs(gammas[layer_id] * means[layer_id-1][1])
thresholds = np.array(makeBNComplex(0, fanin, bn_betas[layer_id], bn_gammas[layer_id], bn_means[layer_id], bn_inv_stds[layer_id], usePopCount=use_popcount))
next_means_bn_b0 = np.array(makeBNComplex(next_means_b0, fanin, bn_betas[layer_id], bn_gammas[layer_id], bn_means[layer_id], bn_inv_stds[layer_id], usePopCount=use_popcount)) - thresholds
with open('../src/weights.h', 'a') as f:
f.write("const ap_fixed<24, 16> " + "thresh_" + layer_type + str(layer_id+1) + "["+str(len(thresholds))+"] = {")
for i, ele in enumerate(thresholds):
if i == 0:
f.write(str(ele))
else:
f.write(','+ str(ele))
f.write('};\n')
f.write("const ap_fixed<24, 16> " + "next_layer_means_" + layer_type + str(layer_id+1) + "["+str(len(next_means_bn_b0))+"] = {")
for i, ele in enumerate(next_means_bn_b0):
if i == 0:
f.write(str(ele))
else:
f.write(','+ str(ele))
f.write('};\n')
# # generate next layer mean
# if layer_id!=8:
# with open('../src/weights.h', 'a') as f:
# next_means_b0 = abs(means[layer_id][0])
# next_means_b1 = abs(means[layer_id][1])
# f.write("const ap_fixed<24, 16> " + "next_layer_means_" + layer_type + str(layer_id+1) + "[2] = {")
# f.write(str(next_means_b0))
# f.write(','+ str(next_means_b1))
# f.write('};\n')
# generate random map
for j in range(4):
with open('../src/weights.h', 'a') as f:
rand_map = rand_maps[layer_id][j].flatten().astype(np.uint32)
f.write("const unsigned int " + "rand_map_" + str(j) + "_" + layer_type + str(layer_id+1) + "["+str(len(rand_map))+"] = {")
for i, ele in enumerate(rand_map):
if i == 0:
f.write(str(ele))
else:
f.write(','+ str(ele))
f.write('};\n')
# generate alpha
with open('../src/weights.h', 'a') as f:
if layer_id!=0:
alpha_b0 = abs(gammas[layer_id] * means[layer_id-1][0])
alpha_b1 = abs(gammas[layer_id] * means[layer_id-1][1])
f.write("const ap_fixed<24, 16> " + "alpha_" + layer_type + str(layer_id+1) + "[2] = {")
f.write(str(alpha_b0))
f.write(','+ str(alpha_b1))
f.write('};\n')
else:
alpha_b0 = abs(gammas[layer_id])
f.write("const ap_fixed<24, 16> " + "alpha_" + layer_type + str(layer_id+1) + "[1] = {")
f.write(str(alpha_b0))
f.write('};\n')
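# At this point weights.h has received the packed pruning_mask_* words (first
# layer only), the thresh_* and next_layer_means_* arrays derived from the
# batch-norm statistics (all layers except the last), the four rand_map_*
# index tables, and the alpha_* scaling constants for this layer.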
| 68.46789 | 1,547 | 0.575824 | 9,173 | 59,704 | 3.385697 | 0.043933 | 0.052484 | 0.061822 | 0.065943 | 0.85005 | 0.835335 | 0.828477 | 0.81637 | 0.792543 | 0.782786 | 0 | 0.041989 | 0.220957 | 59,704 | 871 | 1,548 | 68.546498 | 0.625731 | 0.23444 | 0 | 0.284069 | 0 | 0 | 0.295004 | 0.033867 | 0 | 0 | 0.00022 | 0 | 0 | 1 | 0.005758 | false | 0 | 0.003839 | 0.001919 | 0.015355 | 0.017274 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c300ed7f05b77c00aee50e6f3009aeac7239d61e | 5,131 | py | Python | tests/serializer/test_faust_serializer_clean_payload.py | oscarjohansson94/python-schema-registry-client | 7ed99003a200d167e8a8bd66f34af8d51ad1de1d | [
"MIT"
] | 95 | 2019-05-20T06:59:06.000Z | 2022-03-01T05:30:57.000Z | tests/serializer/test_faust_serializer_clean_payload.py | oscarjohansson94/python-schema-registry-client | 7ed99003a200d167e8a8bd66f34af8d51ad1de1d | [
"MIT"
] | 94 | 2019-05-19T18:36:29.000Z | 2022-03-30T18:54:52.000Z | tests/serializer/test_faust_serializer_clean_payload.py | oscarjohansson94/python-schema-registry-client | 7ed99003a200d167e8a8bd66f34af8d51ad1de1d | [
"MIT"
] | 41 | 2019-05-20T06:59:33.000Z | 2022-03-06T16:09:53.000Z | import typing
from faust import Record
from schema_registry.serializers import faust as serializer
class DummyRecord(Record):
item: typing.Any
def test_avro_simple_record(client, avro_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustSerializer(client, schema_subject, avro_country_schema)
result = {"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"}
dummy = DummyRecord("test")
assert result == faust_serializer.clean_payload(dummy)
def test_avro_nested_record(client, avro_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustSerializer(client, schema_subject, avro_country_schema)
result = {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": {"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"},
}
dummy = DummyRecord(DummyRecord("test"))
assert result == faust_serializer.clean_payload(dummy)
def test_avro_list_of_records(client, avro_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustSerializer(client, schema_subject, avro_country_schema)
result = {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": [
{"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"},
{"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"},
],
}
dummy = DummyRecord([DummyRecord("test"), DummyRecord("test")])
assert result == faust_serializer.clean_payload(dummy)
def test_avro_map_of_records(client, avro_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustSerializer(client, schema_subject, avro_country_schema)
result = {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": {
"key1": {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": "test",
},
"key2": {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": "test",
},
},
}
dummy = DummyRecord({"key1": DummyRecord("test"), "key2": DummyRecord("test")})
assert result == faust_serializer.clean_payload(dummy)
def test_json_simple_record(client, json_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustJsonSerializer(client, schema_subject, json_country_schema)
result = {"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"}
dummy = DummyRecord("test")
assert result == faust_serializer.clean_payload(dummy)
def test_json_nested_record(client, json_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustJsonSerializer(client, schema_subject, json_country_schema)
result = {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": {"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"},
}
dummy = DummyRecord(DummyRecord("test"))
assert result == faust_serializer.clean_payload(dummy)
def test_json_list_of_records(client, json_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustJsonSerializer(client, schema_subject, json_country_schema)
result = {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": [
{"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"},
{"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"}, "item": "test"},
],
}
dummy = DummyRecord([DummyRecord("test"), DummyRecord("test")])
assert result == faust_serializer.clean_payload(dummy)
def test_json_map_of_records(client, json_country_schema):
schema_subject = "test-avro-country"
faust_serializer = serializer.FaustJsonSerializer(client, schema_subject, json_country_schema)
result = {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": {
"key1": {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": "test",
},
"key2": {
"__faust": {"ns": "tests.serializer.test_faust_serializer_clean_payload.DummyRecord"},
"item": "test",
},
},
}
dummy = DummyRecord({"key1": DummyRecord("test"), "key2": DummyRecord("test")})
assert result == faust_serializer.clean_payload(dummy)
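# Taken together, these cases document the clean_payload contract: faust Record
# instances become plain dicts that keep the "__faust"/"ns" marker, and the
# conversion is applied recursively through nested records, lists of records
# and maps of records, for both the Avro and the JSON serializer variants.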
| 38.871212 | 120 | 0.689339 | 536 | 5,131 | 6.208955 | 0.070896 | 0.153245 | 0.15625 | 0.210938 | 0.949219 | 0.949219 | 0.949219 | 0.949219 | 0.949219 | 0.949219 | 0 | 0.001902 | 0.180082 | 5,131 | 131 | 121 | 39.167939 | 0.789161 | 0 | 0 | 0.736842 | 0 | 0 | 0.321575 | 0.224518 | 0 | 0 | 0 | 0 | 0.084211 | 1 | 0.084211 | false | 0 | 0.031579 | 0 | 0.136842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
5effc8b81fc684bf64dbb7917b8d8a43d450abbf | 13,748 | py | Python | venv/lib/python3.8/site-packages/spaceone/api/inventory/v1/cloud_service_pb2_grpc.py | choonho/plugin-prometheus-mon-webhook | afa7d65d12715fd0480fb4f92a9c62da2d6128e0 | [
"Apache-2.0"
] | null | null | null | venv/lib/python3.8/site-packages/spaceone/api/inventory/v1/cloud_service_pb2_grpc.py | choonho/plugin-prometheus-mon-webhook | afa7d65d12715fd0480fb4f92a9c62da2d6128e0 | [
"Apache-2.0"
] | null | null | null | venv/lib/python3.8/site-packages/spaceone/api/inventory/v1/cloud_service_pb2_grpc.py | choonho/plugin-prometheus-mon-webhook | afa7d65d12715fd0480fb4f92a9c62da2d6128e0 | [
"Apache-2.0"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2
from spaceone.api.inventory.v1 import cloud_service_pb2 as spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2
class CloudServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.create = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/create',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CreateServiceRequest.SerializeToString,
response_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
)
self.update = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/update',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.UpdateCloudServiceRequest.SerializeToString,
response_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
)
self.pin_data = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/pin_data',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.PinCloudServiceDataRequest.SerializeToString,
response_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
)
self.delete = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/delete',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceRequest.SerializeToString,
response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
)
self.get = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/get',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.GetCloudServiceRequest.SerializeToString,
response_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
)
self.list = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/list',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceQuery.SerializeToString,
response_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServicesInfo.FromString,
)
self.stat = channel.unary_unary(
'/spaceone.api.inventory.v1.CloudService/stat',
request_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceStatQuery.SerializeToString,
response_deserializer=google_dot_protobuf_dot_struct__pb2.Struct.FromString,
)
class CloudServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def create(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def update(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def pin_data(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def delete(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def get(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def list(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def stat(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_CloudServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'create': grpc.unary_unary_rpc_method_handler(
servicer.create,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CreateServiceRequest.FromString,
response_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.SerializeToString,
),
'update': grpc.unary_unary_rpc_method_handler(
servicer.update,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.UpdateCloudServiceRequest.FromString,
response_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.SerializeToString,
),
'pin_data': grpc.unary_unary_rpc_method_handler(
servicer.pin_data,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.PinCloudServiceDataRequest.FromString,
response_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.SerializeToString,
),
'delete': grpc.unary_unary_rpc_method_handler(
servicer.delete,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceRequest.FromString,
response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
),
'get': grpc.unary_unary_rpc_method_handler(
servicer.get,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.GetCloudServiceRequest.FromString,
response_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.SerializeToString,
),
'list': grpc.unary_unary_rpc_method_handler(
servicer.list,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceQuery.FromString,
response_serializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServicesInfo.SerializeToString,
),
'stat': grpc.unary_unary_rpc_method_handler(
servicer.stat,
request_deserializer=spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceStatQuery.FromString,
response_serializer=google_dot_protobuf_dot_struct__pb2.Struct.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'spaceone.api.inventory.v1.CloudService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class CloudService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def create(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/create',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CreateServiceRequest.SerializeToString,
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def update(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/update',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.UpdateCloudServiceRequest.SerializeToString,
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def pin_data(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/pin_data',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.PinCloudServiceDataRequest.SerializeToString,
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def delete(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/delete',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceRequest.SerializeToString,
google_dot_protobuf_dot_empty__pb2.Empty.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def get(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/get',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.GetCloudServiceRequest.SerializeToString,
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceInfo.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def list(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/list',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceQuery.SerializeToString,
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServicesInfo.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def stat(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/spaceone.api.inventory.v1.CloudService/stat',
spaceone_dot_api_dot_inventory_dot_v1_dot_cloud__service__pb2.CloudServiceStatQuery.SerializeToString,
google_dot_protobuf_dot_struct__pb2.Struct.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
| 51.490637 | 142 | 0.703157 | 1,414 | 13,748 | 6.368458 | 0.080622 | 0.050639 | 0.063298 | 0.06985 | 0.910827 | 0.907052 | 0.89317 | 0.843865 | 0.7809 | 0.76824 | 0 | 0.009537 | 0.229706 | 13,748 | 266 | 143 | 51.684211 | 0.840793 | 0.05819 | 0 | 0.553571 | 1 | 0 | 0.080165 | 0.052251 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.017857 | 0.03125 | 0.133929 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f4b28da3f605d4a6da1979b19291530ae45d900 | 130 | py | Python | integration/tests/error_assert_invalid_predicate_type.py | youhavethewrong/hurl | 91cc14882a5f1ef7fa86be09a9f5581cef680559 | [
"Apache-2.0"
] | 1,013 | 2020-08-27T12:38:48.000Z | 2022-03-31T23:12:23.000Z | integration/tests/error_assert_invalid_predicate_type.py | youhavethewrong/hurl | 91cc14882a5f1ef7fa86be09a9f5581cef680559 | [
"Apache-2.0"
] | 217 | 2020-08-31T11:18:10.000Z | 2022-03-30T17:50:30.000Z | integration/tests/error_assert_invalid_predicate_type.py | youhavethewrong/hurl | 91cc14882a5f1ef7fa86be09a9f5581cef680559 | [
"Apache-2.0"
] | 54 | 2020-09-02T09:41:06.000Z | 2022-03-19T15:33:05.000Z | from tests import app
@app.route("/error-assert-invalid-predicate-type")
def error_assert_invalid_predicate_type():
return ''
| 21.666667 | 50 | 0.776923 | 18 | 130 | 5.388889 | 0.666667 | 0.226804 | 0.371134 | 0.556701 | 0.639175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 130 | 5 | 51 | 26 | 0.82906 | 0 | 0 | 0 | 0 | 0 | 0.276923 | 0.276923 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 10 |
6f6a53dbea37a5b6968d06d144ff6e9c49abd57c | 29,299 | py | Python | shaphypetune/shaphypetune.py | FrancisTembo/shap-hypetune | 14279a822d09988faf68cb63e2432eb8bb7a1218 | [
"MIT"
] | 1 | 2022-01-31T11:12:36.000Z | 2022-01-31T11:12:36.000Z | shaphypetune/shaphypetune.py | FrancisTembo/shap-hypetune | 14279a822d09988faf68cb63e2432eb8bb7a1218 | [
"MIT"
] | null | null | null | shaphypetune/shaphypetune.py | FrancisTembo/shap-hypetune | 14279a822d09988faf68cb63e2432eb8bb7a1218 | [
"MIT"
] | null | null | null | from sklearn.base import clone
from ._classes import _BoostSearch, _Boruta, _RFA, _RFE
class BoostSearch(_BoostSearch):
"""Hyperparamater searching and optimization on a given validation set
for LGBModel or XGBModel.
Pass a LGBModel or XGBModel, and a dictionary with the parameter boundaries
for grid, random or bayesian search.
To operate random search pass distributions in the param_grid with rvs
method for sampling (such as those from scipy.stats.distributions).
To operate bayesian search pass hyperopt distributions.
The specification of n_iter or sampling_seed is effective only with random
or hyperopt searches.
The best parameter combination is the one which obtains the best score
(as returned by eval_metric) on the provided eval_set.
If all parameters are presented as a list/floats/integers, grid-search
is performed. If at least one parameter is given as a distribution (such as
those from scipy.stats.distributions), random-search is performed computing
sampling with replacement. Bayesian search is effective only when all the
parameters to tune are in form of hyperopt distributions.
It is highly recommended to use continuous distributions for continuous
parameters.
Parameters
----------
estimator : object
A supervised learning estimator of LGBModel or XGBModel type.
param_grid : dict
Dictionary with parameters names (`str`) as keys and distributions
or lists of parameters to try.
greater_is_better : bool, default=False
Whether the quantity to monitor is a score function,
meaning high is good, or a loss function, meaning low is good.
n_iter : int, default=None
Effective only for random or hyperopt search.
Number of parameter settings that are sampled.
n_iter trades off runtime vs quality of the solution.
sampling_seed : int, default=None
Effective only for random or hyperopt search.
The seed used to sample from the hyperparameter distributions.
n_jobs : int, default=None
Effective only with grid and random search.
The number of jobs to run in parallel for model fitting.
``None`` means 1 using one processor. ``-1`` means using all
processors.
verbose : int, default=1
Verbosity mode. <=0 silent all; >0 print trial logs with the
connected score.
Attributes
----------
estimator_ : estimator
Estimator that was chosen by the search, i.e. estimator
which gave the best score on the eval_set.
best_params_ : dict
Parameter setting that gave the best results on the eval_set.
trials_ : list
A list of dicts. The dicts are all the parameter combinations tried
and derived from the param_grid.
best_score_ : float
The best score achieved by all the possible combination created.
scores_ : list
The scores achieved on the eval_set by all the models tried.
best_iter_ : int
The boosting iterations achieved by the best parameters combination.
iterations_ : list
The boosting iterations of all the models tried.
boost_type_ : str
The type of the boosting estimator (LGB or XGB).
"""
def __init__(self,
estimator, *,
param_grid,
greater_is_better=False,
n_iter=None,
sampling_seed=None,
verbose=1,
n_jobs=None):
self.estimator = estimator
self.param_grid = param_grid
self.greater_is_better = greater_is_better
self.n_iter = n_iter
self.sampling_seed = sampling_seed
self.verbose = verbose
self.n_jobs = n_jobs
def _build_model(self, params):
"""Private method to build model."""
model = clone(self.estimator)
model.set_params(**params)
return model
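
# A minimal usage sketch for BoostSearch: random search driven by an LGBM
# classifier on synthetic data. The dataset, the parameter ranges and the
# fit-time keyword arguments (eval_set, eval_metric, early_stopping_rounds)
# are illustrative assumptions -- which fit arguments are accepted depends on
# the installed lightgbm version and on how the inherited fit forwards them.
def _boost_search_usage_sketch():
    from lightgbm import LGBMClassifier
    from scipy import stats
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # synthetic binary classification problem, split into train/validation
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, y, test_size=0.3, random_state=0)

    param_grid = {
        'learning_rate': stats.uniform(0.01, 0.3),  # distribution -> random search
        'num_leaves': [31, 63, 127],
    }
    searcher = BoostSearch(
        LGBMClassifier(n_estimators=300, random_state=0),
        param_grid=param_grid,
        greater_is_better=True,   # monitoring AUC, where higher is better
        n_iter=20,
        sampling_seed=42,
    )
    searcher.fit(X_train, y_train,
                 eval_set=[(X_valid, y_valid)],
                 eval_metric='auc',
                 early_stopping_rounds=50)
    return searcher.best_params_, searcher.best_score_, searcher.estimator_
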
class BoostBoruta(_BoostSearch, _Boruta):
"""Simultaneous features selection with Boruta algorithm and hyperparamater
searching on a given validation set for LGBModel or XGBModel.
Pass a LGBModel or XGBModel to compute feature selection with the Boruta
algorithm. The best features are used to train a new gradient boosting
instance. When an eval_set is provided, shadow features are built on it as well.
If param_grid is a dictionary with parameter boundaries, a hyperparameter
tuning is computed simultaneously. The parameter combinations are scored on
the provided eval_set.
To operate random search pass distributions in the param_grid with rvs
method for sampling (such as those from scipy.stats.distributions).
To operate bayesian search pass hyperopt distributions.
The specification of n_iter or sampling_seed is effective only with random
or hyperopt searches.
The best parameter combination is the one which obtains the best score
(as returned by eval_metric) on the provided eval_set.
If all parameters are presented as a list/floats/integers, grid-search
is performed. If at least one parameter is given as a distribution (such as
those from scipy.stats.distributions), random-search is performed computing
sampling with replacement. Bayesian search is effective only when all the
parameters to tune are in form of hyperopt distributions.
It is highly recommended to use continuous distributions for continuous
parameters.
Parameters
----------
estimator : object
A supervised learning estimator of LGBModel or XGBModel type.
perc : int, default=100
Threshold for comparison between shadow and real features.
The lower perc is the more false positives will be picked as relevant
but also the less relevant features will be left out.
100 correspond to the max.
alpha : float, default=0.05
Level at which the corrected p-values will get rejected in the
correction steps.
max_iter : int, default=100
The number of maximum Boruta iterations to perform.
early_stopping_boruta_rounds : int, default=None
The maximum amount of iterations without confirming a tentative
feature. Use early stopping to terminate the selection process
before reaching `max_iter` iterations if the algorithm cannot
confirm a tentative feature after N iterations.
None means no early stopping search.
importance_type : str, default='feature_importances'
Which importance measure to use. It can be 'feature_importances'
(the default feature importance of the gradient boosting estimator)
or 'shap_importances'.
train_importance : bool, default=True
Effective only when importance_type='shap_importances'.
Where to compute the shap feature importance: on train (True)
or on eval_set (False).
param_grid : dict, default=None
Dictionary with parameters names (`str`) as keys and distributions
or lists of parameters to try.
None means no hyperparameters search.
greater_is_better : bool, default=False
Effective only when hyperparameters searching.
Whether the quantity to monitor is a score function,
meaning high is good, or a loss function, meaning low is good.
n_iter : int, default=None
Effective only when hyperparameters searching.
Effective only for random or hyperopt searches.
Number of parameter settings that are sampled.
n_iter trades off runtime vs quality of the solution.
sampling_seed : int, default=None
Effective only when hyperparameters searching.
Effective only for random or hyperopt search.
The seed used to sample from the hyperparameter distributions.
n_jobs : int, default=None
Effective only when hyperparameters searching without hyperopt.
The number of jobs to run in parallel for model fitting.
``None`` means 1 using one processor. ``-1`` means using all
processors.
verbose : int, default=1
Verbosity mode. <=0 silent all; ==1 print trial logs (when
hyperparameters searching); >1 print feature selection logs plus
trial logs (when hyperparameters searching).
Attributes
----------
estimator_ : estimator
The fitted estimator with the select features and the optimal
parameter combination (when hyperparameters searching).
n_features_ : int
The number of selected features (from the best param config
when hyperparameters searching).
ranking_ : ndarray of shape (n_features,)
The feature ranking, such that ``ranking_[i]`` corresponds to the
ranking position of the i-th feature (from the best param config
when hyperparameters searching). Selected features are assigned
rank 1 (2: tentative upper bound, 3: tentative lower bound, 4:
rejected).
support_ : ndarray of shape (n_features,)
The mask of selected features (from the best param config
when hyperparameters searching).
importance_history_ : ndarray of shape (n_features, n_iters)
The importance values for each feature across all iterations.
best_params_ : dict
Available only when hyperparameters searching.
Parameter setting that gave the best results on the eval_set.
trials_ : list
Available only when hyperparameters searching.
A list of dicts. The dicts are all the parameter combinations tried
and derived from the param_grid.
best_score_ : float
Available only when hyperparameters searching.
The best score achieved by all the possible combination created.
scores_ : list
Available only when hyperparameters searching.
The scores achieved on the eval_set by all the models tried.
best_iter_ : int
Available only when hyperparameters searching.
The boosting iterations achieved by the best parameters combination.
iterations_ : list
Available only when hyperparameters searching.
The boosting iterations of all the models tried.
boost_type_ : str
The type of the boosting estimator (LGB or XGB).
Notes
-----
The code for the Boruta algorithm is inspired and improved from:
https://github.com/scikit-learn-contrib/boruta_py
"""
def __init__(self,
estimator, *,
perc=100,
alpha=0.05,
max_iter=100,
early_stopping_boruta_rounds=None,
param_grid=None,
greater_is_better=False,
importance_type='feature_importances',
train_importance=True,
n_iter=None,
sampling_seed=None,
verbose=1,
n_jobs=None):
self.estimator = estimator
self.perc = perc
self.alpha = alpha
self.max_iter = max_iter
self.early_stopping_boruta_rounds = early_stopping_boruta_rounds
self.param_grid = param_grid
self.greater_is_better = greater_is_better
self.importance_type = importance_type
self.train_importance = train_importance
self.n_iter = n_iter
self.sampling_seed = sampling_seed
self.verbose = verbose
self.n_jobs = n_jobs
def _build_model(self, params=None):
"""Private method to build model."""
estimator = clone(self.estimator)
if params is None:
model = _Boruta(
estimator=estimator,
perc=self.perc,
alpha=self.alpha,
max_iter=self.max_iter,
early_stopping_boruta_rounds=self.early_stopping_boruta_rounds,
importance_type=self.importance_type,
train_importance=self.train_importance,
verbose=self.verbose
)
else:
estimator.set_params(**params)
model = _Boruta(
estimator=estimator,
perc=self.perc,
alpha=self.alpha,
max_iter=self.max_iter,
early_stopping_boruta_rounds=self.early_stopping_boruta_rounds,
importance_type=self.importance_type,
train_importance=self.train_importance,
verbose=self.verbose
)
return model
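
# A minimal usage sketch for BoostBoruta: Boruta feature selection with SHAP
# importances computed on the validation set (assumes the shap package is
# installed). The data and the fit-time keyword arguments are illustrative
# assumptions, subject to the installed booster version.
def _boost_boruta_usage_sketch():
    from lightgbm import LGBMClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=30,
                               n_informative=8, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, y, test_size=0.3, random_state=0)

    selector = BoostBoruta(
        LGBMClassifier(n_estimators=300, random_state=0),
        max_iter=100,
        perc=100,
        importance_type='shap_importances',
        train_importance=False,   # SHAP importances taken on the eval_set
    )
    selector.fit(X_train, y_train,
                 eval_set=[(X_valid, y_valid)],
                 early_stopping_rounds=50)
    # support_, ranking_ and n_features_ follow the attributes documented above
    return selector.support_, selector.n_features_
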
class BoostRFE(_BoostSearch, _RFE):
"""Simultaneous features selection with RFE and hyperparamater searching
on a given validation set for LGBModel or XGBModel.
Pass a LGBModel or XGBModel to compute feature selection with RFE.
The gradient boosting instance with the best features is selected.
When an eval_set is provided, the best gradient boosting and the best
features are obtained by evaluating the score with eval_metric.
Otherwise, the best combination is obtained looking only at feature
importance.
If param_grid is a dictionary with parameter boundaries, a hyperparameter
tuning is computed simultaneously. The parameter combinations are scored on
the provided eval_set.
To operate random search pass distributions in the param_grid with rvs
method for sampling (such as those from scipy.stats.distributions).
To operate bayesian search pass hyperopt distributions.
The specification of n_iter or sampling_seed is effective only with random
or hyperopt searches.
The best parameter combination is the one which obtains the best score
(as returned by eval_metric) on the provided eval_set.
If all parameters are presented as a list/floats/integers, grid-search
is performed. If at least one parameter is given as a distribution (such as
those from scipy.stats.distributions), random-search is performed computing
sampling with replacement. Bayesian search is effective only when all the
parameters to tune are in form of hyperopt distributions.
It is highly recommended to use continuous distributions for continuous
parameters.
Parameters
----------
estimator : object
A supervised learning estimator of LGBModel or XGBModel type.
step : int or float, default=1
If greater than or equal to 1, then `step` corresponds to the
(integer) number of features to remove at each iteration.
If within (0.0, 1.0), then `step` corresponds to the percentage
(rounded down) of features to remove at each iteration.
Note that the last iteration may remove fewer than `step` features in
order to reach `min_features_to_select`.
min_features_to_select : int, default=None
The minimum number of features to be selected. This number of features
will always be scored, even if the difference between the original
feature count and `min_features_to_select` isn't divisible by
`step`. The default value for min_features_to_select is set to 1 when an
eval_set is provided, otherwise it always corresponds to n_features // 2.
importance_type : str, default='feature_importances'
Which importance measure to use. It can be 'feature_importances'
(the default feature importance of the gradient boosting estimator)
or 'shap_importances'.
train_importance : bool, default=True
Effective only when importance_type='shap_importances'.
Where to compute the shap feature importance: on train (True)
or on eval_set (False).
param_grid : dict, default=None
Dictionary with parameters names (`str`) as keys and distributions
or lists of parameters to try.
None means no hyperparameters search.
greater_is_better : bool, default=False
Effective only when hyperparameters searching.
Whether the quantity to monitor is a score function,
meaning high is good, or a loss function, meaning low is good.
n_iter : int, default=None
Effective only when hyperparameters searching.
Effective only for random or hyperopt search.
Number of parameter settings that are sampled.
n_iter trades off runtime vs quality of the solution.
sampling_seed : int, default=None
Effective only when hyperparameters searching.
Effective only for random or hyperopt search.
The seed used to sample from the hyperparameter distributions.
n_jobs : int, default=None
Effective only when hyperparameters searching without hyperopt.
The number of jobs to run in parallel for model fitting.
``None`` means 1 using one processor. ``-1`` means using all
processors.
verbose : int, default=1
Verbosity mode. <=0 silent all; ==1 print trial logs (when
hyperparameters searching); >1 print feature selection logs plus
trial logs (when hyperparameters searching).
Attributes
----------
estimator_ : estimator
The fitted estimator with the select features and the optimal
parameter combination (when hyperparameters searching).
n_features_ : int
The number of selected features (from the best param config
when hyperparameters searching).
ranking_ : ndarray of shape (n_features,)
The feature ranking, such that ``ranking_[i]`` corresponds to the
ranking position of the i-th feature (from the best param config
when hyperparameters searching). Selected features are assigned
rank 1.
support_ : ndarray of shape (n_features,)
The mask of selected features (from the best param config
when hyperparameters searching).
score_history_ : list
Available only when an eval_set is provided.
Scores obtained reducing the features (from the best param config
when hyperparameters searching).
best_params_ : dict
Available only when hyperparameters searching.
Parameter setting that gave the best results on the eval_set.
trials_ : list
Available only when hyperparameters searching.
A list of dicts. The dicts are all the parameter combinations tried
and derived from the param_grid.
best_score_ : float
Available only when hyperparameters searching.
The best score achieved by all the possible combination created.
scores_ : list
Available only when hyperparameters searching.
The scores achieved on the eval_set by all the models tried.
best_iter_ : int
Available only when hyperparameters searching.
The boosting iterations achieved by the best parameters combination.
iterations_ : list
Available only when hyperparameters searching.
The boosting iterations of all the models tried.
boost_type_ : str
The type of the boosting estimator (LGB or XGB).
"""
def __init__(self,
estimator, *,
min_features_to_select=None,
step=1,
param_grid=None,
greater_is_better=False,
importance_type='feature_importances',
train_importance=True,
n_iter=None,
sampling_seed=None,
verbose=1,
n_jobs=None):
self.estimator = estimator
self.min_features_to_select = min_features_to_select
self.step = step
self.param_grid = param_grid
self.greater_is_better = greater_is_better
self.importance_type = importance_type
self.train_importance = train_importance
self.n_iter = n_iter
self.sampling_seed = sampling_seed
self.verbose = verbose
self.n_jobs = n_jobs
def _build_model(self, params=None):
"""Private method to build model."""
estimator = clone(self.estimator)
if params is None:
model = _RFE(
estimator=estimator,
min_features_to_select=self.min_features_to_select,
step=self.step,
greater_is_better=self.greater_is_better,
importance_type=self.importance_type,
train_importance=self.train_importance,
verbose=self.verbose
)
else:
estimator.set_params(**params)
model = _RFE(
estimator=estimator,
min_features_to_select=self.min_features_to_select,
step=self.step,
greater_is_better=self.greater_is_better,
importance_type=self.importance_type,
train_importance=self.train_importance,
verbose=self.verbose
)
return model
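
# A minimal usage sketch for BoostRFE: recursive feature elimination combined
# with a small grid search. BoostRFA below exposes the identical constructor,
# so the same call pattern applies to it; the data and fit-time keyword
# arguments are illustrative assumptions.
def _boost_rfe_usage_sketch():
    from lightgbm import LGBMClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=30,
                               n_informative=8, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, y, test_size=0.3, random_state=0)

    selector = BoostRFE(
        LGBMClassifier(n_estimators=300, random_state=0),
        step=2,
        min_features_to_select=5,
        param_grid={'learning_rate': [0.05, 0.1], 'num_leaves': [31, 63]},
        greater_is_better=True,
    )
    selector.fit(X_train, y_train,
                 eval_set=[(X_valid, y_valid)],
                 eval_metric='auc',
                 early_stopping_rounds=50)
    return selector.support_, selector.best_params_
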
class BoostRFA(_BoostSearch, _RFA):
"""Simultaneous features selection with RFA and hyperparamater searching
on a given validation set for LGBModel or XGBModel.
Pass a LGBModel or XGBModel to compute feature selection with RFA.
The gradient boosting instance with the best features is selected.
When an eval_set is provided, the best gradient boosting and the best
features are obtained by evaluating the score with eval_metric.
Otherwise, the best combination is obtained looking only at feature
importance.
If param_grid is a dictionary with parameter boundaries, a hyperparameter
tuning is computed simultaneously. The parameter combinations are scored on
the provided eval_set.
To operate random search pass distributions in the param_grid with rvs
method for sampling (such as those from scipy.stats.distributions).
To operate bayesian search pass hyperopt distributions.
The specification of n_iter or sampling_seed is effective only with random
or hyperopt searches.
The best parameter combination is the one which obtains the best score
(as returned by eval_metric) on the provided eval_set.
If all parameters are presented as a list/floats/integers, grid-search
is performed. If at least one parameter is given as a distribution (such as
those from scipy.stats.distributions), random-search is performed computing
sampling with replacement. Bayesian search is effective only when all the
parameters to tune are in form of hyperopt distributions.
It is highly recommended to use continuous distributions for continuous
parameters.
Parameters
----------
estimator : object
A supervised learning estimator of LGBModel or XGBModel type.
step : int or float, default=1
If greater than or equal to 1, then `step` corresponds to the
(integer) number of features to remove at each iteration.
If within (0.0, 1.0), then `step` corresponds to the percentage
(rounded down) of features to remove at each iteration.
Note that the last iteration may remove fewer than `step` features in
order to reach `min_features_to_select`.
min_features_to_select : int, default=None
The minimum number of features to be selected. This number of features
will always be scored, even if the difference between the original
feature count and `min_features_to_select` isn't divisible by
`step`. The default value for min_features_to_select is set to 1 when an
eval_set is provided, otherwise it always corresponds to n_features // 2.
importance_type : str, default='feature_importances'
Which importance measure to use. It can be 'feature_importances'
(the default feature importance of the gradient boosting estimator)
or 'shap_importances'.
train_importance : bool, default=True
Effective only when importance_type='shap_importances'.
Where to compute the shap feature importance: on train (True)
or on eval_set (False).
param_grid : dict, default=None
Dictionary with parameters names (`str`) as keys and distributions
or lists of parameters to try.
None means no hyperparameters search.
greater_is_better : bool, default=False
Effective only when hyperparameters searching.
Whether the quantity to monitor is a score function,
meaning high is good, or a loss function, meaning low is good.
n_iter : int, default=None
Effective only when hyperparameters searching.
Effective only for random or hyperopt search.
Number of parameter settings that are sampled.
n_iter trades off runtime vs quality of the solution.
sampling_seed : int, default=None
Effective only when hyperparameters searching.
Effective only for random or hyperopt search.
The seed used to sample from the hyperparameter distributions.
n_jobs : int, default=None
Effective only when hyperparameters searching without hyperopt.
The number of jobs to run in parallel for model fitting.
``None`` means 1 using one processor. ``-1`` means using all
processors.
verbose : int, default=1
Verbosity mode. <=0 silent all; ==1 print trial logs (when
hyperparameters searching); >1 print feature selection logs plus
trial logs (when hyperparameters searching).
Attributes
----------
estimator_ : estimator
The fitted estimator with the select features and the optimal
parameter combination (when hyperparameters searching).
n_features_ : int
The number of selected features (from the best param config
when hyperparameters searching).
ranking_ : ndarray of shape (n_features,)
The feature ranking, such that ``ranking_[i]`` corresponds to the
ranking position of the i-th feature (from the best param config
when hyperparameters searching). Selected features are assigned
rank 1.
support_ : ndarray of shape (n_features,)
The mask of selected features (from the best param config
when hyperparameters searching).
score_history_ : list
Available only when an eval_set is provided.
Scores obtained reducing the features (from the best param config
when hyperparameters searching).
best_params_ : dict
Available only when hyperparameters searching.
Parameter setting that gave the best results on the eval_set.
trials_ : list
Available only when hyperparameters searching.
A list of dicts. The dicts are all the parameter combinations tried
and derived from the param_grid.
best_score_ : float
Available only when hyperparameters searching.
The best score achieved by all the possible combination created.
scores_ : list
Available only when hyperparameters searching.
The scores achieved on the eval_set by all the models tried.
best_iter_ : int
Available only when hyperparameters searching.
The boosting iterations achieved by the best parameters combination.
iterations_ : list
Available only when hyperparameters searching.
The boosting iterations of all the models tried.
boost_type_ : str
The type of the boosting estimator (LGB or XGB).
Notes
-----
The code for the RFA algorithm is inspired and improved from:
https://github.com/heberleh/recursive-feature-addition
"""
def __init__(self,
estimator, *,
min_features_to_select=None,
step=1,
param_grid=None,
greater_is_better=False,
importance_type='feature_importances',
train_importance=True,
n_iter=None,
sampling_seed=None,
verbose=1,
n_jobs=None):
self.estimator = estimator
self.min_features_to_select = min_features_to_select
self.step = step
self.param_grid = param_grid
self.greater_is_better = greater_is_better
self.importance_type = importance_type
self.train_importance = train_importance
self.n_iter = n_iter
self.sampling_seed = sampling_seed
self.verbose = verbose
self.n_jobs = n_jobs
def _build_model(self, params=None):
"""Private method to build model."""
estimator = clone(self.estimator)
if params is None:
model = _RFA(
estimator=estimator,
min_features_to_select=self.min_features_to_select,
step=self.step,
greater_is_better=self.greater_is_better,
importance_type=self.importance_type,
train_importance=self.train_importance,
verbose=self.verbose
)
else:
estimator.set_params(**params)
model = _RFA(
estimator=estimator,
min_features_to_select=self.min_features_to_select,
step=self.step,
greater_is_better=self.greater_is_better,
importance_type=self.importance_type,
train_importance=self.train_importance,
verbose=self.verbose
)
return model | 39.916894 | 81 | 0.68016 | 3,682 | 29,299 | 5.287887 | 0.089897 | 0.048793 | 0.071906 | 0.049307 | 0.9151 | 0.907242 | 0.90452 | 0.90452 | 0.90452 | 0.899589 | 0 | 0.003387 | 0.274514 | 29,299 | 734 | 82 | 39.916894 | 0.912589 | 0.715826 | 0 | 0.860465 | 0 | 0 | 0.008639 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.151163 | 0 | 0.244186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
48c02d5b75b83a1b52a823f425c81a3645db9a9a | 1,981 | py | Python | Programming/emacsjupyterSourceBlock.py | MooersLab/jupyterlabpymolpysnipsplus | b886750d63372434df53d4d6d7cdad6cb02ae4e7 | [
"MIT"
] | null | null | null | Programming/emacsjupyterSourceBlock.py | MooersLab/jupyterlabpymolpysnipsplus | b886750d63372434df53d4d6d7cdad6cb02ae4e7 | [
"MIT"
] | null | null | null | Programming/emacsjupyterSourceBlock.py | MooersLab/jupyterlabpymolpysnipsplus | b886750d63372434df53d4d6d7cdad6cb02ae4e7 | [
"MIT"
] | null | null | null | # Description: Source block template in org-mode with emacs-jupyter package.
# Source: placeHolder
"""
cmd.do('#+BEGIN_SRC jupyter-python :session py :kernel pymol.python :exports both :results raw drawer ')
cmd.do('from pymol import cmd')
cmd.do('cmd.do("reinitialize")')
cmd.do('cmd.bg_color("white")')
cmd.do('cmd.do("fetch 6VXX")')
cmd.do('cmd.do("zoom (resi 614 and chain A)")')
cmd.do('cmd.label(selection="chain A and resi 614 and name CB", expression="\'%s-%s\' % (resn,resi)")')
cmd.do('cmd.do("set label_color, black; set label_size, 48")')
cmd.do('cmd.do("set stick_radius, 0.12")')
cmd.do('cmd.do("hide cartoon; show sticks")')
cmd.do('cmd.do("set ray_shadows, 0")')
cmd.do('cmd.do("draw")')
cmd.do('cmd.do("png /Users/blaine/D614Gipython3.png, 600, 360, dpi=600")')
cmd.do('from IPython.display import Image')
cmd.do('from IPython.core.display import HTML')
cmd.do('PATH = "/Users/blaine/"')
cmd.do('Image(filename = PATH + "D614Gipython3.png", width=600, unconfined=True)')
cmd.do('#+END_SRC')
cmd.do('')
cmd.do('#+RESULTS:')
"""
cmd.do('#+BEGIN_SRC jupyter-python :session py :kernel pymol.python :exports both :results raw drawer ')
cmd.do('from pymol import cmd')
cmd.do('cmd.do("reinitialize")')
cmd.do('cmd.bg_color("white")')
cmd.do('cmd.do("fetch 6VXX")')
cmd.do('cmd.do("zoom (resi 614 and chain A)")')
cmd.do('cmd.label(selection="chain A and resi 614 and name CB", expression="\'%s-%s\' % (resn,resi)")')
cmd.do('cmd.do("set label_color, black; set label_size, 48")')
cmd.do('cmd.do("set stick_radius, 0.12")')
cmd.do('cmd.do("hide cartoon; show sticks")')
cmd.do('cmd.do("set ray_shadows, 0")')
cmd.do('cmd.do("draw")')
cmd.do('cmd.do("png /Users/blaine/D614Gipython3.png, 600, 360, dpi=600")')
cmd.do('from IPython.display import Image')
cmd.do('from IPython.core.display import HTML')
cmd.do('PATH = "/Users/blaine/"')
cmd.do('Image(filename = PATH + "D614Gipython3.png", width=600, unconfined=True)')
cmd.do('#+END_SRC')
cmd.do('')
cmd.do('#+RESULTS:')
| 42.148936 | 104 | 0.681979 | 337 | 1,981 | 3.967359 | 0.237389 | 0.216904 | 0.143605 | 0.149589 | 0.940912 | 0.940912 | 0.940912 | 0.940912 | 0.940912 | 0.940912 | 0 | 0.036626 | 0.090358 | 1,981 | 46 | 105 | 43.065217 | 0.705327 | 0.520949 | 0 | 0 | 0 | 0.1 | 0.760638 | 0.107447 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.15 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
48c4abf6bb5852711a70e0547dd4a8cb8c3b6913 | 5,998 | py | Python | ckanext/datastore/interfaces.py | florianm/ckan | 1cfd98d591ac70b4eb81048bcd227b6c1354b1bf | [
"Apache-2.0"
] | 12 | 2015-08-28T16:59:07.000Z | 2020-03-08T01:39:30.000Z | ckanext/datastore/interfaces.py | florianm/ckan | 1cfd98d591ac70b4eb81048bcd227b6c1354b1bf | [
"Apache-2.0"
] | 13 | 2019-05-02T21:01:28.000Z | 2020-10-20T23:34:48.000Z | ckanext/datastore/interfaces.py | florianm/ckan | 1cfd98d591ac70b4eb81048bcd227b6c1354b1bf | [
"Apache-2.0"
] | 10 | 2015-05-08T04:33:20.000Z | 2020-03-03T15:17:58.000Z | import ckan.plugins.interfaces as interfaces
class IDatastore(interfaces.Interface):
'''Allow modifying Datastore queries'''
def datastore_validate(self, context, data_dict, fields_types):
'''Validates the ``data_dict`` sent by the user
This is the first method that's called. It's used to guarantee that
there aren't any unrecognized parameters, so other methods don't need
to worry about that.
You'll need to go through the received ``data_dict`` and remove
everything that you understand as valid. For example, if your extension
supports an ``age_between`` filter, you have to remove this filter from
the filters on the ``data_dict``.
The same ``data_dict`` will be passed to every IDatastore extension in
the order they've been loaded (the ``datastore`` plugin will always
come first). One extension will get the resulting ``data_dict`` from
the previous extensions. In the end, if the ``data_dict`` is empty, it
means that it's valid. If not, it's invalid and we throw an error.
Attributes on the ``data_dict`` that can be comma-separated strings
(e.g. fields) will already be converted to lists.
:param context: the context
:type context: dictionary
:param data_dict: the parameters received from the user
:type data_dict: dictionary
:param fields_types: the current resource's fields as dict keys and
their types as values
:type fields_types: dictionary
'''
return data_dict
def datastore_search(self, context, data_dict, fields_types, query_dict):
'''Modify queries made on datastore_search
The overall design is that every IDatastore extension will receive the
``query_dict`` with the modifications made by previous extensions, then
it can add/remove stuff into it before passing it on. You can think of
it as pipes, where the ``query_dict`` is being passed to each
IDatastore extension in the order they've been loaded allowing them to
change the ``query_dict``. The ``datastore`` extension always comes
first.
The ``query_dict`` is of the form:
{
'select': [],
'ts_query': '',
'sort': [],
'where': [],
'limit': 100,
'offset': 0
}
As extensions can both add and remove those keys, it's not guaranteed
that any of them will exist when you receive the ``query_dict``, so
you're supposed to test for its existence before, for example, adding a
new column to the ``select`` key.
The ``where`` key is a special case. Its elements are of the form:
(format_string, param1, param2, ...)
The ``format_string`` isn't escaped for SQL Injection attacks, so
everything coming from the user should be in the params list. With this
format, you could do something like:
('"age" BETWEEN %s AND %s', age_between[0], age_between[1])
This escapes the ``age_between[0]`` and ``age_between[1]`` making sure
we're not vulnerable.
After finishing this, you should return your modified ``query_dict``.
:param context: the context
:type context: dictionary
:param data_dict: the parameters received from the user
:type data_dict: dictionary
:param fields_types: the current resource's fields as dict keys and
their types as values
:type fields_types: dictionary
:param query_dict: the current query_dict, as changed by the IDatastore
extensions that ran before yours
:type query_dict: dictionary
:returns: the query_dict with your modifications
:rtype: dictionary
'''
return query_dict
def datastore_delete(self, context, data_dict, fields_types, query_dict):
'''Modify queries made on datastore_delete
The overall design is that every IDatastore extension will receive the
``query_dict`` with the modifications made by previous extensions, then
it can add/remove stuff into it before passing it on. You can think of
it as pipes, where the ``query_dict`` is being passed to each
IDatastore extension in the order they've been loaded allowing them to
change the ``query_dict``. The ``datastore`` extension always comes
first.
The ``query_dict`` is of the form:
{
'where': []
}
As extensions can both add and remove those keys, it's not guaranteed
that any of them will exist when you receive the ``query_dict``, so
you're supposed to test the existence of any keys before modifying
them.
The ``where`` elements are of the form:
(format_string, param1, param2, ...)
The ``format_string`` isn't escaped for SQL Injection attacks, so
everything coming from the user should be in the params list. With this
format, you could do something like:
('"age" BETWEEN %s AND %s', age_between[0], age_between[1])
This escapes the ``age_between[0]`` and ``age_between[1]`` making sure
we're not vulnerable.
After finishing this, you should return your modified ``query_dict``.
:param context: the context
:type context: dictionary
:param data_dict: the parameters received from the user
:type data_dict: dictionary
:param fields_types: the current resource's fields as dict keys and
their types as values
:type fields_types: dictionary
:param query_dict: the current query_dict, as changed by the IDatastore
extensions that ran before yours
:type query_dict: dictionary
:returns: the query_dict with your modifications
:rtype: dictionary
'''
return query_dict
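# --- Illustrative sketch (added; not part of CKAN): a minimal plugin implementing
# the interface above for the hypothetical ``age_between`` filter used as an example
# in the datastore_validate docstring. A real extension would live in its own
# package and import IDatastore from ckanext.datastore.interfaces; the plugin and
# filter names here are placeholders.
import ckan.plugins as plugins


class ExampleAgeFilterPlugin(plugins.SingletonPlugin):
    plugins.implements(IDatastore)

    def datastore_validate(self, context, data_dict, fields_types):
        # Remove the filter this plugin understands so the leftover check passes.
        data_dict.get('filters', {}).pop('age_between', None)
        return data_dict

    def datastore_search(self, context, data_dict, fields_types, query_dict):
        age_between = data_dict.get('filters', {}).get('age_between')
        if age_between:
            # Parameterised where-clause tuple in the format described above.
            query_dict.setdefault('where', []).append(
                ('"age" BETWEEN %s AND %s', age_between[0], age_between[1]))
        return query_dict

    def datastore_delete(self, context, data_dict, fields_types, query_dict):
        return query_dict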
| 41.082192 | 79 | 0.650383 | 818 | 5,998 | 4.684597 | 0.246944 | 0.056367 | 0.037578 | 0.019833 | 0.735908 | 0.735908 | 0.728079 | 0.728079 | 0.728079 | 0.716336 | 0 | 0.003714 | 0.281761 | 5,998 | 145 | 80 | 41.365517 | 0.885794 | 0.790764 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.125 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
48f2d74ce12f0dc125d56d911890d42f6dff15fb | 650 | py | Python | SlopPy-tests/regression-tests/NA_binary_ops.py | pajju/SlopPy | 59a9125de0401959ee40ef02193265248d54b075 | [
"PSF-2.0"
] | 2 | 2022-02-03T23:56:24.000Z | 2022-02-08T19:18:46.000Z | SlopPy-tests/regression-tests/NA_binary_ops.py | pajju/SlopPy | 59a9125de0401959ee40ef02193265248d54b075 | [
"PSF-2.0"
] | null | null | null | SlopPy-tests/regression-tests/NA_binary_ops.py | pajju/SlopPy | 59a9125de0401959ee40ef02193265248d54b075 | [
"PSF-2.0"
] | null | null | null | # do a bunch of binary operations on an NA object
x = 1 / 0
assert type(x) is NA
assert type(x + 5) is NA
assert type(5 + x) is NA
assert type(x - 5) is NA
assert type(5 - x) is NA
assert type(x * 5) is NA
assert type(5 * x) is NA
assert type(x / 5) is NA
assert type(5 / x) is NA
assert type(x // 5) is NA
assert type(5 // x) is NA
assert type(x % 5) is NA
assert type(5 % x) is NA
assert type(x << 5) is NA
assert type(5 << x) is NA
assert type(x >> 5) is NA
assert type(5 >> x) is NA
assert type(x & 5) is NA
assert type(5 & x) is NA
assert type(x ^ 5) is NA
assert type(5 ^ x) is NA
assert type(x | 5) is NA
assert type(5 | x) is NA
| 16.666667 | 49 | 0.627692 | 150 | 650 | 2.72 | 0.113333 | 0.563725 | 0.539216 | 0.754902 | 0.875 | 0.875 | 0.875 | 0.875 | 0.875 | 0.875 | 0 | 0.049485 | 0.253846 | 650 | 38 | 50 | 17.105263 | 0.791753 | 0.072308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.958333 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
5b0abab7d1906918552498d41e0faa868f27ad7d | 1,493 | py | Python | LyricsChords/migrations/0001_initial.py | ekindeveli/spotify-plugins-django-app | 4b26be31c173866df7e9aef8ef6b59ae96f9812c | [
"MIT"
] | 1 | 2022-01-24T12:45:39.000Z | 2022-01-24T12:45:39.000Z | LyricsChords/migrations/0001_initial.py | ekindeveli/spotify-plugins-django-app | 4b26be31c173866df7e9aef8ef6b59ae96f9812c | [
"MIT"
] | null | null | null | LyricsChords/migrations/0001_initial.py | ekindeveli/spotify-plugins-django-app | 4b26be31c173866df7e9aef8ef6b59ae96f9812c | [
"MIT"
] | null | null | null | # Generated by Django 4.0 on 2021-12-18 13:52
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Chord',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('songName', models.CharField(blank=True, max_length=100)),
('artist', models.CharField(blank=True, max_length=100)),
('chords', models.TextField(blank=True)),
('source', models.CharField(blank=True, max_length=100)),
('is_playing', models.BooleanField(default=False)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Lyric',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('songName', models.CharField(blank=True, max_length=100)),
('artist', models.CharField(blank=True, max_length=100)),
('lyrics', models.TextField(blank=True)),
('source', models.CharField(blank=True, max_length=100)),
('is_playing', models.BooleanField(default=False)),
],
options={
'abstract': False,
},
),
]
| 34.72093 | 117 | 0.539183 | 139 | 1,493 | 5.690647 | 0.381295 | 0.091024 | 0.151707 | 0.182048 | 0.740834 | 0.740834 | 0.740834 | 0.740834 | 0.740834 | 0.740834 | 0 | 0.031652 | 0.32284 | 1,493 | 42 | 118 | 35.547619 | 0.750742 | 0.028801 | 0 | 0.628571 | 1 | 0 | 0.073204 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d2a55c8d494457837eab2e1a0b20b3c537af84d2 | 2,625 | py | Python | Faster RCNN/dataset_copy.py | JIN-strong/Mask_detection_pytorch_yolov3_fasterrcnn | fcd17164131ed914190cd3146cbb91f66a7d4ca3 | [
"MIT"
] | 1 | 2021-12-14T04:01:20.000Z | 2021-12-14T04:01:20.000Z | Faster RCNN/dataset_copy.py | JIN-strong/Mask_detection_pytorch_yolov3_fasterrcnn | fcd17164131ed914190cd3146cbb91f66a7d4ca3 | [
"MIT"
] | null | null | null | Faster RCNN/dataset_copy.py | JIN-strong/Mask_detection_pytorch_yolov3_fasterrcnn | fcd17164131ed914190cd3146cbb91f66a7d4ca3 | [
"MIT"
] | 2 | 2021-12-05T11:16:59.000Z | 2022-02-16T00:31:58.000Z | import os
import shutil
trainset = './AIZOO/train/'
valset = './AIZOO/val/'
trainpath = './train'
valpath = './val'
if not os.path.exists(trainpath):
os.makedirs(trainpath + '/Annotations')
os.makedirs(trainpath + '/JPEGImages')
if not os.path.exists(valpath):
os.makedirs(valpath + '/Annotations')
os.makedirs(valpath + '/JPEGImages')
i=0
j=0
f = open('./train/train.txt', 'w')
for file in sorted(os.listdir(trainset)):
if 'test' in file and i < 800:
i = i + 1
if os.path.splitext(file)[1] == '.xml':
print(file)
r = shutil.copy(trainset + file, os.path.join('./train/Annotations/'))
print('copy path is ' + r)
elif os.path.splitext(file)[1] == '.jpg':
print(file)
r = shutil.copy(trainset + file, os.path.join('./train/JPEGImages/'))
print('copy path is ' + r)
f.write(str(file) + '\n')
print("write image in txt")
if 'test' not in file and j < 800:
j = j + 1
if os.path.splitext(file)[1] == '.xml':
print(file)
r = shutil.copy(trainset + file, os.path.join('./train/Annotations/'))
print('copy path is ' + r)
elif os.path.splitext(file)[1] == '.jpg':
print(file)
r = shutil.copy(trainset + file, os.path.join('./train/JPEGImages/'))
print('copy path is ' + r)
f.write(str(file) + '\n')
print("write image in txt")
f.close()
i=0
j=0
f = open('./val/val.txt', 'w')
for file in sorted(os.listdir(valset)):
if 'test' in file and i <500:
i=i+1
if os.path.splitext(file)[1] == '.xml':
print(file)
r=shutil.copy(valset + file,os.path.join('./val/Annotations/'))
print('copy path is '+ r)
elif os.path.splitext(file)[1] == '.jpg':
print(file)
r=shutil.copy(valset + file,os.path.join('./val/JPEGImages/'))
print('copy path is '+ r)
f.write(str(file)+'\n')
print("write image in txt")
if 'test' not in file and j <500:
j = j + 1
if os.path.splitext(file)[1] == '.xml':
print(file)
r = shutil.copy(valset + file, os.path.join('./val/Annotations/'))
print('copy path is ' + r)
elif os.path.splitext(file)[1] == '.jpg':
print(file)
r = shutil.copy(valset + file, os.path.join('./val/JPEGImages/'))
print('copy path is ' + r)
f.write(str(file) + '\n')
print("write image in txt")
f.close() | 36.971831 | 82 | 0.517333 | 354 | 2,625 | 3.836158 | 0.135593 | 0.079529 | 0.082474 | 0.106038 | 0.817379 | 0.792342 | 0.755523 | 0.755523 | 0.714286 | 0.714286 | 0 | 0.015368 | 0.305905 | 2,625 | 71 | 83 | 36.971831 | 0.729967 | 0 | 0 | 0.714286 | 0 | 0 | 0.18888 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.028571 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d2ae1c16408716f5f3f5a53ae8013761cb57b982 | 392 | py | Python | Method/owlapy/owlready2/__init__.py | dice-group/LearnALCLengths | cb019ba234092a323f3785517d1cc6152a5ef7a4 | [
"MIT"
] | 2 | 2021-07-13T19:30:53.000Z | 2021-12-14T13:22:50.000Z | Method/owlapy/owlready2/__init__.py | dice-group/LearnALCLengths | cb019ba234092a323f3785517d1cc6152a5ef7a4 | [
"MIT"
] | null | null | null | Method/owlapy/owlready2/__init__.py | dice-group/LearnALCLengths | cb019ba234092a323f3785517d1cc6152a5ef7a4 | [
"MIT"
] | null | null | null | from owlapy._utils import MOVE
from owlapy.owlready2._base import OWLOntologyManager_Owlready2, OWLReasoner_Owlready2, OWLOntology_Owlready2,\
BaseReasoner_Owlready2
MOVE(OWLOntologyManager_Owlready2, OWLReasoner_Owlready2, OWLOntology_Owlready2, BaseReasoner_Owlready2)
__all__ = 'OWLOntologyManager_Owlready2', 'OWLReasoner_Owlready2', 'OWLOntology_Owlready2', 'BaseReasoner_Owlready2'
| 65.333333 | 116 | 0.875 | 36 | 392 | 9.027778 | 0.333333 | 0.249231 | 0.350769 | 0.433846 | 0.812308 | 0.812308 | 0.812308 | 0.812308 | 0 | 0 | 0 | 0.035422 | 0.063776 | 392 | 5 | 117 | 78.4 | 0.850136 | 0 | 0 | 0 | 0 | 0 | 0.234694 | 0.234694 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 9 |
d2af9657b097836862a9a2f6f2fde3efbe2271c1 | 18,916 | py | Python | consensus/poet/core/tests/test_consensus/test_poet_config_view.py | trust-tech/sawtooth-core | fcd66ff2f13dba51d7642049e0c0306dbee3b07d | [
"Apache-2.0"
] | null | null | null | consensus/poet/core/tests/test_consensus/test_poet_config_view.py | trust-tech/sawtooth-core | fcd66ff2f13dba51d7642049e0c0306dbee3b07d | [
"Apache-2.0"
] | null | null | null | consensus/poet/core/tests/test_consensus/test_poet_config_view.py | trust-tech/sawtooth-core | fcd66ff2f13dba51d7642049e0c0306dbee3b07d | [
"Apache-2.0"
] | null | null | null | # Copyright 2016, 2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------------
import unittest
from unittest.mock import patch
from sawtooth_poet.poet_consensus.poet_config_view import PoetConfigView
@patch('sawtooth_poet.poet_consensus.poet_config_view.ConfigView')
class TestPoetConfigView(unittest.TestCase):
# pylint: disable=invalid-name
_EXPECTED_DEFAULT_BLOCK_CLAIM_DELAY_ = 1
_EXPECTED_DEFAULT_ENCLAVE_MODULE_NAME_ = \
'sawtooth_poet_simulator.poet_enclave_simulator.poet_enclave_simulator'
_EXPECTED_DEFAULT_INITIAL_WAIT_TIME_ = 3000.0
_EXPECTED_DEFAULT_KEY_BLOCK_CLAIM_LIMIT_ = 25
_EXPECTED_DEFAULT_MINIMUM_WAIT_TIME_ = 1.0
_EXPECTED_DEFAULT_POPULATION_ESTIMATE_SAMPLE_SIZE_ = 50
_EXPECTED_DEFAULT_SIGNUP_COMMIT_MAXIMUM_DELAY_ = 0
_EXPECTED_DEFAULT_TARGET_WAIT_TIME_ = 20.0
_EXPECTED_DEFAULT_ZTEST_MAXIMUM_WIN_DEVIATION_ = 3.075
_EXPECTED_DEFAULT_ZTEST_MINIMUM_WIN_COUNT_ = 3
def test_block_claim_delay(self, mock_config_view):
"""Verify that retrieving block claim delay works for invalid
cases (missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.block_claim_delay,
TestPoetConfigView._EXPECTED_DEFAULT_BLOCK_CLAIM_DELAY_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(kwargs['key'], 'sawtooth.poet.block_claim_delay')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_BLOCK_CLAIM_DELAY_)
self.assertEqual(kwargs['value_type'], int)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in [-100, -1]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.block_claim_delay,
TestPoetConfigView._EXPECTED_DEFAULT_BLOCK_CLAIM_DELAY_)
# Underlying config setting is a valid value
poet_config_view = PoetConfigView(state_view=None)
mock_config_view.return_value.get_setting.return_value = 0
self.assertEqual(poet_config_view.block_claim_delay, 0)
poet_config_view = PoetConfigView(state_view=None)
mock_config_view.return_value.get_setting.return_value = 1
self.assertEqual(poet_config_view.block_claim_delay, 1)
def test_enclave_module_name(self, mock_config_view):
"""Verify that retrieving enclave module name works for invalid
cases (missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.enclave_module_name,
TestPoetConfigView._EXPECTED_DEFAULT_ENCLAVE_MODULE_NAME_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(kwargs['key'], 'sawtooth.poet.enclave_module_name')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_ENCLAVE_MODULE_NAME_)
self.assertEqual(kwargs['value_type'], str)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
mock_config_view.return_value.get_setting.return_value = ''
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.enclave_module_name,
TestPoetConfigView._EXPECTED_DEFAULT_ENCLAVE_MODULE_NAME_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 'valid value'
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.enclave_module_name, 'valid value')
def test_initial_wait_time(self, mock_config_view):
"""Verify that retrieving initial wait time works for invalid cases
(missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.initial_wait_time,
TestPoetConfigView._EXPECTED_DEFAULT_INITIAL_WAIT_TIME_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(kwargs['key'], 'sawtooth.poet.initial_wait_time')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_INITIAL_WAIT_TIME_)
self.assertEqual(kwargs['value_type'], float)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in \
[-100.0, -1.0, float('nan'), float('inf'), float('-inf')]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.initial_wait_time,
TestPoetConfigView._EXPECTED_DEFAULT_INITIAL_WAIT_TIME_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 3.1415
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.initial_wait_time, 3.1415)
def test_key_block_claim_limit(self, mock_config_view):
"""Verify that retrieving key block claim limit works for invalid
cases (missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.key_block_claim_limit,
TestPoetConfigView._EXPECTED_DEFAULT_KEY_BLOCK_CLAIM_LIMIT_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(kwargs['key'], 'sawtooth.poet.key_block_claim_limit')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_KEY_BLOCK_CLAIM_LIMIT_)
self.assertEqual(kwargs['value_type'], int)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in [-100, -1, 0]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.key_block_claim_limit,
TestPoetConfigView._EXPECTED_DEFAULT_KEY_BLOCK_CLAIM_LIMIT_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 1
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.key_block_claim_limit, 1)
def test_minimum_wait_time(self, mock_config_view):
"""Verify that retrieving minimum wait time works for invalid cases
(missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.minimum_wait_time,
TestPoetConfigView._EXPECTED_DEFAULT_MINIMUM_WAIT_TIME_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(kwargs['key'], 'sawtooth.poet.minimum_wait_time')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_MINIMUM_WAIT_TIME_)
self.assertEqual(kwargs['value_type'], float)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in \
[-100.0, -1.0, 0.0, float('nan'), float('inf'), float('-inf')]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.minimum_wait_time,
TestPoetConfigView._EXPECTED_DEFAULT_MINIMUM_WAIT_TIME_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 3.1415
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.minimum_wait_time, 3.1415)
def test_population_estimate_sample_size(self, mock_config_view):
"""Verify that retrieving population estimate sample size works for
invalid cases (missing, invalid format, invalid value) as well as valid
case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.population_estimate_sample_size,
TestPoetConfigView.
_EXPECTED_DEFAULT_POPULATION_ESTIMATE_SAMPLE_SIZE_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(
kwargs['key'],
'sawtooth.poet.population_estimate_sample_size')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView.
_EXPECTED_DEFAULT_POPULATION_ESTIMATE_SAMPLE_SIZE_)
self.assertEqual(kwargs['value_type'], int)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in [-100, -1, 0]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.population_estimate_sample_size,
TestPoetConfigView.
_EXPECTED_DEFAULT_POPULATION_ESTIMATE_SAMPLE_SIZE_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 1
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.population_estimate_sample_size, 1)
def test_target_wait_time(self, mock_config_view):
"""Verify that retrieving target wait time works for invalid cases
(missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.target_wait_time,
TestPoetConfigView._EXPECTED_DEFAULT_TARGET_WAIT_TIME_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(kwargs['key'], 'sawtooth.poet.target_wait_time')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_TARGET_WAIT_TIME_)
self.assertEqual(kwargs['value_type'], float)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in \
[-100.0, -1.0, 0.0, float('nan'), float('inf'), float('-inf')]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.target_wait_time,
TestPoetConfigView._EXPECTED_DEFAULT_TARGET_WAIT_TIME_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 3.1415
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.target_wait_time, 3.1415)
def test_signup_commit_maximum_delay(self, mock_config_view):
"""Verify that retrieving signup commit maximum delay works for invalid
cases (missing, invalid format, invalid value) as well as valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.signup_commit_maximum_delay,
TestPoetConfigView._EXPECTED_DEFAULT_SIGNUP_COMMIT_MAXIMUM_DELAY_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(
kwargs['key'],
'sawtooth.poet.signup_commit_maximum_delay')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_SIGNUP_COMMIT_MAXIMUM_DELAY_)
self.assertEqual(kwargs['value_type'], int)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in [-100, -1]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.signup_commit_maximum_delay,
TestPoetConfigView.
_EXPECTED_DEFAULT_SIGNUP_COMMIT_MAXIMUM_DELAY_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 123
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.signup_commit_maximum_delay, 123)
def test_ztest_maximum_win_deviation(self, mock_config_view):
"""Verify that retrieving zTest maximum win deviation works for
invalid cases (missing, invalid format, invalid value) as well as
valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.ztest_maximum_win_deviation,
TestPoetConfigView._EXPECTED_DEFAULT_ZTEST_MAXIMUM_WIN_DEVIATION_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(
kwargs['key'],
'sawtooth.poet.ztest_maximum_win_deviation')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_ZTEST_MAXIMUM_WIN_DEVIATION_)
self.assertEqual(kwargs['value_type'], float)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in \
[-100.0, -1.0, 0.0, float('nan'), float('inf'), float('-inf')]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.ztest_maximum_win_deviation,
TestPoetConfigView.
_EXPECTED_DEFAULT_ZTEST_MAXIMUM_WIN_DEVIATION_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 2.575
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.ztest_maximum_win_deviation, 2.575)
def test_ztest_minimum_win_count(self, mock_config_view):
"""Verify that retrieving zTest minimum win observations works for
invalid cases (missing, invalid format, invalid value) as well as
valid case.
"""
poet_config_view = PoetConfigView(state_view=None)
# Simulate an underlying error parsing value
mock_config_view.return_value.get_setting.side_effect = \
ValueError('bad value')
self.assertEqual(
poet_config_view.ztest_minimum_win_count,
TestPoetConfigView._EXPECTED_DEFAULT_ZTEST_MINIMUM_WIN_COUNT_)
_, kwargs = \
mock_config_view.return_value.get_setting.call_args
self.assertEqual(
kwargs['key'],
'sawtooth.poet.ztest_minimum_win_count')
self.assertEqual(
kwargs['default_value'],
TestPoetConfigView._EXPECTED_DEFAULT_ZTEST_MINIMUM_WIN_COUNT_)
self.assertEqual(kwargs['value_type'], int)
# Underlying config setting is not a valid value
mock_config_view.return_value.get_setting.side_effect = None
for bad_value in [-100, -1]:
mock_config_view.return_value.get_setting.return_value = bad_value
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(
poet_config_view.ztest_minimum_win_count,
TestPoetConfigView._EXPECTED_DEFAULT_ZTEST_MINIMUM_WIN_COUNT_)
# Underlying config setting is a valid value
mock_config_view.return_value.get_setting.return_value = 0
poet_config_view = PoetConfigView(state_view=None)
self.assertEqual(poet_config_view.ztest_minimum_win_count, 0)
| 43.088838 | 80 | 0.695073 | 2,245 | 18,916 | 5.460579 | 0.070379 | 0.101966 | 0.073089 | 0.083204 | 0.92218 | 0.900237 | 0.872502 | 0.840688 | 0.801126 | 0.765478 | 0 | 0.009588 | 0.233559 | 18,916 | 438 | 81 | 43.187215 | 0.835977 | 0.176623 | 0 | 0.8 | 0 | 0 | 0.058354 | 0.031401 | 0 | 0 | 0 | 0 | 0.221818 | 1 | 0.036364 | false | 0 | 0.010909 | 0 | 0.087273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
824b668521f84ff2afb8c072a5501e4e428a5c39 | 464 | py | Python | 2019/01/log_loss_magic_numbers/three_class_log_loss_random.py | ericness/blog | 25628fdb254faa033bf917ef6bb56db409141cc6 | [
"MIT"
] | 3 | 2019-06-17T18:44:49.000Z | 2020-05-04T16:14:57.000Z | 2019/01/log_loss_magic_numbers/three_class_log_loss_random.py | ericness/blog | 25628fdb254faa033bf917ef6bb56db409141cc6 | [
"MIT"
] | 1 | 2019-02-26T19:58:32.000Z | 2019-02-26T23:07:09.000Z | 2019/01/log_loss_magic_numbers/three_class_log_loss_random.py | ericness/blog | 25628fdb254faa033bf917ef6bb56db409141cc6 | [
"MIT"
] | 3 | 2019-02-05T17:16:12.000Z | 2022-03-14T20:32:28.000Z | from sklearn.metrics import log_loss
actual = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
predictions = [
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333],
]
print(log_loss(actual, predictions))
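# Added note (not in the original script): with a uniform prediction of one third for
# each of the three classes, every sample contributes -ln(1/3), so the multiclass log
# loss is ln(3) ~= 1.0986 regardless of the true labels (scikit-learn renormalises the
# 0.333 rows so they sum to one, at least in the versions current when this was written).
# Printing the closed-form value for comparison:
import math
print(math.log(3))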
| 23.2 | 45 | 0.506466 | 97 | 464 | 2.402062 | 0.134021 | 0.618026 | 0.751073 | 1.201717 | 0.618026 | 0.618026 | 0.618026 | 0.618026 | 0.618026 | 0.618026 | 0 | 0.440678 | 0.237069 | 464 | 19 | 46 | 24.421053 | 0.217514 | 0 | 0 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0.058824 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
825ffdbb4bfc51dbca27f3575841ec966e042ddb | 152,842 | py | Python | turbinescripts/turbines.py | malvela/ML | 391930e7cc290e42fbb89641f092422aee5db245 | [
"MIT"
] | null | null | null | turbinescripts/turbines.py | malvela/ML | 391930e7cc290e42fbb89641f092422aee5db245 | [
"MIT"
] | null | null | null | turbinescripts/turbines.py | malvela/ML | 391930e7cc290e42fbb89641f092422aee5db245 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-04_helihoist-1_tom_acc-vel-pos_hammerhead_2019-09-01-10-20-45_2019-09-07-07-19-26
turbine-04_helihoist-1_tom_geometry_hammerhead_2019-09-01-10-20-45_2019-09-07-07-19-26
turbine-04_sbitroot_tom_acc-vel-pos_hammerhead_2019-09-07-04-48-53_2019-09-07-07-12-46
turbine-04_sbittip_tom_acc-vel-pos_hammerhead_2019-09-07-04-41-36_2019-09-07-07-25-10
turbine-04_helihoist-1_tom_acc-vel-pos_sbi1_2019-09-07-07-19-27_2019-09-07-12-40-14
turbine-04_sbitroot_tom_acc-vel-pos_sbi1_2019-09-07-07-12-47_2019-09-07-12-39-30
turbine-04_sbittip_tom_acc-vel-pos_sbi1_2019-09-07-07-25-10_2019-09-07-12-34-23
turbine-04_helihoist-1_tom_acc-vel-pos_tnhb1_2019-09-07-12-40-14_2019-09-07-21-49-58
turbine-04_helihoist-1_tom_geometry_tnhb1_2019-09-07-12-40-14_2019-09-07-21-49-58
turbine-04_sbitroot_tom_acc-vel-pos_tnhb1_2019-09-07-12-39-30_2019-09-08-04-43-15
turbine-04_sbittip_tom_acc-vel-pos_tnhb1_2019-09-07-12-34-23_2019-09-08-04-49-49
wmb-sued-2019-9-1
wmb-sued-2019-9-2
wmb-sued-2019-9-3
wmb-sued-2019-9-4
wmb-sued-2019-9-5
wmb-sued-2019-9-6
wmb-sued-2019-9-7
lidar_2019_09_01
lidar_2019_09_03
lidar_2019_09_04
lidar_2019_09_05
lidar_2019_09_06
lidar_2019_09_07
'''
#loading data and collecting all dataframes in one list
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-04**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-04**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-04**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-04**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-04**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead, sbitroot_hammerhead, sbitip_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter = ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.extend([helihoist_sbi1, sbiroot_sbi1, sbitip_sbi1])
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-01.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-02.csv', delimiter = ' ')
wmb3= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-03.csv', delimiter = ' ')
wmb4= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-04.csv', delimiter = ' ')
wmb5= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-05.csv', delimiter = ' ')
wmb6= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-06.csv', delimiter = ' ')
wmb7= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-07.csv', delimiter = ' ')
data.extend([wmb1, wmb2, wmb3, wmb4, wmb5, wmb6, wmb7])
wmb_all = [wmb1, wmb2, wmb3, wmb4, wmb5, wmb6, wmb7]
lidar1= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-01.csv', delimiter = ' ')
lidar2= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-03.csv', delimiter = ' ')
lidar3= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-04.csv', delimiter = ' ')
lidar4= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-05.csv', delimiter = ' ')
lidar5= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-06.csv', delimiter = ' ')
lidar6= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-07.csv', delimiter = ' ')
data.extend([lidar1, lidar2, lidar3, lidar4, lidar5, lidar6])
lidar_all = [lidar1, lidar2, lidar3, lidar4, lidar5, lidar6]
buffer1 = []
for i in wmb_all:
i.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer1.append(i)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for j in lidar_all:
j.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
buffer2.append(j)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
counter = 0
#generating timestamps for every dataframe
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
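# Added design note: the loop above builds the timestamps one row at a time with
# pd.Timestamp.fromtimestamp (local time) before resampling to a 3-second grid.
# A vectorised alternative sketch would be
# df.index = pd.to_datetime(df.iloc[:, 0], unit='s')
# but note that to_datetime(unit='s') yields UTC, so the hard-coded time windows
# used further down (which follow local time) would have to be shifted accordingly.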
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
'''
#Plotting:
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
#generating csv files
'''
# generating hammerhead file
#04:48:53 07:12:46
for i in range(0,4):
data[i] = data[i]['2019-09-07 04:48:53': '2019-09-07 07:12:46']
transition_wmb =wmb['2019-09-07 04:48:53': '2019-09-07 07:12:46']
transition_lidar = lidar['2019-09-07 04:48:53': '2019-09-07 07:12:46']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine04/hammerhead_turbine04.csv')
#generating sbi1 file
#07: 19:27 12: 34:23
for i in range(4,7):
data[i] = data[i]['2019-09-07 07:25:10': '2019-09-07 12:34:23']
transition_wmb =wmb['2019-09-07 07:25:10': '2019-09-07 12:34:23']
transition_lidar = lidar['2019-09-07 07:25:10': '2019-09-07 12:34:23']
result = pd.concat([data[4], data[5], data[6], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine04/sbi1_turbine4.csv')
#generating tnhb1 file
#12:34:23 04:43:15
for i in range(7,11):
data[i] = data[i]['2019-09-07 12:40:14': '2019-09-07 21:49:58']
transition_wmb =wmb['2019-09-07 12:40:14': '2019-09-07 21:49:58']
transition_lidar = lidar['2019-09-07 12:40:14': '2019-09-07 21:49:58']
result = pd.concat([data[10], data[11], data[12], data[13], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine04/tnhb1_turbine4.csv')
'''
'''
files to extract:
01.09.2019 10:20:45 07.09.2019 07:19:26 /data[1]
07.09.2019 12:40:14 07.09.2019 21:49:58 /data[8]
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-09-01 12:20:45': '2019-09-07 09:19:26']
transition_wmb =wmb['2019-09-01 12:20:45': '2019-09-07 09:19:26']
transition_lidar = lidar['2019-09-01 12:20:45': '2019-09-07 09:19:26']
print(transition_lidar)
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_speed_3']
del result['wind_dir_3']
del result['wind_dir_3_corr']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine04.csv')
print(data[8].index[0])
print(data[8].index[-1])
data[8] = data[8]['2019-09-07 14:40:14': '2019-09-07 23:49:58']
transition_wmb =wmb['2019-09-07 14:40:14': '2019-09-07 23:49:58']
transition_lidar = lidar['2019-09-07 14:40:14': '2019-09-07 23:49:58']
print(transition_lidar)
result = pd.concat([data[8], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine04.csv')
#turbine05
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-05_helihoist-1_tom_acc-vel-pos_hammerhead_2019-09-10-16-04-47_2019-09-20-02-53-43
turbine-05_helihoist-1_tom_geometry_hammerhead_2019-09-10-16-04-47_2019-09-20-02-53-43
turbine-05_helihoist-1_tom_acc-vel-pos_sbi1_2019-09-20-02-53-43_2019-09-20-07-42-54
turbine-05_sbitroot_tom_acc-vel-pos_sbi1_2019-09-20-02-34-11_2019-09-20-07-33-33
turbine-05_sbittip_tom_acc-vel-pos_sbi1_2019-09-20-02-47-05_2019-09-20-07-43-54
turbine-05_sbittip_tom_acc-vel-pos_sbi2_2019-09-20-12-07-46_2019-09-20-13-00-55
turbine-05_sbitroot_tom_acc-vel-pos_sbi2_2019-09-20-12-03-56_2019-09-20-12-58-11
turbine-05_helihoist-1_tom_acc-vel-pos_sbi2_2019-09-20-12-01-12_2019-09-20-12-51-37
turbine-05_sbittip_tom_acc-vel-pos_tnhb1_2019-09-20-07-43-54_2019-09-20-12-07-46
turbine-05_sbitroot_tom_acc-vel-pos_tnhb1_2019-09-20-07-33-33_2019-09-20-12-03-56
turbine-05_helihoist-1_tom_geometry_tnhb1_2019-09-20-07-42-54_2019-09-20-12-01-11
turbine-05_helihoist-1_tom_acc-vel-pos_tnhb1_2019-09-20-07-42-54_2019-09-20-12-01-11
turbine-05_helihoist-1_tom_acc-vel-pos_tnhb2_2019-09-20-12-51-37_2019-09-20-16-14-47
turbine-05_helihoist-1_tom_geometry_tnhb2_2019-09-20-12-51-37_2019-09-20-16-14-47
turbine-05_sbitroot_tom_acc-vel-pos_tnhb2_2019-09-20-12-58-11_2019-09-20-16-36-36
turbine-05_sbittip_tom_acc-vel-pos_tnhb2_2019-09-20-13-00-55_2019-09-20-16-11-16
wmb-sued-2019-9-10
wmb-sued-2019-9-11
wmb-sued-2019-9-12
wmb-sued-2019-9-13
wmb-sued-2019-9-14
wmb-sued-2019-9-15
wmb-sued-2019-9-16
wmb-sued-2019-9-17
wmb-sued-2019-9-18
wmb-sued-2019-9-19
wmb-sued-2019-9-20
no wind data
'''
#loading data and collecting all dataframes in one list
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-05**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-05**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-05*.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-05**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-05**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter = ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.extend([helihoist_sbi1, sbiroot_sbi1, sbitip_sbi1])
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter = ',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter = ',')
sbitip_sbi2 = pd.read_csv(sbi2[2], delimiter = ',')
data.extend([helihoist_sbi2, sbiroot_sbi2, sbitip_sbi2])
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
helihoist_tnhb2 = pd.read_csv(tnhb2[0], delimiter = ',')
helihoist_geo_tnhb2 = pd.read_csv(tnhb2[1], delimiter = ',')
sbiroot_tnhb2 = pd.read_csv(tnhb2[2], delimiter = ',')
sbitip_tnhb2 = pd.read_csv(tnhb2[3], delimiter = ',')
data.extend([helihoist_tnhb2, helihoist_geo_tnhb2, sbiroot_tnhb2, sbitip_tnhb2])
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-10.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-11.csv', delimiter = ' ')
wmb3= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-12.csv', delimiter = ' ')
wmb4= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-13.csv', delimiter = ' ')
wmb5= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-14.csv', delimiter = ' ')
wmb6= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-15.csv', delimiter = ' ')
wmb7= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-16.csv', delimiter = ' ')
wmb8= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-17.csv', delimiter = ' ')
wmb9= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-18.csv', delimiter = ' ')
wmb10= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-19.csv', delimiter = ' ')
wmb11= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-20.csv', delimiter = ' ')
wmb_all = [wmb1, wmb2, wmb3, wmb4, wmb5, wmb6, wmb7, wmb8, wmb9, wmb10, wmb11]
buffer1 = []
for i in wmb_all:
i.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer1.append(i)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
#generating timestamps for every dataframe
counter = 0
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
'''
#Plotting:
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#17:29:33 02:34:11
for i in range(0,4):
data[i] = data[i]['2019-09-20 17:29:33': '2019-09-20 02:34:11']
transition_wmb =wmb['2019-09-20 17:29:33': '2019-09-20 02:34:11']
result =pd.concat([data[0],data[1],data[2],data[3], transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine05/hammerhead_turbine05.csv')
#generating sbi1 file
#02:53:43 07:33:33
for i in range(4,7):
data[i] = data[i]['2019-09-20 02:53:43': '2019-09-20 07:33:33']
transition_wmb =wmb['2019-09-20 02:53:43': '2019-09-20 07:33:33']
result = pd.concat([data[4], data[5], data[6], transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine05/sbi1_turbine5.csv')
#generating sbi2 file
#12:07:46 12:51:37
for i in range(7,10):
data[i] = data[i]['2019-09-20 12:07:46': '2019-09-20 12:51:37']
transition_wmb =wmb['2019-09-20 12:07:46': '2019-09-20 12:51:37']
result = pd.concat([data[7], data[8], data[9], transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine05/sbi2_turbine5.csv')
#generating tnhb1 file
#07:43:54 12:01:11
for i in range(10,14):
data[i] = data[i]['2019-09-20 07:43:54': '2019-09-20 12:01:11']
transition_wmb =wmb['2019-09-20 07:43:54': '2019-09-20 12:01:11']
result = pd.concat([data[10], data[11], data[12], data[13], transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine05/tnhb1_turbine5.csv')
#generating tnhb2 file
#13:00:55 16:11:16
for i in range(14,16):
data[i] = data[i]['2019-09-20 13:00:55': '2019-09-20 16:11:16']
transition_wmb = wmb['2019-09-20 13:00:55': '2019-09-20 16:11:16']
result = pd.concat([data[14], data[15], data[16], data[17], transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine05/tnhb2_turbine5.csv')
'''
'''
files to extract
10.09.2019 16:04:47 20.09.2019 02:53:43
20.09.2019 07:42:54 20.09.2019 12:01:11
20.09.2019 12:51:37 20.09.2019 16:14:47
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-09-10 18:04:48': '2019-09-20 04:53:39']
transition_wmb =wmb['2019-09-10 18:04:48': '2019-09-20 04:53:39']
result = pd.concat([data[1], transition_wmb], axis=1)
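# pd.concat with axis=1 aligns the frames on their DatetimeIndex; timestamps that
# exist in only one frame end up as NaN rows in the merged result.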
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine05.csv')
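# The long runs of `del result[...]` above (and in the sections below) keep only
# the geometry and selected wave columns. A compact sketch of the same idea
# (hypothetical helper, not called anywhere; the caller passes the columns to
# discard):
def drop_columns(df, cols):
    # drop the listed columns, silently skipping any that are not present
    return df.drop(columns=[c for c in cols if c in df.columns])
# e.g. result = drop_columns(result, ['Tp', 'Tz', 'Hm0'])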
print(data[9].index[0])
print(data[9].index[-1])
data[9] = data[9]['2019-09-20 09:42:54': '2019-09-20 14:01:07']
transition_wmb =wmb['2019-09-20 09:42:54': '2019-09-20 14:01:07']
result = pd.concat([data[9], transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine05.csv')
print(data[13].index[0])
print(data[13].index[-1])
data[13] = data[13]['2019-09-20 14:51:37': '2019-09-20 18:14:40']
transition_wmb =wmb['2019-09-20 14:51:37': '2019-09-20 18:14:40']
result = pd.concat([data[13], transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb2_turbine05.csv')
#turbine06
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
import jinja2
import seaborn as sn
# every data name that keeps data for turbine6
'''
turbine-06_helihoist-1_tom_acc-vel-pos_hammerhead_2019-09-22-03-14-43_2019-09-22-12-05-47
turbine-06_helihoist-1_tom_geometry_hammerhead_2019-09-22-03-14-43_2019-09-22-12-05-47
turbine-06_sbitroot_tom_acc-vel-pos_hammerhead_2019-09-22-03-11-49_2019-09-22-12-15-43
turbine-06_sbittip_tom_acc-vel-pos_hammerhead_2019-09-22-03-16-30_2019-09-22-12-12-21
turbine-06_helihoist-1_tom_acc-vel-pos_sbi1_2019-09-22-12-05-48_2019-09-22-12-41-45
turbine-06_sbitroot_tom_acc-vel-pos_sbi1_2019-09-22-12-15-43_2019-09-22-12-42-48
turbine-06_sbittip_tom_acc-vel-pos_sbi1_2019-09-22-12-12-21_2019-09-22-12-39-12
turbine-06_helihoist-1_tom_acc-vel-pos_sbi2_2019-09-22-22-11-22_2019-09-23-00-30-45
turbine-06_sbitroot_tom_acc-vel-pos_sbi2_2019-09-22-22-13-32_2019-09-23-00-29-28
turbine-06_sbittip_tom_acc-vel-pos_sbi2_2019-09-22-22-04-11_2019-09-23-00-19-04
turbine-06_helihoist-1_tom_acc-vel-pos_tnhb1_2019-09-22-12-41-45_2019-09-22-22-11-22
turbine-06_helihoist-1_tom_geometry_tnhb1_2019-09-22-12-41-45_2019-09-22-22-11-22
turbine-06_sbitroot_tom_acc-vel-pos_tnhb1_2019-09-22-12-42-48_2019-09-22-22-13-32
turbine-06_sbittip_tom_acc-vel-pos_tnhb1_2019-09-22-12-39-13_2019-09-22-22-04-11
turbine-06_helihoist-1_tom_acc-vel-pos_tnhb2_2019-09-23-00-30-45_2019-09-23-00-42-54
turbine-06_helihoist-1_tom_geometry_tnhb2_2019-09-23-00-30-45_2019-09-23-00-42-54
turbine-06_sbitroot_tom_acc-vel-pos_tnhb2_2019-09-23-00-29-28_2019-09-23-11-16-27
turbine-06_sbittip_tom_acc-vel-pos_tnhb2_2019-09-23-00-19-04_2019-09-23-11-22-22
wmb-sued-2019-9-22
lidar_2019_09_22
everything available
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-06**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-06**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-06**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-06**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-06**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.append(helihoist_tele_hammerhead) , data.append(helihoist_geo_hammerhead), data.append(sbitroot_hammerhead) ,data.append(sbitip_hammerhead)
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter = ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.append(helihoist_sbi1) ,data.append(sbiroot_sbi1) ,data.append(sbitip_sbi1)
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter = ',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter = ',')
sbitip_sbi2 = pd.read_csv(sbi2[2], delimiter = ',')
data.append(helihoist_sbi2) ,data.append(sbiroot_sbi2) ,data.append(sbitip_sbi2)
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.append(helihoist_tnhb1) ,data.append(helihoist_geo_tnhb1) ,data.append(sbiroot_tnhb1),data.append(sbitip_tnhb1)
helihoist_tnhb2 = pd.read_csv(tnhb2[0], delimiter = ',')
helihoist_geo_tnhb2 = pd.read_csv(tnhb2[1], delimiter = ',')
sbiroot_tnhb2 = pd.read_csv(tnhb2[2], delimiter = ',')
sbitip_tnhb2 = pd.read_csv(tnhb2[3], delimiter = ',')
data.append(helihoist_tnhb2) ,data.append(helihoist_geo_tnhb2) ,data.append(sbiroot_tnhb2),data.append(sbitip_tnhb2)
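# The per-file read_csv/append pairs above could also be written as a loop over
# each sorted glob result. A sketch (hypothetical helper, not called anywhere; it
# assumes the alphabetical file order is the order the rest of the script expects):
def load_phase(paths):
    # read every comma-separated csv belonging to one installation phase
    return [pd.read_csv(p, delimiter=',') for p in paths]
# e.g. data = load_phase(hammerhead) + load_phase(sbi1) + load_phase(sbi2)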
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-22.csv', delimiter = ' ')
wmb_all = []
wmb_all.append(wmb1)
lidar1= pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-22.csv', delimiter = ' ')
data.append(lidar1)
lidar_all =[]
lidar_all.append(lidar1)
buffer1 = []
for j in wmb_all:
    j.columns = (
        'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
        'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
        '#Waves')
    buffer1.append(j)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for i in lidar_all:
    i.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
        'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
        'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
        'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
        'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
        'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
        'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
        'wind_dir_10_corr', 'height_10', 'heading')
    buffer2.append(i)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
#Plotting and generating UTC Timestamps
'''
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
#wmb = wmb.resample('1S', label='left').mean().pad() / 1800
wmb = wmb
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
#wmb = wmb.resample('1S', label='left').mean().pad() / 1800
lidar = lidar
#Plotting:
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
#merging dataframes together:
#by manually analysing the data we found the following intervals with constant environmental conditions
#installation in the period 22.9/23.9
#04:00-12:00
#17:00-18:00
#19:00-24:00
'''
hammerhead
03:16:30 12:05:47
sbi1
14:15:43 12:39:12
sbi2
22:13:32 00:19:04
tnhb1
12:39:13 22:04:11
tnhb2
00:30:45 00:42:54
'''
#resample the buoy (wmb) data and then add it to the sample frames
wmb.columns = ('epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss','TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps', '#Waves')
UTC = []
for k in range(len(wmb)):
    UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
wmb.pop('epoch')
wmb = wmb.resample('3S', label = 'left').mean().pad() / 1800
#same with lidar data
UTC = []
for k in range(len(lidar)):
    UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
lidar = lidar
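# .pad() on the resampled frames forward-fills the 3-second grid; recent pandas
# versions deprecate DataFrame.pad() in favour of the equivalent .ffill().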
#helper that prints the start and end time of a dataframe and returns its duration
def getduration(df):
    start = df.index[0]
    ende = df.index[-1]
    print(start, ende)
    return ende-start
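# example: getduration(data[0]) prints the first and last timestamp of the frame
# and returns the elapsed time (a pandas Timedelta once the index holds timestamps)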
counter = 0
#generating timestamps for every dataframe
for df in data:
    UTC = []
    for k in range(len(df)):
        UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
    df['epoch'] = UTC
    df.index = df['epoch']
    del df['epoch']
    df = df.resample('3S', label = 'left').mean().pad()
    data[counter] = df
    counter = counter+1
'''
# generating hammerhead file
for i in range(4):
data[i] = data[i]['2019-09-22 03:16:30': '2019-09-22 12:05:47']
transition_wmb =wmb['2019-09-22 03:16:30': '2019-09-22 12:05:47']
transition_lidar = lidar['2019-09-22 03:16:30': '2019-09-22 12:05:47']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine06/hammerhead_turbine06.csv')
#generating sbi1 file
for i in range(4,7):
data[i] = data[i]['2019-09-22 12:15:43': '2019-09-22 12:39:12']
transition_wmb =wmb['2019-09-22 12:15:43': '2019-09-22 12:39:12']
transition_lidar = lidar['2019-09-22 12:15:43': '2019-09-22 12:39:12']
result = pd.concat([data[4], data[5], data[6], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine06/sbi1_turbine6.csv')
#generating sbi2 file
#22:13:32 00:19:04
for i in range(7,10):
data[i] = data[i]['2019-09-22 22:13:32': '2019-09-23 00:19:04']
transition_wmb =wmb['2019-09-22 22:13:32': '2019-09-23 00:19:04']
transition_lidar = lidar['2019-09-22 22:13:32': '2019-09-23 00:19:04']
result = pd.concat([data[7], data[8], data[9], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine06/sbi2_turbine6.csv')
#generating tnhb1 file
#12:39:13 22:04:11
for i in range(10,14):
data[i] = data[i]['2019-09-22 12:41:44': '2019-09-22 22:04:11']
transition_wmb =wmb['2019-09-22 12:41:44': '2019-09-22 22:04:11']
transition_lidar = lidar['2019-09-22 12:41:44': '2019-09-22 22:04:11']
result = pd.concat([data[10], data[11], data[12], data[13], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine06/tnhb1_turbine6.csv')
#generating tnhb2 file
#00:30:45 00:42:54
for i in range(14,18):
data[i] = data[i]['2019-09-23 00:30:45': '2019-09-23 00:42:54']
transition_wmb =wmb['2019-09-23 00:30:45': '2019-09-23 00:42:54']
transition_lidar = lidar['2019-09-23 00:30:45': '2019-09-23 00:42:54']
result = pd.concat([data[14], data[15], data[16], data[17],transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine06/tnhb2_turbine6.csv')
'''
'''
#generating correlation matrix
sbi1_turbine6 = pd.read_csv('Results_preprocessing/turbine06/sbi1_turbine6.csv', delimiter = ',')
corrMatrix = sbi1_turbine6.corr() #coolwarm
sn.heatmap(corrMatrix, annot=False)
plt.show()
'''
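# DataFrame.corr() in the block above defaults to the Pearson correlation, and
# sn.heatmap renders the resulting matrix.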
'''
files to extract
22.09.2019 03:14:43 22.09.2019 12:05:47
22.09.2019 12:41:45 22.09.2019 22:11:22
23.09.2019 00:30:45 23.09.2019 00:42:54
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-09-22 05:14:43': '2019-09-22 14:05:42']
transition_wmb =wmb['2019-09-22 05:14:43': '2019-09-22 14:05:42']
transition_lidar = lidar['2019-09-22 05:14:43': '2019-09-22 14:05:42']
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_speed_3']
del result['wind_dir_3']
del result['wind_dir_3_corr']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine06.csv')
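# The slice -> concat -> delete-columns -> to_csv sequence above is repeated for
# every geometry file. A sketch of a reusable variant (hypothetical helper, not
# called anywhere; 'keep' lists the columns to retain in the exported csv):
def export_geometry(frame, env_frames, start, end, keep, path):
    # restrict the sensor frame and the environment frames to one time window
    pieces = [frame[start:end]] + [env[start:end] for env in env_frames]
    merged = pd.concat(pieces, axis=1)
    # keep only the requested columns that are actually present
    merged = merged[[c for c in keep if c in merged.columns]]
    merged.to_csv(path)
    return merged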
print(data[11].index[0])
print(data[11].index[-1])
data[11] = data[11]['2019-09-22 14:41:45': '2019-09-22 23:30:03']
transition_wmb =wmb['2019-09-22 14:41:45': '2019-09-22 23:30:03']
transition_lidar = lidar['2019-09-22 14:41:45': '2019-09-22 23:30:03']
result = pd.concat([data[11], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_speed_3']
del result['wind_dir_3']
del result['wind_dir_3_corr']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine06.csv')
print(data[15].index[0])
print(data[15].index[-1])
data[15] = data[15]['2019-09-23 02:30:46': '2019-09-23 02:42:47']
transition_wmb =wmb['2019-09-23 02:30:46': '2019-09-23 02:42:47']
transition_lidar = lidar['2019-09-23 02:30:46': '2019-09-23 02:42:47']
result = pd.concat([data[15], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb2_turbine06.csv')
#turbine07
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-07_helihoist-1_tom_acc-vel-pos_hammerhead_2019-09-24-11-38-50_2019-09-25-12-01-27
turbine-07_helihoist-1_tom_geometry_hammerhead_2019-09-24-11-38-50_2019-09-25-12-01-27
turbine-07_sbitroot_tom_acc-vel-pos_hammerhead_2019-09-24-11-56-08_2019-09-25-11-58-23
turbine-07_sbittip_tom_acc-vel-pos_hammerhead_2019-09-24-11-59-00_2019-09-25-00-25-15
turbine-07_helihoist-1_tom_acc-vel-pos_sbi1_2019-09-25-12-01-27_2019-09-25-13-49-58
turbine-07_sbitroot_tom_acc-vel-pos_sbi1_2019-09-25-11-58-23_2019-09-25-13-48-10
turbine-07_helihoist-1_tom_acc-vel-pos_sbi2_2019-09-25-18-30-36_2019-09-25-21-28-56
turbine-07_sbitroot_tom_acc-vel-pos_sbi2_2019-09-25-18-15-46_2019-09-25-21-25-41
turbine-07_helihoist-1_tom_acc-vel-pos_tnhb1_2019-09-25-13-49-58_2019-09-25-18-30-35
turbine-07_helihoist-1_tom_geometry_tnhb1_2019-09-25-13-49-58_2019-09-25-18-30-35
turbine-07_sbitroot_tom_acc-vel-pos_tnhb1_2019-09-25-13-48-10_2019-09-25-18-15-46
turbine-07_helihoist-1_tom_acc-vel-pos_tnhb2_2019-09-25-21-28-57_2019-09-26-01-04-54
turbine-07_helihoist-1_tom_geometry_tnhb2_2019-09-25-21-28-57_2019-09-26-01-04-54
turbine-07_sbitroot_tom_acc-vel-pos_tnhb2_2019-09-25-21-25-41_2019-09-26-00-37-22
wmb-sued-2019-9-24
wmb-sued-2019-9-25
lidar_2019_09_24
lidar_2019_09_25
everything available
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-07**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-07**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-07**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-07**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-07**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.append(helihoist_tele_hammerhead) , data.append(helihoist_geo_hammerhead), data.append(sbitroot_hammerhead) ,data.append(sbitip_hammerhead)
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter= ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
data.append(helihoist_sbi1) ,data.append(sbiroot_sbi1)
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter = ',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter = ',')
data.append(helihoist_sbi2) ,data.append(sbiroot_sbi2)
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
data.append(helihoist_tnhb1) ,data.append(helihoist_geo_tnhb1) ,data.append(sbiroot_tnhb1)
helihoist_tnhb2 = pd.read_csv(tnhb2[0], delimiter = ',')
helihoist_geo_tnhb2 = pd.read_csv(tnhb2[1], delimiter = ',')
sbiroot_tnhb2 = pd.read_csv(tnhb2[2], delimiter = ',')
data.append(helihoist_tnhb2) ,data.append(helihoist_geo_tnhb2) ,data.append(sbiroot_tnhb2),
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-24.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-09-25.csv', delimiter = ' ')
wmb_all =[]
wmb_all.append(wmb1), wmb_all.append(wmb2)
lidar1 =pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-24.csv', delimiter = ' ')
lidar2 =pd.read_csv('environment/environment/wind/lidar/lidar_2019-09-25.csv', delimiter = ' ')
data.append(lidar1), data.append(lidar2)
lidar_all = []
lidar_all.append(lidar1), lidar_all.append(lidar2)
buffer1 = []
for i in wmb_all:
    i.columns = (
        'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
        'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
        '#Waves')
    buffer1.append(i)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for j in lidar_all:
    j.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
        'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
        'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
        'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
        'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
        'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
        'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
        'wind_dir_10_corr', 'height_10', 'heading')
    buffer2.append(j)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
UTC = []
for k in range(len(wmb)):
    UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
wmb = wmb
UTC = []
for k in range(len(lidar)):
    UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
lidar = lidar
'''
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
counter = 0
#generating timestamps for every dataframe
for df in data:
    UTC = []
    for k in range(len(df)):
        UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
    df['epoch'] = UTC
    df.index = df['epoch']
    del df['epoch']
    df = df.resample('3S', label = 'left').mean().pad()
    data[counter] = df
    counter = counter+1
'''
# generating hammerhead file
#11:59:00 00:25:15
for i in range(4):
data[i] = data[i]['2019-09-24 11:59:00': '2019-09-25 00:25:15']
transition_wmb =wmb['2019-09-24 11:59:00': '2019-09-25 00:25:15']
transition_lidar = lidar['2019-09-24 11:59:00': '2019-09-25 00:25:15']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine07/hammerhead_turbine07.csv')
#generating sbi1 file
#12:01:27 13:48:10
for i in range(4,6):
data[i] = data[i]['2019-09-25 12:01:27': '2019-09-25 13:48:10']
transition_wmb =wmb['2019-09-25 12:01:27': '2019-09-25 13:48:10']
transition_lidar = lidar['2019-09-25 12:01:27': '2019-09-25 13:48:10']
result =pd.concat([data[4],data[5], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine07/sbi1_turbine07.csv')
#generating sbi2 file
#18:30:36 21:25:41
for i in range(6,8):
data[i] = data[i]['2019-09-25 18:30:36': '2019-09-25 21:25:41']
transition_wmb =wmb['2019-09-25 18:30:36': '2019-09-25 21:25:41']
transition_lidar = lidar['2019-09-25 18:30:36': '2019-09-25 21:25:41']
result =pd.concat([data[6],data[7], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine07/sbi2_turbine07.csv')
#generating tnhb1 file
#13:49:58 18:15:46
for i in range(8,11):
data[i] = data[i]['2019-09-25 13:49:58': '2019-09-25 18:15:46']
transition_wmb =wmb['2019-09-25 13:49:58': '2019-09-25 18:15:46']
transition_lidar = lidar['2019-09-25 13:49:58': '2019-09-25 18:15:46']
result =pd.concat([data[8],data[9],data[10], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine07/tnhb1_turbine07.csv')
#generating tnhb2 file
#21:28:57 00:37:22
for i in range(11,14):
data[i] = data[i]['2019-09-25 21:28:57': '2019-09-26 00:37:22']
transition_wmb =wmb['2019-09-25 21:28:57': '2019-09-26 00:37:22']
transition_lidar = lidar['2019-09-25 21:28:57': '2019-09-26 00:37:22']
result =pd.concat([data[11],data[12],data[13], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine07/tnhb2_turbine07.csv')
'''
'''
files to extract
24.09.2019 11:38:50 25.09.2019 12:01:27
25.09.2019 13:49:58 25.09.2019 18:30:35
25.09.2019 21:28:57 26.09.2019 01:04:54
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-09-24 13:38:51': '2019-09-25 14:01:23']
transition_wmb =wmb['2019-09-24 13:38:51': '2019-09-25 14:01:23']
transition_lidar = lidar['2019-09-24 13:38:51': '2019-09-25 14:01:23']
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine07.csv')
print(data[9].index[0])
print(data[9].index[-1])
data[9] = data[9]['2019-09-25 15:49:59': '2019-09-25 20:30:32']
transition_wmb =wmb['2019-09-25 15:49:59': '2019-09-25 20:30:32']
transition_lidar = lidar['2019-09-25 15:49:59': '2019-09-25 20:30:32']
result = pd.concat([data[9], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine07.csv')
print(data[12].index[0])
print(data[12].index[-1])
data[12] = data[12]['2019-09-25 23:28:57': '2019-09-26 03:04:49']
transition_wmb =wmb['2019-09-25 23:28:57': '2019-09-26 03:04:49']
transition_lidar = lidar['2019-09-25 23:28:57': '2019-09-26 03:04:49']
result = pd.concat([data[12], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb2_turbine07.csv')
#turbine08
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-08_helihoist-1_tom_acc-vel-pos_hammerhead_2019-10-14-07-55-52_2019-10-15-06-10-33
turbine-08_helihoist-1_tom_geometry_hammerhead_2019-10-14-07-55-52_2019-10-15-06-10-33
turbine-08_sbitroot_tom_acc-vel-pos_hammerhead_2019-10-14-10-49-53_2019-10-15-06-18-48
turbine-08_sbittip_tom_acc-vel-pos_hammerhead_2019-10-14-10-45-39_2019-10-15-06-08-27
turbine-08_helihoist-1_tom_acc-vel-pos_sbi1_2019-10-15-06-10-33_2019-10-15-07-30-26
turbine-08_sbitroot_tom_acc-vel-pos_sbi1_2019-10-15-06-18-48_2019-10-15-07-40-56
turbine-08_sbittip_tom_acc-vel-pos_sbi1_2019-10-15-06-08-27_2019-10-15-07-57-03
turbine-08_helihoist-1_tom_acc-vel-pos_sbi2_2019-10-15-14-21-36_2019-10-15-15-13-04
turbine-08_sbitroot_tom_acc-vel-pos_sbi2_2019-10-15-14-10-47_2019-10-15-15-05-07
turbine-08_sbittip_tom_acc-vel-pos_sbi2_2019-10-15-14-17-49_2019-10-15-15-09-25
turbine-08_helihoist-1_tom_acc-vel-pos_tnhb1_2019-10-15-07-30-26_2019-10-15-14-21-36
turbine-08_helihoist-1_tom_geometry_tnhb1_2019-10-15-07-30-26_2019-10-15-14-21-36
turbine-08_sbitroot_tom_acc-vel-pos_tnhb1_2019-10-15-07-40-56_2019-10-15-14-10-47
turbine-08_sbittip_tom_acc-vel-pos_tnhb1_2019-10-15-07-57-03_2019-10-15-14-17-49
turbine-08_helihoist-1_tom_acc-vel-pos_tnhb2_2019-10-15-15-13-04_2019-10-15-22-19-59
wmb-sued-2019-10-14
wmb-sued-2019-10-15
lidar_2019_10_14
lidar_2019_10_15
complete
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-08**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-08**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-08**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-08**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-08**.csv'))
#wmb = "wmb-sued-2019-9-22"
#lidar = "lidar_2019_09_22"
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.append(helihoist_tele_hammerhead) , data.append(helihoist_geo_hammerhead), data.append(sbitroot_hammerhead) ,data.append(sbitip_hammerhead)
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter= ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.append(helihoist_sbi1) ,data.append(sbiroot_sbi1), data.append(sbitip_sbi1)
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter = ',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter = ',')
sbitip_sbi2 = pd.read_csv(sbi2[2], delimiter = ',')
data.append(helihoist_sbi2) ,data.append(sbiroot_sbi2), data.append(sbitip_sbi2)
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.append(helihoist_tnhb1) ,data.append(helihoist_geo_tnhb1) ,data.append(sbiroot_tnhb1), data.append(sbitip_tnhb1)
helihoist_tnhb2 = pd.read_csv(tnhb2[0], delimiter = ',')
data.append(helihoist_tnhb2)
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-14.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-15.csv', delimiter = ' ')
wmb_all = []
wmb_all.append(wmb1), wmb_all.append(wmb2)
lidar1 =pd.read_csv('environment/environment/wind/lidar/lidar_2019-10-14.csv', delimiter = ' ')
lidar2 =pd.read_csv('environment/environment/wind/lidar/lidar_2019-10-15.csv', delimiter = ' ')
data.append(lidar1), data.append(lidar2)
lidar_all = []
lidar_all.append(lidar1), lidar_all.append(lidar2)
buffer1 = []
for i in wmb_all:
    i.columns = (
        'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
        'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
        '#Waves')
    buffer1.append(i)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for j in lidar_all:
    j.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
        'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
        'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
        'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
        'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
        'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
        'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
        'wind_dir_10_corr', 'height_10', 'heading')
    buffer2.append(j)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
UTC = []
for k in range(len(wmb)):
    UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
wmb = wmb
UTC = []
for k in range(len(lidar)):
    UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
lidar = lidar
#generating timestamps for every dataframe
counter = 0
for df in data:
    UTC = []
    for k in range(len(df)):
        UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
    df['epoch'] = UTC
    df.index = df['epoch']
    del df['epoch']
    df = df.resample('3S', label = 'left').mean().pad()
    data[counter] = df
    counter = counter+1
'''
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#10:49:53 06:08:27
for i in range(4):
data[i] = data[i]['2019-10-14 10:49:53': '2019-10-15 06:08:27']
transition_wmb =wmb['2019-10-14 10:49:53': '2019-10-15 06:08:27']
transition_lidar = lidar['2019-10-14 10:49:53': '2019-10-15 06:08:27']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine08/hammerhead_turbine08.csv')
#generating sbi1 file
#06:18:48 07:30:26
for i in range(4,7):
data[i] = data[i]['2019-10-15 06:18:48': '2019-10-15 07:30:26']
transition_wmb =wmb['2019-10-15 06:18:48': '2019-10-15 07:30:26']
transition_lidar = lidar['2019-10-15 06:18:48': '2019-10-15 07:30:26']
result =pd.concat([data[4],data[5],data[6], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine08/sbi1_turbine08.csv')
#generating sbi2 file
#14:21:36 15:05:07
for i in range(7,10):
data[i] = data[i]['2019-10-15 14:21:36': '2019-10-15 15:05:07']
transition_wmb =wmb['2019-10-15 14:21:36': '2019-10-15 15:05:07']
transition_lidar = lidar['2019-10-15 14:21:36': '2019-10-15 15:05:07']
result =pd.concat([data[7],data[8],data[9], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine08/sbi2_turbine08.csv')
#generating tnhb1 file
#07:57:03 14:10:47
for i in range(10,14):
data[i] = data[i]['2019-10-15 07:57:03': '2019-10-15 14:10:47']
transition_wmb =wmb['2019-10-15 07:57:03': '2019-10-15 14:10:47']
transition_lidar = lidar['2019-10-15 07:57:03': '2019-10-15 14:10:47']
result =pd.concat([data[10],data[11],data[12],data[13], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine08/tnhb1_turbine08.csv')
#generating tnhb2 file
#15:13:04 22:19:59
for i in range(14,15):
data[i] = data[i]['2019-10-15 15:13:04': '2019-10-15 22:19:59']
transition_wmb =wmb['2019-10-15 15:13:04': '2019-10-15 22:19:59']
transition_lidar = lidar['2019-10-15 15:13:04': '2019-10-15 22:19:59']
result =pd.concat([data[14], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine08/tnhb2_turbine08.csv')
'''
'''
files to extract
14.10.2019 07:55:52 15.10.2019 06:10:33
15.10.2019 07:30:26 15.10.2019 14:21:36
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-10-14 09:55:53': '2019-10-15 08:10:27']
transition_wmb =wmb['2019-10-14 09:55:53': '2019-10-15 08:10:27']
transition_lidar = lidar['2019-10-14 09:55:53': '2019-10-15 08:10:27']
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine08.csv')
print(data[11].index[0])
print(data[11].index[-1])
data[11] = data[11]['2019-10-15 09:30:27': '2019-10-15 16:21:31']
transition_wmb =wmb['2019-10-15 09:30:27': '2019-10-15 16:21:31']
transition_lidar = lidar['2019-10-15 09:30:27': '2019-10-15 16:21:31']
result = pd.concat([data[11], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine08.csv')
#turbine09
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-09_helihoist-1_tom_acc-vel-pos_hammerhead_2019-10-04-12-59-44_2019-10-08-01-41-05
turbine-09_helihoist-1_tom_geometry_hammerhead_2019-10-04-12-59-44_2019-10-08-01-41-05
turbine-09_sbitroot_tom_acc-vel-pos_hammerhead_2019-10-17-00-43-06_2019-10-17-06-47-39
turbine-09_sbittip_tom_acc-vel-pos_hammerhead_2019-10-17-00-50-54_2019-10-17-06-49-19
wmb-sued-2019-10-04
wmb-sued-2019-10-05
wmb-sued-2019-10-06
wmb-sued-2019-10-07
wmb-sued-2019-10-08
lidar missing
no further data available
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-09**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-09**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-09**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-09**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-09**.csv'))
#wmb = "wmb-sued-2019-9-22"
#lidar = "lidar_2019_09_22"
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.append(helihoist_tele_hammerhead) , data.append(helihoist_geo_hammerhead), data.append(sbitroot_hammerhead) ,data.append(sbitip_hammerhead)
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-04.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-05.csv', delimiter = ' ')
wmb3= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-06.csv', delimiter = ' ')
wmb4= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-07.csv', delimiter = ' ')
wmb5= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-08.csv', delimiter = ' ')
data.append(wmb1), data.append(wmb2), data.append(wmb3), data.append(wmb4) , data.append(wmb5)
wmb_all = []
wmb_all.append(wmb1), wmb_all.append(wmb2), wmb_all.append(wmb3), wmb_all.append(wmb4) , wmb_all.append(wmb5)
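# note: the five wmb frames were also appended to `data` above, so the generic
# timestamp/resample loop below processes them as well, in addition to the
# dedicated wmb handling that follows it.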
buffer = []
for i in wmb_all:
    i.columns = (
        'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
        'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
        '#Waves')
    buffer.append(i)
wmb = pd.concat(buffer, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
#generating timestamps for every dataframe
counter = 0
for df in data:
    UTC = []
    for k in range(len(df)):
        UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
    df['epoch'] = UTC
    df.index = df['epoch']
    del df['epoch']
    df = df.resample('3S', label = 'left').mean().pad()
    data[counter] = df
    counter = counter+1
UTC = []
for k in range(len(wmb)):
    UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
wmb = wmb
#plotting
'''
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
files to extract
04.10.2019 12:59:44 08.10.2019 01:41:05
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-10-4 14:59:45': '2019-10-8 03:41:00']
transition_wmb =wmb['2019-10-4 14:59:45': '2019-10-8 03:41:00']
result = pd.concat([data[1], transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine09.csv')
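#The long chains of del result[...] above could be written as a single drop() call.
#Sketch only (not used below); errors='ignore' keeps it tolerant of columns that are
#absent in a particular transition file.
def drop_columns(frame, columns):
    #equivalent to repeated `del frame[col]`, but skips columns that are not present
    return frame.drop(columns=list(columns), errors='ignore')
#e.g. result = drop_columns(result, ['Tp', 'Sprp', 'Tz'])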
#turbine10
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-10_helihoist-1_tom_acc-vel-pos_hammerhead_2019-10-23-11-32-59_2019-10-23-19-42-22
turbine-10_helihoist-1_tom_geometry_hammerhead_2019-10-23-11-32-59_2019-10-23-19-42-22
turbine-10_sbitroot_tom_acc-vel-pos_hammerhead_2019-10-23-11-48-51_2019-10-23-19-45-30
turbine-10_sbittip_tom_acc-vel-pos_hammerhead_2019-10-23-11-40-32_2019-10-23-19-55-12
turbine-10_helihoist-1_tom_acc-vel-pos_sbi1_2019-10-23-19-42-31_2019-10-23-20-29-21
turbine-10_sbitroot_tom_acc-vel-pos_sbi1_2019-10-23-19-45-31_2019-10-23-22-00-39
turbine-10_sbittip_tom_acc-vel-pos_sbi1_2019-10-23-19-55-12_2019-10-23-22-05-43
wmb-sued-2019-10-23
lidar_2019_10_23
no further data available here either
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-10**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-10**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-10**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-10**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-10**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead, sbitroot_hammerhead, sbitip_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter= ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.extend([helihoist_sbi1, sbiroot_sbi1])  #note: sbitip_sbi1 is read above but not added to data
wmb= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-10-23.csv', delimiter = ' ')
lidar=pd.read_csv('environment/environment/wind/lidar/lidar_2019-10-23.csv', delimiter = ' ')
data.extend([wmb, lidar])
wmb.columns = ('epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss', 'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps', '#Waves')
lidar.columns = ( 'epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1', 'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2', 'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4', 'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5', 'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7', 'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8', 'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10', 'wind_dir_10_corr', 'height_10', 'heading')
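#The 46 lidar column names follow a regular pattern, so they could be generated rather
#than typed out. LIDAR_COLUMNS below reproduces exactly the tuple assigned above and is
#only shown as a sketch; the script keeps using the explicit listing.
LIDAR_COLUMNS = ['epoch'] + [
    name
    for level in range(11)
    for name in (f'wind_speed_{level}', f'wind_dir_{level}',
                 f'wind_dir_{level}_corr', f'height_{level}')
] + ['heading']
#lidar.columns = LIDAR_COLUMNS   #would be equivalent to the assignment above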
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
print(lidar)
lidar = lidar.resample('3S', label='left').mean().pad()
print(lidar)
#generating timestamps for every dataframe
counter = 0
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
'''
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#11:48:51 19:42:22
for i in range(4):
data[i] = data[i]['2019-10-23 11:48:51': '2019-10-23 19:42:22']
transition_wmb =wmb['2019-10-23 11:48:51': '2019-10-23 19:42:22']
result =pd.concat([data[0],data[1],data[2],data[3], transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine10/hammerhead_turbine10.csv')
#generating sbi1 file
#19:55:12 20:29:21
for i in range(4,7):
data[i] = data[i]['2019-10-23 19:55:12': '2019-10-23 20:29:21']
transition_wmb =wmb['2019-10-23 19:55:12': '2019-10-23 20:29:21']
result =pd.concat([data[4],data[5],data[6], transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine10/sbi1_turbine10.csv')
'''
'''
files to extract
23.10.2019 11:32:59 23.10.2019 19:42:22
'''
print(lidar.columns)
print(wmb.columns)
data[1] = data[1]['2019-10-23 13:33:00': '2019-10-23 21:42:15']
transition_wmb =wmb['2019-10-23 13:33:00': '2019-10-23 21:42:15']
result = pd.concat([data[1],transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine10.csv')
#turbine11
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-11_helihoist-1_tom_acc-vel-pos_hammerhead_2019-10-31-04-18-02_2019-10-31-10-41-13
turbine-11_helihoist-1_tom_geometry_hammerhead_2019-10-31-04-18-02_2019-10-31-10-41-13
turbine-11_sbitroot_tom_acc-vel-pos_hammerhead_2019-10-31-05-10-55_2019-10-31-10-38-59
turbine-11_sbittip_tom_acc-vel-pos_hammerhead_2019-10-31-05-17-56_2019-10-31-10-23-54
turbine-11_helihoist-1_tom_acc-vel-pos_sbi1_2019-10-31-10-41-13_2019-10-31-12-54-16
turbine-11_sbitroot_tom_acc-vel-pos_sbi1_2019-10-31-10-39-00_2019-10-31-12-54-35
turbine-11_sbittip_tom_acc-vel-pos_sbi1_2019-10-31-10-23-54_2019-10-31-13-06-39
turbine-11_helihoist-1_tom_acc-vel-pos_sbi2_2019-10-31-15-30-44_2019-10-31-18-52-53
turbine-11_sbitroot_tom_acc-vel-pos_sbi2_2019-10-31-15-37-45_2019-10-31-18-49-33
turbine-11_sbittip_tom_acc-vel-pos_sbi2_2019-10-31-15-23-09_2019-10-31-19-02-46
turbine-11_helihoist-1_tom_acc-vel-pos_tnhb1_2019-10-31-12-54-16_2019-10-31-15-30-44
turbine-11_helihoist-1_tom_geometry_tnhb1_2019-10-31-12-54-16_2019-10-31-15-30-44
turbine-11_sbitroot_tom_acc-vel-pos_tnhb1_2019-10-31-12-54-35_2019-10-31-15-37-45
turbine-11_sbittip_tom_acc-vel-pos_tnhb1_2019-10-31-13-06-40_2019-10-31-15-23-09
wmb missing
lidar_2019_10_31
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-11**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-11**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-11**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-11**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead, sbitroot_hammerhead, sbitip_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter = ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.extend([helihoist_sbi1, sbiroot_sbi1, sbitip_sbi1])
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter = ',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter = ',')
sbitip_sbi2 = pd.read_csv(sbi2[2], delimiter = ',')
data.extend([helihoist_sbi2, sbiroot_sbi2, sbitip_sbi2])
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
lidar=pd.read_csv('environment/environment/wind/lidar/lidar_2019-10-31.csv', delimiter = ' ')
lidar.columns = ( 'epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1', 'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2', 'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4', 'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5', 'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7', 'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8', 'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10', 'wind_dir_10_corr', 'height_10', 'heading')
data.append(lidar)
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
#generating timestamps for every dataframe
counter = 0
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
'''
#Plotting
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#05:17:56 10:23:54
for i in range(4):
data[i] = data[i]['2019-10-31 05:17:56': '2019-10-31 10:23:54']
transition_lidar = lidar['2019-10-31 05:17:56': '2019-10-31 10:23:54']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar], axis=1 )
result.to_csv('Results_preprocessing/turbine11/hammerhead_turbine11.csv')
#generating sbi1 file
#10:41:13 21:54:16
for i in range(4,7):
data[i] = data[i]['2019-10-31 10:41:13': '2019-10-31 12:54:16']
transition_lidar = lidar['2019-10-31 10:41:13': '2019-10-31 12:54:16']
result =pd.concat([data[4],data[5],data[6], transition_lidar], axis=1 )
result.to_csv('Results_preprocessing/turbine11/sbi1_turbine11.csv')
#generating sbi2 file
#15:37:45 18:49:33
for i in range(7,10):
data[i] = data[i]['2019-10-31 15:37:45': '2019-10-31 18:49:33']
transition_lidar = lidar['2019-10-31 15:37:45': '2019-10-31 18:49:33']
result =pd.concat([data[7],data[8],data[9], transition_lidar], axis=1 )
result.to_csv('Results_preprocessing/turbine11/sbi2_turbine11.csv')
#generating tnhb1 file
#13:06:40 15:23:09
for i in range(10,14):
data[i] = data[i]['2019-10-31 13:06:40': '2019-10-31 15:23:09']
transition_lidar = lidar['2019-10-31 13:06:40': '2019-10-31 15:23:09']
result =pd.concat([data[10],data[11],data[12],data[13], transition_lidar], axis=1 )
result.to_csv('Results_preprocessing/turbine11/tnhb1_turbine11.csv')
'''
'''
files to extract
31.10.2019 04:18:02 31.10.2019 10:41:13
31.10.2019 12:54:16 31.10.2019 15:30:44
'''
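#The extraction windows above are written as DD.MM.YYYY HH:MM:SS. A small sketch for
#turning such a pair into pandas Timestamps usable for index slicing; it is illustrative
#only, since the hard-coded slice strings below appear to be shifted by about one hour
#relative to the file names.
def parse_window(start, end, fmt='%d.%m.%Y %H:%M:%S'):
    #sketch only: parse the window notation used in the comments above
    return pd.to_datetime(start, format=fmt), pd.to_datetime(end, format=fmt)
#e.g. start, end = parse_window('31.10.2019 04:18:02', '31.10.2019 10:41:13')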
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-10-31 05:18:03': '2019-10-31 11:41:08']
transition_lidar = lidar['2019-10-31 05:18:03': '2019-10-31 11:41:08']
result = pd.concat([data[1], transition_lidar], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine11.csv')
print(data[11].index[0])
print(data[11].index[-1])
data[11] = data[11]['2019-10-31 13:54:18': '2019-10-31 16:30:41']
transition_lidar = lidar['2019-10-31 13:54:18': '2019-10-31 16:30:41']
result = pd.concat([data[11], transition_lidar], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine11.csv')
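#Every geometry export follows the same pattern: slice the geometry frame and the
#environment frames to a window, concatenate, drop unwanted columns and write a CSV.
#A hypothetical helper sketching how that could be factored out; it is not called by
#this script.
def export_window(geometry, env_frames, start, end, drop_cols, out_path):
    #sketch only: generic version of the slice/concat/del/to_csv blocks above
    pieces = [geometry[start:end]] + [env[start:end] for env in env_frames]
    merged = pd.concat(pieces, axis=1)
    merged = merged.drop(columns=list(drop_cols), errors='ignore')
    merged.to_csv(out_path)
    return merged
#e.g. export_window(data[1], [lidar], '2019-10-31 05:18:03', '2019-10-31 11:41:08',
#                   ['heading'], 'Results_preprocessing/geometry_files/hammerhead_turbine11.csv')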
#turbine12
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-12_helihoist-1_tom_acc-vel-pos_sbi1_2019-11-05-04-33-05_2019-11-05-06-55-15
turbine-12_sbitroot_tom_acc-vel-pos_sbi1_2019-11-05-04-18-04_2019-11-05-06-32-41
turbine-12_sbittip_tom_acc-vel-pos_sbi1_2019-11-05-04-11-07_2019-11-05-06-53-50
some of these files run longer ->
turbine-12_helihoist-1_tom_acc-vel-pos_tnhb1_2019-11-05-06-55-15_2019-11-07-01-33-43
turbine-12_helihoist-1_tom_geometry_tnhb1_2019-11-05-06-55-15_2019-11-07-01-33-43
turbine-12_sbitroot_tom_acc-vel-pos_tnhb1_2019-11-05-06-32-42_2019-11-07-01-05-50
turbine-12_sbittip_tom_acc-vel-pos_tnhb1_2019-11-05-06-53-50_2019-11-05-23-27-03
wmb-sued-2019-11-04
wmb-sued-2019-11-05
wmb-sued-2019-11-06
wmb-sued-2019-11-07
lidar_2019_11_04
lidar_2019_11_05
lidar_2019_11_06
lidar_2019_11_07
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-12**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-12**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-12**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-12**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead, sbitroot_hammerhead, sbitip_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter = ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.extend([helihoist_sbi1, sbiroot_sbi1, sbitip_sbi1])
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-04.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-05.csv', delimiter = ' ')
wmb3= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-06.csv', delimiter = ' ')
wmb4= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-07.csv', delimiter = ' ')
wmb_all = [wmb1, wmb2, wmb3, wmb4]
lidar1= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-04.csv', delimiter = ' ')
lidar2= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-05.csv', delimiter = ' ')
lidar3= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-06.csv', delimiter = ' ')
lidar4= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-07.csv', delimiter = ' ')
data.extend([lidar1, lidar2, lidar3, lidar4])
lidar_all = [lidar1, lidar2, lidar3, lidar4]
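#The daily wmb/lidar files could also be collected with a date loop instead of one
#read_csv call per day. Sketch only, assuming the naming stays
#'wmb-sued_YYYY-MM-DD.csv' / 'lidar_YYYY-MM-DD.csv' as above.
def read_daily_files(template, start_day, end_day):
    #sketch only: read one CSV per day in [start_day, end_day], skipping missing days
    frames = []
    for day in pd.date_range(start_day, end_day, freq='D'):
        try:
            frames.append(pd.read_csv(template.format(day.strftime('%Y-%m-%d')), delimiter=' '))
        except FileNotFoundError:
            pass    #e.g. a missing lidar day, as noted for some turbines
    return frames
#e.g. wmb_all = read_daily_files('environment/environment/waves/wmb-sued/wmb-sued_{}.csv',
#                                '2019-11-04', '2019-11-07')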
buffer1 = []
for j in wmb_all:
j.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer1.append(j)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for i in lidar_all:
i.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
buffer2.append(i)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
#generating timestamps for every dataframe
counter = 0
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
'''
#Plotting:
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#23:44:46 04:11:07
for i in range(4):
data[i] = data[i]['2019-11-04 23:44:46': '2019-11-05 04:11:07']
transition_wmb =wmb['2019-11-04 23:44:46': '2019-11-05 04:11:07']
transition_lidar = lidar['2019-11-04 23:44:46': '2019-11-05 04:11:07']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine12/hammerhead_turbine12.csv')
#generating sbi1 file
#04:33:05 06:32:41
for i in range(4,7):
data[i] = data[i]['2019-11-05 04:33:05': '2019-11-05 06:32:41']
transition_wmb =wmb['2019-11-05 04:33:05': '2019-11-05 06:32:41']
transition_lidar = lidar['2019-11-05 04:33:05': '2019-11-05 06:32:41']
result =pd.concat([data[4],data[5],data[6], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine12/sbi1_turbine12.csv')
#generating tnhb1 file
#06:55:15 23:27:03
for i in range(7,11):
data[i] = data[i]['2019-11-05 06:55:15': '2019-11-05 23:27:03']
transition_wmb =wmb['2019-11-05 06:55:15': '2019-11-05 23:27:03']
transition_lidar = lidar['2019-11-05 06:55:15': '2019-11-05 23:27:03']
result =pd.concat([data[7],data[8],data[9],data[10], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine12/tnhb1_turbine12.csv')
'''
'''
files to extract
04.11.2019 23:16:25 05.11.2019 04:33:05
05.11.2019 06:55:15 07.11.2019 01:33:43
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-11-05 00:16:25': '2019-11-05 05:32:57']
transition_wmb =wmb['2019-11-05 00:16:25': '2019-11-05 05:32:57']
transition_lidar = lidar['2019-11-05 00:16:25': '2019-11-05 05:32:57']
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine12.csv')
print(data[8].index[0])
print(data[8].index[-1])
data[8] = data[8]['2019-11-05 07:55:15': '2019-11-07 02:33:38']
transition_wmb =wmb['2019-11-05 07:55:15': '2019-11-07 02:33:38']
transition_lidar = lidar['2019-11-05 07:55:15': '2019-11-07 02:33:38']
result = pd.concat([data[8], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine12.csv')
#turbine13
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-13_helihoist-1_tom_geometry_hammerhead_2019-11-09-12-22-51_2019-11-10-12-59-19
turbine-13_sbitroot_tom_acc-vel-pos_hammerhead_2019-11-09-12-12-04_2019-11-09-15-25-59
turbine-13_sbittip_tom_acc-vel-pos_hammerhead_2019-11-09-12-13-17_2019-11-10-13-04-29
turbine-13_helihoist-1_tom_acc-vel-pos_sbi1_2019-11-10-12-59-20_2019-11-10-13-33-08
turbine-13_helihoist-1_tom_acc-vel-pos_tnhb1_2019-11-10-13-33-08_2019-11-16-05-29-59
turbine-13_helihoist-1_tom_geometry_tnhb1_2019-11-10-13-33-08_2019-11-16-05-29-59
turbine-13_sbitroot_tom_acc-vel-pos_tnhb1_2019-11-15-17-14-58_2019-11-16-05-36-40
turbine-13_sbittip_tom_acc-vel-pos_tnhb1_2019-11-10-13-31-42_2019-11-15-22-27-35
wmb-sued-2019-11-09
wmb-sued-2019-11-10
wmb-sued-2019-11-11
wmb-sued-2019-11-12
wmb-sued-2019-11-13
wmb-sued-2019-11-14
wmb-sued-2019-11-15
wmb-sued-2019-11-16
lidar_2019_11_09
lidar_2019_11_10
lidar_2019_11_11
lidar_2019_11_12
lidar_2019_11_13
lidar_2019_11_14
lidar_2019_11_15
lidar_2019_11_16 is missing
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-13**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-13**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-13**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-13**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead, sbitroot_hammerhead, sbitip_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter = ',')
data.append(helihoist_sbi1)
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-09.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-10.csv', delimiter = ' ')
wmb3= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-11.csv', delimiter = ' ')
wmb4= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-12.csv', delimiter = ' ')
wmb5= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-13.csv', delimiter = ' ')
wmb6= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-14.csv', delimiter = ' ')
wmb7= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-15.csv', delimiter = ' ')
wmb8= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-16.csv', delimiter = ' ')
#pay particular attention to 2019-11-11/12 and 2019-11-15 **TODO** remove
data.extend([wmb1, wmb2, wmb3, wmb4, wmb5, wmb6, wmb7, wmb8])
wmb_all = [wmb1, wmb2, wmb3, wmb4]
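#Only the first four wave files end up in wmb_all here, although wmb5-wmb8 are read and
#appended to data above, so the concatenated wmb frame stops on 2019-11-12 while the
#tnhb1 window below runs to 2019-11-16. A small coverage check like the sketch below
#(hypothetical, not called by this script) would flag such gaps before slicing.
def report_coverage(frame, start, end, name='frame'):
    #sketch only: warn when a requested window is not fully covered by a time-indexed frame
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    if frame.index.min() > start or frame.index.max() < end:
        print(f'{name}: requested {start} .. {end}, available {frame.index.min()} .. {frame.index.max()}')
#e.g. report_coverage(wmb, '2019-11-10 14:33:09', '2019-11-16 06:29:54', name='wmb')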
lidar1= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-09.csv', delimiter = ' ')
lidar2= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-10.csv', delimiter = ' ')
lidar3= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-11.csv', delimiter = ' ')
lidar4= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-12.csv', delimiter = ' ')
lidar5= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-13.csv', delimiter = ' ')
lidar6= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-14.csv', delimiter = ' ')
lidar7= pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-15.csv', delimiter = ' ')
lidar_all = [lidar1, lidar2, lidar3, lidar4, lidar5, lidar6, lidar7]
buffer1 = []
for i in wmb_all:
i.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer1.append(i)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for j in lidar_all:
j.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
buffer2.append(j)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
#generating timestamps for every dataframe
counter = 0
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
'''
#Plotting:
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#12:22:51 15:25:59
for i in range(4):
data[i] = data[i]['2019-11-09 12:22:51': '2019-11-09 15:25:59']
transition_wmb =wmb['2019-11-09 12:22:51': '2019-11-09 15:25:59']
transition_lidar = lidar['2019-11-09 12:22:51': '2019-11-09 15:25:59']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine13/hammerhead_turbine13.csv')
#generating sbi1 file
#12:59:20 13:33:08
for i in range(4,5):
data[i] = data[i]['2019-11-10 12:59:20': '2019-11-10 13:33:08']
transition_wmb =wmb['2019-11-10 12:59:20': '2019-11-10 13:33:08']
transition_lidar = lidar['2019-11-10 12:59:20': '2019-11-10 13:33:08']
result =pd.concat([data[4], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine13/sbi1_turbine13.csv')
#generating sbi2 file
#05:36:40 06:31:29
for i in range(5,7):
data[i] = data[i]['2019-11-16 05:36:40': '2019-11-16 06:31:29']
transition_wmb =wmb['2019-11-16 05:36:40': '2019-11-16 06:31:29']
transition_lidar = lidar['2019-11-16 05:36:40': '2019-11-16 06:31:29']
result =pd.concat([data[5],data[6], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine13/sbi2_turbine13.csv')
#generating tnhb1 file
#17:14:58 22:27:35
for i in range(7,11):
data[i] = data[i]['2019-11-15 17:14:58': '2019-11-15 22:27:35']
transition_wmb =wmb['2019-11-15 17:14:58': '2019-11-15 22:27:35']
transition_lidar = lidar['2019-11-15 17:14:58': '2019-11-15 22:27:35']
result =pd.concat([data[7],data[8],data[9],data[10], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine13/tnhb1_turbine13.csv')
'''
'''
files to extract:
09.11.2019 12:22:51 10.11.2019 12:59:19
10.11.2019 13:33:08 16.11.2019 05:29:59
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-11-09 13:22:52': '2019-11-10 13:59:14']
transition_wmb =wmb['2019-11-09 13:22:52': '2019-11-10 13:59:14']
transition_lidar = lidar['2019-11-09 13:22:52': '2019-11-10 13:59:14']
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine13.csv')
print(data[6].index[0])
print(data[6].index[-1])
data[6] = data[6]['2019-11-10 14:33:09': '2019-11-16 06:29:54']
transition_wmb =wmb['2019-11-10 14:33:09': '2019-11-16 06:29:54']
transition_lidar = lidar['2019-11-10 14:33:09': '2019-11-16 06:29:54']
result = pd.concat([data[6], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine13.csv')
#turbine14
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-14_helihoist-1_tom_acc-vel-pos_hammerhead_2019-11-24-16-42-08_2019-11-24-21-16-51
turbine-14_helihoist-1_tom_geometry_hammerhead_2019-11-24-16-42-08_2019-11-24-21-16-51
turbine-14_sbitroot_tom_acc-vel-pos_hammerhead_2019-11-24-17-36-41_2019-11-24-21-15-48
turbine-14_sbittip_tom_acc-vel-pos_hammerhead_2019-11-24-17-46-36_2019-11-24-21-21-03
turbine-14_helihoist-1_tom_acc-vel-pos_sbi1_2019-11-24-21-16-51_2019-11-25-04-59-18
turbine-14_sbitroot_tom_acc-vel-pos_sbi1_2019-11-24-21-15-48_2019-11-25-04-54-08
turbine-14_sbittip_tom_acc-vel-pos_sbi1_2019-11-24-21-21-04_2019-11-25-04-47-45
turbine-14_helihoist-1_tom_acc-vel-pos_sbi2_2019-11-25-08-27-01_2019-11-25-11-17-00
turbine-14_sbitroot_tom_acc-vel-pos_sbi2_2019-11-25-08-33-35_2019-11-25-11-11-16
turbine-14_sbittip_tom_acc-vel-pos_sbi2_2019-11-25-08-25-08_2019-11-25-10-59-28
turbine-14_helihoist-1_tom_acc-vel-pos_tnhb1_2019-11-25-04-59-18_2019-11-25-08-27-01
turbine-14_helihoist-1_tom_geometry_tnhb1_2019-11-25-04-59-18_2019-11-25-08-27-01
turbine-14_sbitroot_tom_acc-vel-pos_tnhb1_2019-11-25-04-54-09_2019-11-25-08-33-35
turbine-14_sbittip_tom_acc-vel-pos_tnhb1_2019-11-25-04-47-45_2019-11-25-08-25-08
wmb-sued-2019-11-24
wmb-sued-2019-11-25
lidar_2019_11_24
lidar_2019_11_25
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-14**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-14**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-14**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-14**.csv'))
tnhb2 = sorted(glob('Daten/tnhb2/tnhb2/turbine-14**.csv'))
data = []
helihoist_tele_hammerhead = pd.read_csv(hammerhead[0], delimiter = ',')
helihoist_geo_hammerhead = pd.read_csv(hammerhead[1], delimiter = ',')
sbitroot_hammerhead = pd.read_csv(hammerhead[2], delimiter = ',')
sbitip_hammerhead = pd.read_csv(hammerhead[3], delimiter = ',')
data.extend([helihoist_tele_hammerhead, helihoist_geo_hammerhead, sbitroot_hammerhead, sbitip_hammerhead])
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter= ',')
sbiroot_sbi1 = pd.read_csv(sbi1[1], delimiter = ',')
sbitip_sbi1 = pd.read_csv(sbi1[2], delimiter = ',')
data.extend([helihoist_sbi1, sbiroot_sbi1, sbitip_sbi1])
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter = ',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter = ',')
sbitip_sbi2 = pd.read_csv(sbi2[2], delimiter = ',')
data.extend([helihoist_sbi2, sbiroot_sbi2, sbitip_sbi2])
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter = ',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter = ',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter = ',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter = ',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
wmb1= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-24.csv', delimiter = ' ')
wmb2= pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-11-25.csv', delimiter = ' ')
wmb_all = [wmb1, wmb2]
lidar1 =pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-24.csv', delimiter = ' ')
lidar2 =pd.read_csv('environment/environment/wind/lidar/lidar_2019-11-25.csv', delimiter = ' ')
data.extend([lidar1, lidar2])
lidar_all = [lidar1, lidar2]
buffer1 = []
for j in wmb_all:
j.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer1.append(j)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
'#Waves')
buffer2 = []
for i in lidar_all:
i.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
buffer2.append(i)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
'wind_dir_10_corr', 'height_10', 'heading')
UTC = []
for k in range(len(wmb)):
UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
UTC = []
for k in range(len(lidar)):
UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
#generating timestamps for every dataframe
counter = 0
for df in data:
UTC = []
for k in range(len(df)):
UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
df['epoch'] = UTC
df.index = df['epoch']
del df['epoch']
df = df.resample('3S', label = 'left').mean().pad()
data[counter] = df
counter = counter+1
'''
#Plotting
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
'''
# generating hammerhead file
#17:46:36 21:15:48
for i in range(4):
data[i] = data[i]['2019-11-24 17:46:36': '2019-11-24 21:15:48']
transition_wmb =wmb['2019-11-24 17:46:36': '2019-11-24 21:15:48']
transition_lidar = lidar['2019-11-24 17:46:36': '2019-11-24 21:15:48']
result =pd.concat([data[0],data[1],data[2],data[3], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine14/hammerhead_turbine14.csv')
#generating sbi1 file
#21:21:04 04:47:45
for i in range(4,7):
data[i] = data[i]['2019-11-24 21:21:04': '2019-11-25 04:47:45']
transition_wmb =wmb['2019-11-24 21:21:04': '2019-11-25 04:47:45']
transition_lidar = lidar['2019-11-24 21:21:04': '2019-11-25 04:47:45']
result =pd.concat([data[4],data[5],data[6], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine14/sbi1_turbine14.csv')
#generating sbi2 file
#08:33:35 10:59:28
for i in range(7,10):
data[i] = data[i]['2019-11-25 08:33:35': '2019-11-25 10:59:28']
transition_wmb =wmb['2019-11-25 08:33:35': '2019-11-25 10:59:28']
transition_lidar = lidar['2019-11-25 08:33:35': '2019-11-25 10:59:28']
result =pd.concat([data[7],data[8],data[9], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine14/sbi2_turbine14.csv')
#generating tnhb1 file
#04:59:18 08:25:08
for i in range(10,14):
data[i] = data[i]['2019-11-25 04:59:18': '2019-11-25 08:25:08']
transition_wmb =wmb['2019-11-25 04:59:18': '2019-11-25 08:25:08']
transition_lidar = lidar['2019-11-25 04:59:18': '2019-11-25 08:25:08']
result =pd.concat([data[10],data[11],data[12],data[13], transition_lidar, transition_wmb], axis=1 )
result.to_csv('Results_preprocessing/turbine14/tnhb1_turbine14.csv')
'''
'''
files to extract:
24.11.2019 16:42:08 24.11.2019 21:16:51
25.11.2019 04:59:18 25.11.2019 08:27:01
'''
print(data[1].index[0])
print(data[1].index[-1])
data[1] = data[1]['2019-11-24 17:42:09': '2019-11-24 22:16:47']
transition_wmb =wmb['2019-11-24 17:42:09': '2019-11-24 22:16:47']
transition_lidar = lidar['2019-11-24 17:42:09': '2019-11-24 22:16:47']
result = pd.concat([data[1], transition_lidar, transition_wmb], axis=1)
del result['max_deflection_i']
del result['ddt_max_deflection']
del result['eccentricity']
del result['ddt_axis_ratio']
del result['ddt_eccentricity']
del result['axis_angle_signed']
del result['axis_angle_unsigned']
del result['axis_azimuth']
del result['ddt_axis_angle_signed']
del result['ddt_axis_angle_unsigned']
del result['p2p_angle_unsigned']
del result['p2p_angle_signed']
del result['p2p_azimuth']
del result['ddt_p2p_azimuth_unwrapped']
del result['ddt_p2p_azimuth']
del result['ddt_p2p_angle_unsigned']
del result['ddt_p2p_angle_signed']
del result['wind_speed_0']
del result['wind_dir_0']
del result['wind_dir_0_corr']
del result['height_0']
del result['wind_speed_1']
del result['wind_dir_1']
del result['wind_dir_1_corr']
del result['height_1']
del result['wind_speed_2']
del result['wind_dir_2']
del result['wind_dir_2_corr']
del result['height_2']
del result['wind_dir_3']
del result['height_3']
del result['wind_speed_4']
del result['wind_dir_4']
del result['wind_dir_4_corr']
del result['height_4']
del result['wind_speed_5']
del result['wind_dir_5']
del result['wind_dir_5_corr']
del result['height_5']
del result['wind_speed_6']
del result['wind_dir_6']
del result['wind_dir_6_corr']
del result['height_6']
del result['wind_speed_7']
del result['wind_dir_7']
del result['wind_dir_7_corr']
del result['height_7']
del result['wind_speed_8']
del result['wind_dir_8']
del result['wind_dir_8_corr']
del result['height_8']
del result['wind_speed_9']
del result['wind_dir_9']
del result['wind_dir_9_corr']
del result['height_9']
del result['wind_speed_10']
del result['wind_dir_10']
del result['wind_dir_10_corr']
del result['height_10']
del result['heading']
del result['Tp']
del result['Sprp']
del result['Tz']
del result['Hm0']
del result['TI']
del result['T1']
del result['Tc']
del result['Tdw2']
del result['Tdw1']
del result['Tpc']
del result['nu']
del result['eps']
del result['QP']
del result['Ss']
del result['TRef']
del result['Bat']
del result['Percentage']
del result['H(1/10)']
del result['T(1/10)']
del result['H(1/3)']
del result['T(1/3)']
del result['Eps']
del result['#Waves']
result.to_csv('Results_preprocessing/geometry_files/hammerhead_turbine14.csv')
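#An alternative to deleting the unwanted columns one by one is to keep an explicit
#whitelist of the features that survive; in the block above those are Dirp, TSea, Hmax,
#Tmax, Hav, Tav plus wind_speed_3 and wind_dir_3_corr, together with the remaining
#geometry columns. Sketch only (hypothetical helper, not called by this script).
KEEP_ENV_COLUMNS = ['Dirp', 'TSea', 'Hmax', 'Tmax', 'Hav', 'Tav', 'wind_speed_3', 'wind_dir_3_corr']
def keep_columns(frame, geometry_cols, env_cols=KEEP_ENV_COLUMNS):
    #sketch only: whitelist-based alternative to the del chains above
    wanted = [c for c in list(geometry_cols) + list(env_cols) if c in frame.columns]
    return frame[wanted]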
print(data[11].index[0])
print(data[11].index[-1])
data[11] = data[11]['2019-11-25 05:59:19': '2019-11-25 09:26:55']
transition_wmb =wmb['2019-11-25 05:59:19': '2019-11-25 09:26:55']
transition_lidar = lidar['2019-11-25 05:59:19': '2019-11-25 09:26:55']
result = pd.concat([data[11], transition_lidar, transition_wmb], axis=1)
# Same column pruning as for the hammerhead geometry file above.
result = result.drop(columns=drop_cols)
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine14.csv')
#turbine16
import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
'''
turbine-16_helihoist-1_tom_acc-vel-pos_sbi1_2019-12-17-03-56-11_2019-12-17-04-22-54
turbine-16_helihoist-1_tom_acc-vel-pos_sbi2_2019-12-22-06-09-43_2019-12-22-18-47-25 22.12.2019 06:09:43 22.12.2019 18:47:25
turbine-16_sbitroot_tom_acc-vel-pos_sbi2_2019-12-22-06-15-57_2019-12-22-18-47-00 22.12.2019 06:15:57 22.12.2019 18:47:00
turbine-16_sbittip_tom_acc-vel-pos_sbi2_2019-12-22-06-18-23_2019-12-22-18-35-19 22.12.2019 06:18:23 22.12.2019 18:35:19
turbine-16_helihoist-1_tom_acc-vel-pos_tnhb1_2019-12-17-04-22-54_2019-12-22-06-09-32
turbine-16_helihoist-1_tom_geometry_tnhb1_2019-12-17-04-22-54_2019-12-22-06-09-32
turbine-16_sbitroot_tom_acc-vel-pos_tnhb1_2019-12-21-12-47-53_2019-12-22-06-15-57
turbine-16_sbittip_tom_acc-vel-pos_tnhb1_2019-12-21-12-44-57_2019-12-22-06-18-23
wmb-sued-2019-12-17
wmb-sued-2019-12-18
wmb-sued-2019-12-19
wmb-sued-2019-12-20
wmb-sued-2019-12-21
lidar_2019_12_17
lidar_2019_12_18
lidar_2019_12_19
lidar_2019_12_20
lidar_2019_12_21
'''
#loading data and filling it into an array of all dataframes
hammerhead = sorted(glob('Daten/hammerhead/hammerhead/turbine-16**.csv'))
sbi1 = sorted(glob('Daten/sbi1/sbi1/turbine-16**.csv'))
sbi2 = sorted(glob('Daten/sbi2/sbi2/turbine-16**.csv'))
tnhb1 = sorted(glob('Daten/tnhb1/tnhb1/turbine-16**.csv'))
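# Note: glob only treats '**' as recursive when recursive=True is passed, so the
# 'turbine-16**.csv' patterns above match the same files as, e.g.:
# sorted(glob('Daten/tnhb1/tnhb1/turbine-16*.csv'))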
data = []
helihoist_sbi1 = pd.read_csv(sbi1[0], delimiter=',')
data.append(helihoist_sbi1)
helihoist_sbi2 = pd.read_csv(sbi2[0], delimiter=',')
sbiroot_sbi2 = pd.read_csv(sbi2[1], delimiter=',')
sbitip_sbi2 = pd.read_csv(sbi2[2], delimiter=',')
data.extend([helihoist_sbi2, sbiroot_sbi2, sbitip_sbi2])
helihoist_tnhb1 = pd.read_csv(tnhb1[0], delimiter=',')
helihoist_geo_tnhb1 = pd.read_csv(tnhb1[1], delimiter=',')
sbiroot_tnhb1 = pd.read_csv(tnhb1[2], delimiter=',')
sbitip_tnhb1 = pd.read_csv(tnhb1[3], delimiter=',')
data.extend([helihoist_tnhb1, helihoist_geo_tnhb1, sbiroot_tnhb1, sbitip_tnhb1])
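# The same list could be built with a comprehension (illustrative sketch only; the
# explicit variables above mirror the original script and remain handy for inspection):
# data_alt = [pd.read_csv(f, delimiter=',') for f in [sbi1[0], *sbi2[:3], *tnhb1[:4]]]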
wmb1 = pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-12-17.csv', delimiter=' ')
wmb2 = pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-12-18.csv', delimiter=' ')
wmb3 = pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-12-19.csv', delimiter=' ')
wmb4 = pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-12-20.csv', delimiter=' ')
wmb5 = pd.read_csv('environment/environment/waves/wmb-sued/wmb-sued_2019-12-21.csv', delimiter=' ')
wmb_all = [wmb1, wmb2, wmb3, wmb4, wmb5]
lidar1 = pd.read_csv('environment/environment/wind/lidar/lidar_2019-12-17.csv', delimiter=' ')
lidar2 = pd.read_csv('environment/environment/wind/lidar/lidar_2019-12-18.csv', delimiter=' ')
lidar3 = pd.read_csv('environment/environment/wind/lidar/lidar_2019-12-19.csv', delimiter=' ')
lidar4 = pd.read_csv('environment/environment/wind/lidar/lidar_2019-12-20.csv', delimiter=' ')
lidar5 = pd.read_csv('environment/environment/wind/lidar/lidar_2019-12-21.csv', delimiter=' ')
data.extend([lidar1, lidar2, lidar3, lidar4, lidar5])
lidar_all = [lidar1, lidar2, lidar3, lidar4, lidar5]
buffer1 = []
for j in wmb_all:
    j.columns = (
        'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
        'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
        '#Waves')
    buffer1.append(j)
wmb = pd.concat(buffer1, axis=0)
wmb.columns = (
    'epoch', 'Tp', 'Dirp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
    'TRef', 'TSea', 'Bat', 'Percentage', 'Hmax', 'Tmax', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Hav', 'Tav', 'Eps',
    '#Waves')
buffer2 = []
for i in lidar_all:
    i.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
                 'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
                 'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
                 'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
                 'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
                 'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
                 'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
                 'wind_dir_10_corr', 'height_10', 'heading')
    buffer2.append(i)
lidar = pd.concat(buffer2, axis=0)
lidar.columns = ('epoch', 'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0', 'wind_speed_1', 'wind_dir_1',
                 'wind_dir_1_corr', 'height_1', 'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
                 'wind_speed_3', 'wind_dir_3', 'wind_dir_3_corr', 'height_3', 'wind_speed_4', 'wind_dir_4',
                 'wind_dir_4_corr', 'height_4', 'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
                 'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6', 'wind_speed_7', 'wind_dir_7',
                 'wind_dir_7_corr', 'height_7', 'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
                 'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9', 'wind_speed_10', 'wind_dir_10',
                 'wind_dir_10_corr', 'height_10', 'heading')
UTC = []
for k in range(len(wmb)):
    UTC.append(pd.Timestamp.fromtimestamp(wmb.iloc[k, 0]))
wmb['epoch'] = UTC
wmb.index = wmb['epoch']
del wmb['epoch']
# resample onto a 3-second grid (the values are additionally scaled by 1/1800)
wmb = wmb.resample('3S', label='left').mean().pad() / 1800
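# Example (illustrative): pulling one wave statistic for the sbi1 lift window from
# the resampled series; hs_sbi1 is a new name introduced only for this sketch.
hs_sbi1 = wmb['Hm0']['2019-12-17 03:56:11': '2019-12-17 04:22:54']
print(hs_sbi1.describe())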
UTC = []
for k in range(len(lidar)):
    UTC.append(pd.Timestamp.fromtimestamp(lidar.iloc[k, 0]))
lidar['epoch'] = UTC
lidar.index = lidar['epoch']
del lidar['epoch']
lidar = lidar.resample('3S', label='left').mean().pad()
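# Note (sketch): the per-row fromtimestamp loops above could be replaced by the
# vectorised pd.to_datetime(epoch_seconds, unit='s'), but that returns UTC-naive
# timestamps while Timestamp.fromtimestamp uses the machine's local timezone, so
# the two only coincide when the script runs under UTC.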
'''
#Plotting:
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(wmb.index, wmb['#Waves'])
plt.title('#Waves')
plt.ylabel('number of waves')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_speed_7'])
plt.title('wind_speed_7')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
fig = plt.figure(figsize=(14,6), dpi=80)
plt.plot(lidar.index, lidar['wind_dir_7_corr'])
plt.title('wind_dir_7_corr')
plt.xlabel('time')
plt.xticks(rotation= 90)
plt.show()
'''
#generating timestamps for every dataframe
for counter, df in enumerate(data):
    # convert the epoch column (POSIX seconds) into a timestamp index
    UTC = []
    for k in range(len(df)):
        UTC.append(pd.Timestamp.fromtimestamp(df.iloc[k, 0]))
    df['epoch'] = UTC
    df.index = df['epoch']
    del df['epoch']
    # resample onto the common 3-second grid
    data[counter] = df.resample('3S', label='left').mean().pad()
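# Quick consistency check (illustrative, not part of the original pipeline): after
# resampling, every frame should sit on the same 3-second grid.
print(data[0].index.freq, wmb.index.freq, lidar.index.freq)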
'''
#generating sbi1 file
#03:56:11 04:22:54
for i in range(1):
    data[i] = data[i]['2019-12-17 03:56:11': '2019-12-17 04:22:54']
transition_wmb = wmb['2019-12-17 03:56:11': '2019-12-17 04:22:54']
transition_lidar = lidar['2019-12-17 03:56:11': '2019-12-17 04:22:54']
result = pd.concat([data[0], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine16/sbi1_turbine16.csv')
#generating sbi2 file
#06:18:23 18:35:19
for i in range(1,4):
    data[i] = data[i]['2019-12-22 06:18:23': '2019-12-22 18:35:19']
transition_wmb = wmb['2019-12-22 06:18:23': '2019-12-22 18:35:19']
transition_lidar = lidar['2019-12-22 06:18:23': '2019-12-22 18:35:19']
result = pd.concat([data[1], data[2], data[3], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine16/sbi2_turbine16.csv')
#generating tnhb1 file
#12:47:53 06:09:32
for i in range(4,8):
    data[i] = data[i]['2019-12-21 12:47:53': '2019-12-22 06:09:32']
transition_wmb = wmb['2019-12-21 12:47:53': '2019-12-22 06:09:32']
transition_lidar = lidar['2019-12-21 12:47:53': '2019-12-22 06:09:32']
result = pd.concat([data[4], data[5], data[6], data[7], transition_lidar, transition_wmb], axis=1)
result.to_csv('Results_preprocessing/turbine16/tnhb1_turbine16.csv')
'''
'''
files to extract:
17.12.2019 04:22:54 22.12.2019 06:09:32
'''
print(data[5].index[0])
print(data[5].index[-1])
data[5] = data[5]['2019-12-17 05:22:54': '2019-12-22 07:09:32']
transition_wmb = wmb['2019-12-17 05:22:54': '2019-12-22 07:09:32']
transition_lidar = lidar['2019-12-17 05:22:54': '2019-12-22 07:09:32']
result = pd.concat([data[5], transition_lidar, transition_wmb], axis=1)
# Prune the same derived-geometry, lidar and wave columns as in the turbine-14
# geometry files above (the list is repeated so this turbine-16 block stays
# self-contained).
drop_cols = [
    'max_deflection_i', 'ddt_max_deflection', 'eccentricity', 'ddt_axis_ratio', 'ddt_eccentricity',
    'axis_angle_signed', 'axis_angle_unsigned', 'axis_azimuth',
    'ddt_axis_angle_signed', 'ddt_axis_angle_unsigned',
    'p2p_angle_unsigned', 'p2p_angle_signed', 'p2p_azimuth',
    'ddt_p2p_azimuth_unwrapped', 'ddt_p2p_azimuth', 'ddt_p2p_angle_unsigned', 'ddt_p2p_angle_signed',
    'wind_speed_0', 'wind_dir_0', 'wind_dir_0_corr', 'height_0',
    'wind_speed_1', 'wind_dir_1', 'wind_dir_1_corr', 'height_1',
    'wind_speed_2', 'wind_dir_2', 'wind_dir_2_corr', 'height_2',
    'wind_dir_3', 'height_3',
    'wind_speed_4', 'wind_dir_4', 'wind_dir_4_corr', 'height_4',
    'wind_speed_5', 'wind_dir_5', 'wind_dir_5_corr', 'height_5',
    'wind_speed_6', 'wind_dir_6', 'wind_dir_6_corr', 'height_6',
    'wind_speed_7', 'wind_dir_7', 'wind_dir_7_corr', 'height_7',
    'wind_speed_8', 'wind_dir_8', 'wind_dir_8_corr', 'height_8',
    'wind_speed_9', 'wind_dir_9', 'wind_dir_9_corr', 'height_9',
    'wind_speed_10', 'wind_dir_10', 'wind_dir_10_corr', 'height_10',
    'heading',
    'Tp', 'Sprp', 'Tz', 'Hm0', 'TI', 'T1', 'Tc', 'Tdw2', 'Tdw1', 'Tpc', 'nu', 'eps', 'QP', 'Ss',
    'TRef', 'Bat', 'Percentage', 'H(1/10)', 'T(1/10)', 'H(1/3)', 'T(1/3)', 'Eps', '#Waves',
]
result = result.drop(columns=drop_cols)
result.to_csv('Results_preprocessing/geometry_files/tnhb1_turbine16.csv')
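# Optional sanity check (illustrative, not part of the original pipeline): re-read
# the file just written and confirm the time span that survived the slicing.
check = pd.read_csv('Results_preprocessing/geometry_files/tnhb1_turbine16.csv', index_col=0)
print(check.index[0], check.index[-1], check.shape)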
| 35.685734 | 712 | 0.713678 | 26,862 | 152,842 | 3.828717 | 0.012471 | 0.151915 | 0.075209 | 0.06254 | 0.94873 | 0.933805 | 0.917042 | 0.902972 | 0.875738 | 0.849057 | 0 | 0.121193 | 0.100974 | 152,842 | 4,282 | 713 | 35.694068 | 0.627281 | 0.012582 | 0 | 0.90411 | 0 | 0 | 0.366948 | 0.090749 | 0 | 0 | 0 | 0.000234 | 0 | 1 | 0.000342 | false | 0 | 0.017123 | 0 | 0.017808 | 0.018151 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
826f82c6eabac3045e7d8ac2cd5534ff004655d8 | 2,284 | py | Python | speaker_model/test.py | happylittlecat2333/FastSpeech2 | 55efb879db0d7458f97d79fa605c889b2df8321f | [
"MIT"
] | null | null | null | speaker_model/test.py | happylittlecat2333/FastSpeech2 | 55efb879db0d7458f97d79fa605c889b2df8321f | [
"MIT"
] | null | null | null | speaker_model/test.py | happylittlecat2333/FastSpeech2 | 55efb879db0d7458f97d79fa605c889b2df8321f | [
"MIT"
] | null | null | null | # %%
import yaml

path = "/home/xjl/Audio/Library/Models/MyFastSpeech2/speaker_model/pretrained_models/spkrec-ecapa-voxceleb/hyperparams.yaml"
with open(path, 'r') as f:
    config = yaml.load(f, Loader=yaml.SafeLoader)
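# Illustrative: list the top-level keys that were parsed. Note (hedged): SpeechBrain
# hyperparams.yaml files often use custom tags (!new:, !ref) that yaml.SafeLoader
# cannot construct, in which case the hyperpyyaml loader shipped with SpeechBrain
# is needed instead.
print(sorted(config.keys()))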
# %%
import numpy as np
p = "/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/LJSpeech_v2/mel/LJSpeech-neutral-mel-LJSpeech_neutral_LJ001-0001.npy"
x = np.load(p)
x.shape
# %%
p = "/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v2/duration/bea-amused-duration-bea_amused_amused_1-15_0001.npy"
np.load(p).shape
# %%
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x = np.random.rand(100, 10, 80)
for i in x:
    # i = i.reshape(-1, 1)
    scaler.partial_fit(i)
# %%
p = "/home/xjl/Audio/Library/Models/MyFastSpeech2/dump/EmovDB/stats/stats.npy"
np.load(p).shape
# %%
t = x[0]
print(t.shape)
tt = (t-scaler.mean_) / scaler.scale_
print(tt.shape)
tt
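# %%
# Note (sketch): the manual normalisation above is equivalent to scaler.transform,
# since StandardScaler stores the per-feature mean_ and scale_ from partial_fit.
print(np.allclose(tt, scaler.transform(t)))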
# %%
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x = np.random.rand(100, 10, 1)
for i in x:
    # i = i.reshape(-1, 1)
    scaler.partial_fit(i)
# %%
t = x[0]
print(t.shape)
tt = (t - scaler.mean_) / scaler.scale_
print(tt.shape)
tt
# %%
import numpy as np
p = [
"/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v3/energy_frame/sam-amused-energy-sam_amused_amused_1-28_0001.npy",
"/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v3/mel/bea-amused-mel-bea_amused_amused_1-15_0001.npy",
"/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v3/pitch_frame/sam-amused-pitch-sam_amused_amused_1-28_0001.npy",
]
p2 = [
"/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v2/energy_frame/sam-amused-energy-sam_amused_amused_1-28_0001.npy",
"/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v2/mel/bea-amused-mel-bea_amused_amused_1-15_0001.npy",
"/home/xjl/Audio/Library/Models/MyFastSpeech2/preprocessed_data/EmovDB_v2/pitch_frame/sam-amused-pitch-sam_amused_amused_1-28_0001.npy",
]
for i in range(len(p)):
    x = np.load(p[i])
    x2 = np.load(p2[i])
    print(x.shape, x2.shape)
    # a,b = np.load(p[i]), np.load(p2[i])
# %%
i = 1
a, b = np.load(p[i]), np.load(p2[i])
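# Illustrative comparison of the two preprocessing versions, guarded because the
# array lengths (and possibly feature dimensions) can differ:
if a.shape[1:] == b.shape[1:]:
    n = min(len(a), len(b))
    print(np.abs(a[:n] - b[:n]).max())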
# %%
| 26.55814 | 142 | 0.735552 | 368 | 2,284 | 4.407609 | 0.214674 | 0.043157 | 0.073983 | 0.117139 | 0.805179 | 0.788533 | 0.765105 | 0.725647 | 0.725647 | 0.725647 | 0 | 0.048924 | 0.105079 | 2,284 | 85 | 143 | 26.870588 | 0.744618 | 0.048161 | 0 | 0.44898 | 0 | 0.204082 | 0.571561 | 0.571098 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.122449 | 0 | 0.122449 | 0.102041 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |